diff --git a/.gitattributes b/.gitattributes
new file mode 100644
index 000000000..1bccc1fa8
--- /dev/null
+++ b/.gitattributes
@@ -0,0 +1 @@
+*.h5 filter=lfs diff=lfs merge=lfs -text
diff --git a/README.md b/README.md
index 0671349a6..33b881b4a 100644
--- a/README.md
+++ b/README.md
@@ -21,7 +21,7 @@ The SDK consists of:
* [Spot API protocol definition](protos/bosdyn/api/README.md). This reference guide covers the details of the protocol applications used to communicate to Spot. Application developers who wish to use a language other than Python can implement clients that speak the protocol.
* [Spot SDK Repository](https://github.com/boston-dynamics/spot-sdk). The GitHub repo where all of the Spot SDK code is hosted.
-This is version 2.3.5 of the SDK. Please review the [Release Notes](docs/release_notes.md) to see what has changed.
+This is version 3.0.0 of the SDK. Please review the [Release Notes](docs/release_notes.md) to see what has changed.
## Contents
diff --git a/VERSION b/VERSION
index cc6c9a491..4a36342fc 100644
--- a/VERSION
+++ b/VERSION
@@ -1 +1 @@
-2.3.5
+3.0.0
diff --git a/choreography_protos/bosdyn/api/choreography_reference.md b/choreography_protos/bosdyn/api/choreography_reference.md
new file mode 100644
index 000000000..3547d1c0f
--- /dev/null
+++ b/choreography_protos/bosdyn/api/choreography_reference.md
@@ -0,0 +1,1891 @@
+
+
+
+
+
+
+# spot/choreography_params.proto
+
+
+
+
+
+### AnimateParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| animation_name | [string](#string) | The name of the animated move. There are no default values/bounds associated with this field. |
+| body_entry_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How many slices to smoothly transition from previous pose to animation. |
+| body_exit_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How many slices to return from animation to nominal pose. Zero indicates to keep final animated pose. |
+| translation_multiplier | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | Multiplier for animated translation by axis to exaggerate or suppress motion along specific axes. |
+| rotation_multiplier | [EulerZYXValue](#bosdyn.api.spot.EulerZYXValue) | Multiplier for the animated orientation by axis to exaggerate or suppress motion along specific axes. |
+| arm_entry_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How many slices to smoothly transition from previous pose to animation. |
+| shoulder_0_offset | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Joint angle offsets in radians for the arm joints. |
+| shoulder_1_offset | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| elbow_0_offset | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| elbow_1_offset | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| wrist_0_offset | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| wrist_1_offset | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| gripper_offset | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| speed | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Playback speed multiplier. 1.0 is normal speed; larger is faster. |
+| offset_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How late into the nominal script to start. |
+| gripper_multiplier | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Multiply all gripper angles by this value. |
+| gripper_strength_fraction | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How hard the gripper can squeeze. Fraction of full strength. |
+| arm_dance_frame_id | [google.protobuf.Int32Value](#google.protobuf.Int32Value) | Which dance frame to use as a reference for workspace arm moves. Including this parameter overrides the animation frame. |
+| body_tracking_stiffness | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How hard to try to track the animated body motion. Only applicable to animations that control both the body and the legs. On a scale of 1 to 10 (11 for a bit extra). Higher will result in more closely tracking the animated body motion, but possibly at the expense of balance for more difficult animations. |
+
+
+
+
+
+
+
+
+### ArmMoveParams
+
+Parameters specific to ArmMove move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| shoulder_0 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Joint angles in radians for the arm joints. |
+| shoulder_1 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| elbow_0 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| elbow_1 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| wrist_0 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| wrist_1 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| easing | [Easing](#bosdyn.api.spot.Easing) | How the motion should be paced. |
+| gripper | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Movement for the gripper. |
+
+
+
+
+
+
+
+
+### BodyHoldParams
+
+Parameters specific to the BodyHold move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| rotation | [EulerZYXValue](#bosdyn.api.spot.EulerZYXValue) | The robot will rotate its body to the specified orientation (roll/pitch/yaw) [rad]. |
+| translation | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | The positional offset to the robot's current location [m]. |
+| entry_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How many "slices" (beats or sub-beats) are allowed before reaching the desired pose. |
+| exit_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How many "slices" (beats or sub-beats) are allowed for the robot to return to the original pose. |
+
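Timing throughout this reference is given in "slices" (beats or sub-beats). As a rough illustration of how a slice count relates to wall-clock time — assuming a sequence tempo in BPM and a fixed number of slices per beat, neither of which is read from the API here — the conversion is:

```python
def slice_duration_s(bpm: float, slices_per_beat: int = 4) -> float:
    """Seconds per slice, assuming the sequence runs at `bpm` beats per
    minute with `slices_per_beat` sub-beats. Both values are illustrative
    inputs, not fields of this API."""
    return 60.0 / (bpm * slices_per_beat)

# e.g. entry_slices = 8 at 120 BPM with 4 slices per beat -> 1.0 second
entry_time_s = 8 * slice_duration_s(120.0)
```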
+
+
+
+
+
+
+
+### BourreeParams
+
+Parameters for the Bourree move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| velocity | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | The speed at which we should bourree [m/s]. X is forward. Y is left. |
+| yaw_rate | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How fast the bourree should turn [rad/s]. |
+| stance_length | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far apart front and hind feet should be. [m] |
+
+
+
+
+
+
+
+
+### ButtCircleParams
+
+Parameters specific to the ButtCircle DanceMove.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| radius | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How big a circle the robutt will move in. Described in meters. |
+| beats_per_circle | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The number of beats that elapse while performing the butt circle. |
+| number_of_circles | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The number of circles that will be performed. If non-zero, takes precedence over beats_per_circle. |
+| pivot | [Pivot](#bosdyn.api.spot.Pivot) | The pivot point the butt circles should be centered around. |
+| clockwise | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Which way to rotate. |
+| starting_angle | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Where to start. Zero is up. |
+
+
+
+
+
+
+
+
+### ChickenHeadParams
+
+Parameters specific to the chicken head move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| bob_magnitude | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | Bobs the head in this direction in the robot footprint frame. |
+| beats_per_cycle | [google.protobuf.Int32Value](#google.protobuf.Int32Value) | How fast to bob the head. |
+| follow | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we move the frame when the robot steps? |
+
+
+
+
+
+
+
+
+### ClapParams
+
+Parameters specific to clapping.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| direction | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | Direction of the clapping motion in a gravity-aligned body frame. A typical value for the direction is (0, 1, 0). |
+| location | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | Location in body frame of the clap. A typical value for the location is (0.4, 0, -0.5). |
+| speed | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Speed of the clap [m/s]. |
+| clap_distance | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far apart the limbs are before clapping [m]. |
+
+
+
+
+
+
+
+
+### Color
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| red | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| green | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| blue | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### CrawlParams
+
+Parameters for the robot's crawling gait.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| swing_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The number of slices (beats/sub-beats) the duration of a leg swing in the crawl gait should be. |
+| velocity | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | The speed at which we should crawl [m/s]. X is forward. Y is left. |
+| stance_width | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The distance between the robot's left and right feet [m]. |
+| stance_length | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The distance between the robot's front and back feet [m]. |
+
+
+
+
+
+
+
+
+### EulerRateZYXValue
+
+Euler Angle rates (yaw->pitch->roll) vector that uses wrapped values so we can tell which elements are set.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| roll | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| pitch | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### EulerZYXValue
+
+Euler Angle (yaw->pitch->roll) vector that uses wrapped values so we can tell which elements are set.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| roll | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| pitch | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
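The wrapper types used throughout these tables (`google.protobuf.DoubleValue` and friends) exist so that an explicit 0.0 can be distinguished from "not set". A plain-Python sketch of that contract using `Optional` fields — an analogy for the semantics, not the protobuf API itself:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class EulerZYX:
    # None mimics an unset DoubleValue wrapper; 0.0 is an explicit zero.
    roll: Optional[float] = None
    pitch: Optional[float] = None
    yaw: Optional[float] = None

    def is_set(self, axis: str) -> bool:
        return getattr(self, axis) is not None

rot = EulerZYX(yaw=0.0)          # explicitly command zero yaw
assert rot.is_set("yaw")         # set, even though the value is 0.0
assert not rot.is_set("roll")    # left unset -> default behavior applies
```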
+
+
+
+
+
+
+
+### FadeColorParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| top_color | [Color](#bosdyn.api.spot.Color) | |
+| bottom_color | [Color](#bosdyn.api.spot.Color) | |
+| fade_in_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| fade_out_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### FidgetStandParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| preset | [FidgetStandParams.FidgetPreset](#bosdyn.api.spot.FidgetStandParams.FidgetPreset) | |
+| min_gaze_pitch | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| max_gaze_pitch | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| gaze_mean_period | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| gaze_center_cfp | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | |
+| shift_mean_period | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| shift_max_transition_time | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| breath_min_z | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| breath_max_z | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| breath_max_period | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| leg_gesture_mean_period | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| gaze_slew_rate | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| gaze_position_generation_gain | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | |
+| gaze_roll_generation_gain | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### Figure8Params
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| height | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| width | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| beats_per_cycle | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### FrameSnapshotParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| frame_id | [google.protobuf.Int32Value](#google.protobuf.Int32Value) | |
+| fiducial_number | [google.protobuf.Int32Value](#google.protobuf.Int32Value) | |
+| include_front_left_leg | [FrameSnapshotParams.Inclusion](#bosdyn.api.spot.FrameSnapshotParams.Inclusion) | |
+| include_front_right_leg | [FrameSnapshotParams.Inclusion](#bosdyn.api.spot.FrameSnapshotParams.Inclusion) | |
+| include_hind_left_leg | [FrameSnapshotParams.Inclusion](#bosdyn.api.spot.FrameSnapshotParams.Inclusion) | |
+| include_hind_right_leg | [FrameSnapshotParams.Inclusion](#bosdyn.api.spot.FrameSnapshotParams.Inclusion) | |
+| compensated | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | |
+
+
+
+
+
+
+
+
+### FrontUpParams
+
+Parameters specific to FrontUp move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| mirror | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we raise the hind feet instead? |
+
+
+
+
+
+
+
+
+### GotoParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| absolute_position | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | |
+| absolute_yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| step_position_stiffness | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| duty_cycle | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| link_to_next | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we combine with the next move into a smooth trajectory. |
+
+
+
+
+
+
+
+
+### GripperParams
+
+Parameters for open/close of gripper.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| angle | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Angle in radians at which the gripper is open. Note that an angle of 0 radians corresponds to completely closed. |
+| speed | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Speed in m/s at which the gripper should open/close to achieve the desired angle. |
+
+
+
+
+
+
+
+
+### HopParams
+
+Parameters specific to Hop move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| velocity | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | The velocity of the hop gait (X is forward; y is left)[m/s]. |
+| yaw_rate | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How fast the hop gait should turn [rad/s]. |
+| stand_time | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How long the robot should stand in between each hop. |
+
+
+
+
+
+
+
+
+### IndependentColorParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| top_left | [Color](#bosdyn.api.spot.Color) | |
+| upper_mid_left | [Color](#bosdyn.api.spot.Color) | |
+| lower_mid_left | [Color](#bosdyn.api.spot.Color) | |
+| bottom_left | [Color](#bosdyn.api.spot.Color) | |
+| top_right | [Color](#bosdyn.api.spot.Color) | |
+| upper_mid_right | [Color](#bosdyn.api.spot.Color) | |
+| lower_mid_right | [Color](#bosdyn.api.spot.Color) | |
+| bottom_right | [Color](#bosdyn.api.spot.Color) | |
+| fade_in_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| fade_out_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### JumpParams
+
+Parameters for the robot making a jump.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The amount in radians that the robot will turn while in the air. |
+| flight_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The amount of time in slices (beats) that the robot will be in the air. |
+| stance_width | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The distance between the robot's left and right feet [m]. |
+| stance_length | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The distance between the robot's front and back feet [m]. |
+| translation | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | How far the robot should jump [m]. |
+| split_fraction | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How much earlier the first pair of legs should lift off and touch down than the other pair, as a fraction of flight time. |
+| lead_leg_pair | [JumpParams.Lead](#bosdyn.api.spot.JumpParams.Lead) | |
+| yaw_is_absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we turn to a yaw in choreography sequence frame? |
+| translation_is_absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we translate in choreography sequence frame? |
+| absolute_yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The direction the robot should face upon landing relative to pose at the start of the dance. [rad] |
+| absolute_translation | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | Where the robot should land relative to the pose at the start of the dance. [m] |
+| swing_height | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | DEPRECATED as of 3.0.0: The absolute field has been deprecated and split into the yaw_is_absolute and translation_is_absolute fields. It will be moved to 'reserved' in a future release. |
+
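To make the split_fraction semantics concrete, a hypothetical sketch of how the two leg pairs' lift-off times could be staggered — the real scheduling happens inside the robot's controller, so this is purely illustrative:

```python
def pair_liftoff_offsets(flight_time_s: float, split_fraction: float):
    """Return (lead_pair_offset, trail_pair_offset) in seconds.

    The lead pair lifts off immediately; the trailing pair follows after
    `split_fraction` of the flight time. Illustrative only.
    """
    return 0.0, split_fraction * flight_time_s

# 0.5 s of flight with a 0.2 split: trailing pair lifts off 0.1 s later.
lead, trail = pair_liftoff_offsets(flight_time_s=0.5, split_fraction=0.2)
```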
+
+
+
+
+
+
+
+### KneelCircleParams
+
+Parameters specific to the kneel_circles move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| location | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | Location in body frame of the circle center. A typical value for the location is (0.4, 0, -0.5). |
+| beats_per_circle | [google.protobuf.Int32Value](#google.protobuf.Int32Value) | How many beats per circle. One or two are reasonable values. |
+| number_of_circles | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How many circles to perform. Mutually exclusive with beats_per_circle. |
+| offset | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far apart the feet are when circling [m]. |
+| radius | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Size of the circles [m]. |
+| reverse | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Which way to circle. |
+
+
+
+
+
+
+
+
+### KneelLegMove2Params
+
+Parameters specific to KneelLegMove2 move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| left_hip_x | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Joint angles of the front left leg in radians. |
+| left_hip_y | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| left_knee | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| right_hip_x | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Joint angles of the front right leg in radians. |
+| right_hip_y | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| right_knee | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| easing | [Easing](#bosdyn.api.spot.Easing) | How the motion should be paced. |
+| link_to_next | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we combine with the next move into a smooth trajectory. |
+
+
+
+
+
+
+
+
+### KneelLegMoveParams
+
+Parameters specific to KneelLegMove move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| hip_x | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Joint angles of the left front leg in radians. If mirrored, the joints will be flipped for the other leg. |
+| hip_y | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| knee | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| mirror | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | If mirrored is true, the joints will be flipped for the leg on the other side (right vs left) of the body. |
+| easing | [Easing](#bosdyn.api.spot.Easing) | How the motion should be paced. |
+
+
+
+
+
+
+
+
+### Pace2StepParams
+
+Parameters specific to pace translation.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| motion | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | How far to move relative to starting position. [m] |
+| absolute_motion | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | Where to move relative to position at start of script. [m] |
+| motion_is_absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Is motion specified relative to pose at start of dance? |
+| swing_height | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Swing parameters to describe the footstep pattern during the pace translation gait. Note, a zero (or nearly zero) value will be considered as an unspecified parameter. |
+| swing_velocity | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far to turn, described in radians with a positive value representing a turn to the left. |
+| absolute_yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Orientation to turn to, relative to the orientation at the start of the script. [rad] |
+| yaw_is_absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we turn to a yaw in choreography sequence frame? |
+| absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | DEPRECATED as of 3.0.0: The absolute field has been deprecated and split into the yaw_is_absolute and motion_is_absolute fields. It will be moved to 'reserved' in a future release. |
+
+
+
+
+
+
+
+
+### RandomRotateParams
+
+Parameters specific to the RandomRotate move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| amplitude | [EulerZYXValue](#bosdyn.api.spot.EulerZYXValue) | The amplitude [rad] of the rotation in each axis. |
+| speed | [EulerRateZYXValue](#bosdyn.api.spot.EulerRateZYXValue) | The speed [rad/s] of the motion in each axis. |
+| speed_variation | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | The amount of variation allowed in the speed of the random rotations [rad/s]. Note, this must be a positive value. |
+| num_speed_tiers | [google.protobuf.Int32Value](#google.protobuf.Int32Value) | The specified speed values will be split into this many tiers between the bounds of [speed - speed_variation, speed + speed_variation]. A tier (with a specified speed) will then be randomly chosen and performed for each axis. |
+| tier_variation | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How much the output speed can vary from the chosen tiered speed. |
+
+
+
+
+
+
+
+
+### RippleColorParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| main | [Color](#bosdyn.api.spot.Color) | |
+| secondary | [Color](#bosdyn.api.spot.Color) | |
+| pattern | [RippleColorParams.Pattern](#bosdyn.api.spot.RippleColorParams.Pattern) | |
+| light_side | [RippleColorParams.LightSide](#bosdyn.api.spot.RippleColorParams.LightSide) | |
+| increment_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### RotateBodyParams
+
+Parameters for the robot rotating the body.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| rotation | [EulerZYXValue](#bosdyn.api.spot.EulerZYXValue) | The robot will rotate its body to the specified orientation (roll/pitch/yaw). |
+| return_to_start_pose | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | If true, the robot will transition back to the pose it was in before this choreography sequence move began executing; otherwise it will remain in whatever pose it is in after completing the move. |
+
+
+
+
+
+
+
+
+### RunningManParams
+
+Parameters specific to RunningMan move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| velocity | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | |
+| swing_height | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How high to pick up the forward-moving feet [m]. |
+| spread | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far to spread the contralateral pair of feet [m]. |
+| reverse | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we reverse the motion? |
+| pre_move_cycles | [google.protobuf.Int32Value](#google.protobuf.Int32Value) | How many full running man cycles should the robot complete in place before starting to move with the desired velocity. |
+| speed_multiplier | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Do the move at some multiple of the dance cadence. |
+| duty_cycle | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | What fraction of the time to have feet on the ground. |
+| com_height | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How high to hold the center of mass above the ground on average. |
+
+
+
+
+
+
+
+
+### SetColorParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| left_color | [Color](#bosdyn.api.spot.Color) | |
+| right_same_as_left | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | |
+| right_color | [Color](#bosdyn.api.spot.Color) | |
+| fade_in_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| fade_out_slices | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### SideParams
+
+Parameters for moves that can go to either side.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| side | [SideParams.Side](#bosdyn.api.spot.SideParams.Side) | |
+
+
+
+
+
+
+
+
+### StepParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| foot | [Leg](#bosdyn.api.spot.Leg) | Which foot to use (FL = 1, FR = 2, HL = 3, HR = 4). |
+| offset | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | Offset of the foot from its nominal position, in meters. |
+| second_foot | [Leg](#bosdyn.api.spot.Leg) | Should we use a second foot? (None = 0, FL = 1, FR = 2, HL = 3, HR = 4). |
+| swing_waypoint | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | Where should the swing foot go? This vector should be described in a gravity-aligned body frame relative to the centerpoint of the swing. If set to {0,0,0}, uses the default swing path. |
+| swing_height | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Parameters for altering the swing [m]. Note that these will have no effect if swing_waypoint is specified. As well, a zero (or nearly zero) value will be considered as an unspecified parameter. |
+| liftoff_velocity | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | m/s |
+| touchdown_velocity | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | m/s |
+| mirror_x | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we mirror the offset for the second foot? Ignored if second_foot is set to None. |
+| mirror_y | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | |
+| mirror | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | DEPRECATED as of 2.3.0: The mirror field has been deprecated in favor of the more descriptive mirror_x and mirror_y fields. It will be moved to 'reserved' in a future release. |
+| waypoint_dwell | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | What fraction of the swing should be spent near the waypoint. |
+| touch | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we touch the ground and come back rather than stepping to a new place? |
+| touch_offset | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | |
+
+
+
+
+
+
+
+
+### SwayParams
+
+Parameters specific to Sway move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| vertical | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far to move up/down [m]. |
+| horizontal | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far to move left/right [m]. |
+| roll | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How much to roll [rad]. |
+| pivot | [Pivot](#bosdyn.api.spot.Pivot) | What point on the robot's body the swaying should be centered at. For example, should the head move instead of the butt? |
+| style | [SwayParams.SwayStyle](#bosdyn.api.spot.SwayParams.SwayStyle) | What style motion should we use? |
+| pronounced | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How pronounced should the sway-style be? The value is on a scale from [0,1.0]. |
+| hold_zero_axes | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should the robot hold previous values for the vertical, horizontal, and roll axes if the value is left unspecified (value of zero). |
+
+
+
+
+
+
+
+
+### TurnParams
+
+Parameters specific to turning.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far to turn, described in radians with a positive value representing a turn to the left. |
+| absolute_yaw | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Orientation to turn to, relative to the orientation at the start of the script. [rad] |
+| yaw_is_absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Should we turn to a yaw in choreography sequence frame? |
+| swing_height | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Swing parameters to describe the footstep pattern during the turning [height in meters]. Note, a zero (or nearly zero) value will be considered as an unspecified parameter. |
+| swing_velocity | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | Swing parameter to describe the foot's swing velocity during the turning [m/s]. Note, a zero (or nearly zero) value will be considered as an unspecified parameter. |
+| motion | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | How far to move relative to starting position. [m] |
+| absolute_motion | [bosdyn.api.Vec2Value](#bosdyn.api.Vec2Value) | Where to move relative to position at start of script. [m] |
+| motion_is_absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Is motion specified relative to pose at start of dance? |
+| absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | DEPRECATED as of 3.0.0: The absolute field has been deprecated and split into the yaw_is_absolute and motion_is_absolute fields. It will be moved to 'reserved' in a future release. |
+
+
+
+
+
+
+
+
+### TwerkParams
+
+Parameters specific to twerking.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| height | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | How far the robot should twerk in meters. |
+
+
+
+
+
+
+
+
+### WorkspaceArmMoveParams
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| rotation | [EulerZYXValue](#bosdyn.api.spot.EulerZYXValue) | The robot will rotate its body to the specified orientation (roll/pitch/yaw) [rad]. |
+| translation | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | The positional offset to the robot's current location [m]. |
+| absolute | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | Go to an absolute position/orientation? Otherwise, relative to starting pose. |
+| frame | [ArmMoveFrame](#bosdyn.api.spot.ArmMoveFrame) | What frame is the motion specified in. |
+| easing | [Easing](#bosdyn.api.spot.Easing) | How the motion should be paced. |
+| dance_frame_id | [google.protobuf.Int32Value](#google.protobuf.Int32Value) | If we're using the dance frame, which one? |
+
+
+
+
+
+
+
+
+
+
+### ArmMoveFrame
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| ARM_MOVE_FRAME_UNKNOWN | 0 | |
+| ARM_MOVE_FRAME_CENTER_OF_FOOTPRINT | 1 | |
+| ARM_MOVE_FRAME_HAND | 2 | |
+| ARM_MOVE_FRAME_BODY | 3 | |
+| ARM_MOVE_FRAME_SHOULDER | 4 | |
+| ARM_MOVE_FRAME_SHADOW | 5 | |
+| ARM_MOVE_FRAME_DANCE | 6 | |
+
+
+
+
+
+### Easing
+
+Enum to describe the type of easing to perform for the slices at either (or both) the
+beginning and end of a move.
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| EASING_UNKNOWN | 0 | |
+| EASING_LINEAR | 1 | |
+| EASING_QUADRATIC_INPUT | 2 | |
+| EASING_QUADRATIC_OUTPUT | 3 | |
+| EASING_QUADRATIC_IN_OUT | 4 | |
+| EASING_CUBIC_INPUT | 5 | |
+| EASING_CUBIC_OUTPUT | 6 | |
+| EASING_CUBIC_IN_OUT | 7 | |
+| EASING_EXPONENTIAL_INPUT | 8 | |
+| EASING_EXPONENTIAL_OUTPUT | 9 | |
+| EASING_EXPONENTIAL_IN_OUT | 10 | |
+
+
+
+
+
+### FidgetStandParams.FidgetPreset
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| PRESET_UNKNOWN | 0 | |
+| PRESET_CUSTOM | 1 | |
+| PRESET_INTEREST | 2 | |
+| PRESET_PLAYFUL | 3 | |
+| PRESET_FEAR | 4 | |
+| PRESET_NERVOUS | 5 | |
+| PRESET_EXHAUSTED | 6 | |
+
+
+
+
+
+### FrameSnapshotParams.Inclusion
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| INCLUSION_UNKNOWN | 0 | |
+| INCLUSION_IF_STANCE | 1 | |
+| INCLUSION_INCLUDED | 2 | |
+| INCLUSION_EXCLUDED | 3 | |
+
+
+
+
+
+### JumpParams.Lead
+
+If split_fraction is non-zero, which legs to lift first.
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| LEAD_UNKNOWN | 0 | |
+| LEAD_AUTO | 1 | |
+| LEAD_FRONT | 2 | |
+| LEAD_HIND | 3 | |
+| LEAD_LEFT | 4 | |
+| LEAD_RIGHT | 5 | |
+
+
+
+
+
+### LedLight
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| LED_LIGHT_UNKNOWN | 0 | |
+| LED_LIGHT_LEFT1 | 1 | |
+| LED_LIGHT_LEFT2 | 2 | |
+| LED_LIGHT_LEFT3 | 3 | |
+| LED_LIGHT_LEFT4 | 4 | |
+| LED_LIGHT_RIGHT1 | 5 | |
+| LED_LIGHT_RIGHT2 | 6 | |
+| LED_LIGHT_RIGHT3 | 7 | |
+| LED_LIGHT_RIGHT4 | 8 | |
+
+
+
+
+
+### Leg
+
+Enum to describe which leg is being referenced in specific choreography sequence moves.
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| LEG_UNKNOWN | 0 | |
+| LEG_FRONT_LEFT | 1 | |
+| LEG_FRONT_RIGHT | 2 | |
+| LEG_HIND_LEFT | 3 | |
+| LEG_HIND_RIGHT | 4 | |
+| LEG_NO_LEG | -1 | |
+
+
+
+
+
+### Pivot
+
+Enum for the pivot point for certain choreography sequence moves.
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| PIVOT_UNKNOWN | 0 | |
+| PIVOT_FRONT | 1 | |
+| PIVOT_HIND | 2 | |
+| PIVOT_CENTER | 3 | |
+
+
+
+
+
+### RippleColorParams.LightSide
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| LIGHT_SIDE_UNKNOWN | 0 | |
+| LIGHT_SIDE_LEFT | 1 | |
+| LIGHT_SIDE_RIGHT | 2 | |
+| LIGHT_SIDE_BOTH_IN_SEQUENCE | 3 | |
+| LIGHT_SIDE_BOTH_MATCHING | 4 | |
+
+
+
+
+
+### RippleColorParams.Pattern
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| PATTERN_UNKNOWN | 0 | |
+| PATTERN_FLASHING | 1 | |
+| PATTERN_SNAKE | 2 | |
+| PATTERN_ALTERNATE_COLORS | 3 | |
+| PATTERN_FINE_GRAINED_ALTERNATE_COLORS | 4 | |
+
+
+
+
+
+### SideParams.Side
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| SIDE_UNKNOWN | 0 | |
+| SIDE_LEFT | 1 | |
+| SIDE_RIGHT | 2 | |
+
+
+
+
+
+### SwayParams.SwayStyle
+
+The type of motion used by the Sway sequence move.
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| SWAY_STYLE_UNKNOWN | 0 | |
+| SWAY_STYLE_STANDARD | 1 | |
+| SWAY_STYLE_FAST_OUT | 2 | |
+| SWAY_STYLE_FAST_RETURN | 3 | |
+| SWAY_STYLE_SQUARE | 4 | |
+| SWAY_STYLE_SPIKE | 5 | |
+| SWAY_STYLE_PLATEAU | 6 | |
+
+
+
+
+
+
+
+
+
+
+
+
+# spot/choreography_sequence.proto
+
+
+
+
+
+### AnimateArm
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| joint_angles | [ArmJointAngles](#bosdyn.api.spot.ArmJointAngles) | Full arm joint angle specification. |
+| hand_pose | [AnimateArm.HandPose](#bosdyn.api.spot.AnimateArm.HandPose) | The hand position in the animation frame |
+
+
+
+
+
+
+
+
+### AnimateArm.HandPose
+
+An SE3 Pose for the hand where orientation is specified using either a quaternion or
+euler angles
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| position | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | |
+| euler_angles | [EulerZYXValue](#bosdyn.api.spot.EulerZYXValue) | The hand's orientation described with euler angles (yaw, pitch, roll). |
+| quaternion | [bosdyn.api.Quaternion](#bosdyn.api.Quaternion) | The hand's orientation described with a quaternion. |
+
+
+
+
+
+
+
+
+### AnimateBody
+
+The AnimateBody keyframe describes the body's position and orientation. At least
+one dimension of the body must be specified.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| body_pos | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | The body position in the animation frame. |
+| com_pos | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | The body's center of mass position in the animation frame. |
+| euler_angles | [EulerZYXValue](#bosdyn.api.spot.EulerZYXValue) | The body's orientation described with euler angles (yaw, pitch, roll). |
+| quaternion | [bosdyn.api.Quaternion](#bosdyn.api.Quaternion) | The body's orientation described with a quaternion. |
+
+
+
+
+
+
+
+
+### AnimateGripper
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| gripper_angle | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### AnimateLegs
+
+The AnimateLegs keyframe describes each leg using either joint angles or the foot position.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| fl | [AnimateSingleLeg](#bosdyn.api.spot.AnimateSingleLeg) | Front left leg. |
+| fr | [AnimateSingleLeg](#bosdyn.api.spot.AnimateSingleLeg) | Front right leg. |
+| hl | [AnimateSingleLeg](#bosdyn.api.spot.AnimateSingleLeg) | Hind left leg. |
+| hr | [AnimateSingleLeg](#bosdyn.api.spot.AnimateSingleLeg) | Hind right leg. |
+
+
+
+
+
+
+
+
+### AnimateSingleLeg
+
+A single leg keyframe to describe the leg motion.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| joint_angles | [LegJointAngles](#bosdyn.api.spot.LegJointAngles) | Full leg joint angle specification. |
+| foot_pos | [bosdyn.api.Vec3Value](#bosdyn.api.Vec3Value) | The foot position of the leg in the animation frame. |
+| stance | [google.protobuf.BoolValue](#google.protobuf.BoolValue) | If true, the foot is in contact with the ground and standing. If false, the foot is in swing. If unset, the contact will be inferred from the leg joint angles or foot position. |
+
+
+
+
+
+
+
+
+### Animation
+
+Represents an animated dance move that can be used within choreographies after uploading.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| name | [string](#string) | The name of the animated move, which is how it will be referenced in choreographies. |
+| animation_keyframes | [AnimationKeyframe](#bosdyn.api.spot.AnimationKeyframe) | The animated move is composed of animation keyframes, which specify the duration of each frame. The keyframe describes the position of the body/arms/gripper. |
+| controls_arm | [bool](#bool) | Indicators as to which parts of the robot the move controls. |
+| controls_legs | [bool](#bool) | |
+| controls_body | [bool](#bool) | |
+| controls_gripper | [bool](#bool) | |
+| track_swing_trajectories | [bool](#bool) | Track animated swing trajectories. Otherwise, takes standard swings between animated liftoff and touchdown locations. |
+| assume_zero_roll_and_pitch | [bool](#bool) | For moves that control the legs, but not the body. If legs are specified by joint angles, we still need body roll and pitch to know the foot height. If `assume_zero_roll_and_pitch` is true, they needn't be explicitly specified. |
+| arm_playback | [Animation.ArmPlayback](#bosdyn.api.spot.Animation.ArmPlayback) | |
+| bpm | [double](#double) | Optional BPM (beats per minute) at which the animation works well. |
+| retime_to_integer_slices | [bool](#bool) | When true, rescales the time of each keyframe slightly such that the move takes an integer number of slices. If false/absent, the move will be padded or truncated slightly to fit an integer number of slices. |
+| minimum_parameters | [AnimateParams](#bosdyn.api.spot.AnimateParams) | The different parameters (minimum, default, and maximum) that can change the move. The min/max bounds are used by Choreographer to constrain the parameter widget, and will also be used when uploading a ChoreographySequence containing the animation to validate that the animated move is allowed. |
+| default_parameters | [AnimateParams](#bosdyn.api.spot.AnimateParams) | |
+| maximum_parameters | [AnimateParams](#bosdyn.api.spot.AnimateParams) | |
+| truncatable | [bool](#bool) | Indicates if the animated moves can be shortened (the animated move will be cut off). Not supported for leg moves. |
+| extendable | [bool](#bool) | Indicates if the animated moves can be stretched (animated move will loop). Not supported for leg moves. |
+| neutral_start | [bool](#bool) | Indicates if the move should start in a neutral stand position. |
+| precise_steps | [bool](#bool) | Step exactly at the animated locations, even at the expense of balance. By default, the optimizer may adjust step locations slightly. |
+| precise_timing | [bool](#bool) | Time everything exactly as animated, even at the expense of balance. By default, the optimizer may adjust timing slightly. |
+| arm_required | [bool](#bool) | If set true, this animation will not run unless the robot has an arm. |
+| arm_prohibited | [bool](#bool) | If set true, this animation will not run unless the robot has no arm. |
+| no_looping | [bool](#bool) | If the animation completes before the move's duration, freeze rather than looping. |
+
+
+
+
+
+
+
+
+### AnimationKeyframe
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| time | [double](#double) | Time from the start of the animation for this frame. |
+| gripper | [AnimateGripper](#bosdyn.api.spot.AnimateGripper) | Different body parts the animated move can control. It can control multiple body parts at once. |
+| arm | [AnimateArm](#bosdyn.api.spot.AnimateArm) | |
+| body | [AnimateBody](#bosdyn.api.spot.AnimateBody) | |
+| legs | [AnimateLegs](#bosdyn.api.spot.AnimateLegs) | |
+
+
+
+
+
+
+
+
+### ArmJointAngles
+
+The AnimateArm keyframe describes the joint angles of the arm joints in radians.
+Any joint not specified will hold the previous angle it was at when the keyframe
+begins. At least one arm joint must be specified.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| shoulder_0 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| shoulder_1 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| elbow_0 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| elbow_1 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| wrist_0 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+| wrist_1 | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### ChoreographerDisplayInfo
+
+Information for the Choreographer to display.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| color | [ChoreographerDisplayInfo.Color](#bosdyn.api.spot.ChoreographerDisplayInfo.Color) | |
+| markers | [int32](#int32) | For the GUI, these are marked events in steps. For example if the move puts a foot down, the mark might be exactly when the foot is placed on the ground, relative to the start of the move. |
+| description | [string](#string) | Textual description to be displayed in the GUI. |
+| image | [string](#string) | Image path (local to the UI) to display as an icon. May be an animated gif. |
+| category | [ChoreographerDisplayInfo.Category](#bosdyn.api.spot.ChoreographerDisplayInfo.Category) | |
+
+
+
+
+
+
+
+
+### ChoreographerDisplayInfo.Color
+
+Color of the object. Set it to override the default category color.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| r | [int32](#int32) | RGB values for color ranging from [0,255]. |
+| g | [int32](#int32) | |
+| b | [int32](#int32) | |
+| a | [double](#double) | Alpha value for the coloration ranges from [0,1]. |
+
+
+
+
+
+
+
+
+### ChoreographerSave
+
+Describes the metadata and information used only by the Choreographer GUI, which is
+not used in the API.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| choreography_sequence | [ChoreographySequence](#bosdyn.api.spot.ChoreographySequence) | The main ChoreographySequence that makes up the dance and is sent to the robot. |
+| music_file | [string](#string) | If specified, this is the UI-local path of the music to load. |
+| music_start_slice | [double](#double) | UI specific member that describes exactly when the music should start, in slices. This is for time sync issues. |
+| choreography_start_slice | [double](#double) | The start slice for the choreographer save. |
+
+
+
+
+
+
+
+
+### ChoreographySequence
+
+Represents a particular choreography sequence, made up of MoveParams.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| name | [string](#string) | Display name or file name associated with the choreography sequence. |
+| slices_per_minute | [double](#double) | Number of slices per minute in the choreography sequence. Typically a slice will correspond to 1/4 of a beat. |
+| moves | [MoveParams](#bosdyn.api.spot.MoveParams) | All of the moves in this choreography sequence. |
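+
+Since a slice typically corresponds to 1/4 of a beat, converting between a song's BPM, slice counts, and wall-clock time is simple arithmetic. A minimal sketch (the 4-slices-per-beat ratio is the typical value noted above, not a guarantee; real scripts set `slices_per_minute` explicitly):
+
+```python
+# Convert between BPM, slices, and seconds for a choreography sequence.
+SLICES_PER_BEAT = 4  # typical: one slice is 1/4 of a beat
+
+def slices_per_minute(bpm):
+    return bpm * SLICES_PER_BEAT
+
+def slice_duration_seconds(bpm):
+    # Seconds per slice at the given tempo.
+    return 60.0 / slices_per_minute(bpm)
+
+def move_start_seconds(start_slice, bpm):
+    # Wall-clock offset of a move whose MoveParams.start_slice is start_slice.
+    return start_slice * slice_duration_seconds(bpm)
+```
+
+For example, at 120 BPM there are 480 slices per minute, so each slice lasts 0.125 s and a move starting at slice 8 begins one second into the sequence.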
+
+
+
+
+
+
+
+
+### ChoreographyStateLog
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| key_frames | [LoggedStateKeyFrame](#bosdyn.api.spot.LoggedStateKeyFrame) | A set of key frames recorded at a high rate. The key frames can be for the duration of an executing choreography or for the duration of a manual recorded log (triggered by the StartRecordingState and StopRecordingState RPCs). The specific set of keyframes is specified by the LogType when requesting to download the data. |
+
+
+
+
+
+
+
+
+### DownloadRobotStateLogRequest
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.RequestHeader](#bosdyn.api.RequestHeader) | Common request header |
+| log_type | [DownloadRobotStateLogRequest.LogType](#bosdyn.api.spot.DownloadRobotStateLogRequest.LogType) | Which data should we download. |
+
+
+
+
+
+
+
+
+### DownloadRobotStateLogResponse
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.ResponseHeader](#bosdyn.api.ResponseHeader) | Common response header |
+| status | [DownloadRobotStateLogResponse.Status](#bosdyn.api.spot.DownloadRobotStateLogResponse.Status) | Return status for the request. |
+| chunk | [bosdyn.api.DataChunk](#bosdyn.api.DataChunk) | Chunk of data to download. Responses are sent in sequence until the data chunk is complete. After receiving all chunks, concatenate them into a single byte string. Then, deserialize the byte string into a ChoreographyStateLog object. |
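+
+The chunked download described above reduces to joining the chunk payloads in arrival order and parsing the result. A sketch of the client-side reassembly (`resp.chunk.data` follows the DataChunk message; the `ChoreographyStateLog` parsing step in the comment assumes the generated protobuf classes are available):
+
+```python
+def assemble_log_bytes(responses):
+    """Concatenate DataChunk payloads from a stream of
+    DownloadRobotStateLogResponse messages, in arrival order."""
+    return b"".join(resp.chunk.data for resp in responses)
+
+# With the generated protos available, the reassembled bytes parse as:
+#   log = choreography_sequence_pb2.ChoreographyStateLog()
+#   log.ParseFromString(assemble_log_bytes(responses))
+```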
+
+
+
+
+
+
+
+
+### ExecuteChoreographyRequest
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.RequestHeader](#bosdyn.api.RequestHeader) | Common request header |
+| choreography_sequence_name | [string](#string) | The string name of the ChoreographySequence to use. |
+| start_time | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | The absolute time to start the choreography at. This should be in the robot's clock so we can synchronize music playing and the robot's choreography. |
+| choreography_starting_slice | [double](#double) | The slice (beats/sub-beats) that the choreography should begin execution at. |
+| lease | [bosdyn.api.Lease](#bosdyn.api.Lease) | The Lease to show ownership of the robot body. |
+
+
+
+
+
+
+
+
+### ExecuteChoreographyResponse
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.ResponseHeader](#bosdyn.api.ResponseHeader) | Common response header |
+| lease_use_result | [bosdyn.api.LeaseUseResult](#bosdyn.api.LeaseUseResult) | |
+| status | [ExecuteChoreographyResponse.Status](#bosdyn.api.spot.ExecuteChoreographyResponse.Status) | |
+
+
+
+
+
+
+
+
+### LegJointAngles
+
+Description of each leg joint angle (hip x/y and knee) in radians.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| hip_x | [double](#double) | |
+| hip_y | [double](#double) | |
+| knee | [double](#double) | |
+
+
+
+
+
+
+
+
+### ListAllMovesRequest
+
+Request a list of all possible moves and the associated parameters (min/max values).
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.RequestHeader](#bosdyn.api.RequestHeader) | Common request header |
+
+
+
+
+
+
+
+
+### ListAllMovesResponse
+
+Response for ListAllMoves that defines the list of available moves and their parameter types.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.ResponseHeader](#bosdyn.api.ResponseHeader) | Common response header |
+| moves | [MoveInfo](#bosdyn.api.spot.MoveInfo) | List of moves that the robot knows about. |
+| move_param_config | [string](#string) | A copy of the MoveParamsConfig.txt that the robot is using. |
+
+
+
+
+
+
+
+
+### LoggedFootContacts
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| fr_contact | [bool](#bool) | Boolean indicating whether or not the robot's foot is in contact with the ground. |
+| fl_contact | [bool](#bool) | |
+| hr_contact | [bool](#bool) | |
+| hl_contact | [bool](#bool) | |
+
+
+
+
+
+
+
+
+### LoggedJoints
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| fl | [LegJointAngles](#bosdyn.api.spot.LegJointAngles) | front left leg joint angles. |
+| fr | [LegJointAngles](#bosdyn.api.spot.LegJointAngles) | front right leg joint angles. |
+| hl | [LegJointAngles](#bosdyn.api.spot.LegJointAngles) | hind left leg joint angles. |
+| hr | [LegJointAngles](#bosdyn.api.spot.LegJointAngles) | hind right leg joint angles. |
+| arm | [ArmJointAngles](#bosdyn.api.spot.ArmJointAngles) | Full set of joint angles for the arm and gripper. |
+| gripper_angle | [google.protobuf.DoubleValue](#google.protobuf.DoubleValue) | |
+
+
+
+
+
+
+
+
+### LoggedStateKeyFrame
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| joint_angles | [LoggedJoints](#bosdyn.api.spot.LoggedJoints) | Full set of joint angles for the robot. |
+| foot_contact_state | [LoggedFootContacts](#bosdyn.api.spot.LoggedFootContacts) | Foot contacts for the robot. |
+| animation_tform_body | [bosdyn.api.SE3Pose](#bosdyn.api.SE3Pose) | The current pose of the robot body in animation frame. The animation frame is defined based on the robot's footprint when the log first started recording. |
+| timestamp | [google.protobuf.Timestamp](#google.protobuf.Timestamp) | The timestamp (in robot time) for the key frame. |
+
+
+
+
+
+
+
+
+### MoveInfo
+
+Defines properties of a choreography move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| name | [string](#string) | Unique ID of the move type. |
+| move_length_slices | [int32](#int32) | The duration of this move in slices (usually 1/4 beats). |
+| move_length_time | [double](#double) | The duration of this move in seconds. If specified, overrides move_length_slices. |
+| is_extendable | [bool](#bool) | If true, the duration may be adjusted from the default specified by move_length_slices or move_length_time. |
+| min_move_length_slices | [int32](#int32) | Bounds on the duration in slices (usually 1/4 beats). These apply to extendable moves, but may also override move_length_time for some BPM. |
+| max_move_length_slices | [int32](#int32) | |
+| min_time | [double](#double) | Bounds on the duration in time. These apply to extendable moves, but may also override move_length_slices for some BPM. |
+| max_time | [double](#double) | |
+| entrance_states | [MoveInfo.TransitionState](#bosdyn.api.spot.MoveInfo.TransitionState) | The admissible states the robot can be in currently for this move to execute. |
+| exit_state | [MoveInfo.TransitionState](#bosdyn.api.spot.MoveInfo.TransitionState) | The state of the robot after the move is complete. |
+| controls_arm | [bool](#bool) | Indicators as to which parts of the robot the move controls. |
+| controls_legs | [bool](#bool) | |
+| controls_body | [bool](#bool) | |
+| controls_gripper | [bool](#bool) | |
+| controls_lights | [bool](#bool) | |
+| controls_annotations | [bool](#bool) | |
+| display | [ChoreographerDisplayInfo](#bosdyn.api.spot.ChoreographerDisplayInfo) | Information for the GUI tool to visualize the sequence move info. |
+| animated_move_generated_id | [google.protobuf.StringValue](#google.protobuf.StringValue) | Unique ID for the animated moves. This is sent with the UploadAnimatedMove request and used to track which version of the animated move is currently saved on robot. The ID can be unset, meaning the RPC which uploaded the animation did not provide an identifying hash. |
+
+
+
+
+
+
+
+
+### MoveParams
+
+Defines varying parameters for a particular instance of a move.
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| type | [string](#string) | Unique ID of the move type that these params are associated with. |
+| start_slice | [int32](#int32) | How many slices since the start of the song this move should be executed at. |
+| requested_slices | [int32](#int32) | The number of slices (beats/sub-beats) that this move is supposed to last for. If the move was extendable, then this corresponds to the number of slices that the user requested. |
+| jump_params | [JumpParams](#bosdyn.api.spot.JumpParams) | |
+| rotate_body_params | [RotateBodyParams](#bosdyn.api.spot.RotateBodyParams) | |
+| step_params | [StepParams](#bosdyn.api.spot.StepParams) | |
+| butt_circle_params | [ButtCircleParams](#bosdyn.api.spot.ButtCircleParams) | |
+| turn_params | [TurnParams](#bosdyn.api.spot.TurnParams) | |
+| pace_2step_params | [Pace2StepParams](#bosdyn.api.spot.Pace2StepParams) | |
+| twerk_params | [TwerkParams](#bosdyn.api.spot.TwerkParams) | |
+| chicken_head_params | [ChickenHeadParams](#bosdyn.api.spot.ChickenHeadParams) | |
+| clap_params | [ClapParams](#bosdyn.api.spot.ClapParams) | |
+| front_up_params | [FrontUpParams](#bosdyn.api.spot.FrontUpParams) | |
+| sway_params | [SwayParams](#bosdyn.api.spot.SwayParams) | |
+| body_hold_params | [BodyHoldParams](#bosdyn.api.spot.BodyHoldParams) | |
+| arm_move_params | [ArmMoveParams](#bosdyn.api.spot.ArmMoveParams) | |
+| kneel_leg_move_params | [KneelLegMoveParams](#bosdyn.api.spot.KneelLegMoveParams) | |
+| running_man_params | [RunningManParams](#bosdyn.api.spot.RunningManParams) | |
+| kneel_circle_params | [KneelCircleParams](#bosdyn.api.spot.KneelCircleParams) | |
+| gripper_params | [GripperParams](#bosdyn.api.spot.GripperParams) | |
+| hop_params | [HopParams](#bosdyn.api.spot.HopParams) | |
+| random_rotate_params | [RandomRotateParams](#bosdyn.api.spot.RandomRotateParams) | |
+| crawl_params | [CrawlParams](#bosdyn.api.spot.CrawlParams) | |
+| side_params | [SideParams](#bosdyn.api.spot.SideParams) | |
+| bourree_params | [BourreeParams](#bosdyn.api.spot.BourreeParams) | |
+| workspace_arm_move_params | [WorkspaceArmMoveParams](#bosdyn.api.spot.WorkspaceArmMoveParams) | |
+| figure8_params | [Figure8Params](#bosdyn.api.spot.Figure8Params) | |
+| kneel_leg_move2_params | [KneelLegMove2Params](#bosdyn.api.spot.KneelLegMove2Params) | |
+| fidget_stand_params | [FidgetStandParams](#bosdyn.api.spot.FidgetStandParams) | |
+| goto_params | [GotoParams](#bosdyn.api.spot.GotoParams) | |
+| frame_snapshot_params | [FrameSnapshotParams](#bosdyn.api.spot.FrameSnapshotParams) | |
+| set_color_params | [SetColorParams](#bosdyn.api.spot.SetColorParams) | |
+| ripple_color_params | [RippleColorParams](#bosdyn.api.spot.RippleColorParams) | |
+| fade_color_params | [FadeColorParams](#bosdyn.api.spot.FadeColorParams) | |
+| independent_color_params | [IndependentColorParams](#bosdyn.api.spot.IndependentColorParams) | |
+| animate_params | [AnimateParams](#bosdyn.api.spot.AnimateParams) | |
+
+
+
+
+
+
+
+
+### StartRecordingStateRequest
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.RequestHeader](#bosdyn.api.RequestHeader) | Common request header |
+| continue_recording_duration | [google.protobuf.Duration](#google.protobuf.Duration) | How long should the robot record for if no stop RPC is sent. A recording session can be extended by setting the recording_session_id below to a non-zero value matching the ID for the current recording session. For both start and continuation commands, the service will stop recording at end_time = (system time when the Start/Continue RPC is received) + (continue_recording_duration), unless another continuation request updates this end time. The robot has an internal maximum recording time of 5 minutes for the complete session log. |
+| recording_session_id | [uint64](#uint64) | Provide the unique identifier of the recording session to extend the recording end time for. If the recording_session_id is 0, then it will create a new session and the robot will clear the recorded robot state buffer and restart recording. If this is a continuation of an existing recording session, then the robot will continue to record until the specified end time. |
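+
+The end-time rule above can be sketched as follows. The 5-minute cap is the documented internal maximum for the complete session log; the function and argument names here are illustrative, not SDK API:
+
+```python
+from datetime import datetime, timedelta
+
+MAX_SESSION = timedelta(minutes=5)  # documented internal recording maximum
+
+def recording_end_time(rpc_received_at, continue_recording_duration, session_elapsed):
+    """End time after a start/continue request: the requested end time,
+    capped so the complete session never exceeds MAX_SESSION."""
+    requested = rpc_received_at + continue_recording_duration
+    hard_cap = rpc_received_at + (MAX_SESSION - session_elapsed)
+    return min(requested, hard_cap)
+```
+
+Each continuation request with a matching recording_session_id re-evaluates this end time from the moment the RPC is received.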
+
+
+
+
+
+
+
+
+### StartRecordingStateResponse
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.ResponseHeader](#bosdyn.api.ResponseHeader) | Common response header |
+| status | [StartRecordingStateResponse.Status](#bosdyn.api.spot.StartRecordingStateResponse.Status) | |
+| recording_session_id | [uint64](#uint64) | Unique identifier for the current recording session |
+
+
+
+
+
+
+
+
+### StopRecordingStateRequest
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.RequestHeader](#bosdyn.api.RequestHeader) | Common request header |
+
+
+
+
+
+
+
+
+### StopRecordingStateResponse
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.ResponseHeader](#bosdyn.api.ResponseHeader) | Common response header |
+
+
+
+
+
+
+
+
+### UploadAnimatedMoveRequest
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.RequestHeader](#bosdyn.api.RequestHeader) | Common request header |
+| animated_move_generated_id | [google.protobuf.StringValue](#google.protobuf.StringValue) | Unique ID for the animated moves. This will be automatically generated by the client and is used to uniquely identify the entire animation by creating a hash from the Animation protobuf message after serialization. The ID will be conveyed within the MoveInfo protobuf message in the ListAllMoves RPC. This ID allows the choreography client to only reupload animations that have changed or do not exist on robot already. |
+| animated_move | [Animation](#bosdyn.api.spot.Animation) | AnimatedMove to upload to the robot and create a dance move from. |
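+
+The generated ID is a client-side hash of the serialized Animation message, so an unchanged animation yields the same ID and can skip re-upload. A sketch of the idea (SHA-1 is illustrative; the description above only specifies "a hash" of the serialized protobuf, and `needs_upload` is a hypothetical helper):
+
+```python
+import hashlib
+
+def generated_id(serialized_animation: bytes) -> str:
+    """Hash the serialized Animation protobuf; identical bytes
+    always produce the same ID."""
+    return hashlib.sha1(serialized_animation).hexdigest()
+
+def needs_upload(local_animation_bytes: bytes, ids_on_robot: set) -> bool:
+    # Compare against IDs reported by ListAllMoves
+    # (MoveInfo.animated_move_generated_id) to avoid re-uploading.
+    return generated_id(local_animation_bytes) not in ids_on_robot
+```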
+
+
+
+
+
+
+
+
+### UploadAnimatedMoveResponse
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.ResponseHeader](#bosdyn.api.ResponseHeader) | Common response header. |
+| status | [UploadAnimatedMoveResponse.Status](#bosdyn.api.spot.UploadAnimatedMoveResponse.Status) | |
+| warnings | [string](#string) | If the uploaded animated move is invalid (will throw a STATUS_ANIMATION_VALIDATION_FAILED), then warning messages describing the failure cases will be populated here to indicate which parts of the animated move failed. Note: there could be some warning messages even when an animation is marked as ok. |
+
+
+
+
+
+
+
+
+### UploadChoreographyRequest
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.RequestHeader](#bosdyn.api.RequestHeader) | Common request header |
+| choreography_sequence | [ChoreographySequence](#bosdyn.api.spot.ChoreographySequence) | ChoreographySequence to upload and store in memory |
+| non_strict_parsing | [bool](#bool) | Should we run a script that has correctable errors? If true, the service will fix any correctable errors and run the corrected choreography script. If false, the service will reject a choreography script that has any errors. |
+
+
+
+
+
+
+
+
+### UploadChoreographyResponse
+
+
+
+| Field | Type | Description |
+| ----- | ---- | ----------- |
+| header | [bosdyn.api.ResponseHeader](#bosdyn.api.ResponseHeader) | Common response header. If the dance upload is invalid, the header INVALID request error will be set, which means that the choreography did not respect bounds of the parameters or has other attributes missing or incorrect. |
+| warnings | [string](#string) | If the uploaded choreography is invalid (will throw a header InvalidRequest status), then certain warning messages will be populated here to indicate which choreography moves or parameters violated constraints of the robot. |
+
+
+
+
+
+
+
+
+
+
+### Animation.ArmPlayback
+
+Mode for hand trajectory playback
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| ARM_PLAYBACK_DEFAULT | 0 | Playback as specified. Arm animations specified with joint angles playback in jointspace and arm animations specified as hand poses playback in workspace. |
+| ARM_PLAYBACK_JOINTSPACE | 1 | Playback in jointspace. Arm animation will be most consistent relative to the body |
+| ARM_PLAYBACK_WORKSPACE | 2 | Playback in workspace. Hand pose animation will be most consistent relative to the current footprint. Reference frame is animation frame. |
+| ARM_PLAYBACK_WORKSPACE_DANCE_FRAME | 3 | Playback in workspace with poses relative to the dance frame. Hand pose animation will be most consistent relative to a fixed point in the world. |
+
+
+
+
+
+### ChoreographerDisplayInfo.Category
+
+Move Category affects the grouping in the choreographer list view, as well as the color it's
+displayed with.
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| CATEGORY_UNKNOWN | 0 | |
+| CATEGORY_BODY | 1 | |
+| CATEGORY_STEP | 2 | |
+| CATEGORY_DYNAMIC | 3 | |
+| CATEGORY_TRANSITION | 4 | |
+| CATEGORY_KNEEL | 5 | |
+| CATEGORY_ARM | 6 | |
+| CATEGORY_ANIMATION | 7 | |
+| CATEGORY_MPC | 8 | |
+| CATEGORY_LIGHTS | 9 | |
+| CATEGORY_ANNOTATIONS | 10 | |
+
+
+
+
+
+### DownloadRobotStateLogRequest.LogType
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| LOG_TYPE_UNKNOWN | 0 | Unknown. Do not use. |
+| LOG_TYPE_MANUAL | 1 | The robot state information recorded from the time of the manual start RPC (StartRecordingState) to either {the time of the manual stop RPC (StopRecordingState), the time of the download logs RPC, or the time of the internal service's buffer filling up}. |
+| LOG_TYPE_LAST_CHOREOGRAPHY | 2 | The robot will automatically record robot state information for the entire duration of an executing choreography in addition to any manual logging. This log type will download this information for the last completed choreography. |
+
+
+
+
+
+### DownloadRobotStateLogResponse.Status
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| STATUS_UNKNOWN | 0 | Status unknown. Do not use. |
+| STATUS_OK | 1 | The log data downloaded successfully and is complete. |
+| STATUS_NO_RECORDED_INFORMATION | 2 | Error where there is no robot state information logged in the choreography service. |
+| STATUS_INCOMPLETE_DATA | 3 | Error where the complete duration of the recorded session caused the service's recording buffer to fill up. When full, the robot will stop recording but preserve whatever was recorded until that point. The robot has an internal maximum recording time of 5 minutes. The data streamed in this response will go from the start time until the time the buffer was filled. |
+
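The DownloadRobotStateLog RPC streams its result as a sequence of data chunks; the client concatenates the chunk payloads in arrival order and then deserializes the combined bytes. A minimal sketch of that reassembly step in Python, where `FakeChunk` and the sample byte payloads are invented stand-ins for the real `DataChunk` protobuf messages (the final `ChoreographyStateLog.FromString` parse is omitted):

```python
# Sketch: reassemble a streamed DownloadRobotStateLog response.
# Each streamed response carries one chunk of serialized log data;
# the client concatenates chunks in order before deserializing.
# FakeChunk stands in for the real DataChunk protobuf message.

class FakeChunk:
    def __init__(self, data):
        self.data = data

def assemble_log_bytes(chunks):
    """Concatenate chunk payloads in arrival order into one byte string.

    With the real SDK, the result would then be parsed with
    ChoreographyStateLog.FromString(raw_bytes).
    """
    return b"".join(chunk.data for chunk in chunks)

# Simulated three-chunk stream.
stream = [FakeChunk(b"\x0a\x10"), FakeChunk(b"keyframe-bytes"), FakeChunk(b"\x18\x01")]
raw_bytes = assemble_log_bytes(stream)
```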
+
+
+
+
+### ExecuteChoreographyResponse.Status
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| STATUS_UNKNOWN | 0 | |
+| STATUS_OK | 1 | |
+| STATUS_INVALID_UPLOADED_CHOREOGRAPHY | 2 | |
+| STATUS_ROBOT_COMMAND_ISSUES | 3 | |
+| STATUS_LEASE_ERROR | 4 | |
+
+
+
+
+
+### MoveInfo.TransitionState
+
+The state that the robot is in at the start or end of a move.
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| TRANSITION_STATE_UNKNOWN | 0 | Unknown or unset state. |
+| TRANSITION_STATE_STAND | 1 | The robot is in a normal (standing) state. |
+| TRANSITION_STATE_KNEEL | 2 | The robot is kneeling down. |
+| TRANSITION_STATE_SIT | 3 | The robot is sitting. |
+| TRANSITION_STATE_SPRAWL | 4 | The robot requires a self-right. |
+
+
+
+
+
+### StartRecordingStateResponse.Status
+
+The status for the start recording request.
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| STATUS_UNKNOWN | 0 | Status unknown; do not use. |
+| STATUS_OK | 1 | The request succeeded and recording has either started or, if a session_id was provided, continued with an extended duration. |
+| STATUS_UNKNOWN_RECORDING_SESSION_ID | 2 | The provided recording_session_id is unknown: it must either be 0 (start a new recording log) or it can match the current recording session id returned by the most recent start recording request. |
+| STATUS_RECORDING_BUFFER_FULL | 3 | The Choreography Service's internal buffer is full. It will record for a maximum of 5 minutes. It will stop recording, but preserve whatever was recorded until that point. |
+
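The session-id rules in the table reduce to: an id of 0 starts a fresh log, a matching id extends the current session, and anything else is rejected. A toy Python model of just that bookkeeping (the `RecordingService` class is illustrative, not part of the SDK, and ignores durations and the 5-minute buffer limit):

```python
import itertools

# Status values mirroring StartRecordingStateResponse.Status above.
STATUS_OK = 1
STATUS_UNKNOWN_RECORDING_SESSION_ID = 2

class RecordingService:
    """Toy model of StartRecordingState session-id handling (illustrative only)."""

    def __init__(self):
        self._next_id = itertools.count(1)
        self._current = None  # no active session yet

    def start_recording(self, session_id):
        if session_id == 0:
            # New session: clear the buffer and hand out a fresh id.
            self._current = next(self._next_id)
            return STATUS_OK, self._current
        if session_id == self._current:
            # Continuation: extend the existing session's end time.
            return STATUS_OK, self._current
        # Unknown id: must be 0 or the current session's id.
        return STATUS_UNKNOWN_RECORDING_SESSION_ID, self._current
```

Calling `start_recording(0)` again models clearing the buffer and starting a new session, mirroring the documented behavior.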
+
+
+
+
+### UploadAnimatedMoveResponse.Status
+
+
+
+| Name | Number | Description |
+| ---- | ------ | ----------- |
+| STATUS_UNKNOWN | 0 | Do not use. |
+| STATUS_OK | 1 | Uploading + parsing the animated move succeeded. |
+| STATUS_ANIMATION_VALIDATION_FAILED | 2 | The animated move is considered invalid; see the warnings. |
+
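Because warnings can accompany even a successful upload, a client should surface them regardless of status. A small illustrative helper sketching that handling (the function name and status constants are written for this example, not SDK API):

```python
# Status values mirroring UploadAnimatedMoveResponse.Status above.
STATUS_OK = 1
STATUS_ANIMATION_VALIDATION_FAILED = 2

def upload_succeeded(status, warnings):
    """Return True when the uploaded animation is usable.

    Warnings may be present even when the status is OK, so they are
    reported in both cases; only a validation failure is fatal.
    """
    for warning in warnings:
        print("animation warning:", warning)
    return status == STATUS_OK
```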
+
+
+
+
+
+
+
+
+
+
+
+# spot/choreography_service.proto
+
+
+
+
+
+
+
+
+
+
+
+### ChoreographyService
+
+
+
+| Method Name | Request Type | Response Type | Description |
+| ----------- | ------------ | ------------- | ------------|
+| ListAllMoves | [ListAllMovesRequest](#bosdyn.api.spot.ListAllMovesRequest) | [ListAllMovesResponse](#bosdyn.api.spot.ListAllMovesResponse) | List the available dance moves and their parameter information. |
+| UploadChoreography | [UploadChoreographyRequest](#bosdyn.api.spot.UploadChoreographyRequest) | [UploadChoreographyResponse](#bosdyn.api.spot.UploadChoreographyResponse) | Upload a dance to the robot. |
+| UploadAnimatedMove | [UploadAnimatedMoveRequest](#bosdyn.api.spot.UploadAnimatedMoveRequest) | [UploadAnimatedMoveResponse](#bosdyn.api.spot.UploadAnimatedMoveResponse) | Upload an animation to the robot. |
+| ExecuteChoreography | [ExecuteChoreographyRequest](#bosdyn.api.spot.ExecuteChoreographyRequest) | [ExecuteChoreographyResponse](#bosdyn.api.spot.ExecuteChoreographyResponse) | Execute the uploaded dance. |
+| StartRecordingState | [StartRecordingStateRequest](#bosdyn.api.spot.StartRecordingStateRequest) | [StartRecordingStateResponse](#bosdyn.api.spot.StartRecordingStateResponse) | Manually start (or continue) recording the robot state. |
+| StopRecordingState | [StopRecordingStateRequest](#bosdyn.api.spot.StopRecordingStateRequest) | [StopRecordingStateResponse](#bosdyn.api.spot.StopRecordingStateResponse) | Manually stop recording the robot state. |
+| DownloadRobotStateLog | [DownloadRobotStateLogRequest](#bosdyn.api.spot.DownloadRobotStateLogRequest) | [DownloadRobotStateLogResponse](#bosdyn.api.spot.DownloadRobotStateLogResponse) stream | Download log of the latest recorded robot state information. |
+
+
+
+
+
+
+# Standard Types
+
+| .proto Type | Notes | C++ | Java | Python | Go | C# | PHP | Ruby |
+| ----------- | ----- | --- | ---- | ------ | -- | -- | --- | ---- |
+| double | | double | double | float | float64 | double | float | Float |
+| float | | float | float | float | float32 | float | float | Float |
+| int32 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint32 instead. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
+| int64 | Uses variable-length encoding. Inefficient for encoding negative numbers – if your field is likely to have negative values, use sint64 instead. | int64 | long | int/long | int64 | long | integer/string | Bignum |
+| uint32 | Uses variable-length encoding. | uint32 | int | int/long | uint32 | uint | integer | Bignum or Fixnum (as required) |
+| uint64 | Uses variable-length encoding. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum or Fixnum (as required) |
+| sint32 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int32s. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
+| sint64 | Uses variable-length encoding. Signed int value. These more efficiently encode negative numbers than regular int64s. | int64 | long | int/long | int64 | long | integer/string | Bignum |
+| fixed32 | Always four bytes. More efficient than uint32 if values are often greater than 2^28. | uint32 | int | int | uint32 | uint | integer | Bignum or Fixnum (as required) |
+| fixed64 | Always eight bytes. More efficient than uint64 if values are often greater than 2^56. | uint64 | long | int/long | uint64 | ulong | integer/string | Bignum |
+| sfixed32 | Always four bytes. | int32 | int | int | int32 | int | integer | Bignum or Fixnum (as required) |
+| sfixed64 | Always eight bytes. | int64 | long | int/long | int64 | long | integer/string | Bignum |
+| bool | | bool | boolean | boolean | bool | bool | boolean | TrueClass/FalseClass |
+| string | A string must always contain UTF-8 encoded or 7-bit ASCII text. | string | String | str/unicode | string | string | string | String (UTF-8) |
+| bytes | May contain any arbitrary sequence of bytes. | string | ByteString | str | []byte | ByteString | string | String (ASCII-8BIT) |
+
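The note that sint32/sint64 encode negatives more efficiently comes from ZigZag mapping: signed values are interleaved onto small unsigned ones before base-128 varint encoding, whereas a negative int32/int64 is sign-extended to a full 64-bit varint. A quick Python illustration (`zigzag32` and `varint_len` are helpers written for this example, not part of any protobuf library):

```python
def zigzag32(n):
    """ZigZag-map a signed 32-bit int to unsigned: 0,-1,1,-2,2 -> 0,1,2,3,4."""
    return (n << 1) ^ (n >> 31)

def varint_len(u):
    """Number of bytes a base-128 varint uses for the unsigned value u."""
    length = 1
    while u >= 0x80:
        u >>= 7
        length += 1
    return length

# A plain int32 of -1 is sign-extended to 64 bits, costing 10 varint bytes,
# while sint32 ZigZag-encodes -1 as 1, costing a single byte.
plain_cost = varint_len(2**64 - 1)      # int32 wire size for -1
zigzag_cost = varint_len(zigzag32(-1))  # sint32 wire size for -1
```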
diff --git a/choreography_protos/bosdyn/api/spot/choreography_params.proto b/choreography_protos/bosdyn/api/spot/choreography_params.proto
index 18d971af1..395c37d68 100644
--- a/choreography_protos/bosdyn/api/spot/choreography_params.proto
+++ b/choreography_protos/bosdyn/api/spot/choreography_params.proto
@@ -126,6 +126,7 @@ enum ArmMoveFrame {
ARM_MOVE_FRAME_BODY = 3;
ARM_MOVE_FRAME_SHOULDER = 4;
ARM_MOVE_FRAME_SHADOW = 5;
+ ARM_MOVE_FRAME_DANCE = 6;
}
message WorkspaceArmMoveParams {
@@ -143,6 +144,9 @@ message WorkspaceArmMoveParams {
// How the motion should be paced.
Easing easing = 5;
+
+ // If we're using the dance frame, which one?
+ google.protobuf.Int32Value dance_frame_id = 6;
}
message Figure8Params {
@@ -266,6 +270,15 @@ message CrawlParams {
google.protobuf.DoubleValue stance_length = 4;
}
+message GotoParams {
+ Vec2Value absolute_position = 1;
+ google.protobuf.DoubleValue absolute_yaw = 2;
+ google.protobuf.DoubleValue step_position_stiffness = 3;
+ google.protobuf.DoubleValue duty_cycle = 4;
+ // Should we combine with the next move into a smooth trajectory.
+ google.protobuf.BoolValue link_to_next = 5;
+}
+
// Parameters for the Bourree move.
message BourreeParams {
// The speed at which we should bourree [m/s]. X is forward. Y is left.
@@ -296,8 +309,6 @@ message JumpParams {
google.protobuf.DoubleValue stance_width = 3;
// The distance between the robot's front and back feet [m].
google.protobuf.DoubleValue stance_length = 4;
- // Should we turn to a yaw in choreography sequence frame?
- google.protobuf.BoolValue absolute = 5;
// How far the robot should jump [m].
Vec2Value translation = 6;
   // How much it should lift off/touch down the first pair of legs ahead of the other pair. In fraction of flight time.
@@ -312,6 +323,24 @@ message JumpParams {
LEAD_RIGHT = 5;
}
Lead lead_leg_pair = 8;
+
+ // Should we turn to a yaw in choreography sequence frame?
+ google.protobuf.BoolValue yaw_is_absolute = 11;
+ // Should we translate in choreography sequence frame?
+ google.protobuf.BoolValue translation_is_absolute = 12;
+
+ // The direction the robot should face upon landing relative to pose at the start of the dance. [rad]
+ google.protobuf.DoubleValue absolute_yaw = 9;
+ // Where the robot should land relative to the pose at the start of the dance. [m]
+ Vec2Value absolute_translation = 10;
+
+ google.protobuf.DoubleValue swing_height = 13;
+
+
+ // *** Deprecation Warning ***
+ // DEPRECATED as of 3.0.0: The absolute field has been deprecated and split into the yaw_is_absolute and translation_is_absolute fields.
+ // The following field will be deprecated and moved to 'reserved' in a future release.
+ google.protobuf.BoolValue absolute = 5 [deprecated = true];
}
message StepParams {
@@ -383,25 +412,51 @@ message TwerkParams {
message TurnParams {
// How far to turn, described in radians with a positive value representing a turn to the left.
google.protobuf.DoubleValue yaw = 1;
+ // Orientation to turn to, relative to the orientation at the start of the script. [rad]
+ google.protobuf.DoubleValue absolute_yaw = 5;
// Should we turn to a yaw in choreography sequence frame?
- google.protobuf.BoolValue absolute = 2;
+ google.protobuf.BoolValue yaw_is_absolute = 6;
// Swing parameters to describe the footstep pattern during the turning [height in meters]. Note,
// a zero (or nearly zero) value will be considered as an unspecified parameter.
google.protobuf.DoubleValue swing_height = 3;
// Swing parameter to describe the foot's swing velocity during the turning [m/s]. Note, a zero
// (or nearly zero) value will be considered as an unspecified parameter.
google.protobuf.DoubleValue swing_velocity = 4;
+ // How far to move relative to starting position. [m]
+ Vec2Value motion = 7;
+ // Where to move relative to position at start of script. [m]
+ Vec2Value absolute_motion = 8;
+ // Is motion specified relative to pose at start of dance?
+ google.protobuf.BoolValue motion_is_absolute = 9;
+
+ // *** Deprecation Warning ***
+  // DEPRECATED as of 3.0.0: The absolute field has been deprecated and split into the yaw_is_absolute and motion_is_absolute fields.
+ // The following field will be deprecated and moved to 'reserved' in a future release.
+ google.protobuf.BoolValue absolute = 2 [deprecated = true];
}
// Parameters specific to pace translation.
message Pace2StepParams {
- // Where to move.
+ // How far to move relative to starting position. [m]
Vec2Value motion = 1;
+ // Where to move relative to position at start of script. [m]
+ Vec2Value absolute_motion = 6;
+ // Is motion specified relative to pose at start of dance?
+ google.protobuf.BoolValue motion_is_absolute = 7;
// Swing parameters to describe the footstep pattern during the pace translation gait. Note, a zero (or nearly zero)
// value will be considered as an unspecified parameter.
google.protobuf.DoubleValue swing_height = 3;
google.protobuf.DoubleValue swing_velocity = 4;
- // Should the motion be relative to where the dance started (true) rather than relative to the current position (false).
+ // How far to turn, described in radians with a positive value representing a turn to the left.
+ google.protobuf.DoubleValue yaw = 8;
+ // Orientation to turn to, relative to the orientation at the start of the script. [rad]
+ google.protobuf.DoubleValue absolute_yaw = 9;
+ // Should we turn to a yaw in choreography sequence frame?
+ google.protobuf.BoolValue yaw_is_absolute = 10;
+
+ // *** Deprecation Warning ***
+  // DEPRECATED as of 3.0.0: The absolute field has been deprecated and split into the yaw_is_absolute and motion_is_absolute fields.
+ // The following field will be deprecated and moved to 'reserved' in a future release.
google.protobuf.BoolValue absolute = 5;
}
@@ -448,3 +503,167 @@ message FrontUpParams {
google.protobuf.BoolValue mirror = 1;
}
+message FidgetStandParams {
+ enum FidgetPreset {
+ PRESET_UNKNOWN = 0;
+ PRESET_CUSTOM = 1;
+ PRESET_INTEREST = 2;
+ PRESET_PLAYFUL = 3;
+ PRESET_FEAR = 4;
+ PRESET_NERVOUS = 5;
+ PRESET_EXHAUSTED = 6;
+ }
+ FidgetPreset preset = 1;
+ google.protobuf.DoubleValue min_gaze_pitch = 2;
+ google.protobuf.DoubleValue max_gaze_pitch = 3;
+ google.protobuf.DoubleValue gaze_mean_period = 4;
+ Vec3Value gaze_center_cfp = 5;
+ google.protobuf.DoubleValue shift_mean_period = 6;
+ google.protobuf.DoubleValue shift_max_transition_time = 7;
+ google.protobuf.DoubleValue breath_min_z = 8;
+ google.protobuf.DoubleValue breath_max_z = 9;
+ google.protobuf.DoubleValue breath_max_period = 10;
+ google.protobuf.DoubleValue leg_gesture_mean_period = 11;
+ google.protobuf.DoubleValue gaze_slew_rate = 12;
+ Vec3Value gaze_position_generation_gain = 13;
+ google.protobuf.DoubleValue gaze_roll_generation_gain = 14;
+}
+
+message FrameSnapshotParams {
+ google.protobuf.Int32Value frame_id = 1;
+ google.protobuf.Int32Value fiducial_number = 2;
+
+ enum Inclusion {
+ INCLUSION_UNKNOWN = 0;
+ INCLUSION_IF_STANCE = 1;
+ INCLUSION_INCLUDED = 2;
+ INCLUSION_EXCLUDED = 3;
+ }
+ Inclusion include_front_left_leg = 3;
+ Inclusion include_front_right_leg = 4;
+ Inclusion include_hind_left_leg = 5;
+ Inclusion include_hind_right_leg = 6;
+ google.protobuf.BoolValue compensated = 7;
+}
+
+message SetColorParams {
+ Color left_color = 1;
+ google.protobuf.BoolValue right_same_as_left = 2;
+ Color right_color = 3;
+ google.protobuf.DoubleValue fade_in_slices = 4;
+ google.protobuf.DoubleValue fade_out_slices = 5;
+}
+
+message FadeColorParams {
+ Color top_color = 1;
+ Color bottom_color = 2;
+ google.protobuf.DoubleValue fade_in_slices = 3;
+ google.protobuf.DoubleValue fade_out_slices = 4;
+}
+
+message IndependentColorParams {
+ Color top_left = 1;
+ Color upper_mid_left = 2;
+ Color lower_mid_left = 3;
+ Color bottom_left = 4;
+ Color top_right = 5;
+ Color upper_mid_right = 6;
+ Color lower_mid_right = 7;
+ Color bottom_right = 8;
+ google.protobuf.DoubleValue fade_in_slices = 9;
+ google.protobuf.DoubleValue fade_out_slices = 10;
+}
+
+enum LedLight {
+ LED_LIGHT_UNKNOWN = 0;
+ LED_LIGHT_LEFT1 = 1;
+ LED_LIGHT_LEFT2 = 2;
+ LED_LIGHT_LEFT3 = 3;
+ LED_LIGHT_LEFT4 = 4;
+ LED_LIGHT_RIGHT1 = 5;
+ LED_LIGHT_RIGHT2 = 6;
+ LED_LIGHT_RIGHT3 = 7;
+ LED_LIGHT_RIGHT4 = 8;
+}
+
+message Color {
+ google.protobuf.DoubleValue red = 1;
+ google.protobuf.DoubleValue green = 2;
+ google.protobuf.DoubleValue blue = 3;
+}
+
+message RippleColorParams {
+ Color main = 1;
+ Color secondary = 2;
+
+ enum Pattern {
+ PATTERN_UNKNOWN = 0;
+ PATTERN_FLASHING = 1;
+ PATTERN_SNAKE = 2;
+ PATTERN_ALTERNATE_COLORS = 3;
+ PATTERN_FINE_GRAINED_ALTERNATE_COLORS = 4;
+ }
+ Pattern pattern = 3;
+
+ enum LightSide {
+ LIGHT_SIDE_UNKNOWN = 0;
+ LIGHT_SIDE_LEFT = 1;
+ LIGHT_SIDE_RIGHT = 2;
+ LIGHT_SIDE_BOTH_IN_SEQUENCE = 3;
+ LIGHT_SIDE_BOTH_MATCHING = 4;
+ }
+ LightSide light_side = 4;
+ google.protobuf.DoubleValue increment_slices = 5;
+}
+
+
+message AnimateParams {
+ // The name of the animated move. There are no default values/bounds associated with this field.
+ string animation_name = 1;
+
+ // How many slices to smoothly transition from previous pose to animation.
+ google.protobuf.DoubleValue body_entry_slices = 2;
+
+ // How many slices to return from animation to nominal pose. Zero indicates to keep final animated pose.
+ google.protobuf.DoubleValue body_exit_slices = 3;
+
+ // Multiplier for animated translation by axis to exaggerate or suppress motion along specific axes.
+ Vec3Value translation_multiplier = 4;
+
+ // Multiplier for the animated orientation by axis to exaggerate or suppress motion along specific axes.
+ EulerZYXValue rotation_multiplier = 5;
+
+ // How many slices to smoothly transition from previous pose to animation.
+ google.protobuf.DoubleValue arm_entry_slices = 6;
+
+ // Joint angle offsets in radians for the arm joints.
+ google.protobuf.DoubleValue shoulder_0_offset = 7;
+ google.protobuf.DoubleValue shoulder_1_offset = 8;
+ google.protobuf.DoubleValue elbow_0_offset = 9;
+ google.protobuf.DoubleValue elbow_1_offset = 10;
+ google.protobuf.DoubleValue wrist_0_offset = 11;
+ google.protobuf.DoubleValue wrist_1_offset = 12;
+ google.protobuf.DoubleValue gripper_offset = 13;
+
+  // How fast to play back. 1.0 is normal speed. Larger is faster.
+ google.protobuf.DoubleValue speed = 14;
+
+ // How late into the nominal script to start.
+ google.protobuf.DoubleValue offset_slices = 15;
+
+ // Multiply all gripper angles by this value.
+ google.protobuf.DoubleValue gripper_multiplier = 16;
+
+ // How hard the gripper can squeeze. Fraction of full strength.
+ google.protobuf.DoubleValue gripper_strength_fraction = 17;
+
+ // Which dance frame to use as a reference for workspace arm moves. Including this parameter
+ // overrides the animation frame.
+ google.protobuf.Int32Value arm_dance_frame_id = 18;
+
+ // How hard to try to track the animated body motion.
+ // Only applicable to animations that control both the body and the legs.
+ // On a scale of 1 to 10 (11 for a bit extra).
+ // Higher will result in more closely tracking the animated body motion, but possibly at the expense of balance for more difficult animations.
+ google.protobuf.DoubleValue body_tracking_stiffness = 19;
+}
diff --git a/choreography_protos/bosdyn/api/spot/choreography_sequence.proto b/choreography_protos/bosdyn/api/spot/choreography_sequence.proto
index fba074c19..b1798332f 100644
--- a/choreography_protos/bosdyn/api/spot/choreography_sequence.proto
+++ b/choreography_protos/bosdyn/api/spot/choreography_sequence.proto
@@ -10,10 +10,14 @@ package bosdyn.api.spot;
option java_outer_classname = "ChoreographyProto";
-import "bosdyn/api/header.proto";
+import "google/protobuf/duration.proto";
import "google/protobuf/timestamp.proto";
+import "google/protobuf/wrappers.proto";
+import "bosdyn/api/geometry.proto";
+import "bosdyn/api/header.proto";
import "bosdyn/api/lease.proto";
import "bosdyn/api/spot/choreography_params.proto";
+import "bosdyn/api/data_chunk.proto";
// Request a list of all possible moves and the associated parameters (min/max values).
message ListAllMovesRequest {
@@ -26,7 +30,7 @@ message ListAllMovesResponse {
// Common response header
ResponseHeader header = 1;
- // List of moves that the robot knows about
+ // List of moves that the robot knows about.
repeated MoveInfo moves = 2;
// A copy of the MoveParamsConfig.txt that the robot is using.
@@ -58,6 +62,39 @@ message UploadChoreographyResponse {
repeated string warnings = 3;
}
+message UploadAnimatedMoveRequest {
+ // Common request header
+ RequestHeader header = 1;
+
+ // Unique ID for the animated moves. This will be automatically generated by the client
+ // and is used to uniquely identify the entire animation by creating a hash from the Animation
+ // protobuf message after serialization. The ID will be conveyed within the MoveInfo protobuf
+  // message in the ListAllMoves RPC. This ID allows the choreography client to re-upload only
+  // animations that have changed or do not already exist on the robot.
+ google.protobuf.StringValue animated_move_generated_id = 3;
+
+ // AnimatedMove to upload to the robot and create a dance move from.
+ Animation animated_move = 2;
+}
+
+message UploadAnimatedMoveResponse {
+ // Common response header.
+ ResponseHeader header = 1;
+
+ enum Status {
+ STATUS_UNKNOWN = 0; // Do not use.
+ STATUS_OK = 1; // Uploading + parsing the animated move succeeded.
+    STATUS_ANIMATION_VALIDATION_FAILED = 2; // The animated move is considered invalid; see the warnings.
+ }
+ Status status = 2;
+
+  // If the uploaded animated move is invalid (indicated by STATUS_ANIMATION_VALIDATION_FAILED), then
+ // warning messages describing the failure cases will be populated here to indicate which
+ // parts of the animated move failed. Note: there could be some warning messages even when an animation
+ // is marked as ok.
+ repeated string warnings = 3;
+}
+
message ExecuteChoreographyRequest {
// Common request header
RequestHeader header = 1;
@@ -91,6 +128,151 @@ message ExecuteChoreographyResponse {
Status status = 3;
}
+message StartRecordingStateRequest {
+ // Common request header
+ RequestHeader header = 1;
+
+ // How long should the robot record for if no stop RPC is sent. A recording session can be
+ // extended by setting the recording_session_id below to a non-zero value matching the ID for the
+ // current recording session.
+ // For both start and continuation commands, the service will stop recording at
+ // end_time = (system time when the Start/Continue RPC is received) + (continue_recording_duration),
+ // unless another continuation request updates this end time.
+ // The robot has an internal maximum recording time of 5 minutes for the complete session log.
+ google.protobuf.Duration continue_recording_duration = 2;
+
+ // Provide the unique identifier of the recording session to extend the recording end time for.
+ // If the recording_session_id is 0, then it will create a new session and the robot will clear
+ // the recorded robot state buffer and restart recording.
+  // If this is a continuation of an existing recording session, then the robot will continue
+ // to record until the specified end time.
+ uint64 recording_session_id = 3;
+}
+
+message StartRecordingStateResponse {
+ // Common response header
+ ResponseHeader header = 1;
+
+ // The status for the start recording request.
+ enum Status {
+ // Status unknown; do not use.
+ STATUS_UNKNOWN = 0;
+    // The request succeeded and recording has either started or, if a session_id was provided,
+    // continued with an extended duration.
+ STATUS_OK = 1;
+ // The provided recording_session_id is unknown: it must either be 0 (start a new recording log)
+ // or it can match the current recording session id returned by the most recent start recording request.
+ STATUS_UNKNOWN_RECORDING_SESSION_ID = 2;
+    // The Choreography Service's internal buffer is full. It will record for a maximum of 5 minutes.
+    // It will stop recording, but preserve whatever was recorded until that point.
+ STATUS_RECORDING_BUFFER_FULL = 3;
+ }
+ Status status = 2;
+
+ // Unique identifier for the current recording session
+ uint64 recording_session_id = 3;
+}
+
+message StopRecordingStateRequest {
+ // Common request header
+ RequestHeader header = 1;
+}
+
+message StopRecordingStateResponse {
+ // Common response header
+ ResponseHeader header = 1;
+}
+
+message DownloadRobotStateLogRequest {
+ // Common request header
+ RequestHeader header = 1;
+
+ enum LogType {
+ // Unknown. Do not use.
+ LOG_TYPE_UNKNOWN = 0;
+ // The robot state information recorded from the time of the manual start RPC (StartRecordingState)
+ // to either {the time of the manual stop RPC (StopRecordingState), the time of the download logs RPC,
+ // or the time of the internal service's buffer filling up}.
+ LOG_TYPE_MANUAL = 1;
+ // The robot will automatically record robot state information for the entire duration of an executing
+ // choreography in addition to any manual logging. This log type will download this information for the
+ // last completed choreography.
+ LOG_TYPE_LAST_CHOREOGRAPHY = 2;
+ }
+ // Which data should we download.
+ LogType log_type = 2;
+}
+
+message LoggedJoints {
+ LegJointAngles fl = 1; // front left leg joint angles.
+ LegJointAngles fr = 2; // front right leg joint angles.
+ LegJointAngles hl = 3; // hind left leg joint angles.
+ LegJointAngles hr = 4; // hind right leg joint angles.
+
+ // Full set of joint angles for the arm and gripper.
+ ArmJointAngles arm = 5;
+ google.protobuf.DoubleValue gripper_angle = 6;
+}
+
+message LoggedFootContacts {
+ // Boolean indicating whether or not the robot's foot is in contact with the ground.
+ bool fr_contact = 1;
+ bool fl_contact = 2;
+ bool hr_contact = 3;
+ bool hl_contact = 4;
+}
+
+message LoggedStateKeyFrame {
+ // Full set of joint angles for the robot.
+ LoggedJoints joint_angles = 1;
+
+ // Foot contacts for the robot.
+ LoggedFootContacts foot_contact_state = 4;
+
+ // The current pose of the robot body in animation frame. The animation frame is defined
+ // based on the robot's footprint when the log first started recording.
+ SE3Pose animation_tform_body = 2;
+
+ // The timestamp (in robot time) for the key frame.
+ google.protobuf.Timestamp timestamp = 3;
+}
+
+message ChoreographyStateLog {
+ // A set of key frames recorded at a high rate. The key frames can be for the duration of an executing
+ // choreography or for the duration of a manual recorded log (triggered by the StartRecordingState and
+ // StopRecordingState RPCs). The specific set of keyframes is specified by the LogType when requesting
+ // to download the data.
+ repeated LoggedStateKeyFrame key_frames = 1;
+}
+
+message DownloadRobotStateLogResponse {
+ // Common response header
+ ResponseHeader header = 1;
+
+ enum Status {
+ // Status unknown. Do not use.
+ STATUS_UNKNOWN = 0;
+ // The log data downloaded successfully and is complete.
+ STATUS_OK = 1;
+ // Error where there is no robot state information logged in the choreography service.
+ STATUS_NO_RECORDED_INFORMATION = 2;
+ // Error where the complete duration of the recorded session caused the service's recording
+ // buffer to fill up. When full, the robot will stop recording but preserve whatever was
+ // recorded until that point. The robot has an internal maximum recording time of 5 minutes.
+ // The data streamed in this response will go from the start time until the time the buffer
+ // was filled.
+ STATUS_INCOMPLETE_DATA = 3;
+ }
+ // Return status for the request.
+ Status status = 2;
+
+ // Chunk of data to download. Responses are sent in sequence until the
+ // data chunk is complete. After receiving all chunks, concatenate them
+  // into a single byte string. Then, deserialize the byte string into a
+ // ChoreographyStateLog object.
+ DataChunk chunk = 3;
+}
+
// Defines varying parameters for a particular instance of a move.
message MoveParams {
// Unique ID of the move type that these params are associated with.
@@ -131,7 +313,16 @@ message MoveParams {
WorkspaceArmMoveParams workspace_arm_move_params = 33;
Figure8Params figure8_params = 34;
KneelLegMove2Params kneel_leg_move2_params = 35;
+ FidgetStandParams fidget_stand_params = 36;
+ GotoParams goto_params = 37;
+ FrameSnapshotParams frame_snapshot_params = 38;
+ SetColorParams set_color_params = 39;
+ RippleColorParams ripple_color_params = 40;
+ FadeColorParams fade_color_params = 41;
+ IndependentColorParams independent_color_params = 42;
+
+ AnimateParams animate_params = 1000;
}
}
@@ -140,14 +331,26 @@ message MoveInfo {
// Unique ID of the move type.
string name = 1;
- // The number of "slices" (beats or sub-beats) that this move takes up.
+ // The duration of this move in slices (usually 1/4 beats).
int32 move_length_slices = 2;
+ // The duration of this move in seconds. If specified, overrides move_length_slices.
+ double move_length_time = 15;
+
+ // If true, the duration may be adjusted from the default specified by move_length_slices or move_length_time.
+ bool is_extendable = 3;
- // The minimum number of "slices" that this move can complete in.
+  // Bounds on how far the duration may be adjusted, in slices (usually 1/4 beats).
+ // These apply to extendable moves, but may also override move_length_time for some BPM.
int32 min_move_length_slices = 13;
+ int32 max_move_length_slices = 14;
- // If true, the user can extend the move beyond the requested length.
- bool is_extendable = 3;
+  // Bounds on the duration in seconds.
+ // These apply to extendable moves, but may also override move_length_slices for some BPM.
+ double min_time = 6;
+ double max_time = 7;
+
+ // If the slice bounds and time bounds do not overlap for a particular bpm,
+ // the duration will be set to the minimum, violating the specified maximum.
// The state that the robot is in at the start or end of a move.
enum TransitionState {
@@ -162,18 +365,22 @@ message MoveInfo {
// The state of the robot after the move is complete.
TransitionState exit_state = 5;
- // The absolute minimum and maximum times of the move in seconds.
- double min_time = 6;
- double max_time = 7;
// Indicators as to which parts of the robot that the move controls.
bool controls_arm = 8;
bool controls_legs = 9;
bool controls_body = 10;
bool controls_gripper = 12;
+ bool controls_lights = 17;
+ bool controls_annotations = 18;
// Information for the GUI tool to visualize the sequence move info.
ChoreographerDisplayInfo display = 11;
+
+  // Unique ID for the animated moves. This is sent with the UploadAnimatedMove request and used
+ // to track which version of the animated move is currently saved on robot. The ID can be unset,
+ // meaning the RPC which uploaded the animation did not provide an identifying hash.
+ google.protobuf.StringValue animated_move_generated_id = 16;
}
// Information for the Choreographer to display.
@@ -210,6 +417,9 @@ message ChoreographerDisplayInfo {
CATEGORY_KNEEL = 5;
CATEGORY_ARM = 6;
CATEGORY_ANIMATION = 7;
+ CATEGORY_MPC = 8;
+ CATEGORY_LIGHTS = 9;
+ CATEGORY_ANNOTATIONS = 10;
}
Category category = 16;
}
@@ -239,4 +449,208 @@ message ChoreographerSave {
// UI specific member that describes exactly when the music should start, in slices. This is for
// time sync issues.
double music_start_slice = 3;
+
+ // The start slice for the choreographer save.
+ double choreography_start_slice = 4;
+}
+
+// Represents an animated dance move that can be used within choreographies after uploading.
+message Animation {
+ // The name of the animated move, which is how it will be referenced in choreographies.
+ string name = 1;
+
+ // The animated move is composed of animation keyframes, which specify the duration of
+ // each frame. The keyframe describes the position of the body/arms/gripper.
+ repeated AnimationKeyframe animation_keyframes = 2;
+
+ // Indicators as to which parts of the robot that the move controls.
+ bool controls_arm = 3;
+ bool controls_legs = 4;
+ bool controls_body = 5;
+ bool controls_gripper = 6;
+
+  // If true, track the animated swing trajectories. Otherwise, take standard swings between the
+  // animated liftoff and touchdown locations.
+ bool track_swing_trajectories = 16;
+
+ // For moves that control the legs, but not the body.
+ // If legs are specified by joint angles, we still need body roll and pitch to know the foot height.
+ // If `assume_zero_roll_and_pitch` is true, they needn't be explicitly specified.
+ bool assume_zero_roll_and_pitch = 19;
+
+ // Mode for hand trajectory playback
+ enum ArmPlayback {
+    // Playback as specified. Arm animations specified with joint angles play back in jointspace,
+    // and arm animations specified as hand poses play back in workspace.
+ ARM_PLAYBACK_DEFAULT = 0;
+    // Playback in jointspace. Arm animation will be most consistent relative to the body.
+ ARM_PLAYBACK_JOINTSPACE = 1;
+ // Playback in workspace. Hand pose animation will be most consistent relative to the
+ // current footprint. Reference frame is animation frame.
+ ARM_PLAYBACK_WORKSPACE = 2;
+    // Playback in workspace with poses relative to the dance frame. Hand pose animation will be
+ // most consistent relative to a fixed point in the world.
+ ARM_PLAYBACK_WORKSPACE_DANCE_FRAME = 3;
+ }
+ ArmPlayback arm_playback = 17;
+
+  // Optional bpm at which the animation plays back successfully.
+ double bpm = 7;
+
+ // When true, rescales the time of each keyframe slightly such that the move takes an
+ // integer number of slices. If false/absent, the move will be padded or truncated slightly
+ // to fit an integer number of slices.
+ bool retime_to_integer_slices = 8;
+
+ // The different parameters (minimum, default, and maximum) that can change the move.
+ // The min/max bounds are used by Choreographer to constrain the parameter widget, and will
+ // also be used when uploading a ChoreographySequence containing the animation to validate
+ // that the animated move is allowed.
+ AnimateParams minimum_parameters = 9;
+ AnimateParams default_parameters = 10;
+ AnimateParams maximum_parameters = 11;
+
+ // Indicates if the animated moves can be shortened (the animated move will be cut off). Not
+ // supported for leg moves.
+ bool truncatable = 12;
+
+ // Indicates if the animated moves can be stretched (animated move will loop). Not supported for
+ // leg moves.
+ bool extendable = 13;
+
+ // Indicates if the move should start in a neutral stand position.
+ bool neutral_start = 14;
+
+ // Step exactly at the animated locations, even at the expense of balance.
+ // By default, the optimizer may adjust step locations slightly.
+ bool precise_steps = 15;
+
+ // Time everything exactly as animated, even at the expense of balance.
+ // By default, the optimizer may adjust timing slightly.
+ bool precise_timing = 18;
+
+ // If set true, this animation will not run unless the robot has an arm.
+ bool arm_required = 20;
+
+ // If set true, this animation will not run unless the robot has no arm.
+ bool arm_prohibited = 22;
+
+ // If the animation completes before the move's duration, freeze rather than looping.
+ bool no_looping = 21;
+}
+
+message AnimationKeyframe {
+ // Time from the start of the animation for this frame.
+ double time = 1;
+
+ // Different body parts the animated move can control.
+ // It can control multiple body parts at once.
+ AnimateGripper gripper = 2;
+ AnimateArm arm = 3;
+ AnimateBody body = 4;
+ AnimateLegs legs = 5;
+}
+
+message AnimateGripper {
+ google.protobuf.DoubleValue gripper_angle = 1;
+}
+
+message AnimateArm {
+ // An SE3 Pose for the hand where orientation is specified using either a quaternion or
+ // euler angles
+ message HandPose {
+ Vec3Value position = 1;
+
+ oneof orientation {
+ // The hand's orientation described with euler angles (yaw, pitch, roll).
+ EulerZYXValue euler_angles = 3;
+
+ // The hand's orientation described with a quaternion.
+ Quaternion quaternion = 4;
+ }
+ }
+
+ // For the animated arm, the arm can be described using either the joint angles or
+ // the pose of the hand. NOTE: each keyframe within a single Animation proto must always
+ // specify the arm using the same format for all frames.
+ oneof arm {
+ // Full arm joint angle specification.
+ ArmJointAngles joint_angles = 1;
+
+ // The hand position in the animation frame
+ HandPose hand_pose = 2;
+ }
+}
+
+// The AnimateArm keyframe describes the joint angles of the arm joints in radians.
+// Any joint not specified will hold the previous angle it was at when the keyframe
+// begins. At least one arm joint must be specified.
+message ArmJointAngles {
+ google.protobuf.DoubleValue shoulder_0 = 1;
+ google.protobuf.DoubleValue shoulder_1 = 2;
+ google.protobuf.DoubleValue elbow_0 = 3;
+ google.protobuf.DoubleValue elbow_1 = 4;
+ google.protobuf.DoubleValue wrist_0 = 5;
+ google.protobuf.DoubleValue wrist_1 = 6;
+}
+
+// The AnimateBody keyframe describes the body's position and orientation. At least
+// one dimension of the body must be specified.
+message AnimateBody {
+
+ // For the animated body keyframe, describe the body position using either the body position or
+ // the center of mass position. NOTE: each keyframe within a single Animation proto must always
+ // specify the body position using the same format for all frames.
+ oneof position {
+ // The body position in the animation frame.
+ Vec3Value body_pos = 1;
+
+ // The body's center of mass position in the animation frame.
+ Vec3Value com_pos = 2;
+ }
+
+ // For the animated body keyframe, describe the body orientation using either euler angles or a
+ // quaternion. NOTE: each keyframe within a single Animation proto must always
+ // specify the body orientation using the same format for all frames.
+ oneof orientation {
+ // The body's orientation described with euler angles (yaw, pitch, roll).
+ EulerZYXValue euler_angles = 3;
+
+ // The body's orientation described with a quaternion.
+ Quaternion quaternion = 4;
+ }
+}
+
+// The AnimateLegs keyframe describes each leg using either joint angles or the foot position.
+message AnimateLegs {
+ AnimateSingleLeg fl = 1; // Front left leg.
+ AnimateSingleLeg fr = 2; // Front right leg.
+ AnimateSingleLeg hl = 3; // Hind left leg.
+ AnimateSingleLeg hr = 4; // Hind right leg.
+}
+
+// A single leg keyframe to describe the leg motion.
+message AnimateSingleLeg {
+
+ // For the animated single legs, the leg can be described using either the joint angles or
+ // the position of the foot. NOTE: each keyframe within a single Animation proto must always
+ // specify a single leg using the same format for all frames.
+ oneof leg {
+ // Full leg joint angle specification.
+ LegJointAngles joint_angles = 1;
+
+ // The foot position of the leg in the animation frame.
+ Vec3Value foot_pos = 2;
+ }
+
+ // If true, the foot is in contact with the ground and standing. If false, the
+ // foot is in swing. If unset, the contact will be inferred from the leg joint angles
+ // or foot position.
+ google.protobuf.BoolValue stance = 3;
+}
+
+// Description of each leg joint angle (hip x/y and knee) in radians.
+message LegJointAngles {
+ double hip_x = 1;
+ double hip_y = 2;
+ double knee = 3;
}
diff --git a/choreography_protos/bosdyn/api/spot/choreography_service.proto b/choreography_protos/bosdyn/api/spot/choreography_service.proto
index de31a53cc..8a1100723 100644
--- a/choreography_protos/bosdyn/api/spot/choreography_service.proto
+++ b/choreography_protos/bosdyn/api/spot/choreography_service.proto
@@ -19,6 +19,18 @@ service ChoreographyService {
// Upload a dance to the robot.
rpc UploadChoreography(UploadChoreographyRequest) returns (UploadChoreographyResponse) {}
+ // Upload an animation to the robot.
+ rpc UploadAnimatedMove(UploadAnimatedMoveRequest) returns (UploadAnimatedMoveResponse) {}
+
// Execute the uploaded dance.
rpc ExecuteChoreography(ExecuteChoreographyRequest) returns (ExecuteChoreographyResponse) {}
+
+ // Manually start (or continue) recording the robot state.
+ rpc StartRecordingState(StartRecordingStateRequest) returns (StartRecordingStateResponse) {}
+
+ // Manually stop recording the robot state.
+ rpc StopRecordingState(StopRecordingStateRequest) returns (StopRecordingStateResponse) {}
+
+ // Download log of the latest recorded robot state information.
+ rpc DownloadRobotStateLog(DownloadRobotStateLogRequest) returns (stream DownloadRobotStateLogResponse) {}
}
diff --git a/docs/concepts/README.md b/docs/concepts/README.md
index aa7b62498..892bfdb3e 100644
--- a/docs/concepts/README.md
+++ b/docs/concepts/README.md
@@ -32,6 +32,7 @@ Finally, payloads allow for expansion of services beyond those provided by Spot
* [Geometry and Frames](geometry_and_frames.md)
* [Robot services](robot_services.md)
* [E-Stop](estop_service.md)
+* [Lease](lease_service.md)
* [Developing API Services](developing_api_services.md)
* [Faults](faults.md)
* [Autonomy services](autonomy/README.md)
diff --git a/docs/concepts/arm/arm_services.md b/docs/concepts/arm/arm_services.md
index d15babeca..e8f982685 100644
--- a/docs/concepts/arm/arm_services.md
+++ b/docs/concepts/arm/arm_services.md
@@ -37,9 +37,10 @@ Requests an end effector trajectory move while applying some force to the ground
A [GCODE](https://en.wikipedia.org/wiki/G-code) interpreter that can be used to draw with sidewalk chalk.
## Door Service
-The door service is a framework for opening doors. We provide two command types:
-* **Auto** This message requires users to specify a location to search for a door handle along with and some door parameters. Spot autonomously grabs the handle, opens the door, and walks through.
+The door service is a framework for opening doors. We support three command types:
+* **AutoGrasp**: This message requires users to specify a location to search for a door handle along with some door parameters. Spot autonomously grabs the handle, opens the door, and walks through.
+* **Warmstart**: In Warmstart, the assumption is that the robot is already grasping the door handle. The robot will skip the grasp stage, immediately begin opening the door, and then traverse through the doorway.
+* **AutoPush**: Used for doors that can be opened via a push without requiring a grasp. This includes pushbars, crashbars, and doors without a latching mechanism. The robot will point the hand down and push the door open with its wrist based on a push point supplied over the API.
See the following example for using this service:
diff --git a/docs/concepts/autonomy/README.md b/docs/concepts/autonomy/README.md
index e6eac8d4b..e24732c8f 100644
--- a/docs/concepts/autonomy/README.md
+++ b/docs/concepts/autonomy/README.md
@@ -30,3 +30,4 @@ The Autowalk feature is an implementation of the autonomous navigation API. Howe
* [Missions service](missions_service.md)
* [Network compute bridge](../network_compute_bridge.md)
* [AutoReturn service](auto_return.md)
+* [Directed Exploration](directed_exploration.md)
diff --git a/docs/concepts/autonomy/directed_exploration.md b/docs/concepts/autonomy/directed_exploration.md
new file mode 100644
index 000000000..e876b8519
--- /dev/null
+++ b/docs/concepts/autonomy/directed_exploration.md
@@ -0,0 +1,43 @@
+
+
+# Directed Exploration
+
+## What is Directed Exploration?
+
+Directed Exploration is a feature that enables Spot to navigate robustly when the environment has changed. When Spot's path is blocked by an unexpected obstacle, Directed Exploration allows Spot to use its sensors to find a clear path to a nearby waypoint, even if that waypoint is not connected to Spot's current waypoint in the GraphNav map. This enables Spot to deal with environments where new obstacles may have been added, old obstacles may have been moved, and doors may have been opened or closed.
+
+For example, suppose you record the Autowalk mission shown below, where Spot visits the waypoints 1 through 6 in order. When you record the mission, door A is open and door B is closed.
+
+![Initial recorded path](./images/directed_exploration_1.png "Initial recorded path")
+
+Now, imagine someone closes door A and opens door B. Without Directed Exploration, Spot would get stuck at closed door A when it tried to replay the mission.
+
+![Changed environment](./images/directed_exploration_2.png "Changed environment")
+
+Directed Exploration will see that Spot can travel from waypoint 2 to waypoint 5 through open door B, even though this door was closed when the mission was recorded.
+
+![Directed Exploration path](./images/directed_exploration_3.png "Directed Exploration path")
+
+After reaching waypoint 5, Spot will navigate to the final destination at waypoint 6.
+
+## When is Directed Exploration invoked?
+
+Directed Exploration is invoked as a last resort when all other attempts to find a path to the destination have failed. Other strategies include attempting to plan a different route through the edges in the GraphNav map and attempting to follow alternate waypoints that are created next to the waypoints on the map.
+
+Directed Exploration can succeed in some situations where these other strategies fail, because Directed Exploration frees Spot from the constraint of navigating along edges stored in the GraphNav map. Directed Exploration allows Spot to use its sensors to determine whether it can reach nearby waypoints, even if those waypoints aren't connected to its current location in the GraphNav map.
+
+## How to enable/disable Directed Exploration
+
+Directed Exploration is enabled by default.
+
+To disable Directed Exploration when replaying an Autowalk mission from the tablet, uncheck the box marked "Drive around large obstacles" on the mission upload screen. This will disable both Directed Exploration and alternate waypoints, but it will still allow replanning alternate routes through the GraphNav map.
+
+To disable Directed Exploration for a NavigateTo or NavigateRoute command issued through the SDK, set the disable_directed_exploration field of the corresponding TravelParams proto to true.
+
+To disable Directed Exploration for a mission requested through the SDK, set the disable_directed_exploration field of the PlaySettings proto in the PlayMissionRequest to true.
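The two SDK-facing switches above can be pictured with stand-in stubs. The field name `disable_directed_exploration` comes from the text above, but these dataclasses are illustrative placeholders, not the real bosdyn-generated protobuf classes:

```python
from dataclasses import dataclass


# Hypothetical stand-ins for the generated TravelParams and PlaySettings protos.
@dataclass
class TravelParams:
    disable_directed_exploration: bool = False


@dataclass
class PlaySettings:
    disable_directed_exploration: bool = False


def travel_params_without_exploration() -> TravelParams:
    """Params for a NavigateTo/NavigateRoute command with Directed Exploration off."""
    params = TravelParams()
    params.disable_directed_exploration = True
    return params


def play_settings_without_exploration() -> PlaySettings:
    """Settings for a PlayMissionRequest with Directed Exploration off."""
    settings = PlaySettings()
    settings.disable_directed_exploration = True
    return settings
```

In the real request, the configured message is simply attached to the NavigateTo/NavigateRoute command or the PlayMissionRequest before sending it to the robot.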
diff --git a/docs/concepts/autonomy/graphnav_map_structure.md b/docs/concepts/autonomy/graphnav_map_structure.md
index 395e7b40d..71574f455 100644
--- a/docs/concepts/autonomy/graphnav_map_structure.md
+++ b/docs/concepts/autonomy/graphnav_map_structure.md
@@ -25,6 +25,12 @@ The viewer also shows fiducials that were detected during the map recording proc
Maps do not have a global coordinate system (like GPS coordinates, for example). Only the relative transformations between waypoints are known.
+# Recording and Modifying Maps
+
+The GraphNav map recording service is used to create and modify maps using robot data. Using the `StartRecording` RPC, you can tell the map recording service to begin creating a new map. You may then drive the robot around a site, and it will record a map. Afterwards (or at intervals during recording), you may download map data.
+
+The [`recording_command_line` example](../../../python/examples/graph_nav_command_line/README.md#recording-service-command-line) in the Spot SDK shows how to record and modify maps at runtime, and how to download map data from the recording service.
+
## Creating waypoints
Waypoints are created by the robot every two meters during map recording. However, they can be created in areas where the robot’s path has more curvature as it navigates around corners and obstacles.
@@ -32,6 +38,128 @@ Waypoints are created by the robot every two meters during map recording. Howeve
Call the `CreateWaypoint` RPC at specific locations in order to perform some action in a mission or when you want to explicitly maintain some mapping between a user event and a waypoint ID. Example: creating a waypoint at an inspection point.
+## Creating edges
+
+Edges are created automatically while recording is active. One edge will be created between each new waypoint that the recording service creates and the waypoint before it.
+
+If you want to create edges manually, you can call the `CreateEdge` RPC in the recording service.
+
+
+# Map Processing
+Starting in the 3.0 SDK release, the GraphNav map processing service provides two new options for modifying maps: Topology Processing (automatic loop closure) and Anchoring Optimization. The map processing service is an RPC service interface that shares a map with the GraphNav service. It performs operations on the GraphNav map and can either return results to the user or write results into the shared map. The map processing service may be used after recording a map using the Map Recording Service.
+
+The [`recording_command_line` example](../../../python/examples/graph_nav_command_line/README.md#recording-service-command-line) has examples for how to use the map processing service on a newly recorded GraphNav map.
+
+## Topology Processing
+The Topology of a GraphNav map refers to the connectivity between different Waypoints using Edges. Below are some examples of Maps with different topologies.
+
+![Topologies](./images/topologies.png)
+
+The recording service records one kind of map topology: the Chain. Chains always go from a starting point to an ending point. Chains are easy to create -- recording one only requires the robot’s current position and sensor data relative to the previous pose and sensor data. There are no connections between waypoints that aren’t along the chain, even if they happen to cross each other in physical space.
+
+When starting a new recording while the robot is localized to an existing waypoint, the recording service will create a Branch -- which is simply a Chain starting from a waypoint within another Chain.
+
+Loops are topologies where there is more than one possible path between two waypoints. The recording service is **unable to create loops** on its own. To create loops, you can manually call the `CreateEdge` RPC, or, starting in SDK 3.0, you may use the map processing service to automatically close loops.
+
+### Why are loops important?
+Any time the robot crosses its own path in physical space, there is an opportunity to create a topological connection (an edge) between where the robot currently is, and where it was in the past. This is called *"loop closure"*.
+
+![The robot has returned to the start](./images/robot_back_at_start.png)
+
+For example, if the robot has returned to its starting place after traversing hundreds of meters through a site, it would be desirable to know that the start and end points of the map are topologically connected.
+
+For example, consider the chain:
+
+```
+start waypoint -> (1,000 waypoints) -> end waypoint
+```
+
+The robot may only travel along edges while using GraphNav, so without an edge between the starting and ending waypoints of the map, the robot is unable to travel between the start and end directly. For example, the command:
+
+```
+NavigateTo(start waypoint)
+```
+causes the robot to traverse the 1,000 waypoints it had recorded *in reverse* rather than walking directly from the end to the start.
+
+To prevent this, we must **close the loop** between the starting and ending waypoint.
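The payoff of the loop-closure edge can be demonstrated with a toy graph search over the chain above (plain Python with integer waypoint IDs, not GraphNav data structures):

```python
from collections import deque


def shortest_hops(edges, start, goal):
    """Breadth-first search returning the minimum number of edges to traverse
    between two waypoints. Edges are bidirectional, as in a GraphNav map."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        node, hops = frontier.popleft()
        if node == goal:
            return hops
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, hops + 1))
    return None  # goal unreachable


# A chain of 1,001 waypoints: 0 -> 1 -> ... -> 1000.
chain = [(i, i + 1) for i in range(1000)]
print(shortest_hops(chain, 1000, 0))                # 1000 hops: the whole chain in reverse
print(shortest_hops(chain + [(1000, 0)], 1000, 0))  # 1 hop via the loop-closure edge
```

Without the start-to-end edge, every route back to the start replays the entire recording in reverse; with it, the trip is a single edge.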
+
+## Automatic Loop Closure
+You can issue a `ProcessTopologyRequest` to the map processing service to automatically close loops in the GraphNav map. This helps the robot know which areas of the map are topologically connected so that it can make smarter decisions about initializing its localization to the map and planning around it.
+
+Note that Loop Closures in the Autonomy SDK affect both localization and navigation -- an edge is expected to be traversable by the robot in a physical sense (that is, the robot can walk from point A to point B if there is an edge between A and B), and it also encodes the relative transformation between the waypoints it connects.
+
+
+![Topology processing example](./images/topo_processing_example.svg)
+
+There are two types of loop closure currently supported: odometry loop closures and fiducial loop closures.
+
+**Odometry loop closures** occur whenever:
+* The robot walks a total path length of less than 50 meters (for base platform robots) or 100 meters (for robots with LIDAR) and believes it is back at the same location (within 4 meters).
+* That path **does not cross a staircase**.
+* The resulting loop closure edge does not cross any body obstacles.
+
+To close larger loops, the map processing service relies on fiducials. **Fiducial loop closures** occur whenever:
+
+* The robot sees the same fiducial twice.
+* In both instances that the robot sees the fiducial, the fiducial was less than 4 meters away and had low measurement error.
+* The resulting loop closure edge does not cross any body obstacles.
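The documented thresholds can be collected into simple eligibility checks. This is only an illustration of the rules listed above (50 m / 100 m path length, 4 m proximity), not the actual map processing logic:

```python
def odometry_loop_closure_ok(path_length_m, gap_m, crosses_stairs,
                             crosses_body_obstacle, has_lidar):
    """Whether an odometry loop closure is possible per the documented rules."""
    max_path = 100.0 if has_lidar else 50.0  # LIDAR robots allow longer paths
    return (path_length_m < max_path
            and gap_m < 4.0                  # robot believes it is back within 4 m
            and not crosses_stairs
            and not crosses_body_obstacle)


def fiducial_loop_closure_ok(dist_a_m, dist_b_m, low_error_both,
                             crosses_body_obstacle):
    """Whether a fiducial loop closure is possible per the documented rules."""
    # Both sightings of the fiducial must be close (< 4 m) and low-error.
    return (dist_a_m < 4.0 and dist_b_m < 4.0
            and low_error_both
            and not crosses_body_obstacle)
```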
+
+> Note: the Autowalk app automatically calls topology processing after recording ends. SDK users must make sure to manually call the map processing service to enable this feature. See the recording_command_line.py example for how to call this service.
+
+### Ensuring good topology
+
+To make sure that the map processing service has the best chance of closing loops, adhere to the following guidelines:
+
+* The more fiducials in the environment, the better.
+* If the map is multi-floor, there must be at least one fiducial per floor.
+* Try to pass as closely as possible to fiducials while recording.
+* Try to record multiple pathways through the same site to maximize the chance that the robot will be able to navigate around obstacles.
+
+# Anchorings and Anchoring Optimization
+The 3.0 SDK introduces a new concept for GraphNav maps called *anchorings*. In addition to waypoints and edges, GraphNav maps now have an attached anchoring for waypoints and fiducials.
+
+An *anchoring* is a mapping from waypoints to some global reference frame. That is, for every waypoint and fiducial, we have an `SE3Pose` describing the transform from a *seed* frame to that waypoint or fiducial.
+
+For example, to get all of the positions of the waypoints relative to the seed frame, you can do this:
+
+```python
+# For each waypoint in the graph's anchoring, prints the (x, y, z) position of that waypoint.
+def print_anchorings(graph):
+ for anchor in graph.anchoring.anchors:
+ pos = anchor.seed_tform_waypoint.position
+ print("id: {} x: {} y: {} z: {}".format(anchor.id, pos.x, pos.y, pos.z))
+```
+> Note: it is not necessary for every waypoint in the graph to have an anchoring. GraphNav will use edge transformations to interpolate between the provided anchors.
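The interpolation described in the note can be pictured with a translation-only simplification: an unanchored waypoint's seed pose is estimated by composing the edge transforms leading away from the nearest anchored waypoint. Real GraphNav edges store full `SE3Pose` transforms; this sketch uses bare 3-vectors:

```python
def extrapolate_anchor(seed_tform_anchor, edge_translations):
    """Estimate the seed-frame position of an unanchored waypoint by walking
    edge translations outward from an anchored one (translation-only sketch)."""
    x, y, z = seed_tform_anchor
    for dx, dy, dz in edge_translations:
        x, y, z = x + dx, y + dy, z + dz
    return (x, y, z)


# Anchor at (1, 2, 0); the unanchored waypoint is two 2 m edges away along +x.
print(extrapolate_anchor((1.0, 2.0, 0.0), [(2.0, 0.0, 0.0), (2.0, 0.0, 0.0)]))
```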
+
+All GraphNav maps have a *default anchoring*, which comes from the User Origin if it exists, or the starting waypoint otherwise. The default anchoring is computed by extrapolating robot odometry over the edges in the graph.
+
+Additionally, the map processing tool can be used to **optimize the anchoring** via the `ProcessAnchoringRequest` RPC. An example for how to use this RPC using its default parameters can be found in the [`recording_command_line` example](../../../python/examples/graph_nav_command_line/README.md#recording-service-command-line). A more complex example can be found in the [`graph_nav_anchoring_optimization` SDK example](../../../python/examples/graph_nav_anchoring_optimization/README.md).
+
+Anchoring optimization can be used to improve the **metric consistency** of a map's anchoring. An example is shown below:
+
+![Anchoring before optimization](./images/treehouse_ko.png)
+*A real, 1km graph_nav map displayed from above before anchoring optimization is applied. The long red and yellow lines are loop closures from topology processing. The numbers are instances of fiducials stored in individual waypoints.*
+
+![Anchoring after optimization](./images/treehouse_anchoring.png)
+*The same map after anchoring optimization has been applied. Notice that segments of the building which were disjoint before now align.*
+
+> Note: the Autowalk app automatically calls anchoring optimization after recording ends. SDK users must make sure to manually call the anchoring optimization RPC to access this feature.
+
+## What can you do with anchorings?
+
+Anchorings can be used to describe the global relationships and layout between waypoints in a GraphNav map. The pose of the robot relative to the anchoring is also available in the Localization message returned by GraphNav as `seed_tform_body`.
+
+Anchorings may also be used to display the GraphNav map relative to a blueprint, BIM model, or other pre-existing map. For details on how to achieve this, please see the [`graph_nav_anchoring_optimization` SDK example](../../../python/examples/graph_nav_anchoring_optimization/README.md).
+
+There is also an SDK example for how to use GraphNav anchorings to extract a global 3D point cloud from a map and save it as a PLY file. This is located in the [`graph_nav_extract_point_cloud` SDK example.](../../../python/examples/graph_nav_extract_point_cloud/README.md)
+
+![An extracted point cloud of a real GraphNav map](./images/extract_cloud.png) *An extracted point cloud from a real GraphNav map, viewed in a third-party tool called [CloudCompare](https://www.danielgm.net/cc/).*
+
+Starting in SDK release 3.0, anchorings can also be used to command the robot. The `NavigateToAnchor` command can be used to send the robot to an approximate `x, y, z` position relative to the seed frame. To do this, the robot will navigate over a series of waypoints and edges from its current location to the nearest waypoint to the given commanded pose, and will then walk in a straight line toward the commanded pose.
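Finding "the nearest waypoint to the given commanded pose" is a straightforward lookup over the anchoring data shown earlier. A sketch with hypothetical waypoint IDs and seed-frame positions:

```python
import math


def nearest_waypoint(anchors, goal_xyz):
    """Return the id of the anchored waypoint closest to a seed-frame position.
    `anchors` is an iterable of (id, (x, y, z)) pairs."""
    best_id, best_dist = None, math.inf
    for wp_id, position in anchors:
        dist = math.dist(position, goal_xyz)
        if dist < best_dist:
            best_id, best_dist = wp_id, dist
    return best_id


anchors = [("wp-a", (0.0, 0.0, 0.0)),
           ("wp-b", (5.0, 0.0, 0.0)),
           ("wp-c", (9.0, 1.0, 0.0))]
print(nearest_waypoint(anchors, (8.0, 0.0, 0.0)))  # wp-c
```

From that waypoint, the robot walks in a straight line toward the commanded pose, as described above.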
+
+# Map Data Transfer
+The GraphNav service has one active map instance on the robot that it shares with the GraphNav recording service. You can download the data stored in this map instance, or upload new data to the robot to replace or extend the current map instance.
+
## Downloading maps
Waypoints and edges have associated snapshots that store data the robot uses to compute localization and inform the robot’s locomotion.
diff --git a/docs/concepts/autonomy/images/directed_exploration_1.png b/docs/concepts/autonomy/images/directed_exploration_1.png
new file mode 100644
index 000000000..f9b4e3023
Binary files /dev/null and b/docs/concepts/autonomy/images/directed_exploration_1.png differ
diff --git a/docs/concepts/autonomy/images/directed_exploration_2.png b/docs/concepts/autonomy/images/directed_exploration_2.png
new file mode 100644
index 000000000..11b857852
Binary files /dev/null and b/docs/concepts/autonomy/images/directed_exploration_2.png differ
diff --git a/docs/concepts/autonomy/images/directed_exploration_3.png b/docs/concepts/autonomy/images/directed_exploration_3.png
new file mode 100644
index 000000000..37162a86d
Binary files /dev/null and b/docs/concepts/autonomy/images/directed_exploration_3.png differ
diff --git a/docs/concepts/autonomy/images/extract_cloud.png b/docs/concepts/autonomy/images/extract_cloud.png
new file mode 100644
index 000000000..f08180399
Binary files /dev/null and b/docs/concepts/autonomy/images/extract_cloud.png differ
diff --git a/docs/concepts/autonomy/images/robot_back_at_start.png b/docs/concepts/autonomy/images/robot_back_at_start.png
new file mode 100644
index 000000000..bea92c405
Binary files /dev/null and b/docs/concepts/autonomy/images/robot_back_at_start.png differ
diff --git a/docs/concepts/autonomy/images/topo_processing_example.svg b/docs/concepts/autonomy/images/topo_processing_example.svg
new file mode 100644
index 000000000..63d4b09e6
--- /dev/null
+++ b/docs/concepts/autonomy/images/topo_processing_example.svg
@@ -0,0 +1 @@
+
\ No newline at end of file
diff --git a/docs/concepts/autonomy/images/topologies.png b/docs/concepts/autonomy/images/topologies.png
new file mode 100644
index 000000000..3da1fe508
Binary files /dev/null and b/docs/concepts/autonomy/images/topologies.png differ
diff --git a/docs/concepts/autonomy/images/treehouse_anchoring.png b/docs/concepts/autonomy/images/treehouse_anchoring.png
new file mode 100644
index 000000000..8670996c4
Binary files /dev/null and b/docs/concepts/autonomy/images/treehouse_anchoring.png differ
diff --git a/docs/concepts/autonomy/images/treehouse_ko.png b/docs/concepts/autonomy/images/treehouse_ko.png
new file mode 100644
index 000000000..8613538a1
Binary files /dev/null and b/docs/concepts/autonomy/images/treehouse_ko.png differ
diff --git a/docs/concepts/autonomy/initialization.md b/docs/concepts/autonomy/initialization.md
index bacf172bd..9b7d2829b 100644
--- a/docs/concepts/autonomy/initialization.md
+++ b/docs/concepts/autonomy/initialization.md
@@ -74,7 +74,7 @@ Note that areas with intersecting walls, corners, furniture, equipment, and othe
## Initializing with search
-If fiducials aren’t available, the client program can use other methods of initialization through the `SetLocalization` RPC (available in the [`graph_nav.proto`](../../../protos/bosdyn/api/graph_nav/graph_nav.proto)). The client can provide an initial guess for the complete localization in the `initial_guess` field which describes the robot’s pose relative to a known waypoint and which waypoint to initialize to. Details on the algorithm that is run when a initial guess is provided are described in the next section.
+If fiducials are not available, the client program can use other methods of initialization through the `SetLocalization` RPC (available in the [`graph_nav.proto`](../../../protos/bosdyn/api/graph_nav/graph_nav.proto)). The client can provide an initial guess for the complete localization in the `initial_guess` field which describes the robot’s pose relative to a known waypoint and which waypoint to initialize to. Details on the algorithm that is run when an initial guess is provided are described in the next section.
If the initial guess for the localization is unknown, then GraphNav can perform a brute force search. The parameters of that search are set in the `SetLocalization` RPC via `max_distance` and `max_yaw` fields. Depending on the size of these parameters, the `SetLocalization` RPC can take a long time to complete.
diff --git a/docs/concepts/base_services.md b/docs/concepts/base_services.md
index d8113ffbf..a47559357 100644
--- a/docs/concepts/base_services.md
+++ b/docs/concepts/base_services.md
@@ -105,3 +105,10 @@ This estimate is purely at the application layer. This is important for two rea
Clients needing high precision timing, such as a payload sensor collecting data while the robot is moving, should instead use NTP to synchronize to the robot.
+## Lease
+
+The lease service provides methods for establishing ownership of the robot's control and for maintaining valid communication with that owner.
+
+A lease is required to issue commands that control the robot's mobility and motion, such as powering on the robot or commanding it to stand. To start, the lease must be acquired (or taken) to establish ownership of the robot's resources. Retain-lease signals must then be sent throughout the operation to preserve ownership and indicate reliable communication with the lease owner. Finally, the lease should be returned so the resources can be reclaimed; however, it can be revoked during operation by another user, or if the retain signals are not received within a certain period of time.
+
+The [Lease documentation](lease_service.md) provides a more in-depth description of leases, typical lease usage, and how to understand lease errors.
\ No newline at end of file
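The acquire / retain / return lifecycle can be sketched with a stand-in client. The class below is a simplified illustration of the ownership rules described above, not the real `bosdyn.client.lease` API:

```python
class FakeLeaseClient:
    """Minimal stand-in illustrating lease ownership: acquire -> retain -> return."""

    def __init__(self):
        self.owner = None
        self.retained = False

    def acquire(self, who):
        """Claim ownership of the robot's resources; fails if already owned."""
        if self.owner is not None:
            raise RuntimeError("lease already owned by " + self.owner)
        self.owner = who
        return self.owner

    def retain(self, who):
        """Retain signals prove the owner is still in reliable communication."""
        if who != self.owner:
            raise RuntimeError("only the owner may retain the lease")
        self.retained = True

    def take(self, who):
        """Another user may forcibly revoke the current ownership."""
        self.owner = who
        self.retained = False

    def return_lease(self, who):
        """Free the resources so they can be reclaimed."""
        if who == self.owner:
            self.owner = None
            self.retained = False


lease = FakeLeaseClient()
lease.acquire("operator-1")
lease.retain("operator-1")   # normally sent periodically during operation
lease.return_lease("operator-1")
print(lease.owner)           # None: resources freed for the next owner
```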
diff --git a/docs/concepts/choreography/README.md b/docs/concepts/choreography/README.md
index f9214bc36..bdfc9787a 100644
--- a/docs/concepts/choreography/README.md
+++ b/docs/concepts/choreography/README.md
@@ -20,6 +20,9 @@ The high-level [documentation](choreography_service.md) provides an overview of
## Contents
* [Choreography Service](choreography_service.md)
+* [Animations in Choreography](animations_in_choreographer.md)
* [Move Reference Guide](move_reference.md)
* [Choreographer Setup](choreographer_setup.md)
-* [Choreographer Overview](choreographer.md)
\ No newline at end of file
+* [Choreographer Overview](choreographer.md)
+* [Robot Connections in Choreographer](robot_controls_in_choreographer.md)
+* [Animation File Format](animation_file_specification.md)
\ No newline at end of file
diff --git a/docs/concepts/choreography/animation_file_specification.md b/docs/concepts/choreography/animation_file_specification.md
new file mode 100644
index 000000000..01856c165
--- /dev/null
+++ b/docs/concepts/choreography/animation_file_specification.md
@@ -0,0 +1,220 @@
+
+
+# Animation Files for Choreographer
+
+Choreographer supports animations that are parsed from a specific file format into an `Animation` protobuf message, which will be uploaded to the robot using the `UploadAnimation` RPC. Animation files are human readable/editable text files with a *.cha extension. The animation file format consists of three sections: the options section, the parameters section, and the key frames section.
+
+There are three methods to parse an animation cha file into an `Animation` protobuf message:
+1. Create an animations folder in the same directory as the Choreographer executable. All *.cha animation files within this folder will be automatically parsed and uploaded to the robot when Choreographer is opened. The directory structure will look like this:
+
+```
+dance_directory/
+ choreographer.exe
+ animations/
+ bourree_arm.cha
+ my_animation.cha
+```
+
+2. In Choreographer, click “File” -> “Load Animated Move” to upload a single animation file after the application is already opened.
+3. The Python script `animation_file_to_proto.py` (in the bosdyn-choreography-client package) will parse the text file and can be used to output a protobuf message.
+
+
+# File Specification
+
+## File Name and Extension
+The animation text file must have the *.cha extension. The filename becomes the move name, which can be referenced in choreography sequences. Choreographer will display the animated move name with underscores converted to spaces and each word capitalized.
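This display-name rule can be sketched in Python (an illustrative reimplementation of the naming convention described above, not the actual Choreographer code):

```python
def display_name(filename):
    """Sketch of how Choreographer derives a display name from a *.cha filename:
    strip the extension, replace underscores with spaces, capitalize each word."""
    stem = filename.rsplit(".", 1)[0]
    return " ".join(word.capitalize() for word in stem.split("_"))

print(display_name("bourree_arm.cha"))   # Bourree Arm
print(display_name("my_animation.cha"))  # My Animation
```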
+
+## Structure
+The animation cha file consists of three sections. Each section must be present and separated from the others by a blank line so that the animation file parser can succeed. In all three sections, values within the same line can be separated by any combination of spaces and tabs.
+
+The file sections are:
+- **Options:** pre-defined information used by both Choreographer and the robot to interpret the animation correctly.
+- **Parameters:** adjustable values to customize the animated move further. These values appear in the Parameters section of Choreographer.
+- **Body Keyframes:** the complete set of key frames, either densely or sparsely specified, for a fixed amount of time. The move duration and keyframe timestamps are either indicated by a timestamp column in this section or computed from the frequency defined in the options section. The keyframes consist of either joint angles or poses for the body parts being controlled.
+
+## Units
+All units are in:
+- Distance: meters
+- Angles: radians
+- Time: seconds. Sometimes time is measured in “slices” (¼ beat), so the duration depends on the sequence's BPM (beats per minute).
+
+## Commenting Support
+We support comments within the animation files. A comment is marked with either “#” or “//” and can take up an entire line in the file or be at the end of an existing line. Comments must be within one of the three main sections of the file, and they cannot create a new section.
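For example, both comment styles could appear around a hypothetical options line (the keyword and value here are illustrative):

```
# This comment takes up an entire line.
bpm 85  // An end-of-line comment after a value.
```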
+
+# Options File Section
+The options section defines how the animated move is controlled and interpreted by both Choreographer and the robot. The section consists of lines with a keyword at the beginning of the line, and a fixed number of values separated by spaces following the keyword. The number of values is specific to the keyword used and defined below. Some keywords, like “neutral_start”, have no values following the keyword.
+
+The options section must contain a definition for which tracks the animation controls. This is specified by the “controls” keyword, and a line structured as follows:
+```
+controls TRACK1... TRACK_N
+```
+Following the keyword "controls", the fields TRACK1 … TRACK_N are one or more of [legs, body, arm, gripper] (e.g. `controls legs body` for a move that controls the legs and body but not the arm or gripper.)
+
+## Supported Keywords for the Options Section
+Other than the “controls” keyword, all keywords in the options section are optional. The following are the supported keywords for this section and what they define about the animated move:
+
+`bpm VALUE`: Where `VALUE` is the nominal beats per minute of the animation. If the script is at a different BPM, the animation will be time-scaled so it takes the same number of beats. If not specified, the animation will play at the nominal speed regardless of script BPM, taking a variable number of beats.
+
+`extendable`: Indicates the move can be stretched in Choreographer. Doing so will cause the animation to loop.
+
+`truncatable`: Indicates that the move can be shortened in Choreographer. Doing so will cause the animation to end early.
+
+`display_rgb RED GREEN BLUE`: Where `RED`, `GREEN`, `BLUE` are integer numbers between 0 and 255. This defines the color the move block will display as within Choreographer. If not present, a default color will be generated based on the hash of the file name.
+
+`frequency HZ`: Where `HZ` is the number of frames per second in the animation body. If unspecified, the time column must be present in the body.
+
+`retime_to_integer_slices`: Rescales time slightly so that the move takes an integer number of slices. If absent, the animation will be padded or truncated slightly to take an integer number of slices.
+
+`description TEXT`: Where `TEXT` is a description of the animation to be displayed within Choreographer.
+
+`neutral_start`: Applies only to moves that control the body but do not contain leg information. Indicates the move can be assumed to start in a neutral stand. If not specified, it is instead assumed that the center of footprint is at the origin of the animation frame.
+
+`precise_steps`: Step exactly at the animated locations, even at the expense of balance.
+
+`precise_timing`: Step exactly at the animated times, even at the expense of balance.
+
+`track_swing_trajectories`: Track animated swing trajectories. Otherwise, takes standard swings between animated liftoff and touchdown locations.
+
+`arm_playback OPTION`: Where `OPTION` is one of the following. If `OPTION` is not included, the default behavior is for arm animations specified as joint angles to play back in joint space and for animations specified as hand poses to play back in workspace.
+
+- `jointspace`: Arm animation playback is with respect to the body.
+- `workspace`: Arm animation playback is with respect to the current footprint.
+- `workspace_dance_frame`: Arm animation playback is with respect to a dance frame in workspace. Parameter arm_dance_frame_id can be used to specify which dance frame if multiple exist.
+
+`requires_arm`: If present, the animation cannot be loaded on a robot without an arm.
+
+`no_looping`: If the animation completes before the move's duration, freeze rather than looping. Without this option, the animation might loop because either the move duration was extended or because the `speed` parameter was set to a value greater than 1.
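Putting several of these keywords together, a hypothetical options section might look like the following (the values and description are illustrative):

```
controls arm gripper
bpm 120
truncatable
display_rgb 0 120 255
description An example animation controlling the arm and gripper
```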
+
+# Parameters File Section
+The parameters section defines adjustable values for the animated move. Each parameter specified will appear in the parameters panel in Choreographer when the animation is selected to be edited or added. The parameter names coincide with the fields of the `AnimateParams` protobuf message (in choreography_params.proto). Some animation parameters only apply to moves which control specific tracks (arm, gripper, body, legs). Each parameter is described on a single line and can be specified in two different ways:
+
+1. The parameter name, followed by the minimum, default, and maximum value (in that order).
+```
+PARAMETER_NAME MINIMUM_VALUE DEFAULT_VALUE MAXIMUM_VALUE
+```
+
+2. Only the parameter name; the limits and default will be configured from predefined values within Choreographer.
+```
+PARAMETER_NAME
+```
+Note: If no parameters are needed, use the keywords “no parameters” as the parameters section so that the file will still be parsed correctly.
+
+## Supported parameters pertaining to all tracks
+
+`speed`: Play the animation at this time multiplier.
+
+`offset_slices`: Start the move with the script at this slice.
+
+## Supported parameters when controlling the body track
+
+`body_entry_slices`: How many slices to spend transitioning smoothly from the previous pose to the animated pose trajectory for body motion.
+
+`body_exit_slices`: How many slices to spend transitioning from the animated trajectory back to the nominal pose. If set to 0, will not transition back.
+
+`translation_multiplier.x`: Multiply the body motion in the x direction.
+
+`translation_multiplier.y`: Multiply the body motion in the y direction.
+
+`translation_multiplier.z`: Multiply the body motion in the z direction.
+
+`rotation_multiplier.roll`: Multiply the body motion in the roll direction.
+
+`rotation_multiplier.pitch`: Multiply the body motion in the pitch direction.
+
+`rotation_multiplier.yaw`: Multiply the body motion in the yaw direction.
+
+`body_tracking_stiffness`: How hard to try to track the animated body motion. Only applicable to animations that control both the body and the legs. On a scale of 1 to 10 (11 for a bit extra). Higher will result in more closely tracking the animated body motion, but possibly at the expense of balance for more difficult animations.
+
+## Supported parameters when controlling the arm track
+
+`arm_entry_slices`: How many slices to spend transitioning smoothly from the previous pose to the animated pose trajectory for arm and gripper motion.
+
+`shoulder_0_offset`: Offset to add to the SH0 angle in all animation keyframes.
+
+`shoulder_1_offset`: Offset to add to the SH1 angle in all animation keyframes.
+
+`elbow_0_offset`: Offset to add to the EL0 angle in all animation keyframes.
+
+`elbow_1_offset`: Offset to add to the EL1 angle in all animation keyframes.
+
+`wrist_0_offset`: Offset to add to the WR0 angle in all animation keyframes.
+
+`wrist_1_offset`: Offset to add to the WR1 angle in all animation keyframes.
+
+`arm_required`: Prevents a robot without an arm from loading the animation.
+
+`arm_prohibited`: Prevents a robot with an arm from loading the animation.
+
+## Supported parameters when controlling the gripper track
+
+`gripper_offset`: Offset to add to the gripper angle in all animation keyframes.
+
+`gripper_multiplier`: Multiply all gripper angles by this value.
+
+`gripper_strength_fraction`: How hard the gripper can squeeze. Fraction of full strength.
+
+## Supported parameters when controlling either (or both) the arm and gripper tracks
+
+`arm_dance_frame_id`: Dance frame to reference for workspace arm moves. Only valid in combination with the option `arm_playback workspace_dance_frame` (specified in the options section of the file).
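Putting this together, a hypothetical parameters section might look like the following (the names come from the lists above; the numeric limits are illustrative minimum, default, and maximum values):

```
speed 0.5 1.0 2.0
gripper_offset -0.2 0.0 0.2
body_entry_slices
```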
+
+# Body Keyframe File Section
+
+The body keyframe section defines the actual animated move through keyframes describing either the pose or joint angles at each keyframe timestamp. Each keyframe is a single line consisting of a number of fields determined by the number of columns specified. The first line specifies what value is in each column and how many columns there will be; it is written as a series of space-separated keywords, where each keyword has a fixed number of columns that the parser expects.
+
+Columns defined by keywords can either be an individual definition, or a group definition describing a fixed number of values. For example, “body_pos” describes a group of three columns, and “body_x body_y body_z” describes those same three columns using the individual column keywords.
+
+The same column value cannot be specified multiple times. Column values are only defined in the first line of the body keyframe section. Some columns are multiple ways of specifying the same body control (for example, “leg_joints” and “foot_pos”) and will be mutually exclusive when defining an animated move.
+
+Column definitions are required for each track that is being controlled (as specified in the options section). If additional columns controlling unspecified tracks are included, they will be ignored when the animated move is executed.
+
+## Supported Columns Not Pertaining to a Particular Track
+
+`time`: The timestamp of each frame. The start of the animation is 0. This column and the “frequency” option are mutually exclusive, but one is required.
+
+## Supported Columns Pertaining to Gripper Track
+
+`gripper`: Gripper joint angle. This column is required when the gripper track is controlled.
+
+## Supported Columns Pertaining to Arm Track
+
+`arm_joints`: Grouping of 6 columns [shoulder0 shoulder1 elbow0 elbow1 wrist0 wrist1]. Represents the arm joint angles. Any columns not included will be held constant at the previous joint angle. Mutually exclusive with hand_pos and hand orientation specifications.
+
+`hand_pos`: Grouping of [hand_x hand_y hand_z]. Hand position in the animation frame.
+
+`hand_quat_wxyz`: Grouping of [hand_quat_w hand_quat_x hand_quat_y hand_quat_z]. Mutually exclusive with other orientation specifications.
+
+`hand_quat_xyzw`: Grouping of [hand_quat_x hand_quat_y hand_quat_z hand_quat_w]. Mutually exclusive with other orientation specifications.
+
+`hand_euler_rpy`: Grouping of [hand_roll hand_pitch hand_yaw]. Mutually exclusive with other orientation specifications.
+
+## Supported Columns Pertaining to Body Track
+
+`body_pos`: Grouping of [body_x body_y body_z]. Body position in the animation frame. Mutually exclusive with com_pos.
+
+`com_pos`: Grouping of [com_x com_y com_z]. Center of Mass position in the animation frame. Mutually exclusive with body_pos.
+
+`body_quat_wxyz`: Grouping of [body_quat_w body_quat_x body_quat_y body_quat_z]. Mutually exclusive with other orientation specifications.
+
+`body_quat_xyzw`: Grouping of [body_quat_x body_quat_y body_quat_z body_quat_w]. Mutually exclusive with other orientation specifications.
+
+`body_euler_rpy`: Grouping of [body_roll body_pitch body_yaw]. Mutually exclusive with other orientation specifications.
+
+At least one dimension of the body must be specified.
+
+## Supported Columns Pertaining to Legs Track
+
+`leg_joints`: Grouping of [fl_hx fl_hy fl_kn fr_hx fr_hy fr_kn hl_hx hl_hy hl_kn hr_hx hr_hy hr_kn]. Can also be grouped by leg as [fl_angles fr_angles hl_angles hr_angles]. Mutually exclusive (by leg) with foot_pos.
+
+`foot_pos`: Grouping of [fl_x fl_y fl_z fr_x fr_y fr_z hl_x hl_y hl_z hr_x hr_y hr_z]. Can also be grouped by leg as [fl_pos fr_pos hl_pos hr_pos]. Mutually exclusive (by leg) with leg_joints.
+
+`contact`: Grouping of [fl_contact fr_contact hl_contact hr_contact]. 1 if in stance. 0 if in swing. If absent, contact will be inferred from either leg_joints or foot_pos.
+
+Either leg_joints or foot_pos is required for each leg.
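As an example, a hypothetical body keyframe section for an animation controlling only the gripper track might look like the following (the values are illustrative; the first line names the columns and each subsequent line is one keyframe):

```
time  gripper
0.0   0.0
0.5   -0.5
1.0   0.0
```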
diff --git a/docs/concepts/choreography/animations_in_choreographer.md b/docs/concepts/choreography/animations_in_choreographer.md
new file mode 100644
index 000000000..4ade4e0ec
--- /dev/null
+++ b/docs/concepts/choreography/animations_in_choreographer.md
@@ -0,0 +1,48 @@
+
+
+
+# Animations in Choreography
+
+The Choreography service is a framework for producing scripted motion through a list of customizable, predetermined moves. The dances can be customized through the track layering system, the parameters associated with each move, and the adjustable beats per minute of the dances. While these knobs provide a large amount of flexibility when authoring choreographies, there are scenarios where the desired output can't fully be expressed through the existing moves. To support these, we have developed an animation pipeline and API in the 2.4 Spot software release.
+
+The animation pipeline allows you to create wholly custom sequences using 3D animation tools and integrate them in Choreographer scripts just like the predefined, default moves. For the intro sequence of the [“Spot’s On It” video](https://www.youtube.com/watch?v=7atZfX85nd4), we used Autodesk Maya to produce the kaleidoscoping dance moves. Autodesk Maya is a 3D animation software that gives fine-grain control for authoring and editing kinematics trajectories, but a range of tool sets can be used with this API.
+
+
+
+The base representation for the animation is a human-readable text file, so while an animated dance move can be created through 3D animation tools like Autodesk Maya, it can also be hand-written and edited. The primary component of the animated move is a set of timestamped key frames which specify the motion of the different tracks (arms, gripper, legs, or body). The timestamped keyframes that specify the animation can be densely spaced, as they would be from 3D animation software, or they can be very sparse and the robot will interpolate between each key frame. Additionally, there are components of this text file which enable different animation-specific options and also specify the different parameters associated with this move. A complete overview of the animation file format can be found in the ["Animation File Specification" document](animation_file_specification.md).
+
+The animated move text files are parsed into `Animation` protobuf messages and uploaded to the robot. Once the animation is uploaded to the robot, it can be referenced by name within choreographies. This parsing step (from text file to protobuf message) happens automatically in Choreographer, both for animations in the "animations" directory matching the directory structure shown below and for animations uploaded through the "File"->"Load Animated Move" menu.
+
+```
+dance_directory/
+ choreographer.exe
+ animations/
+ bourree_arm.cha
+ my_animation.cha
+```
+
+There is a Python script in the `bosdyn-choreography-client` package, called `animation_file_to_proto.py`, which provides client access to the same functions used by Choreographer to parse a text file into a protobuf animation.
+
+When Choreographer is opened, it automatically begins uploading all animations to every connected robot. Once an animation is uploaded, it persists on the robot until the robot is rebooted. A dialog will appear indicating the status of all animations being uploaded; the image below shows an example of this dialog. If an animation fails to upload, check the terminal where the Choreographer executable is running. An error message describing the invalid part of the animation should be present on the terminal.
+
+![Animation Upload Dialog](images/animation_upload_dialog.png)
+
+## Choreography Logs for Animations
+
+To aid in creating animated dance moves without 3D animation software, we have added choreography logs. These logs can be recorded through the choreography log service while driving the robot around with the tablet, moving the robot's arm around manually (while it is powered off), or running an existing choreography. The choreography log contains high-rate timestamped key frames with the robot's joint state, foot contacts, and body pose relative to the animation frame (defined by the robot's foot state when the choreography log started). These key frames can be used directly for the animation file's key frames.
+
+The choreography logs are divided into two types: auto and manual. The log type determines the log's duration and when it is recorded. The "auto" log is recorded whenever a choreography is being executed: the robot starts recording automatically when a choreography begins and stops 3 seconds after the choreography completes. The "manual" log can be recorded at any time; its duration runs from when the robot receives a `StartRecordingState` RPC to when it receives a `StopRecordingState` RPC, with a maximum of 5 minutes of recording. The robot only keeps the most recent "auto" log and the most recent "manual" log, so logs must be downloaded immediately to ensure data is not lost.
+
+In Choreographer, there are buttons to start/stop recording the "manual" log, as well as buttons to download each log type (shown below). The log can be downloaded as a [pickle file](https://docs.python.org/3/library/pickle.html), which tightly packages the data, stored as a Python dictionary, into a serialized object. This is the default format and is used if no file extension is specified in the file name when saving the log. Alternatively, the log can be downloaded as a text file, which saves the `ChoreographyStateLog` protobuf message in protobuf text format; to save in this format, explicitly include ".txt" when typing the log name.
+
+![Choreographer Log Buttons](images/log_buttons.png)
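Reading a downloaded pickle-format log back into Python only needs the standard library. The sketch below is illustrative: the filename and the dictionary keys are stand-ins, not the actual log schema.

```python
import pickle

def load_choreography_log(path):
    """Load a pickled choreography log file back into a Python dictionary."""
    with open(path, "rb") as f:
        return pickle.load(f)

# Demonstration with a stand-in dictionary written the same way a
# downloaded log would be stored (the keys here are hypothetical).
example_log = {"key_frames": [{"time": 0.0, "gripper": 0.0}]}
with open("manual_log.pkl", "wb") as f:
    pickle.dump(example_log, f)

log = load_choreography_log("manual_log.pkl")
print(len(log["key_frames"]))  # 1
```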
+
+The API for the choreography logs is described in the [Choreography Service document](choreography_service.md) and can be accessed via the API in the choreography client.
\ No newline at end of file
diff --git a/docs/concepts/choreography/choreographer.md b/docs/concepts/choreography/choreographer.md
index 393510031..199c127c6 100644
--- a/docs/concepts/choreography/choreographer.md
+++ b/docs/concepts/choreography/choreographer.md
@@ -12,31 +12,51 @@ Choreographer is a tool for authoring dances and executing them on robots. It le
## Running Choreographer
-Choreographer is an executable program we will provide you via a download link at the [Support Center](https://support.bostondynamics.com/). To run the program, simply download it at the link we will provide you, and execute it. Note that the Choreographer executable is only available for Windows. If you wish to run Choreographer from Mac or Linux, we will need to provide you with Python wheels and installation instructions (advanced usage only). Please contact us if that is the case.
+Choreographer is an executable program we will provide you via a download link at the [Support Center](https://support.bostondynamics.com/). To run the program, simply download it at the link we will provide you, and execute it. Note that the Choreographer executable is only available for Windows and Linux.
If you wish to run Choreographer connected to a Spot, please see the “Connecting Robots to Choreographer” section.
## Choreography Safety
-When testing your choreography sequence on a robot, always keep in mind basic safety procedures . Make sure there is plenty of space around your Spot, keep all Spots at least two meters apart from each other, and be sure that neither you nor anyone else approach the dancing Spot. Never approach your Spot unless its motors have been powered off.
+When testing your choreography sequence on a robot, always keep in mind basic safety procedures. Make sure there is plenty of space around your Spot, keep all Spots at least two meters apart from each other, and be sure that neither you nor anyone else approach the dancing Spot. Never approach your Spot unless its motors have been powered off.
+
+### Beginner vs Advanced Mode
+
+One of the goals for Choreographer is to provide a tool that gives the user as much freedom as they could possibly want. As such, you will be able to find combinations of moves, parameters, and BPM that Spot cannot reliably perform under all conditions.
+
+Beginner mode provides a more controlled experience that will be more likely to yield reliable results. This mode has smaller parameter ranges which allows for less energetic but generally more reliable dances. Additionally, it has some of the more dynamic dance moves removed from the Moves List and does not provide any support for animated dance moves.
+
+By default, the 2.4 Choreographer executable will load in Beginner mode. Beginner mode can also be forced by passing the `--restricted` argument when starting Choreographer from the command line. Additionally, if you are in Advanced mode, you can go to the Settings menu and select load in Beginner mode; this will take effect the next time you open Choreographer.
+
+You can switch to Advanced mode by selecting the load in Advanced mode checkbox in Choreographer's welcome menu, or by navigating to "Settings" and selecting load in Advanced mode (which will take effect the next time you open Choreographer). Note that if you create a dance in Advanced mode, Choreographer may not be able to load it while in Beginner mode if the dance’s parameters are outside the reduced range.
+
+It is highly recommended that Choreographer users start in Beginner mode until they are comfortable using the robot and creating/executing choreographies!
## Interface Overview
-![Interface Guide](images/image5.png)
+
-The Choreographer interface consists of seven main areas. They are:
+The Choreographer interface consists of the following important key sections/buttons:
-1) **Moves List** - Here you can find all of our predefined moves, sorted by general category.
+1) **Moves List** - Here you can find all of our predefined moves (both stock moves and animation moves), sorted by general category, such as "Body" or "Transition".
1) **Dance Timeline** - This is the main area of the Choreographer, and it shows a representation of your dance over time. Each move is a different block, which can be clicked to edit the parameters, dragged around, copy-pasted, or potentially stretched or shrunk if the move's parameters allow it.
-1) **Robot Controls** - The robot controls are a row of buttons you can use to send commands to any robots connected to Choreographer, including starting and stopping your dance, and powering on or off your robot’s motors. (Note, this row is disabled when you are not connected to any robots, as shown above.)
+1) **Dance Tabs** - Multiple choreography sequences can be opened at once, and will appear as different tabs above the timeline.
1) **Move Name** - When you select a move in the Moves List, its name and description will appear here.
1) **Robot Preview** - This section gives you a preview of the robot’s body and arm during your selected move. Note that this section only appears for select moves that directly position the body of the robot or moves which control the arm.
-1) **Move Parameters** - When you select a move that is customizable, its different, adjustable parameters will appear here. You can modify them to adjust how the robot will act during this move. Be sure to test to make sure the robot can handle your parameters! Sometimes more extreme parameters can be too much for the robot during high or low BPM songs, so if a combination of parameters don’t work, adjust them until they do for your situation! Near each parameter's name, a blue question mark may appear which will provide a description of the specific parameter.
+1) **Move Parameters** - When you select a move that is customizable, its different, adjustable parameters will appear here. You can modify them to adjust how the robot will act during this move. Be sure to test and make sure the robot can handle your parameters! Sometimes more extreme parameters can be too much for the robot during high or low BPM songs, so if a combination of parameters doesn’t work, adjust them until they do for your situation! Near each parameter's name, a blue question mark may appear which will provide a description of the specific parameter if you hover the mouse over it.
+1) **Robot Controls** - The robot controls are a row of buttons you can use to send commands to any robots connected to Choreographer, including starting and stopping your dance, and powering on or off your robot’s motors. (Note, this row is disabled when you are not connected to any robots, as shown above.)
1) **Music Controls** - This row of controls lets you load and play a song to play during your robot’s dance, manually adjust the BPM of your robot’s moves to match that of your song or the volume of the music, and stop the music and the robot's dancing if one is connected.
+1) **Move Configuration/Robot Management Tabs** - Tabs to toggle between the move configuration tab (displays move name and parameters), and the robot management tab (displays all active robot connections and health statistics).
+1) **Mode Indicator** - Choreographer has "Beginner" and "Advanced" modes. The title provides an indicator to help remember which mode the application is loaded in. There is an option in the "Settings" menu to switch modes when the application is next re-opened.
+1) **Add/Disconnect Buttons** - Buttons which enable dynamically changing which robots are connected and controlled by Choreographer.
+1) **Robot Connections** - Each row indicates the robot hostname and other information regarding the robot currently connected to Choreographer. The checkbox indicates whether or not the robot is being controlled (e.g. when unchecked, pressing robot control buttons like "sit" will do nothing to the unselected robot).
+1) **Dance Selector** - Drop-down menu to choose which of the open choreographies the robot will execute when the robot is both checked (in the "Selected" column) and the start choreography button is clicked.
+1) **Health Stats** - Columns which show the power state (on/off), battery state of charge, and any faults for each robot.
+
## Choreography File Basics
-Spot choreography files consist mainly of a sequence of predefined moves that can be arranged in the timeline of the choreographer (Interface #2). Each move can contain any combination of Gripper, Arm, Body, Legs, or multiple body parts that it affects, and will appear on the appropriate track(s) within the timeline. You can mix and match so that the legs can do a move such as Step while the body does a move such as Rotate Body. However, moves may not overlap on any tracks.
+Spot choreography sequence files consist mainly of a sequence of predefined moves that can be arranged in the timeline of Choreographer. Each move can affect any combination of the Gripper, Arm, Body, Legs, or Lights tracks, and will appear on the appropriate track(s) within the timeline. You can mix and match so that the legs can do a move such as Step while the body does a move such as Rotate Body. However, moves may not overlap on any tracks.
All choreography files are assumed to be 4/4 signatures in the Choreographer UI. The timeline is broken up into quarter notes (thick vertical lines), each of which is broken up by four lighter vertical lines. Each of those 16th-note intervals is known as a Slice. All moves must be a whole number of slices, and each move must begin and end at a slice boundary. How many slices a move takes is dependent on the BPM (Beats Per Minute) of your song. See Loading Music for more information on how to change your BPM.
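The slice arithmetic above can be sketched as follows (the helper names are illustrative; the only assumption is the stated one that a slice is a 16th note, i.e. a quarter of a beat):

```python
def slice_duration_seconds(bpm):
    """Duration of one slice (a 16th note, a quarter of a beat) at the given BPM."""
    beat_duration = 60.0 / bpm   # seconds per quarter-note beat
    return beat_duration / 4.0   # four slices per beat

def move_duration_seconds(num_slices, bpm):
    """Total duration of a move occupying num_slices slices."""
    return num_slices * slice_duration_seconds(bpm)

print(slice_duration_seconds(120))     # 0.125 seconds per slice at 120 BPM
print(move_duration_seconds(12, 120))  # 1.5 seconds for a 12-slice move
```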
@@ -44,37 +64,44 @@ All choreography files are assumed to be 4/4 signatures in the Choreographer UI.
For example, this "Running Man" move has been extended from the default number of slices, and will now control the legs track for the first 12 slices of this script:
-![Slice Diagram](images/image4.png)
+![Slice Diagram](images/running_man.png)
## Adding Moves
-To add a move to the timeline, there are multiple different methods. You can single click the move in the Moves List, which will open the parameters of the move, but not add it to the timeline. As well, the up and down arrow keys will navigate between different moves in the Moves List once one is selected. While you have a move selected in the Moves List, you can adjust its parameters in the Move Parameters section.
-
-Once the parameters are adjusted to the desired values, the move can be added to the dance timeline by either 1) pressing the Add button beneath the moves list, 2) double clicking the move name in the Moves List, or 3) entering Insert Mode (described below). If you adjust a move’s parameters and then add it to the timeline, the new Move block that appears in the timeline will have those same modified parameters.
+There are multiple methods to add moves to the timeline. You can single-click a move in the Moves List, which will open the name, description, and parameters of the move in the Move Configuration Tab, but will not add it to the timeline. Once a move is selected, the up and down arrow keys will navigate between different moves in the Moves List. While you have a move selected in the Moves List, you can adjust its parameters in the Move Parameters section.
-The icon , in the upper left of the Timeline view, will enter Insert Mode when pressed. When in Insert Mode, you can click anywhere in the Timeline to add the selected move (with any of the parameter modifications you have made) to the Timeline at that point. To then exit Insert Mode and re-enter the default Choreographer mode, press the button or hit the escape key on the keyboard.
+Once the parameters are adjusted to the desired values, the move can be added to the dance timeline by any of the following methods:
+1. Pressing the Add button beneath the moves list, which appends the move to the end of the timeline with any parameter changes.
+2. Double clicking the move name in the Moves List, which appends the move to the end of the timeline with any parameter changes.
+3. Click the toggle under the Moves List to go from "Append" to "Insert" mode. In "insert" mode, hovering the mouse over the timeline will show a "ghost" move block, which when clicked will be added to the timeline with any parameter changes. To exit "insert" mode, hit the escape key on the keyboard or press the toggle again to return to "append" mode.
+![Insert/Append Mode Toggle](images/adding_modes.png)
+4. Drag a move from the Moves List into the timeline; the move is inserted with any parameter changes. As in insert mode, a ghost move block will appear, and the mouse can be moved to choose the location to drop the move into the timeline.
## Modifying Move Blocks
-Once a move is added to your Timeline, it can be dragged left and right to the appropriate time. Some, but not all, of the moves can be resized by clicking and dragging on the edge of the move’s block. The move will automatically enforce any requirements it has about minimum or maximum duration. Note, to help you with longer moves, the Timeline can be zoomed in/out using the Zoom bar above it.
+Once a move is added to your Timeline, it can be dragged left and right to the appropriate time. Some, but not all, of the moves can be resized by clicking and dragging on the edge of the move’s block; hover the mouse over the edge of the move, and if it can be resized the cursor will change to an arrow. The move will automatically enforce any requirements it has about minimum or maximum duration. Note, to help you with longer moves, the Timeline can be zoomed in/out using the Zoom bar below it.
## Modifying Move Parameters
To modify a move’s parameters, simply click it on the timeline to select it, and modify the parameters that appear in the Move Parameters section. Each move has different parameters, and some may not have any parameters at all. Please see the [Moves Reference Guide](move_reference.md) for descriptions of what each parameter does for each move type. Each numerical parameter can be modified by editing its text field, adjusting its slider, or pressing the Up or Down arrow buttons. Boolean parameters can be changed by checking or unchecking the box. Enum parameters are changed by choosing new values in the drop down menu.
+A move's parameters can be modified before it is added to the timeline as well. Once the move is selected in the Moves List, the parameters can be edited and when the move is ultimately added to the timeline, it will contain these parameter modifications.
+
## Robot Preview
The Robot Preview pane will appear for certain applicable moves. You can adjust the camera position and angle of that pane in order to get a better view of your move. Use the scroll wheel to zoom in and out, left click and drag to pan the camera, and right click and drag to rotate it around the preview robot.
## Selecting Multiple Moves
-To select multiple moves, click on empty space in the Timeline view, and then drag over the moves that should be selected. To unselect all the moves, click in empty space on the Time view. Note that you cannot edit the parameters of multiple moves at once, but you can drag them around the timeline or copy and paste them all.
+To select multiple moves, click on empty space in the Timeline view, and then drag over the moves that should be selected. To unselect all the moves, click in empty space on the Time view. Note that you cannot edit the parameters of multiple moves at once, but you can drag them around the timeline, delete them all, or copy and paste them all.
## Copy / Pasting / Deleting Moves
-When you have any number of moves selected, you can Copy (Ctrl+C or Edit->Copy) and Paste (Ctrl+V or Edit->Paste) them as you wish. When you paste moves into your choreography sequence, the new moves will attempt to appear as close as they can to the original move’s location, moving right on the timeline until they can find a place they fit. You can also right click them and choose “Clone” to instantly create a copy of your selected moves, which will also be inserted as close as they can fit into your choreography sequence.
+When you have any number of moves selected, you can Copy (Ctrl+C or Edit->Copy) and Paste (Ctrl+V or Edit->Paste) them as you wish. When you paste moves into a choreography sequence, the new moves will attempt to appear as close as they can to the original move’s location, moving right on the timeline until they can find a place they fit. Moves can be copied and pasted between different open dance tabs.
+
+You can also right click a single move or move grouping and choose “Clone” to instantly create a copy of your selected moves in the currently opened dance, which will also be inserted as close as they can fit into your choreography sequence.
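The "move right until it fits" paste placement can be sketched as a simple first-fit search. This is a hypothetical illustration of the described behavior, not Choreographer's actual implementation; moves here are plain `(start_slice, duration)` pairs on a single track.

```python
# Hypothetical sketch of the paste behavior described above: starting at
# the desired slice, slide the block right until it no longer overlaps
# any existing move on the same track.

def first_fit(start, duration, existing):
    """Return the smallest slice >= start where a block of `duration`
    slices does not overlap any (start, duration) block in `existing`."""
    candidate = start
    for block_start, block_dur in sorted(existing):
        # Overlap: candidate begins before this block ends and
        # ends after this block begins.
        if candidate < block_start + block_dur and candidate + duration > block_start:
            candidate = block_start + block_dur  # slide right past it
    return candidate

timeline = [(0, 8), (8, 4)]          # moves occupying slices 0-11
print(first_fit(4, 4, timeline))     # -> 12 (first free slot to the right)
print(first_fit(20, 4, timeline))    # -> 20 (already free, stays put)
```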
-To delete moves, simply select them and either press Delete or Backspace, or choose Edit->Delete in the menus.
+To delete moves, simply select them and press the Delete or Backspace keys, choose Edit->Delete in the top-left menus, or right click and select "Delete".
## Loading Music
@@ -84,21 +111,25 @@ Once a dance is loaded, you can preview it by hitting the “Play Music” butto
After loading a dance, you must manually set the BPM (Beats Per Minute) of your dance to that of the song. There are many online tools to help you calculate the BPM of any song, but we also provide a metronome to help you with that process if you choose to do it manually.
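The relationship between BPM and move timing can be illustrated with a small sketch. It assumes the common Spot convention that one slice is a quarter beat (so slices per minute = 4 × BPM); the actual slices-per-minute value is set per sequence, so treat this as an approximation.

```python
# Sketch: converting between BPM, slices, and seconds, assuming the
# convention that one slice is a quarter beat. Verify the actual
# slices-per-minute value configured for your sequence.

def slices_per_minute(bpm):
    """Slices per minute for a song at the given BPM."""
    return 4.0 * bpm

def move_duration_seconds(num_slices, bpm):
    """Wall-clock duration of a move spanning num_slices slices."""
    return num_slices * 60.0 / slices_per_minute(bpm)

# A 16-slice move (4 beats) at 120 BPM lasts 2 seconds.
print(move_duration_seconds(16, 120))  # -> 2.0
```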
-## Red+Green Sliders
+## Red Slider
-The red slider allows the user to start the dance at a different location than the beginning; the dance will start at the move associated with the closest slice to the slider's location. The green slider allows the user to adjust when the music starts; the music will begin playing when the dance reaches the slice closest to the slider's location. The picture below circles the two sliders; the lines drawn at the center of the sliders show exactly where in the timeline the slider's location is. They can be moved by clicking on the colored boxes and dragging them to the desired location.
+The red slider allows the user to start the dance at a different location than the beginning; the dance will start at the move associated with the closest slice to the slider's location, and the music will start at the timestamp within the song associated with the slider's location. The line drawn at the center of the slider shows exactly where in the timeline the slider is located, as shown in the image below. The slider can be moved by clicking on the colored box and dragging it to the desired location.
-![Sliders](images/image6.png)
+![Slider](images/red_slider.png)
## Previewing Moves
If you have a robot connected to the Choreographer, you can preview moves before adding them to the Timeline. Simply select a move from the Moves List, modify its parameters however you want, and press the Preview Move button. This will cancel all current dances and actions on the robot, and it will perform the one move you have selected. This is a great way to test out parameter modifications before adding it to your move sequence.
+![Preview Move Button](images/preview_move_button.png)
+
## Performing Choreography Sequences
-The “Start Choreography” button in the Robot Controls bar will upload the choreography from the sequence viewer to the robot, and then send a command to execute the routine to the robot while beginning the music at the same starting time specified in the choreography sent to the robot. There will be at least a three second delay (programmed into the button) to ensure that the music and routine can begin at the same time on the robot. Note, if the robot is not started in the proper position (sprawl, sit, stand) and has to automatically make transitions to be ready for the first move in the routine, then the timing of the music starting and the choreography sequence starting will likely be incorrect.
+The “Start Choreography” button in the Robot Controls bar will upload the choreography to the robot (whichever is selected in the drop down for the robot in the Robot Management Tab; by default, this is the currently open choreography tab), and then send a command to execute the routine while beginning the music at the starting time specified in the choreography sent to the robot. There will be at least a three second delay (programmed into the button) to ensure that the music and routine can begin at the same time on the robot. To adjust this delay, the command line argument `--delay DELAY_IN_SECONDS` can be passed when starting the Choreographer application from the command line.
-To stop the choreography routine or the music playing, the “Stop” button will return the robot to a standing position and stop the music. In an emergency, use E-Stop or Power Off instead.
+Note, if the robot is not started in the proper position (sprawl, sit, stand) and has to automatically make transitions to be ready for the first move in the routine, then the timing of the music starting and the choreography sequence starting will likely be incorrect.
+
+To stop the choreography routine (or just stop the music from playing if no robot is connected/executing a choreography), the “Stop” button will freeze the robot with all four feet on the ground and stop the music. In an emergency, use the "E-Stop" or "Power Off" buttons instead.
## Saving and Loading Choreography Files
@@ -106,73 +137,19 @@ You can save and load Choreographer routines that you create. To save your curre
Additionally, you can append an existing choreography sequence at the end of your current dance by going to File->Append Choreography (Ctrl+E), which will automatically add all of the move blocks from that file to the end of your current routine. This is particularly useful if you want to construct a choreography sequence from smaller premade sequences.
-## Connecting Robots to Choreographer
-
-Robots cannot currently be connected to or disconnected from Choreographer while it is running. In order to connect your robot to Choreographer, you must start Choreographer from the command line and pass in the arguments `--hostname {IP/Hostname of your Spot} --user {Username you use to log in to your Spot} --password {Password for your Spot}`. If you wish to connect to multiple Spots at once, simply add more copies of those command line arguments, one set for each Spot. Note that all Spots will do the exact routine, and the Choreographer program does not yet support individual routines for each Spot. In order to accomplish that, you can save your individual routine files and write a custom script to execute them both at the same time on each robot.
-
-## Robot Controls
-
-The Robot Controls bar is disabled if there are no robots connected to the Choreographer program. If a robot is connected, the buttons will have the following effects:
-
-
-Button|Function
-----|----
-Power Off| Powers off the Spot’s motors. Always press this before approaching your Spot.
-Power On | Powers on your Spot’s motors. You must activate this before your Spot can stand or start choreography.
-E-Stop | Enables or disables your Spot’s E-Stop. In an emergency, use this to stop the Spot immediately.
-Self-Right | If your Spot has fallen, this will attempt to right it into a sitting position
-Sit | Sits the Spot in-place. Cancels all current choreography and music, but E-Stop or Power Off should be used in an emergency.
-Stand | Brings Spot to a stand. Cancels all current choreography and music, but E-Stop or Power Off should be used in an emergency.
-Enable Joystick | Activated Joystick Controls (see Joystick Controls section)
-Enable WASD Driving | Activates "WASD" keyboard driving (see WASD Controls section)
-Start Choreography | Sends your choreography sequence to your Spot, then starts a 3 second countdown before the robot begins dancing. Any loaded music will automatically start as soon as Spot begins to dance.
-
-## Joystick Controls
-
-![](images/image1.png)
-
-A X-Box gamepad controller can be used with the GUI for convenience of moving and positioning the robot. The button layout is set up for X-Box 360 controllers, which are readily available, and can be connected to a computer through a USB port. Many of the buttons in the GUI are linked to gamepad buttons, and the gamepad button will behave the same as the corresponding GUI button. As shown in the diagram above they are:
-
-Button | Function
-----|-----
-A | Stop
-B | Enable Joystick
-X | Start Choreography
-Y | Stand
-Left-Bumper | Self Right
-Right-Bumper | Sit
-Start | Power On
-Back | Power Off
-
-When the joystick is enabled (either through hitting the "B" button on the gamepad or through the GUI button), the robot will walk, and can be driven by the joysticks. As shown in the diagram above, the left joystick controls translation, and the right joystick controls yaw. When the robot is controlled through any of the other Robot Controls buttons, when WASD mode is enabled, or when a dance routine has been started, joystick driving will be disabled however other buttons will still work.
-
-While using Choreographer, the joystick controller mapping diagram can be accessed as a reminder using the menus Help->Joystick Controller Mapping.
## Keyboard Controls
-Similar to the joystick control, we provide the ability to drive the robot using the WASD keys on the keyboard. When enabled (either through hitting the GUI button or by pressing "v" on the keyboard), the robot will walk and can be driven using the WASD keys. Joystick mode will be disabled while driving in WASD mode. The hotkeys are setup to mimic the joystick button key presses when applicable. When the robot is controlled through any of the other Robot Controls buttons, when joystick mode is enabled, or when a dance routine has been started, the WASD driving will be disabled, however other keypresses will still be available.
+Choreographer has specific hotkey mappings available to make common actions used when editing choreographies more easily accessible. While using Choreographer, a table of available keystrokes can be accessed as a reminder using the menus Help->Hotkeys Documentation.
Key | Function
----|-----
-v | Enable WASD mode
-b | Enable Joystick mode
-k | Power On
-l | Power Off
-y | Stand
-x | Start Choreography
-[ | Sit
-] | Self-right
-w | Walk forward
-a | Strafe left
-s | Walk backwards
-d | Strafe right
-q | Turn left
-e | Turn right
-
-While using Choreographer, a table of available keystrokes can be accessed as a reminder using the menus Help->Hotkeys Documentation.
-
-## Restricted Mode
-
-One of the goals for Choreographer is to provide a tool that gives the user as much freedom as they could possibly want. As such, you will be able to find combinations of moves, parameters, and BPM that Spot cannot reliably perform under all conditions. If, however, you want a more controlled experience that will be more likely to provide more reliable results, we have offered a Restricted Mode for Choreographer. To enable Restricted Mode, simply start Choreographer with an extra `--restricted` argument. Several of the more dynamic moves will be missing, and parameter ranges will generally be smaller, allowing for less energetic and generally more reliable dances.
-
-Note that if you create a dance in normal mode, Choreographer may not be able to load it while in Restricted Mode if the dance’s parameters are outside its reduced range.
+i | Enter "insert" mode
+Esc | Exit "insert" mode
+p | Play music
+Shift + Click | Select multiple moves, adding each one to the selected group when clicked.
+Left/Right Arrow Keys | Nudge a move (or group of selected moves) left/right by one slice in the timeline. Nudging cannot move a block past other moves; to cross other moves, drag the move (or group) instead.
+Shift + Left/Right Arrow Keys | Expand a move on the left/right side by one slice if possible. This only works when a single move is selected (and not a group of moves).
+Ctrl + Left/Right Arrow Keys | Shrink a move on the left/right side by one slice if possible. This only works when a single move is selected (and not a group of moves).
+Ctrl+C | Copy the move (or group of selected moves).
+Ctrl+V | Paste the copied move (or group of selected moves).
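The nudge rule in the table above (a one-slice shift that cannot cross other moves) can be sketched as a clamped shift. This is a hypothetical illustration, not Choreographer's implementation.

```python
# Hypothetical sketch: nudge a block by one slice, clamped so it cannot
# overlap (cross) neighboring moves on the same track.

def nudge(start, duration, direction, neighbors):
    """Shift a (start, duration) block by one slice left (-1) or right (+1).
    Returns the new start slice, or the old start if the move is blocked."""
    new_start = start + direction
    if new_start < 0:
        return start  # cannot move before the start of the timeline
    for n_start, n_dur in neighbors:
        if new_start < n_start + n_dur and new_start + duration > n_start:
            return start  # blocked by a neighboring move
    return new_start

neighbors = [(0, 4)]                 # a move occupying slices 0-3
print(nudge(4, 4, -1, neighbors))    # -> 4 (blocked, cannot cross)
print(nudge(4, 4, +1, neighbors))    # -> 5
```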
diff --git a/docs/concepts/choreography/choreographer_setup.md b/docs/concepts/choreography/choreographer_setup.md
index 53cf97ed8..17e95020c 100644
--- a/docs/concepts/choreography/choreographer_setup.md
+++ b/docs/concepts/choreography/choreographer_setup.md
@@ -8,7 +8,7 @@ Development Kit License (20191101-BDSDK-SL).
# Install Choreographer
-Choreographer is an application to easily author choreographies with advanced moves and parameters and execute the routines on robot with music synchronization. The application and choreography service require a special license to use. The application can be downloaded from the [Support Center](https://support.bostondynamics.com) in the "Downloads Page" for the 2.1 Release. Additionally, the Support Center provides in depth documentation for how to use Choreographer to create routines, modify moves, connect to robots to execute routines, and debug issues with a FAQs section.
+Choreographer is an application to easily author choreographies with advanced moves and parameters and execute the routines on robot with music synchronization. The application and choreography service require a special license to use. The application can be downloaded from the [Support Center](https://support.bostondynamics.com) in the "Downloads Page" for the latest Spot software release. Additionally, the Support Center provides in depth documentation for how to use Choreographer to create routines, modify moves, connect to robots to execute routines, and debug issues with a FAQs section.
## System Requirements
@@ -23,7 +23,7 @@ The Choreographer application is an executable which can be run directly on a la
sudo chmod +x choreographer
```
-To run Choroeographer without any robots connected, just double-click on the executable to open it.
+To run Choreographer without any robots connected, just double-click on the executable to open it.
To run Choreographer with robots, start Choreographer from the command line, in the directory where the executable was downloaded, with the following options:
* On windows:
diff --git a/docs/concepts/choreography/choreography_service.md b/docs/concepts/choreography/choreography_service.md
index 7a0a825c2..0a9b4452e 100644
--- a/docs/concepts/choreography/choreography_service.md
+++ b/docs/concepts/choreography/choreography_service.md
@@ -11,14 +11,14 @@ Development Kit License (20191101-BDSDK-SL).
## Overview
### What is it?
-The Choreography service is a framework for producing precisely scripted motion, currently focused on dancing.
-An example script can be seen on [YouTube](https://www.youtube.com/watch?v=kHBcVlqpvZ8).
+The Choreography service is a framework for producing precisely scripted motion, currently focused on dancing. Example choreography scripts can be seen on the Boston Dynamics YouTube channel, showing the robot dancing to [Bruno Mars's "Uptown Funk"](https://www.youtube.com/watch?v=kHBcVlqpvZ8) and [The Contours' "Do You Love Me"](https://www.youtube.com/watch?v=fn3KWM1kuAw).
A choreography sequence consists of a series of moves. We can achieve a wide variety of possible behavior from a moderate list of available moves by:
1) Combining multiple moves (see the tracks/layering section).
2) Altering move parameters to vary the behavior of the move.
+3) Adjusting the BPM of the choreography sequence to change the speed of the moves.
### Note on Reliability
@@ -38,24 +38,30 @@ Note that any number of slices per minute can be selected, however very fast or
### Tracks/Layering
-We divide the robot’s motion into four distinct tracks:
+We divide the robot’s motion into the following distinct tracks:
* Legs
* Body
* Arm
* Gripper
-Each dance move requires one or more of these tracks. Moves that use different tracks can be run simultaneously in any combination. In Choreographer, a track is represented as a horizontal section in the timeline view. For example, here is a screenshot from Choreographer of a script that combines moves in all three of the four tracks:
+In addition to the base motion, there are also tracks for:
-![](images/main_image1.png)
+* Lights: controls the robot's front two sets of LEDs.
+* Annotations: enables dance annotations that are separate from any specific move.
+* (Choreographer Only) Music: controls the audio played from the Choreographer application when dancing.
+
+Each dance move requires one or more of these tracks. Moves that use different tracks can be run simultaneously in any combination. In Choreographer, a track is represented as a horizontal section in the timeline view. For example, here is an image from Choreographer of a script that combines moves in three of the four tracks:
+
+![Tracks](images/tracks_labeled.png)
And the resulting behavior looks like this:
-![](gif_images/main_image3.gif)
+![Dancing Behavior Gif](gif_images/main_image3.gif)
-Some moves require multiple tracks, such as the "Skip" move which uses the Body and Legs track or the "Arm Move Relative" which uses the Arm and Gripper tracks, as shown in this example:
+Some moves require multiple tracks, such as the "Jump" move which uses the Body and Legs track or the "Arm Move" which uses the Arm and Gripper tracks, as shown in this example:
-![](images/main_image4.png)
+![](images/multi_tracked_dance.png)
### Entry/Exit conditions
@@ -74,10 +80,12 @@ The first leg-track move can have any entry state, and the robot will automatica
All subsequent legs-track moves must have an entry state that corresponds to the previous legs-track move’s exit state. Scripts that violate this requirement will be rejected by the API and return warnings indicating which moves violate the entry/exit states. As well, routines made in Choreographer will highlight moves red when the entry state does not match the previous leg move's exit state, such as in this example:
-![](images/main_image2.png)
+![Transitions Error](images/transition_error.png)
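The entry/exit rule can be illustrated with a short validation sketch. The move names and state labels here are hypothetical; the real entry/exit states for each move come from the robot's move list.

```python
# Hypothetical sketch of entry/exit validation for legs-track moves.
# Each move declares an entry state and an exit state; every move's
# entry state must match the previous legs-track move's exit state
# (the first move may enter from any state).

def validate_leg_transitions(moves):
    """Return indices of moves whose entry state does not match the
    previous move's exit state."""
    bad = []
    for i in range(1, len(moves)):
        if moves[i]["entry"] != moves[i - 1]["exit"]:
            bad.append(i)
    return bad

sequence = [
    {"name": "stand_to_kneel", "entry": "stand", "exit": "kneel"},
    {"name": "kneel_clap",     "entry": "kneel", "exit": "kneel"},
    {"name": "running_man",    "entry": "stand", "exit": "stand"},  # mismatch
]
print(validate_leg_transitions(sequence))  # -> [2]
```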
## API
+### Choreography API Interface
+
The API defines a choreography sequence by a unique name, the number of slices per minute, and a repeated list of moves. Each move consists of the move’s type, its starting slice, duration (in slices), and the actual parameters (`MoveParams` proto message). The `MoveParams` message describe how the robot should behave during each move. For example, a move parameter could specify positions for the body. Each parameter may have specific limits/bounds that are described by the `MoveInfo` proto; this information can be found using the `ListAllMoves` RPC.
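The structure described above can be sketched with plain Python data. This is an illustration only; the real messages are the choreography sequence and `MoveParams` protos, and the move names and parameter fields below are assumptions.

```python
# Hypothetical plain-Python sketch mirroring the fields described above:
# a sequence has a name, slices_per_minute, and a list of moves, each
# with a type, starting slice, duration in slices, and parameters.

sequence = {
    "name": "my_first_dance",
    "slices_per_minute": 520,
    "moves": [
        {"type": "body_hold",   "start_slice": 0, "slices": 8,
         "params": {"rot_vel": [0.0, 0.0, 0.0]}},
        {"type": "rotate_body", "start_slice": 8, "slices": 16,
         "params": {"rotation": {"yaw": 0.3}}},
    ],
}

# Total length of the sequence, in slices.
end_slice = max(m["start_slice"] + m["slices"] for m in sequence["moves"])
print(end_slice)  # -> 24
```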
Once a choreography sequence is created, the `UploadChoreography` RPC will send the routine to the robot. The choreography service will validate and check the structure of the routine to ensure it is feasible and within bounds.
@@ -86,8 +94,34 @@ The service will return a list of warnings and failures related to the uploaded
The `ExecuteChoreography` RPC will run the choreography sequence to completion on the robot. A choreography sequence is identified by the unique name of the sequence that was uploaded to the robot. Additionally, a starting time (in robot’s time) and a starting slice will fully specify to the robot when to start the choreography sequence and at which move.
+### Animation API Interface
+
+The API defines an animated move (`Animation` proto message) by a unique name, a repeated list of `AnimationKeyFrames` which describes the robot's motion at each timestamp, and additional parameters and options which fully describe how the move should be executed. Unlike other dance moves, information such as the minimum and maximum parameter values, whether the move is extendable, and which tracks the move controls is not known to the robot beforehand and must be specified in the `Animation` protobuf.
+
+The `Animation` can be uploaded to the robot using the `UploadAnimatedMove` RPC, which will send the animation to the robot. The choreography service will validate and check the structure of the animation to ensure it is fully specified and is feasible. If the animation does not pass this validation, the RPC will respond with a failure status and a set of warning messages indicating which parts of the animation failed.
+
+If the animation uploads successfully, then it can be used within choreography sequences and will appear as a move option in Choreographer with any of the parameters that were specified in the initial `Animation` protobuf message. The move type associated with the uploaded animation is "animation::" + the animation's name. The animations will persist on robot until either the robot is powered off or an animation with the exact same name is uploaded (overwriting the previous animation with that name).
+
+While animations can be written manually using protobuf in any application, we have also provided a way to create animations from human-readable text files. The animation text file has the extension `*.cha` and has a specific format which is described in the [animation file specification document](animation_file_specification.md). The animation `*.cha` file can be converted into an `Animation` protobuf message using the `animation_file_to_proto.py` script provided in the choreography client library.
+
+### Choreography Logs API Interface
+
+The API defines a choreography log using the `ChoreographyStateLog` protobuf message, which consists of a repeated series of timestamped key frames that contain the joint state of the entire robot, the foot contacts, and the SE3Pose for the robot body relative to the animation frame. The animation frame is defined based on the feet position at the beginning of the animation: the position is the center of all four feet, and the rotation is yaw only as computed from the feet positions.
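The animation frame construction described above (position at the center of the four feet, yaw-only rotation computed from the feet positions) might be sketched as follows. Deriving the heading from the front-feet midpoint minus the hind-feet midpoint is an illustrative assumption, not the documented formula.

```python
import math

# Sketch: build a yaw-only "animation frame" from four foot positions.
# Heading derivation (front midpoint minus hind midpoint) is an
# illustrative assumption.

def animation_frame(fl, fr, hl, hr):
    """fl/fr/hl/hr are (x, y) foot positions. Returns (cx, cy, yaw)."""
    feet = [fl, fr, hl, hr]
    cx = sum(p[0] for p in feet) / 4.0          # center of all four feet
    cy = sum(p[1] for p in feet) / 4.0
    front = ((fl[0] + fr[0]) / 2.0, (fl[1] + fr[1]) / 2.0)
    hind = ((hl[0] + hr[0]) / 2.0, (hl[1] + hr[1]) / 2.0)
    yaw = math.atan2(front[1] - hind[1], front[0] - hind[0])
    return cx, cy, yaw

# Robot standing square, facing +x: frame at the centroid, yaw = 0.
print(animation_frame((0.5, 0.3), (0.5, -0.3), (-0.5, 0.3), (-0.5, -0.3)))
# -> (0.0, 0.0, 0.0)
```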
+
+The choreography logs are divided into two types:
+- Automatic/"Last Choreography" Logs: This log is recorded from when the `ExecuteChoreography` RPC is first received to 3 seconds after the completion of the choreography.
+- Manual Logs: This log is recorded from when a `StartRecordingState` RPC is received to when a `StopRecordingState` RPC is received. There is a maximum of 5 minutes of recording for a manual log.
+
+The choreography logs can be used to review how the robot actually executed a choreography or animated dance move. For example, specifically for animations, if the move is not completely feasible, the robot will attempt to get as close to what was asked as possible but may not succeed. The choreography log can be used to understand and update the animated move to be realistic.
+
+The `DownloadRobotStateLog` RPC can be used to download the choreography logs. The request specifies which type of log should be downloaded. The response is streamed over gRPC and recombined by the choreography client to create a full log message. The robot only keeps one automatic log and one manual log in its buffer (2 total logs) at a time, so the log must be downloaded immediately after completing the move on robot.
+
+### Choreography Client
+
The choreography service has a python client library which provides helper functions for each RPC as well as functions that help convert the choreography sequence from a protobuf message into either a binary or text file.
+### Python Examples using the Choreography API
+
The [upload_choreographed_sequence example](../../../python/examples/upload_choreographed_sequence/README.md) demonstrates how to read an existing routine from a saved text file, upload it to the robot, and then execute the uploaded choreography.
## Config Files
@@ -110,6 +144,8 @@ The `MoveInfoConfig` can be parsed by a protobuf parser into each moves field of
* controls_legs: Does this move require the legs track.
* controls_body: Does this move require the body track.
* controls_gripper: Does this move require the gripper track.
+* controls_lights: Does this move require the front LED lights.
+* controls_annotations: Does this move update the overall dance state.
* display: Information for how Choreographer should display the move.
* color: The color to draw the box for the move in the timeline tracks.
* markers: The slices to draw the small grey vertical lines. These usually correspond to events such as touchdown and liftoff, and help the user line those events up as desired (e.g. on the beat). Negative values here indicate slices before the end of the move.
diff --git a/docs/concepts/choreography/gif_images/Goto.gif b/docs/concepts/choreography/gif_images/Goto.gif
new file mode 100644
index 000000000..e4d21b733
Binary files /dev/null and b/docs/concepts/choreography/gif_images/Goto.gif differ
diff --git a/docs/concepts/choreography/images/adding_modes.png b/docs/concepts/choreography/images/adding_modes.png
new file mode 100644
index 000000000..86eca4bd4
Binary files /dev/null and b/docs/concepts/choreography/images/adding_modes.png differ
diff --git a/docs/concepts/choreography/images/animation_upload_dialog.png b/docs/concepts/choreography/images/animation_upload_dialog.png
new file mode 100644
index 000000000..a85bf7f43
Binary files /dev/null and b/docs/concepts/choreography/images/animation_upload_dialog.png differ
diff --git a/docs/concepts/choreography/images/choreography_interface.png b/docs/concepts/choreography/images/choreography_interface.png
new file mode 100644
index 000000000..632ceda40
Binary files /dev/null and b/docs/concepts/choreography/images/choreography_interface.png differ
diff --git a/docs/concepts/choreography/images/choreography_interface_robot_management.png b/docs/concepts/choreography/images/choreography_interface_robot_management.png
new file mode 100644
index 000000000..69d223457
Binary files /dev/null and b/docs/concepts/choreography/images/choreography_interface_robot_management.png differ
diff --git a/docs/concepts/choreography/images/image2.png b/docs/concepts/choreography/images/image2.png
deleted file mode 100644
index 56d1cd8d7..000000000
Binary files a/docs/concepts/choreography/images/image2.png and /dev/null differ
diff --git a/docs/concepts/choreography/images/image3.png b/docs/concepts/choreography/images/image3.png
deleted file mode 100644
index ba52ea61f..000000000
Binary files a/docs/concepts/choreography/images/image3.png and /dev/null differ
diff --git a/docs/concepts/choreography/images/image4.png b/docs/concepts/choreography/images/image4.png
deleted file mode 100644
index fbfe8ac13..000000000
Binary files a/docs/concepts/choreography/images/image4.png and /dev/null differ
diff --git a/docs/concepts/choreography/images/image5.png b/docs/concepts/choreography/images/image5.png
deleted file mode 100644
index 8045446de..000000000
Binary files a/docs/concepts/choreography/images/image5.png and /dev/null differ
diff --git a/docs/concepts/choreography/images/image6.png b/docs/concepts/choreography/images/image6.png
deleted file mode 100644
index c6315014a..000000000
Binary files a/docs/concepts/choreography/images/image6.png and /dev/null differ
diff --git a/docs/concepts/choreography/images/image1.png b/docs/concepts/choreography/images/joystick_help.png
similarity index 100%
rename from docs/concepts/choreography/images/image1.png
rename to docs/concepts/choreography/images/joystick_help.png
diff --git a/docs/concepts/choreography/images/log_buttons.png b/docs/concepts/choreography/images/log_buttons.png
new file mode 100644
index 000000000..e5bbcdc00
Binary files /dev/null and b/docs/concepts/choreography/images/log_buttons.png differ
diff --git a/docs/concepts/choreography/images/main_image2.png b/docs/concepts/choreography/images/main_image2.png
deleted file mode 100644
index c1b58c26c..000000000
Binary files a/docs/concepts/choreography/images/main_image2.png and /dev/null differ
diff --git a/docs/concepts/choreography/images/main_image4.png b/docs/concepts/choreography/images/main_image4.png
deleted file mode 100644
index d7ab0343e..000000000
Binary files a/docs/concepts/choreography/images/main_image4.png and /dev/null differ
diff --git a/docs/concepts/choreography/images/multi_tracked_dance.png b/docs/concepts/choreography/images/multi_tracked_dance.png
new file mode 100644
index 000000000..4bfd6ea99
Binary files /dev/null and b/docs/concepts/choreography/images/multi_tracked_dance.png differ
diff --git a/docs/concepts/choreography/images/preview_move_button.png b/docs/concepts/choreography/images/preview_move_button.png
new file mode 100644
index 000000000..95222fd22
Binary files /dev/null and b/docs/concepts/choreography/images/preview_move_button.png differ
diff --git a/docs/concepts/choreography/images/red_slider.png b/docs/concepts/choreography/images/red_slider.png
new file mode 100644
index 000000000..581f7edc4
Binary files /dev/null and b/docs/concepts/choreography/images/red_slider.png differ
diff --git a/docs/concepts/choreography/images/robot_management.png b/docs/concepts/choreography/images/robot_management.png
new file mode 100644
index 000000000..be5bac5b5
Binary files /dev/null and b/docs/concepts/choreography/images/robot_management.png differ
diff --git a/docs/concepts/choreography/images/running_man.png b/docs/concepts/choreography/images/running_man.png
new file mode 100644
index 000000000..cf2409041
Binary files /dev/null and b/docs/concepts/choreography/images/running_man.png differ
diff --git a/docs/concepts/choreography/images/main_image1.png b/docs/concepts/choreography/images/tracks_labeled.png
similarity index 100%
rename from docs/concepts/choreography/images/main_image1.png
rename to docs/concepts/choreography/images/tracks_labeled.png
diff --git a/docs/concepts/choreography/images/transition_error.png b/docs/concepts/choreography/images/transition_error.png
new file mode 100644
index 000000000..4d6e4930d
Binary files /dev/null and b/docs/concepts/choreography/images/transition_error.png differ
diff --git a/docs/concepts/choreography/move_reference.md b/docs/concepts/choreography/move_reference.md
index 96bfdaab3..118407236 100644
--- a/docs/concepts/choreography/move_reference.md
+++ b/docs/concepts/choreography/move_reference.md
@@ -102,6 +102,26 @@ pivot | What part of the robot to pivot around. Pivoting around the front means
clockwise | Which direction to rotate.
starting_angle | Where in the circle to start rotation. Since it spirals outwards, it may not be obvious exactly where it starts.
+### fidget_stand
+
+A procedurally generated idle animation: the robot looks around, breathes, shifts its weight, and stamps. You can use one of the preset configurations or customize the parameters.
+
+Parameter | Effect
+--|--
+preset | Pre-designed parameter combinations that attempt to convey a specific emotion. NOTE: All other sliders are only active if this is set to "Custom".
+min_gaze_pitch | How far down it will look. (Radians)
+max_gaze_pitch | How far up it will look. (Radians)
+gaze_mean_period | How frequently it will look somewhere else. (Seconds)
gaze_center_cfp | Where the gaze ray originates in the center-of-footprint frame. (Meters)
+shift_mean_period | How frequently it will shift its weight. (Seconds)
+shift_max_transition_time | Maximum amount of time it will spend shifting its weight. (Seconds)
+breath_min_z | Minimum amplitude of the "breathing". (Meters)
+breath_max_z | Maximum amplitude of the "breathing". (Meters)
+leg_gesture_mean_period | How frequently it will do a leg gesture. (Seconds)
+gaze_slew_rate | How quickly it will shift its gaze. (Meters/Second)
+gaze_position_generation_gain | How much Brownian motion will be applied to the gaze point.
+gaze_roll_generation_gain | How much Brownian motion will be applied to the gaze roll.
+
## Step Moves
### step
@@ -123,6 +143,20 @@ swing_height | How high to lift the foot/feet. Does nothing if a swing_waypoint
liftoff_velocity | How quickly to raise the foot/feet. Does nothing if a swing_waypoint is specified.
touchdown_velocity | How quickly to lower the foot/feet. Does nothing if a swing_waypoint is specified.
+### goto
+
+![](gif_images/Goto.gif)
+
+Trot to a specified position in the dance frame. (Unless explicitly set, the dance frame is defined by where the dance began.) Takes 1 step per beat. Extend the duration to successfully move farther.
+
+Parameter | Effect
+--|--
+absolute_position | Position we should go to in the dance frame.
+absolute_yaw | Yaw orientation we should go to in the dance frame.
+step_position_stiffness | How precisely should we step in the nominal locations.
+duty_cycle | What fraction of the time a foot is on the ground. 0.5 is a standard trot.
+link_to_next | Should we smoothly transition from this move to a subsequent goto move.
+
### trot
![](gif_images/trot.gif)
@@ -153,8 +187,12 @@ Take two steps to turn in place. Requires 2 beats (8 slices).
Parameter | Effect
--|--
-yaw | How far to turn.
-absolute | If true, yaw is interpreted relative to the orientation the choreography sequence began in. If false, yaw is interpreted relative to the orientation before entering the move.
+motion_is_absolute | Is motion in the dance frame (true) or relative to the current position (false).
+motion | How far to move. (Meters)
+absolute_motion | Where to move to in the dance frame. (Meters)
+yaw_is_absolute | Is yaw in the dance frame (true) or relative to the current position (false).
+yaw | How far to rotate. (Radians)
+absolute_yaw | Where to rotate to in the dance frame. (Radians)
swing_height | How high to pick up the feet. Zero indicates to use a default swing.
swing_velocity | How quickly to lift off and touch down the feet. Zero indicates to use a default swing.
@@ -166,10 +204,14 @@ Take two steps to translate using a pace gait (legs on the same side of the robo
Parameter | Effect
--|--
-motion | How far and what direction to move.
+motion_is_absolute | Is motion in the dance frame (true) or relative to the current position (false).
+motion | How far to move. (Meters)
+absolute_motion | Where to move to in the dance frame. (Meters)
+yaw_is_absolute | Is yaw in the dance frame (true) or relative to the current position (false).
+yaw | How far to rotate. (Radians)
+absolute_yaw | Where to rotate to in the dance frame. (Radians)
swing_height | How high to pick up the feet. Zero indicates to use a default swing.
swing_velocity | How quickly to lift off and touch down the feet. Zero indicates to use a default swing.
-absolute | Motion is measured relative to the dance’s starting location rather than the current location. May not be accurate for longer dances with lots of stepping.
### crawl
@@ -266,13 +308,17 @@ Jumps in place. Nominally lasts one beat, but may take more slices at faster te
Parameter | Effect
--|--
-yaw | What orientation to land in.
-absolute | If true, yaw is interpreted relative to the orientation the choreography sequence began in. If false, yaw is interpreted relative to the orientation before entering the move.
+translation_is_absolute | Is motion in the dance frame (true) or relative to the current position (false).
+translation | How far to move. (Meters)
+absolute_translation | Where to move to in the dance frame. (Meters)
+yaw_is_absolute | Is yaw in the dance frame (true) or relative to the current position (false).
+yaw | How far to rotate. (Radians)
+absolute_yaw | Where to rotate to in the dance frame. (Radians)
flight_slices | How long the jump should last with all feet off the ground measured in slices. Depending on tempo, higher values will be unreliable.
stance_width/stance_length | The footprint the robot should land its jump in.
+swing_height | How high to pick up the feet.
split_fraction | Splits the liftoff and touchdown into two pairs of two legs, expressed as the fraction of the swing with two legs in flight, so 0 means all legs fully synchronized.
lead_leg_pair | If split_fraction is not 0, indicates which legs lift off first. Default AUTO mode will select a pair based on the translation vector.
-translation | Distance and direction the robot should jump.
## Transition Moves
@@ -486,6 +532,17 @@ Parameter | Effect
angle | The desired gripper angle.
speed | The desired velocity of the gripper in rad/s.
+### frame_snapshot
+
+This move does not directly do anything. Instead it sets the frame to be used by future moves that are in an absolute frame. Some such moves will always be in the dance frame (frame_id = 0), and some allow you to explicitly select a frame_id. This move can set frames relative to either a fiducial or to the robot's footprint.
+
+Parameter | Effect
+--|--
+frame_id | Which frame to set. 0 is the dance frame and will be used by moves that do absolute motion but don't explicitly select a frame.
+fiducial_number | Which fiducial to set the frame relative to. If set to -1 or the fiducial has not been seen recently, will set according to the robot's footprint.
+include_each_leg | Should each leg be included when determining the footprint.
+compensated | If a foot is not included, should we offset the footprint as if that foot were in a nominal stance? Otherwise, we'll take the center of the included feet.
+
### chicken_head
![](gif_images/chickenhead-opt.gif)
@@ -498,3 +555,58 @@ bob_magnitude | A vector describing the amplitude and direction of a periodic mo
beats_per_cycle | How long it takes to complete 1 cycle of the motion described by bob_magnitude
follow | If set to true, the gripper position will adjust if the robot steps to a new location.
+## Lights moves
+
+All colors are specified as RGB on a 0-255 scale.
+
+### set_color
+
+Sets the color of the robot's face lights. Can independently set left and right lights if desired. Can fade in and out gradually if desired.
+
+Parameter | Effect
+--|--
+left_color | Color of the left lights. (Color of all lights if right_same_as_left specified.)
+right_same_as_left | If true, all lights are left_color. If false, left and right are independently specified.
+right_color | Color of the right lights. Does nothing if right_same_as_left specified.
+fade_in_slices | How long to spend brightening at the beginning, measured in slices (1/4 beats).
+fade_out_slices | How long to spend darkening at the end, measured in slices (1/4 beats).
+
+### fade_color
+
+Sets the lights to show one color at the top fading towards another color at the bottom.
+
+Parameter | Effect
+--|--
+top_color | The color of the topmost lights.
+bottom_color | The color of the bottommost lights.
+fade_in_slices | How long to spend brightening at the beginning, measured in slices (1/4 beats).
+fade_out_slices | How long to spend darkening at the end, measured in slices (1/4 beats).
+
+### independent_color
+
+Independently specifies the color for all 8 of the lights.
+
+Parameter | Effect
+--|--
+top_left | Color of the top left light.
+upper_mid_left | Color of the second from top left light.
+lower_mid_left | Color of the second from bottom left light.
+bottom_left | Color of the bottom left light.
+top_right | Color of the top right light.
+upper_mid_right | Color of the second from top right light.
+lower_mid_right | Color of the second from bottom right light.
+bottom_right | Color of the bottom right light.
+fade_in_slices | How long to spend brightening at the beginning, measured in slices (1/4 beats).
+fade_out_slices | How long to spend darkening at the end, measured in slices (1/4 beats).
+
+### ripple_color
+
+Sets the lights in a variety of moving patterns.
+
+Parameter | Effect
+--|--
+main | Color to make the lights.
+secondary | Second color to make the lights, used only by some patterns.
+pattern | Select from a variety of light motion patterns.
+light_side | Which side lights to use for the pattern.
+increment_slices | How quickly to move the pattern. Gives the time in slices (1/4 beats) between updates to the pattern.
\ No newline at end of file
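Many of the timing parameters above are measured in slices, where one slice is 1/4 beat. A small conversion sketch, assuming a known tempo in beats per minute (the helper names are illustrative, not part of the SDK):

```python
def slices_to_seconds(slices, bpm):
    """Convert slices (quarter-beats) to seconds at the given tempo."""
    beats = slices / 4.0
    return beats * 60.0 / bpm

def seconds_to_slices(seconds, bpm):
    """Convert seconds to slices (quarter-beats) at the given tempo."""
    return seconds * (bpm / 60.0) * 4.0
```

For example, at 120 BPM a `fade_in_slices` of 8 (two beats) lasts one second.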
diff --git a/docs/concepts/choreography/robot_controls_in_choreographer.md b/docs/concepts/choreography/robot_controls_in_choreographer.md
new file mode 100644
index 000000000..243998a5f
--- /dev/null
+++ b/docs/concepts/choreography/robot_controls_in_choreographer.md
@@ -0,0 +1,85 @@
+
+
+# Connecting Robots to Choreographer
+
+Robots can either be connected to Choreographer before starting the application, or they can be dynamically connected and disconnected from the application once it is running.
+
+In order to connect your robot to Choreographer, you must start Choreographer from the command line and pass in the arguments `--hostname {IP/Hostname of your Spot} --user {Username you use to log in to your Spot} --password {Password for your Spot}`. If you wish to connect to multiple Spots at once, simply add more copies of those command line arguments, one set for each Spot.
+
+To connect your robot after the application has been started, select the Robot Management Tab (shown in the image below), and then click the "Add" button. This will prompt you for the robot's hostname, username, and then password. To remove any robot connections, click the "Disconnect" button and check any robots that you want to disconnect from. Note: disconnecting from a robot will power off the robot and release the lease and E-Stop.
+
+![Robot Management Tab](images/robot_management.png)
+
+Choreographer supports connecting to multiple robots at once. In the Robot Management Tab, you can select which robots are being controlled at a given time. Additionally, you can apply different choreographies to each robot and start them all at the same time.
+
+## Robot Controls
+
+The Robot Controls bar is disabled if there are no robots connected to the Choreographer program. If a robot is connected, the buttons will have the following effects:
+
+
+Button|Function
+----|----
+Power Off| Powers off the Spot’s motors. Always press this before approaching your Spot.
+Power On | Powers on your Spot’s motors. You must activate this before your Spot can stand or start choreography.
+E-Stop | Enables or disables your Spot’s E-Stop. In an emergency, use this to stop the Spot immediately.
+Self-Right | If your Spot has fallen, this will attempt to right it into a sitting position.
+Sit | Sits the Spot in-place. Cancels all current choreography and music, but E-Stop or Power Off should be used in an emergency.
+Stand | Brings Spot to a stand. Cancels all current choreography and music, but E-Stop or Power Off should be used in an emergency.
+Enable Joystick | Activates Joystick Controls (see the Joystick Controls section)
+Enable WASD Driving | Activates "WASD" keyboard driving (see the Keyboard Controls section)
+Return To Start | Spot will walk from its current location to the last location it started a dance (either a full choreography or the move preview dance).
+Start Choreography | Sends your choreography sequence to your Spot, then starts a 3 second countdown before the robot begins dancing. Any loaded music will automatically start as soon as Spot begins to dance.
+
+Note: The return to start button (added in the Spot 2.4 release) will bring all selected robots back to their starting position after completing a choreography. If multiple robots are being controlled, there is obstacle avoidance enabled when they are navigating back to the starting position. To adjust this obstacle padding distance (in meters), the command line argument `--obs-padding DIST_IN_METERS` can be provided when starting the Choreographer application.
+
+## Joystick Controls
+
+![Joystick Controls](images/joystick_help.png)
+
+An X-Box gamepad controller can be used with the GUI for convenience when moving and positioning the robot. The button layout is set up for X-Box 360 controllers, which are readily available and can be connected to a computer through a USB port. Many of the buttons in the GUI are linked to gamepad buttons, and the gamepad button will behave the same as the corresponding GUI button. As shown in the diagram above, they are:
+
+Button | Function
+----|-----
+A | Stop
+B | Enable Joystick
+X | Start Choreography
+Y | Stand
+Left-Bumper | Self Right
+Right-Bumper | Sit
+Start | Power On
+Back | Power Off
+
+When the joystick is enabled (either by hitting the "B" button on the gamepad or through the GUI button), the robot will walk and can be driven by the joysticks. As shown in the diagram above, the left joystick controls translation, and the right joystick controls yaw. When the robot is controlled through any of the other Robot Controls buttons, when WASD mode is enabled, or when a dance routine has been started, joystick driving will be disabled; however, other buttons will still work.
+
+While using Choreographer, the joystick controller mapping diagram can be accessed as a reminder using the menus Help->Joystick Controller Mapping.
+
+Note: the X-Box controller must be connected to the computer via USB port before opening the Choreographer application.
+
+## Keyboard Controls
+
+Similar to the joystick control, we provide the ability to drive the robot using the WASD keys on the keyboard. When enabled (either by hitting the GUI button or by pressing "v" on the keyboard), the robot will walk and can be driven using the WASD keys. Joystick mode will be disabled while driving in WASD mode. The hotkeys are set up to mimic the joystick button presses when applicable. When the robot is controlled through any of the other Robot Controls buttons, when joystick mode is enabled, or when a dance routine has been started, WASD driving will be disabled; however, other keypresses will still be available.
+
+Key | Function
+----|-----
+v | Enable WASD mode
+b | Enable Joystick mode
+k | Power On
+l | Power Off
+y | Stand
+x | Start Choreography
+[ | Sit
+] | Self-right
+w | Walk forward
+a | Strafe left
+s | Walk backwards
+d | Strafe right
+q | Turn left
+e | Turn right
+
+While using Choreographer, a table of available keystrokes can be accessed as a reminder using the menus Help->Hotkeys Documentation.
diff --git a/docs/concepts/developing_api_services.md b/docs/concepts/developing_api_services.md
index b389cd766..5102a1623 100644
--- a/docs/concepts/developing_api_services.md
+++ b/docs/concepts/developing_api_services.md
@@ -40,7 +40,7 @@ service_runner = GrpcServiceRunner(service_servicer, add_servicer_to_server_fn,
# Run the server until a SIGINT is received.
service_runner.run_until_interrupt()
```
-The `image_service_pb2_grpc.add_ImageServiceServicer_to_server` is a function auto generated from a protobuf service definition. The function links a servicer to a server and is generated for every protobuf server compiled. The `WebCamImageServicer` is a custom class that inherits from the servicer class auto generated from the protobuf service definition. It defines methods for responding to all possible service requests. The [`GrpcServiceRunner`](../../python/bosdyn-client/src/bosdyn/client/util.py) class will create and run a server object associated with the passed in servicer. It will monitor for requests at the given port. The `run_until_interrupt()` method can be used to keep a server alive until it receives a SIGINT.
+The `image_service_pb2_grpc.add_ImageServiceServicer_to_server` is a function auto generated from a protobuf service definition. The function links a servicer to a server and is generated for every protobuf server compiled. The `WebCamImageServicer` is a custom class that inherits from the servicer class auto generated from the protobuf service definition. It defines methods for responding to all possible service requests. The [`GrpcServiceRunner`](../../python/bosdyn-client/src/bosdyn/client/server_util.py) class will create and run a server object associated with the passed in servicer. It will monitor for requests at the given port. The `run_until_interrupt()` method can be used to keep a server alive until it receives a SIGINT.
#### Registering a Service
Registering a service requires communication with the robot. A service can register itself via the Directory Registration Client provided with the Python SDK. Each service instance should have a unique service name, service authority, and service type associated with it that are provided at registration time. These details will enable Spot to route client requests to the proper service. The preferred method of registering a service is shown below. Note that it registers with a directory keep alive and will have liveness monitoring enabled.
diff --git a/docs/concepts/lease_example.png b/docs/concepts/lease_example.png
new file mode 100644
index 000000000..77fa1db08
Binary files /dev/null and b/docs/concepts/lease_example.png differ
diff --git a/docs/concepts/lease_service.md b/docs/concepts/lease_service.md
new file mode 100644
index 000000000..1385876eb
--- /dev/null
+++ b/docs/concepts/lease_service.md
@@ -0,0 +1,69 @@
+
+
+# Lease Service
+
+Leases represent ownership of the robot for the purpose of issuing commands to control the robot. There can only ever be a single owner of a specific robot resource at any given time, however an application can delegate the lease to other systems to perform various tasks. Leases can be returned by one application and subsequently acquired by a different application to change ownership. Alternatively, a lease can be revoked by another application to change ownership forcefully.
+
+In addition to identifying ownership, the lease system is used to confirm that a robot is still being controlled and that there are valid communications to the owner. If an owner can no longer be reached within a certain period of time, the lease will automatically be revoked from that owner.
+
+## Basic Lease Usage
+
+Services which command or control the robot require a lease. These services will reject incoming requests which are from non-owners or contain older ownership information. Ownership semantics provide additional safety to operation workflows by preventing accidental commands from disturbing the current owner’s application. Note: some services only read information from the robot and do not require a lease (e.g. the image service or robot-state service).
+
+Applications which want to control the robot will begin by acquiring a lease to the robot. A lease can be acquired with the `AcquireLease` RPC when it has no existing owner.
+
+Throughout the duration of the application, the client should continue to send “keep-alive” signals to continue ownership of the robot’s resources. The “keep-alive” signal is sent as the `RetainLease` RPC. If enough time passes between the last retain lease request received, the robot will revoke ownership of the lease such that other applications can acquire ownership.
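The keep-alive pattern can be sketched independently of the SDK (this is an illustrative stand-in, not the SDK class; the Python SDK provides `LeaseKeepAlive` in `bosdyn.client.lease`, which wraps the `RetainLease` RPC for you):

```python
import threading

class KeepAliveSketch:
    """Periodically call retain_fn (standing in for the RetainLease RPC)
    until shutdown() is called. Illustrative only, not the SDK class."""

    def __init__(self, retain_fn, period_sec=2.0):
        self._retain_fn = retain_fn
        self._period_sec = period_sec
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def _run(self):
        while not self._stop.is_set():
            self._retain_fn()  # signal that the owner is still alive
            self._stop.wait(self._period_sec)

    def shutdown(self):
        self._stop.set()
        self._thread.join()
```

The period must be comfortably shorter than the robot's revocation timeout, so a few missed signals do not cost the application its ownership.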
+
+When issuing robot commands, the robot will be sent a version of the lease with the specific command to permit the robot to complete the desired command. This lease will automatically be included with the command request when using the SDK.
+
+Finally, when the application is complete, it should return the lease. The `ReturnLease` RPC should be used when a client is finished with its ownership to indicate that the robot is no longer owned, without waiting for a revocation timeout. Note: an application should only return the lease if it was the one that acquired it using the `AcquireLease` RPC. In the case of a remote mission service callback or other services which are delegated the lease by a different main service, the application's service should *not* return the lease when finished - the main service acquired the lease and will return it when complete.
+
+## [Advanced] Lease Resources
+
+A lease can claim ownership of the entire robot or specific resources. The “body” resource describes ownership of the entire robot, and can be broken down into sub-resources for the arm, gripper, and mobility (controlling only the legs), as shown in the image below. In most use cases, an application simply needs to send the "body" lease and the robot will determine which specific resources it needs for the command being issued.
+
+![Resource Tree](resource_tree.png)
+
+The different robot resources create a tree-structured hierarchy such that the client can control a subset of the robot or the entire robot without needing to manage individual resources. Owning a lease with a parent resource represents control over the entire subtree.
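The subtree rule can be illustrated with a small sketch. The resource names below are an assumption based on the description above (query the robot's lease service for the authoritative tree):

```python
# Assumed parent -> children structure; illustrative only.
CHILDREN = {
    'body': ['arm', 'gripper', 'mobility'],
    'arm': [],
    'gripper': [],
    'mobility': [],
}

def covered_resources(resource):
    """All resources controlled by a lease on `resource` (the full subtree)."""
    out = [resource]
    for child in CHILDREN.get(resource, []):
        out.extend(covered_resources(child))
    return out
```

Under this sketch, a lease on `body` covers the arm, gripper, and mobility sub-resources as well, while a lease on a leaf resource covers only itself.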
+
+## [Advanced] Additional Lease Usage Patterns
+
+Typically, the lease and ownership can be gained using the `AcquireLease` RPC. If the lease is actively owned, the `TakeLease` RPC provides an alternative that will forcefully take ownership of the lease resources to give control to the application issuing the RPC. This should only be used expressly by a human who is aware of what they’re taking control from.
+
+When issuing a specific command which requires a lease, the robot should be sent a sub-lease of the main, acquired lease. This delegates the ownership of that lease’s resources to the particular service being sent the command. Note, the creation of a sub-lease for a specific command is done automatically by the client library. Commands which require a lease will respond with `LeaseUseResult` protos, which reflect whether or not the lease provides proper ownership of the robot.
+
+Additionally, clients should simply send the “body” lease and the robot's services will split the lease into only the necessary parts. This allows clients to issue multiple commands which control different resources (e.g. a gripper command and a body trajectory command) without worrying about breaking apart a lease themselves. Clients can split leases into sub-resources, but will need to be careful to preserve the sequence number.
+
+## [Advanced] Lease Representation
+
+A lease contains the resource whose ownership it controls, a sequence, an epoch, and a set of client names.
+
+The sequence is a list of integer numbers which indicates when the lease was generated and if it is a sub-lease. The first number in the sequence is the root lease number, and is generated by the base lease service. Sequence numbers are incremented when creating new leases. Additional sequence numbers can be appended to the lease sequence when creating a sub-lease to delegate ownership to a specific service. For example, the lease sequence `lease=[3, 1]` has a root lease number of `3`, and a single sub-lease with value `1`.
+
+Resource ownership is determined by whichever lease has the highest root lease number. For example, when comparing `leaseA = [6, 1]` and `leaseB = [5, 13]`, leaseA is considered newer and the current owner.
+
+In addition to ownership, the lease sequences are used by services to determine which command should be executing currently. Services will use and execute whichever command shows ownership of the necessary resources and has the newest lease sequence. When comparing leases, the one whose sequence contains the first higher number is considered the newest. For example, when comparing `leaseA = [1, 2, 10]` and `leaseB = [1, 2, 11]`, `leaseB` is considered newer since the third sequence number is higher in `leaseB` than in `leaseA`.
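Both comparisons above follow the same rule: lexicographic ordering over the sequence numbers. A minimal sketch (hypothetical helper, not the SDK implementation; it assumes both leases share the same epoch):

```python
def newer_sequence(seq_a, seq_b):
    """Return 'a' or 'b' for whichever lease sequence is newer, or 'same'.

    Compares element-wise; the first differing number decides. Sketch only.
    """
    for x, y in zip(seq_a, seq_b):
        if x != y:
            return 'a' if x > y else 'b'
    if len(seq_a) == len(seq_b):
        return 'same'
    # One sequence is a prefix of the other: treat the longer one (a
    # sub-lease of the shorter) as newer - an assumption for this sketch.
    return 'a' if len(seq_a) > len(seq_b) else 'b'
```

This reproduces the examples in the text: `[6, 1]` beats `[5, 13]` on the root number, and `[1, 2, 11]` beats `[1, 2, 10]` on the first differing element.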
+
+The epoch is used to create a scope for the sequence field, such that only leases with the same epoch can be compared. The client names are a list of each client which has held the lease; these can be helpful for debugging to see which services were delegated ownership and created sub-leases.
+
+Consider the example below.
+
+![Lease Usage Example](lease_example.png)
+
+A client application took the lease from the tablet (or the tablet lost communication) and now owns the robot. The client application delegates control to graph-nav by creating a sub-lease, and graph-nav again creates a sub-lease to give control to the robot command service. When the tablet attempts to issue a robot command, it will be rejected because it is now considered older.
+
+## [Advanced] Understanding Lease Errors
+
+When a command is sent including a lease, the service checks the following things before accepting the command:
+ - The incoming lease is valid: contains the current epoch, has known robot resources and the resources are correct for this type of command, has a non-empty sequence with a root lease number which has been issued by the lease service.
+ - The incoming lease shows ownership as the newest lease: compares the incoming lease with the service’s current known lease.
+
+The service will then respond to the command with a `LeaseUseResult` proto that indicates whether the incoming lease passed the checks and includes additional information to aid in debugging.
+
+When a command with a lease fails with a lease error, the status within the lease use result provides the root cause of the failure. In addition to the status, the LeaseUseResult proto contains information about the current owner of the incoming lease’s resource, the most recent lease known to the system for that resource, and the latest leases for each leaf resource (since these can differ depending on the types of commands sent). This information can be used to react to the lease failure within the client application.
diff --git a/docs/concepts/resource_tree.png b/docs/concepts/resource_tree.png
new file mode 100644
index 000000000..5a69ca184
Binary files /dev/null and b/docs/concepts/resource_tree.png differ
diff --git a/docs/concepts/robot_services.md b/docs/concepts/robot_services.md
index 4d64b2791..c40ae56ba 100644
--- a/docs/concepts/robot_services.md
+++ b/docs/concepts/robot_services.md
@@ -72,7 +72,7 @@ The robot state service also tracks different parameters for the robot and this
Spot can have many different image sources, including the cameras on the base platform or any other payloads which implement the image service proto definitions, like Spot CAM. The image service provides a way to list all these different sources using the `ListImageSources` RPC and then query the sources for their images with the `GetImage` RPC.
-Images can be regular pixel-based visual images where the data value corresponds to the greyscale (or color) intensity. They can also be depth images where the data value corresponds to the depth measured from the camera sensor. To align depth data with visual image data, use the `depth_in_visual_frame` sources, which reprojects the depth onto the same projection as the visual image.
+Images can be regular pixel-based visual images where the data value corresponds to the greyscale (or color) intensity. They can also be depth images where the data value corresponds to the depth measured from the camera sensor. To align depth data with visual image data, use the `depth_in_visual_frame` sources ([example code](../../python/examples/get_depth_plus_visual_image/README.md)), which reprojects the depth onto the same projection as the visual image.
Since an image can be a lot of data, there are also different types of encodings and compression schemes that the image can be transmitted as to reduce the size of the data sent over the wire. The image service offers a `Format` field to describe the encoding of the image:
diff --git a/docs/html/.buildinfo b/docs/html/.buildinfo
index 41ec2bfc9..b943f295a 100644
--- a/docs/html/.buildinfo
+++ b/docs/html/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: f3466e6a0c1cc35a831b2e5ec3f8a5f2
+config: a38894f88ac7eaa19d875d76f307766a
tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/docs/html/README.html b/docs/html/README.html
index e21d82b97..28763dc5c 100644
--- a/docs/html/README.html
+++ b/docs/html/README.html
@@ -8,7 +8,7 @@
-
Spot SDKSpot API protocol definition. This reference guide covers the details of the protocol applications used to communicate to Spot. Application developers who wish to use a language other than Python can implement clients that speak the protocol.
Spot SDK Repository. The GitHub repo where all of the Spot SDK code is hosted.
-
This is version 2.3.5 of the SDK. Please review the Release Notes to see what has changed.
+
This is version 3.0.0 of the SDK. Please review the Release Notes to see what has changed.
class BaseDataReader:  # pylint: disable=too-many-instance-attributes
    """Shared parent class for DataReader and StreamedDataReader."""

-    def __init__(self, outfile):
+    def __init__(self, infile=None, filename=None):
        """
+        At least one of the following arguments must be specified.
+
        Args:
-         outfile: binary file-like object for reading (e.g., from open(fname, "rb")).
+         infile: binary file-like object for reading (e.g., from open(fname, "rb")).
+         filename: path of input file, if applicable.
        """
-        self._file = outfile
+        self._file = infile
+        self._filename = filename
+        if not self._file:
+            if not self._filename:
+                raise ValueError("One of infile or filename must be specified")
+            self._file = open(self._filename, 'rb')
        self._file_descriptor = None
        self._spec_index = None
        self._index_offset = None
@@ -579,6 +628,11 @@
Source code for bosdyn.bddf.base_data_reader
         self._file_index = None
         self._read_header()

+    @property
+    def filename(self):
+        """Return input file name, if specified, or None if not."""
+        return self._filename
+
     @property
     def file_descriptor(self):
         """Return the file descriptor from the start of the file/stream."""
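The new constructor contract above (at least one of `infile` or `filename` must be given) can be illustrated with a minimal stand-alone sketch, independent of the bddf classes; `ReaderSketch` is a hypothetical name for illustration only.

```python
import io

class ReaderSketch:
    """Minimal sketch of the 'at least one of infile or filename' contract:
    an in-memory stream works directly, otherwise the path is opened in
    binary mode. Not the real BaseDataReader, just its argument handling."""

    def __init__(self, infile=None, filename=None):
        self._filename = filename
        self._file = infile
        if not self._file:
            if not self._filename:
                raise ValueError("One of infile or filename must be specified")
            self._file = open(self._filename, 'rb')

    @property
    def filename(self):
        """Return input file name, if specified, or None if not."""
        return self._filename

# Passing an in-memory stream works without a filename:
reader = ReaderSketch(infile=io.BytesIO(b"bddf bytes"))
print(reader.filename)  # -> None
```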
diff --git a/docs/html/_modules/bosdyn/bddf/block_writer.html b/docs/html/_modules/bosdyn/bddf/block_writer.html
index e7eee0c82..67b4c7e60 100644
--- a/docs/html/_modules/bosdyn/bddf/block_writer.html
+++ b/docs/html/_modules/bosdyn/bddf/block_writer.html
@@ -7,7 +7,7 @@
- bosdyn.bddf.block_writer — Spot 2.3.5 documentation
+ bosdyn.bddf.block_writer — Spot 3.0.0 documentation
@@ -66,7 +66,7 @@
     Methods raise ParseError if there is a problem with the format of the file.
     """

-    def __init__(self, outfile):
+    def __init__(self, infile=None, filename=None):
+        """
+        At least one of the following arguments must be specified.
+
         Args:
-            outfile: binary file-like object for reading (e.g., from open(fname, "rb")).
+            infile: binary file-like object for reading (e.g., from open(fname, "rb")).
+            filename: path of input file, if applicable.
+        """
-        super(DataReader, self).__init__(outfile)
+        super(DataReader, self).__init__(infile, filename)
         self._series_index_to_descriptor = {}
         self._series_index_to_block_index = {}  # {series_index -> SeriesBlockIndex}
         self._read_index()
diff --git a/docs/html/_modules/bosdyn/bddf/data_writer.html b/docs/html/_modules/bosdyn/bddf/data_writer.html
index 2c0507453..94112204a 100644
--- a/docs/html/_modules/bosdyn/bddf/data_writer.html
+++ b/docs/html/_modules/bosdyn/bddf/data_writer.html
@@ -7,7 +7,7 @@
- bosdyn.bddf.data_writer — Spot 2.3.5 documentation
+ bosdyn.bddf.data_writer — Spot 3.0.0 documentation
@@ -66,7 +66,7 @@
+# Copyright (c) 2021 Boston Dynamics, Inc. All rights reserved.
+#
+# Downloading, reproducing, distributing or otherwise using the SDK Software
+# is subject to the terms and conditions of the Boston Dynamics Software
+# Development Kit License (20191101-BDSDK-SL).
+
+"""A class for reading message data from a DataFile."""
+
+from .common import PROTOBUF_CONTENT_TYPE
+
+
+class MessageReader:
+    """A class for reading message data from a DataFile.
+
+    Methods raise ParseError if there is a problem with the format of the file.
+    """
+
+    def __init__(self, data_reader, require_protobuf=False):
+        self._data_reader = data_reader
+        self._channel_name_to_series_descriptor = {}
+        self._channel_name_to_series_index = {}
+        for series_index, series_identifier in enumerate(data_reader.file_index.series_identifiers):
+            try:
+                channel_name = series_identifier.spec["bosdyn:channel"]
+            except KeyError:
+                continue  # Not a protobuf series
+            series_descriptor = self._data_reader.series_descriptor(series_index)
+            if series_descriptor.WhichOneof("DataType") != "message_type":
+                continue  # Not a protobuf series
+            message_type = series_descriptor.message_type
+            if require_protobuf and message_type.content_type != PROTOBUF_CONTENT_TYPE:
+                continue  # Not a protobuf series
+            self._channel_name_to_series_descriptor[channel_name] = series_descriptor
+            self._channel_name_to_series_index[channel_name] = series_descriptor.series_index
+
+    @property
+    def data_reader(self):
+        """Return underlying DataReader this object is using."""
+        return self._data_reader
+
+    @property
+    def channel_name_to_series_decriptor(self):
+        """Return a mapping of {channel name -> series descriptor} for message series."""
+        return self._channel_name_to_series_descriptor
+
+    def series_index(self, channel_name, message_type=None):
+        """Return series index (int) to access SeriesDescriptors and messages.
+
+        Args:
+            channel_name: name of the channel of messages.
+            message_type: specify message type, if channel may have multiple
+                          kinds of messages (optional)
+        """
+        if message_type is None:
+            return self._channel_name_to_series_index[channel_name]
+
+        for series_index, series_identifier in enumerate(
+                self._data_reader.file_index.series_identifiers):
+            try:
+                series_channel = series_identifier.spec["bosdyn:channel"]
+            except KeyError:
+                continue  # Not a protobuf series
+            if series_channel != channel_name:
+                continue
+            series_descriptor = self._data_reader.series_descriptor(series_index)
+            if series_descriptor.WhichOneof("DataType") != "message_type":
+                continue  # Not a protobuf series
+            if message_type == series_descriptor.message_type.type_name:
+                return series_index
+
+        raise KeyError("No series with channel_name={} and message_type={}".format(
+            channel_name, message_type))
+
+    def series_index_to_descriptor(self, series_index):
+        """Given a series index, return the associated SeriesDescriptor
+
+        Args:
+            series_index: index (int) from the series_index() call
+        """
+        return self._data_reader.file_index.series_descriptor(series_index)
+
+    def get_blob(self, series_index, index_in_series):
+        """Return binary data from message stored in the file.
+
+        Args:
+            series_index: index (int) from the series_index() call
+            index_in_series: the index of the message within the series
+
+        Returns: DataTypeDescriptor for channel, timestamp_nsec (int), binary data
+        """
+        return self._data_reader.read(series_index, index_in_series)
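The channel-indexing loop in `MessageReader.__init__` can be sketched on plain dictionaries. The `bosdyn:channel` spec key is the one the class itself uses; everything else here (the function name, the list-of-dicts input) is a simplification for illustration.

```python
def build_channel_index(series_specs):
    """Sketch of MessageReader's channel indexing: walk the series specs,
    skip entries without a 'bosdyn:channel' key, and map each channel
    name to the index of the series that carries it."""
    mapping = {}
    for series_index, spec in enumerate(series_specs):
        try:
            channel = spec["bosdyn:channel"]
        except KeyError:
            continue  # Not a message series
        mapping[channel] = series_index
    return mapping

specs = [{"bosdyn:channel": "imu"}, {"other": "x"}, {"bosdyn:channel": "odom"}]
print(build_channel_index(specs))  # -> {'imu': 0, 'odom': 2}
```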
# Development Kit License (20191101-BDSDK-SL).

"""A class for reading Protobuf data from a DataFile."""
-
-from .common import PROTOBUF_CONTENT_TYPE
+from .message_reader import MessageReader


class ProtobufReader(MessageReader):
    """A class for reading Protobuf data from a DataFile.

    Methods raise ParseError if there is a problem with the format of the file.
    """

    def __init__(self, data_reader):
-        self._data_reader = data_reader
-        self._channel_name_to_series_descriptor = {}
-        self._channel_name_to_series_index = {}
-        for series_index, series_identifier in enumerate(data_reader.file_index.series_identifiers):
-            try:
-                channel_name = series_identifier.spec["bosdyn:channel"]
-            except KeyError:
-                continue  # Not a protobuf series
-            series_descriptor = self._data_reader.series_descriptor(series_index)
-            if series_descriptor.WhichOneof("DataType") != "message_type":
-                continue  # Not a protobuf series
-            message_type = series_descriptor.message_type
-            if message_type.content_type != PROTOBUF_CONTENT_TYPE:
-                continue  # Not a protobuf series
-            self._channel_name_to_series_descriptor[channel_name] = series_descriptor
-            self._channel_name_to_series_index[channel_name] = series_descriptor.series_index
-
-    @property
-    def data_reader(self):
-        """Return underlying DataReader this object is using."""
-        return self._data_reader
-
-    def series_index(self, channel_name):
-        """Return the series index (int) by which SeriesDescriptors and messages can be accessed.
-
-        Args:
-            channel_name: name of the channel of messages.
-        """
-        return self._channel_name_to_series_index[channel_name]
-
-    def series_index_to_descriptor(self, series_index):
-        """Given a series index, return the associated SeriesDescriptor
-
-        Args:
-            series_index: index (int) from the series_index() call
-        """
-        return self._data_reader.file_index.series_descriptor(series_index)

    def get_message(self, series_index, protobuf_type, index_in_series):
        """Return a deserialized protobuf from bytes stored in the file.
@@ -611,7 +615,7 @@
Source code for bosdyn.bddf.protobuf_series_writer
"""Class for registering a series which stores protobuf messages in a message series."""
-from .bosdyn import MessageChannel
-from .common import PROTOBUF_CONTENT_TYPE
+from .bosdyn import MessageChannel
+from .common import PROTOBUF_CONTENT_TYPE
+

class ProtobufSeriesWriter:  # pylint: disable=too-many-instance-attributes
    """A class for registering a series which stores protobuf messages in a message series.
diff --git a/docs/html/_modules/bosdyn/bddf/stream_data_reader.html b/docs/html/_modules/bosdyn/bddf/stream_data_reader.html
index 9c328a087..93d9727ff 100644
--- a/docs/html/_modules/bosdyn/bddf/stream_data_reader.html
+++ b/docs/html/_modules/bosdyn/bddf/stream_data_reader.html
@@ -7,7 +7,7 @@
- bosdyn.bddf.stream_data_reader — Spot 2.3.5 documentation
+ bosdyn.bddf.stream_data_reader — Spot 3.0.0 documentation
@@ -66,7 +66,7 @@
Source code for bosdyn.choreography.client.animation_file_conversion_helpers
+# Copyright (c) 2021 Boston Dynamics, Inc. All rights reserved.
+#
+# Downloading, reproducing, distributing or otherwise using the SDK Software
+# is subject to the terms and conditions of the Boston Dynamics Software
+# Development Kit License (20191101-BDSDK-SL).
+
+"""A set of helpers which convert specific lines from an animation
+file into the animation-specific protobuf messages.
+
+NOTE: All of these helpers are to convert specific values read from a `cha`
+file into fields within the choreography_sequence_pb2.Animation protobuf
+message. They are used by the animation_file_to_proto.py file.
+"""
+
+from bosdyn.api.spot import (choreography_sequence_pb2, choreography_service_pb2,
+                             choreography_service_pb2_grpc)
Source code for bosdyn.choreography.client.animation_file_to_proto
+# Copyright (c) 2021 Boston Dynamics, Inc. All rights reserved.
+#
+# Downloading, reproducing, distributing or otherwise using the SDK Software
+# is subject to the terms and conditions of the Boston Dynamics Software
+# Development Kit License (20191101-BDSDK-SL).
+
+"""A tool to convert animation files into protobuf messages which can be uploaded to the robot
+and used within choreography sequences."""
+
+import argparse
+import hashlib
+import logging
+import ntpath
+import os
+import sys
+
+from google.protobuf import text_format, wrappers_pb2
+
+import bosdyn.client
+import bosdyn.client.util
+from bosdyn.api.spot import choreography_sequence_pb2
+from bosdyn.choreography.client.animation_file_conversion_helpers import *
+
+LOGGER = logging.getLogger(__name__)
+# The options keywords represent the first section of the file, and will be parsed into
+# specific fields within the Animation proto.
+OPTIONS_KEYWORDS_TO_FUNCTION = {
+    "controls": controls_option,
+    "bpm": bpm_option,
+    "extendable": extendable_option,
+    "truncatable": truncatable_option,
+    "neutral_start": neutral_start_option,
+    "precise_steps": precise_steps_option,
+    "precise_timing": precise_timing_option,
+    "no_looping": no_looping_option,
+    "arm_required": arm_required_option,
+    "arm_prohibited": arm_prohibited_option,
+    "track_swing_trajectories": track_swing_trajectories_option,
+    "assume_zero_roll_and_pitch": assume_zero_roll_and_pitch_option,
+    "arm_playback": arm_playback_option,
+    "display_rgb": display_rgb_option,
+    "frequency": frequency_option,
+    "retime_to_integer_slices": retime_to_integer_slices_option,
+    "description": description_option,
+}
+
+# The grouped headers represent animation keyframe columns where a single header keyword
+# specifies multiple protobuf values. For example, body_pos has x/y/z position values.
+GROUPED_HEADERS = {
+    "body_pos": (3, body_pos_handler),
+    "com_pos": (3, com_pos_handler),
+    "body_euler_rpy": (3, body_euler_rpy_angles_handler),
+    "body_quat_xyzw": (4, body_quaternion_xyzw_handler),
+    "body_quat_wxyz": (4, body_quaternion_wxyz_handler),
+    "leg_joints": (12, leg_angles_handler),
+    "foot_pos": (12, foot_pos_handler),
+    "hand_pos": (3, hand_pos_handler),
+    "hand_euler_rpy": (3, hand_euler_rpy_angles_handler),
+    "hand_quat_xyzw": (4, hand_quaternion_xyzw_handler),
+    "hand_quat_wxyz": (4, hand_quaternion_wxyz_handler),
+    "contact": (4, contact_handler),
+    "arm_joints": (6, arm_joints_handler),
+    "fl_angles": (3, fl_angles_handler),
+    "fr_angles": (3, fr_angles_handler),
+    "hl_angles": (3, hl_angles_handler),
+    "hr_angles": (3, hr_angles_handler),
+    "fl_pos": (3, fl_pos_handler),
+    "fr_pos": (3, fr_pos_handler),
+    "hl_pos": (3, hl_pos_handler),
+    "hr_pos": (3, hr_pos_handler),
+}
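The grouped-headers table above pairs each keyword with the number of columns it consumes from a keyframe row. A stand-alone sketch of that consumption pattern (the function and the widths shown are illustrative, not the SDK's handlers):

```python
def consume_grouped(values, groups):
    """Walk a row of keyframe values, slicing off the number of columns
    each grouped header declares, exactly as the parser does with
    GROUPED_HEADERS: (width, handler) pairs."""
    out, index = {}, 0
    for name, width in groups:
        out[name] = values[index:index + width]  # slice end is not inclusive
        index += width
    return out

row = [0.1, 0.2, 0.3, 1.0, 0.0, 0.0, 0.0]
print(consume_grouped(row, [("body_pos", 3), ("body_quat_wxyz", 4)]))
# -> {'body_pos': [0.1, 0.2, 0.3], 'body_quat_wxyz': [1.0, 0.0, 0.0, 0.0]}
```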
+
+# The single headers represent animation keyframe columns where a header keyword
+# specifies a single protobuf value.
+SINGLE_HEADERS = {
+    "body_x": body_x_handler,
+    "body_y": body_y_handler,
+    "body_z": body_z_handler,
+    "com_x": com_x_handler,
+    "com_y": com_y_handler,
+    "com_z": com_z_handler,
+    "body_quat_w": body_quat_w_handler,
+    "body_quat_x": body_quat_x_handler,
+    "body_quat_y": body_quat_y_handler,
+    "body_quat_z": body_quat_z_handler,
+    "body_roll": body_roll_handler,
+    "body_pitch": body_pitch_handler,
+    "body_yaw": body_yaw_handler,
+    "fl_hx": fl_hx_handler,
+    "fl_hy": fl_hy_handler,
+    "fl_kn": fl_kn_handler,
+    "fr_hx": fr_hx_handler,
+    "fr_hy": fr_hy_handler,
+    "fr_kn": fr_kn_handler,
+    "hl_hx": hl_hx_handler,
+    "hl_hy": hl_hy_handler,
+    "hl_kn": hl_kn_handler,
+    "hr_hx": hr_hx_handler,
+    "hr_hy": hr_hy_handler,
+    "hr_kn": hr_kn_handler,
+    "fl_x": fl_x_handler,
+    "fl_y": fl_y_handler,
+    "fl_z": fl_z_handler,
+    "fr_x": fr_x_handler,
+    "fr_y": fr_y_handler,
+    "fr_z": fr_z_handler,
+    "hr_x": hr_x_handler,
+    "hr_y": hr_y_handler,
+    "hr_z": hr_z_handler,
+    "hl_x": hl_x_handler,
+    "hl_y": hl_y_handler,
+    "hl_z": hl_z_handler,
+    "fl_contact": fl_contact_handler,
+    "fr_contact": fr_contact_handler,
+    "hl_contact": hl_contact_handler,
+    "hr_contact": hr_contact_handler,
+    "shoulder0": sh0_handler,
+    "shoulder1": sh1_handler,
+    "elbow0": el0_handler,
+    "elbow1": el1_handler,
+    "wrist0": wr0_handler,
+    "wrist1": wr1_handler,
+    "hand_x": hand_x_handler,
+    "hand_y": hand_y_handler,
+    "hand_z": hand_z_handler,
+    "hand_quat_w": hand_quat_w_handler,
+    "hand_quat_x": hand_quat_x_handler,
+    "hand_quat_y": hand_quat_y_handler,
+    "hand_quat_z": hand_quat_z_handler,
+    "hand_roll": hand_roll_handler,
+    "hand_pitch": hand_pitch_handler,
+    "hand_yaw": hand_yaw_handler,
+    "gripper": gripper_handler,
+    "time": start_time_handler,
+}
+
+COMMENT_DELIMITERS = ["//", "#"]
+
+
+class Animation():
+    """Helper class to track values read from the animation file that are important to choreographer
+    and necessary when uploading animated moves."""
+
+    def __init__(self):
+        # The name of the animated move.
+        self.name = None
+
+        # [Optional] The BPM at which the move will be performed.
+        self.bpm = None
+
+        # [Optional] The frequency at which the keyframes occur. If not provided in the CHA file,
+        # then explicit timestamps must be provided for each keyframe.
+        self.frequency = None
+
+        # Default description for the animated move.
+        self.description = "Animated dance move."
+
+        # The protobuf message representing the animation.
+        self.proto = choreography_sequence_pb2.Animation()
+
+        # The color of the animated move's block when loaded in Choreographer.
+        self.rgb = [100, 100, 100]
+
+        # Each individual parameter line as read from a *.cha file.
+        self.parameter_lines = []
+    def create_move_info_proto(self):
+        """Creates the MoveInfo protobuf message from the parsed animation file.
+
+        Returns:
+            The choreography_sequence.MoveInfo protobuf message for the animation as generated
+            by the different animation fields in the Animation proto.
+        """
+        move_info = choreography_sequence_pb2.MoveInfo()
+        move_info.name = self.name
+        move_info.is_extendable = self.proto.extendable or self.proto.truncatable  # "is adjustable"
+
+        # Should always have move.time, so the duration is the final time of the last frame.
+        move_duration_sec = self.proto.animation_keyframes[-1].time
+        if self.bpm is not None:
+            # Compute the move length in slices using the bpm.
+            slices_per_minute = 4 * self.bpm
+            move_duration_minutes = move_duration_sec / 60.0
+            move_info.move_length_slices = int(move_duration_minutes * slices_per_minute)
+        else:
+            # Just use the time to size the move.
+            move_info.move_length_time = move_duration_sec
+
+        # Set the max/min info based on the truncatable/extendable flags.
+        if self.proto.truncatable and not self.proto.extendable:
+            move_info.max_time = move_info.move_length_time
+            move_info.max_move_length_slices = move_info.move_length_slices
+        elif self.proto.extendable and not self.proto.truncatable:
+            move_info.min_time = move_info.move_length_time
+            move_info.min_move_length_slices = move_info.move_length_slices
+
+        # Set the different track information.
+        move_info.controls_arm = self.proto.controls_arm
+        move_info.controls_gripper = self.proto.controls_gripper
+        move_info.controls_legs = self.proto.controls_legs
+        move_info.controls_body = self.proto.controls_body
+
+        # Set the choreographer-specific display information.
+        move_info.display.category = choreography_sequence_pb2.ChoreographerDisplayInfo.CATEGORY_ANIMATION
+        move_info.display.color.r = self.rgb[0]
+        move_info.display.color.g = self.rgb[1]
+        move_info.display.color.b = self.rgb[2]
+        move_info.display.color.a = 1.0
+        move_info.display.description = self.description
+
+        # Animations are required to start and end in a standing position.
+        move_info.entrance_states.append(choreography_sequence_pb2.MoveInfo.TRANSITION_STATE_STAND)
+        move_info.exit_state = choreography_sequence_pb2.MoveInfo.TRANSITION_STATE_STAND
+
+        return move_info
+
+
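The slice computation in `create_move_info_proto` (4 slices per beat, so `slices_per_minute = 4 * bpm`) can be checked with a small worked example; the helper name here is illustrative.

```python
def move_length_slices(duration_sec, bpm):
    """Mirror of the computation above: 4 slices per beat, so a move's
    length in slices is its duration in minutes times 4 * bpm."""
    slices_per_minute = 4 * bpm
    move_duration_minutes = duration_sec / 60.0
    return int(move_duration_minutes * slices_per_minute)

print(move_length_slices(7.5, 120))  # 7.5 s at 120 BPM -> 60
```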
+
+def set_proto(proto_object, attribute_name, attribute_value):
+    """Helper function to set a field to a specific value in the protobuf message.
+
+    Args:
+        proto_object (Protobuf message): Any generic protobuf message.
+        attribute_name (String): The field name within the protobuf message.
+        attribute_value: A value with type matching the field type defined in the protobuf
+            message definition. This will be saved in the attribute_name field.
+
+    Returns:
+        Nothing. Mutates the input proto_object to update the specified field name to the
+        provided value.
+    """
+    field_attr = getattr(proto_object, attribute_name)
+    field_type = type(field_attr.value)
+    field_attr.value = field_type(attribute_value)
+
+
+def handle_nested_double_value_params(proto_object, name, attribute_value):
+    """Helper function to set a field to a DoubleValue protobuf in a protobuf message.
+
+    Args:
+        proto_object (Protobuf message): Any generic protobuf message.
+        name (String): The field name within the protobuf message. This name should
+            be both the field name and sub-field name separated by a period. For example,
+            for the Vec3 velocity field, the name would be 'velocity.x'.
+        attribute_value: A value with type matching the field type defined in the protobuf
+            message definition. This will be saved in the attribute_name field.
+
+    Returns:
+        Nothing. Mutates the input proto_object to update the specified field name to the
+        provided value.
+    """
+    subfields = name.split(".")
+    current_attr = proto_object
+    for field in subfields:
+        current_attr = getattr(current_attr, field)
+    current_attr.CopyFrom(wrappers_pb2.DoubleValue(value=attribute_value))
+
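`handle_nested_double_value_params` walks a dotted field path such as `'velocity.x'`. The traversal can be sketched on plain Python objects; note the real code ends with a protobuf `CopyFrom` on the leaf wrapper, whereas this sketch uses `setattr`.

```python
class Node:
    """Bare attribute holder standing in for a protobuf message."""

def set_nested(obj, dotted_name, value):
    """Follow every path segment but the last with getattr, then assign
    the leaf. This is the dotted-path traversal the helper above performs."""
    parts = dotted_name.split(".")
    for part in parts[:-1]:
        obj = getattr(obj, part)
    setattr(obj, parts[-1], value)

root = Node()
root.velocity = Node()
set_nested(root, "velocity.x", 1.5)
print(root.velocity.x)  # -> 1.5
```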
+
+
+def read_animation_params(animation):
+    """Parses the set of lines that are the parameters section of the file.
+
+    Reads the parameter lines into the min/max/default values in the Animation proto.
+
+    Args:
+        animation (Animation): The animation class structure containing the parameter lines.
+
+    Returns:
+        The mutated animation class, which now contains populated params fields in the
+        animation's protobuf.
+    """
+    current_params_default = animation.proto.default_parameters
+    current_params_min = animation.proto.minimum_parameters
+    current_params_max = animation.proto.maximum_parameters
+    group_field_splitter = "."
+    for param in animation.parameter_lines:
+        split_line = param.split()
+        param_name = str(split_line[0])
+        min_max_default_vals = [
+            float(split_line[i]) if float(split_line[i]) != 0 else 1e-6 for i in range(1, 4)
+        ]
+
+        # Now create the parameter protobuf message.
+        if group_field_splitter in param_name:
+            # Non-individual fields, so handle slightly differently.
+            try:
+                handle_nested_double_value_params(current_params_min, param_name,
+                                                  min_max_default_vals[0])
+                handle_nested_double_value_params(current_params_default, param_name,
+                                                  min_max_default_vals[1])
+                handle_nested_double_value_params(current_params_max, param_name,
+                                                  min_max_default_vals[2])
+            except AttributeError:
+                err = "Cannot parse file %s: unknown parameter field name %s." % (animation.name,
+                                                                                  param_name)
+                raise Exception(err)
+        else:
+            # Individual field using a DoubleValue proto.
+            try:
+                set_proto(current_params_min, param_name, min_max_default_vals[0])
+                set_proto(current_params_default, param_name, min_max_default_vals[1])
+                set_proto(current_params_max, param_name, min_max_default_vals[2])
+            except AttributeError:
+                err = "Cannot parse file %s: unknown parameter field name %s." % (animation.name,
+                                                                                  param_name)
+                raise Exception(err)
+    return animation
+
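The parameter-line format consumed by `read_animation_params` (a name followed by min/default/max, with exact zeros nudged to 1e-6) can be sketched in isolation; the function name is illustrative.

```python
def parse_param_line(line):
    """Split a parameter line into its name and its three numeric values,
    replacing exact zeros with 1e-6 as the parser above does."""
    parts = line.split()
    name = str(parts[0])
    vals = [float(parts[i]) if float(parts[i]) != 0 else 1e-6 for i in range(1, 4)]
    return name, vals

print(parse_param_line("velocity.x -1.0 0 1.0"))  # -> ('velocity.x', [-1.0, 1e-06, 1.0])
```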
+
+
+def read_and_find_animation_params(animate_move_params_file, filepath_input=True):
+    """Create a mapping of the parameter name to the default parameter values.
+
+    Args:
+        animate_move_params_file (string): Filepath to the default parameters file, or a string
+            representing the contents of the parameters file.
+        filepath_input (boolean): With filepath_input set to True, the animate_move_params_file
+            argument will be interpreted as a file path to the default parameters file. When set
+            to False, the animate_move_params_file argument will be read as the contents of the
+            default parameters file passed as a string.
+
+    Returns:
+        A dictionary containing the parameter name as the key, and the full parameter line from
+        the file as the value.
+    """
+    if filepath_input:
+        # If animate_move_params_file is a filepath, open the file.
+        params_file = open(animate_move_params_file, "r")
+    else:
+        # If animate_move_params_file is a string of parameter information, split it by newlines.
+        params_file = animate_move_params_file.splitlines()
+
+    reading_animate_params = False
+    animate_params_vals = {}
+    for line in params_file:
+        line = line.strip()
+        if "animate_params" in line:
+            # Starting the animate_params section.
+            reading_animate_params = True
+            continue
+
+        if reading_animate_params and line == "":
+            # Finished reading the animate_params section.
+            reading_animate_params = False
+            return animate_params_vals
+
+        if reading_animate_params:
+            # Set the parameter name as the key, and the full parameter line (as a string) as the
+            # value in the animate_params_vals dictionary.
+            split_line = line.split()
+            param_name = split_line[0]
+            animate_params_vals[param_name] = line
+            continue
+    return animate_params_vals
+
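The `animate_params` scan above amounts to: collect lines after the keyword until the first blank line, keyed by their first token. A stand-alone sketch (the generic `find_section` name is an illustration, not SDK API):

```python
def find_section(lines, section_keyword):
    """Collect the lines following section_keyword until the first blank
    line, mapping each line's first token to the full line, mirroring the
    default-parameter scan above."""
    reading, out = False, {}
    for line in lines:
        line = line.strip()
        if section_keyword in line:
            reading = True
            continue
        if reading and line == "":
            break  # End of the section.
        if reading:
            out[line.split()[0]] = line
    return out

text = "header\nanimate_params\nvelocity.x -1 0 1\n\nother"
print(find_section(text.splitlines(), "animate_params"))
# -> {'velocity.x': 'velocity.x -1 0 1'}
```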
+
+
+def convert_animation_file_to_proto(animated_file, animate_move_params_file=""):
+    """Parses a file into the animation proto that will be uploaded to the robot.
+
+    Args:
+        animated_file (string): The filepath to the animation text file.
+        animate_move_params_file (string): [Optional] The filepath to a default set of move parameters.
+
+    Returns:
+        The Animation class, which contains the animation proto to be uploaded to the robot, as well
+        as additional information to be used by Choreographer.
+    """
+
+    # Create a mapping of the default values for each parameter from the MoveParamsConfig.txt file.
+    # These will be used if a user doesn't provide min/max/default, but does include the parameter
+    # name in the file.
+    maybe_use_default_params = animate_move_params_file != ""
+    default_animate_params_values = {}
+    if maybe_use_default_params:
+        if os.path.exists(animate_move_params_file):
+            # If there is a file at animate_move_params_file, try to read it.
+            default_animate_params_values = read_and_find_animation_params(animate_move_params_file)
+        else:
+            # If animate_move_params_file isn't a locatable file, try to read the string as
+            # parameter field data.
+            default_animate_params_values = read_and_find_animation_params(
+                animate_move_params_file, False)
+
+    animation = Animation()
+    animation.name = ntpath.basename(animated_file).split(".cha")[0]
+
+    animation_specs = open(animated_file, "r")
+
+    # Expecting three chunks, separated by a blank line.
+    section_counter = 0
+
+    # The set of keywords that describe each column in the movement section.
+    movement_columns = []
+
+    # If the keyframes need timestamps set based off the frequency, track that information here.
+    # Value 1 indicates whether the timestamps need to be set; value 2 holds the current cumulative
+    # time, summed while iterating over each keyframe in the file.
+    set_keyframe_times = [False, 0]
+
+    # Make up a unique color that is persistent based on the animation name.
+    name_hash = hashlib.md5(animation.name.encode())
+    animation.rgb[0] = int(name_hash.hexdigest()[0:2], 16)
+    animation.rgb[1] = int(name_hash.hexdigest()[2:4], 16)
+    animation.rgb[2] = int(name_hash.hexdigest()[4:6], 16)
+
+    for line in animation_specs:
+        line = line.strip()
+
+        if line == "":
+            # The sections are separated by an empty line.
+            section_counter += 1
+            continue
+
+        # Check if there are any comments. Comments can be either at the end of an existing line
+        # or on a line of their own. They are marked with # or //. We want to just ignore them and
+        # continue parsing the file as normal.
+        for delim in COMMENT_DELIMITERS:
+            line = line.split(delim)[0]  # Take any content before the comment starts.
+
+        if line == "":
+            # If no content is left after stripping the comments, then continue.
+            # We do NOT increment the section counter for comment lines!
+            continue
+
+        if section_counter == 0:
+            # The first section is the "options section".
+            # Take the first word of the line and use that as the options keyword. Apply whatever
+            # function is specified for that keyword to the remaining line values.
+            file_line_split = line.split()
+            keyword = file_line_split[0]
+            if keyword in OPTIONS_KEYWORDS_TO_FUNCTION:
+                # Apply the keyword's functionality to the animation class.
+                OPTIONS_KEYWORDS_TO_FUNCTION[keyword](file_line_split, animation)
+
+        elif section_counter == 1:
+            # The second section represents the parameters for choreographer. Simply store the
+            # lines for use by choreographer's config reader.
+            line = line.strip().lower()
+            if "no parameters" in line:
+                # This is the keyword for no parameters, so just skip the section and move on.
+                continue
+            split_line = line.split()
+            if len(split_line) == 1 and maybe_use_default_params:
+                # If the parameter name was provided with no user-specified min/max/default values,
+                # then attempt to use the default values from MoveParamsConfig.txt.
+                if split_line[0] in default_animate_params_values:
+                    animation.parameter_lines.append(default_animate_params_values[split_line[0]])
+                else:
+                    err = "Cannot parse file %s: parameter field name %s was provided but is not a default parameter." % (
+                        animation.name, split_line[0])
+                    raise Exception(err)
+            else:
+                animation.parameter_lines.append(line)
+
+        elif section_counter == 2:
+            # The final section is the animated moves section.
+            if len(movement_columns) == 0:
+                # The first line will contain all the different column headers.
+                movement_columns = line.split()
+
+                # If "time" is not a column, then it should be set for every keyframe based on
+                # the frequency.
+                if "time" not in movement_columns:
+                    set_keyframe_times[0] = True
+                    if animation.frequency is None:
+                        prefix = "Cannot parse file %s: " % animation.name
+                        err = prefix + "Either frequency or keyframe timestamps must be provided. Neither were found."
+                        raise Exception(err)
+                continue
+
+            vals = line.split()
+            animation_keyframe = choreography_sequence_pb2.AnimationKeyframe()
+            current_index = 0
+            for header in movement_columns:
+                if header in GROUPED_HEADERS:
+                    # For grouped headers, get the next N line values, where N is specified by the
+                    # grouped headers dict, and set those in the animation keyframe protobuf message.
+                    header_activities = GROUPED_HEADERS[header]
+                    header_values = vals[current_index:current_index +
+                                         header_activities[0]]  # not inclusive
+                    header_values = [float(val) if val != '0' else 1e-6 for val in header_values]
+                    header_activities[1](header_values, animation_keyframe)
+                    current_index = current_index + header_activities[0]
+                elif header in SINGLE_HEADERS:
+                    # Add the single value into the animation keyframe protobuf message.
+                    header_value = float(vals[current_index])
+                    if header_value == 0:
+                        header_value = 1e-6
+                    SINGLE_HEADERS[header](header_value, animation_keyframe)
+                    current_index += 1
+                else:
+                    # Don't fail silently and mismatch indices of other groups if one group header
+                    # is not found.
+                    err = "Cannot parse file %s: Unknown body movement keyword %s" % (
+                        animation.name, header)
+                    raise Exception(err)
+
+            if set_keyframe_times[0]:
+                # Update the timestamp in the keyframe, then increment it based on the frequency.
+                animation_keyframe.time = set_keyframe_times[1]
+                set_keyframe_times[1] += (1.0 / animation.frequency)
+
+            # Add the animation frame into the animation proto.
+            animation.proto.animation_keyframes.extend([animation_keyframe])
+
+        else:
+            # An animation file should only have 3 sections: the options, the parameters, and the
+            # body movement keyframes.
+            err = (
+                "Cannot parse file %s: Animation file contains more than 3 sections delineated by whitespace."
+                " Make sure all comments are included in an existing section." % (animation.name))
+            raise Exception(err)
+
+    animation.proto.name = animation.name
+    if animation.bpm is not None:
+        animation.proto.bpm = animation.bpm
+
+    # Read out the parameters into protobuf messages.
+    read_animation_params(animation)
+
+    return animation
+
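The comment handling in the converter above keeps only the content before the first `//` or `#`, whether the comment ends a line or occupies a whole line. A stand-alone sketch:

```python
COMMENT_DELIMITERS = ["//", "#"]

def strip_comment(line):
    """Drop everything from the first comment delimiter onward, then trim
    whitespace, mirroring the converter's comment handling."""
    for delim in COMMENT_DELIMITERS:
        line = line.split(delim)[0]
    return line.strip()

print(strip_comment("body_pos 0.1 0.2 0.3  // planted pose"))  # -> body_pos 0.1 0.2 0.3
```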
+
+
+def write_animation_to_dest(animation, destination):
+    """Write the new animation proto to a .cap file.
+
+    Args:
+        animation (Animation class): The animation class object generated by the
+            `cha` file conversion helpers to save the protobuf from.
+        destination (string): The full filepath to the location to save the animation
+            protobuf message.
+    """
+    if animation.name is None:
+        LOGGER.error("Invalid file name, cannot save choreography sequence.")
+        raise IOError("Invalid file name, cannot save choreography sequence.")
+
+    if not os.path.exists(destination):
+        LOGGER.error("Path (%s) to save file does not exist. Creating it." % destination)
+        os.makedirs(destination, exist_ok=True)
+
+    animation_proto_bytes = text_format.MessageToString(animation.proto)
+    with open(str(os.path.join(destination, animation.name + ".cap")), 'w') as f:
+        f.write(animation_proto_bytes)
+
+
+
+def main():
+    parser = argparse.ArgumentParser()
+    parser.add_argument('--cha-filepath', help='The filename of the animation file.')
+    parser.add_argument('--cha-directory', help="The filepath to a directory with animation files.")
+    parser.add_argument(
+        '--config-filepath', help=
+        'The filepath for a move params config file. This can be found using the ListAllMoves RPC.',
+        default="")
+    parser.add_argument('--destination-filepath',
+                        help='The file location to save the animation proto.', default=".")
+    options = parser.parse_args()
+
+    if options.cha_filepath:
+        animation = convert_animation_file_to_proto(options.cha_filepath, options.config_filepath)
+        write_animation_to_dest(animation, options.destination_filepath)
+
+    elif options.cha_directory:
+        files_in_dir = os.listdir(options.cha_directory)
+        for filename in files_in_dir:
+            if filename.endswith(".cha"):
+                animation = convert_animation_file_to_proto(
+                    os.path.join(options.cha_directory, filename), options.config_filepath)
+                write_animation_to_dest(animation, options.destination_filepath)
+
+    else:
+        print(
+            "Please provide either the --cha-filepath argument for a single animation file, or the --cha-directory argument"
+            " for a full directory of animation files.")
+
+    return True
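When no `time` column is present, the converter assigns keyframe timestamps from the declared frequency. A sketch of that spacing (the real loop accumulates the `1/frequency` step frame by frame; this helper computes the same times directly and is named for illustration only):

```python
def keyframe_times(num_frames, frequency_hz):
    """Return the timestamp assigned to each keyframe when times are
    derived from the file's frequency: frame i lands at i / frequency."""
    step = 1.0 / frequency_hz
    return [i * step for i in range(num_frames)]

print(keyframe_times(4, 2.0))  # -> [0.0, 0.5, 1.0, 1.5]
```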
Source code for bosdyn.choreography.client.choreography
[docs]deflist_all_moves(self,**kwargs):"""Get a list of the different choreography sequence moves and associated parameters."""req=choreography_sequence_pb2.ListAllMovesRequest()
- returnself.call(self._stub.ListAllMoves,req,
- value_from_response=None,# Return the complete response message
- error_from_response=common_header_errors,**kwargs)
[docs]deflist_all_moves_async(self,object_type=None,time_start_point=None,**kwargs):"""Async version of list_all_moves()."""req=choreography_sequence_pb2.ListAllMovesRequest()
- returnself.call_async(self._stub.ListAllMoves,req,
- value_from_response=None,# Return the complete response message
- error_from_response=common_header_errors,**kwargs)
    def upload_choreography(self, choreography_seq, non_strict_parsing=True, **kwargs):
-        """Upload the choreography sequence to the robot."""
-        req = choreography_sequence_pb2.UploadChoreographyRequest(
-            choreography_sequence=choreography_seq, non_strict_parsing=non_strict_parsing)
-        return self.call(self._stub.UploadChoreography, req,
-                         value_from_response=None,  # Return the complete response message
-                         error_from_response=common_header_errors,
-                         **kwargs)
+ """Upload the choreography sequence to the robot.
+
+ Args:
+ choreography_seq (choreography_sequence_pb2.ChoreographySequence proto): The
+ dance sequence to be sent and stored on the robot.
+            non_strict_parsing (boolean): If true, the robot will fix any correctable errors within
+                the choreography and allow users to run the dance. If false, the robot will reject
+                the choreography if it contains any errors.
+
+ Returns:
+ The UploadChoreographyResponse message, which includes any warnings generated from the
+ validation process for the choreography. If non_strict_parsing=False and there are warnings,
+ then the dance will not be able to run.
+ """
+        req = choreography_sequence_pb2.UploadChoreographyRequest(
+            choreography_sequence=choreography_seq, non_strict_parsing=non_strict_parsing)
+        return self.call(
+            self._stub.UploadChoreography,
+            req,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=common_header_errors,
+            **kwargs)
    def upload_choreography_async(self, choreography_seq, non_strict_parsing=True, **kwargs):
        """Async version of upload_choreography()."""
-        req = choreography_sequence_pb2.UploadChoreographyRequest(
-            choreography_sequence=choreography_seq, non_strict_parsing=non_strict_parsing)
-        return self.call_async(self._stub.UploadChoreography, req,
-                               value_from_response=None,  # Return the complete response message
-                               error_from_response=common_header_errors, **kwargs)
    def upload_animated_move(self, animation, generated_id="", **kwargs):
+ """Upload the animation proto to the robot to be used as a move in choreography sequences.
+
+ Args:
+ animation (choreography_sequence_pb2.Animation): The animated move protobuf message. This
+ can be generated by converting a `cha` file using the animation_file_to_proto helpers.
+ generated_id (string): The ID hash generated for the animation based on the serialization
+ of the protobuf message. This can be left empty, and the robot will re-parse and
+ validate the message. This will be filled out automatically when using
+ the AnimationUploadHelper.
+
+ Returns:
+ The UploadAnimatedMoveResponse message, which includes warnings if the uploaded animation
+ was invalid.
+ """
+        gen_id_proto = StringValue(value=generated_id)
+        req = choreography_sequence_pb2.UploadAnimatedMoveRequest(
+            animated_move=animation, animated_move_generated_id=gen_id_proto)
+        return self.call(
+            self._stub.UploadAnimatedMove,
+            req,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=_upload_animated_move_errors,
+            **kwargs)
+
+
    def upload_animated_move_async(self, animation, generated_id="", **kwargs):
+        """Async version of upload_animated_move()."""
+        gen_id_proto = StringValue(value=generated_id)
+        req = choreography_sequence_pb2.UploadAnimatedMoveRequest(
+            animated_move=animation, animated_move_generated_id=gen_id_proto)
+        return self.call_async(
+            self._stub.UploadAnimatedMove,
+            req,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=_upload_animated_move_errors,
+            **kwargs)
+
+
+
    def choreography_log_to_animation_file(self, name, fpath, has_arm, *args):
+ """Turn the choreography log from the proto into an animation `cha` file type.
+
+ Args:
+ name (string): Name that the `cha` file will be saved as.
+ fpath (string): Location where the new `cha` file will be saved.
+            has_arm (boolean): True if the robot has an arm, False otherwise. When False, arm
+                motion won't be added to the `cha` file.
+ *args (string(s) or list): String(s), list of strings, or a mix of string(s) and list(s) that are the options to be
+ included in the `cha` file. (ex. 'truncatable')
+
+ Returns:
+ The filename of the new animation `cha` file for the most recent choreography log.
+ """
+
+        def option_list(*argvs):
+            """Takes a list of options, a variable number of string arguments, or a mix of the
+            two, and returns a single list containing all of the options to be included.
+            """
+            x = []
+            for i in argvs:
+                if 'list' in str(type(i)):
+                    x += list(i)
+                elif 'str' in str(type(i)):
+                    x.append(i)
+            return x
+
+        def list_to_formated_string(alist, space):
+            """Helper function for formatting the `cha` file columns."""
+            format_str = ''.join(space for _ in alist)
+            return format_str.format(*alist)
+
+        def timestamp_to_seconds(timestamp):
+            """Helper function to turn a seconds quantity and nanoseconds quantity into one value of unit seconds."""
+            return timestamp.seconds + 1e-9 * timestamp.nanos
+
+
+        # Get the choreography log for the recording.
+        log_type = choreography_sequence_pb2.DownloadRobotStateLogRequest.LOG_TYPE_MANUAL
+        response_status, choreography_log = self.download_robot_state_log(log_type)
+
+        # Create the header for the *.cha file.
+        joint_type_list = ["leg_joints", "body_pos", "body_quat_xyzw", "time", "contact"]
+        controls_header = "controls legs body"
+        description = "Animation created from log recording."
+
+        if has_arm:
+            # If the robot has an arm, add the gripper and arm to the header description and keywords section.
+            joint_type_list.append("arm_joints")
+            joint_type_list.append("gripper")
+            controls_header = controls_header + " arm gripper"
+
+        # Format the complete header with all the options to be included in the *.cha file.
+        joint_spacer = "{:<60}"
+        header = (controls_header + "\n"
+                  "description: " + description + "\n")
+
+        for option in option_list(*args):
+            header = header + option + "\n"
+
+        header = header + ("\n"
+                           "no parameters\n\n" +
+                           list_to_formated_string(joint_type_list, joint_spacer) + "\n")
+
+        spacer_val = '{:<30}'
+        ext = ".cha"
+        file_path = os.path.join(fpath, name + ext)
+        initial_time = -1
+
+        with open(file_path, 'w') as f:
+            # Write the formatted header to the *.cha file.
+            f.write(header)
+
+            # Create an empty list to hold the values for the current keyframe.
+            list_values = []
+
+            # get the list of all the keyframes from the recording,
+            # (the complete description of the robot in space at each timestamp)
+            keyframe_list = choreography_log.key_frames
+
+            for k in keyframe_list:
+ # for each animation keyframe in the protobuf log put the desired joint values and timestamp in a list
+ # and then write them to the *.cha file. Values are added to list_values in the same order as the
+ # joint categories in joint_type_list, and the order of the joint angles within those groups must also
+ # go in the correct order for the *.cha to be read correctly.
+                list_values = []
+
+                # leg_joints values:
+                # front right leg joint values
+                list_values.append(k.joint_angles.fr.hip_x)
+                list_values.append(k.joint_angles.fr.hip_y)
+                list_values.append(k.joint_angles.fr.knee)
+
+                # front left leg joint values
+                list_values.append(k.joint_angles.fl.hip_x)
+                list_values.append(k.joint_angles.fl.hip_y)
+                list_values.append(k.joint_angles.fl.knee)
+
+                # hind right leg joint values
+                list_values.append(k.joint_angles.hr.hip_x)
+                list_values.append(k.joint_angles.hr.hip_y)
+                list_values.append(k.joint_angles.hr.knee)
+
+                # hind left leg joint values
+                list_values.append(k.joint_angles.hl.hip_x)
+                list_values.append(k.joint_angles.hl.hip_y)
+                list_values.append(k.joint_angles.hl.knee)
+
+                # body_pos values:
+                # position of the body in the animation frame
+                list_values.append(k.animation_tform_body.position.x)
+                list_values.append(k.animation_tform_body.position.y)
+                list_values.append(k.animation_tform_body.position.z)
+
+                # body_quat_xyzw values:
+                # rotation of the body in the animation frame
+                list_values.append(k.animation_tform_body.rotation.x)
+                list_values.append(k.animation_tform_body.rotation.y)
+                list_values.append(k.animation_tform_body.rotation.z)
+                list_values.append(k.animation_tform_body.rotation.w)
+
+                # time value:
+                # time, in seconds, when the keyframe position occurs relative to the start of the recording
+                time = timestamp_to_seconds(k.timestamp)
+
+                # If the initial time is negative, this is the first recorded timestamp; set it as the initial time.
+                if initial_time < 0:
+                    initial_time = time
+
+                # Adjust the timestamp so it's relative to the start of the recording.
+                time = time - initial_time
+                list_values.append(time)
+
+                # contact values:
+                # contact state of each foot; 0 for airborne, 1 for contact with the floor.
+                list_values.append(k.foot_contact_state.fr_contact)
+                list_values.append(k.foot_contact_state.fl_contact)
+                list_values.append(k.foot_contact_state.hr_contact)
+                list_values.append(k.foot_contact_state.hl_contact)
+
+                if has_arm:
+                    # If the robot has an arm, record the joint values of the arm and gripper.
+
+                    # arm_joints values:
+                    list_values.append(k.joint_angles.arm.shoulder_0.value)
+                    list_values.append(k.joint_angles.arm.shoulder_1.value)
+                    list_values.append(k.joint_angles.arm.elbow_0.value)
+                    list_values.append(k.joint_angles.arm.elbow_1.value)
+                    list_values.append(k.joint_angles.arm.wrist_0.value)
+                    list_values.append(k.joint_angles.arm.wrist_1.value)
+
+                    # gripper value:
+                    list_values.append(k.joint_angles.gripper_angle.value)
+
+                # Format the values of the keyframe and write them to the *.cha animation file.
+                f.write(list_to_formated_string(list_values, spacer_val))
+                f.write("\n")
+
+        print("Animation *.cha file downloaded to: %s" % file_path)
+        return name + ".cha"
+
+
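The two helpers above drive the `cha` layout: every keyframe row is a list of floats rendered into fixed-width left-aligned columns, and protobuf timestamps collapse into float seconds. A standalone sketch of both, assuming nothing beyond plain Python (the timestamp here is passed as two ints rather than a proto, purely for illustration):

```python
def list_to_formatted_string(values, spacer):
    """Render a list of values as fixed-width columns, matching the cha row layout."""
    # One copy of the spacer (e.g. '{:<30}') per value, then format them all at once.
    format_str = ''.join(spacer for _ in values)
    return format_str.format(*values)

def timestamp_to_seconds(seconds, nanos):
    """Collapse a (seconds, nanos) protobuf-style timestamp into float seconds."""
    return seconds + 1e-9 * nanos

row = list_to_formatted_string([0.1, -0.25, 1.5], '{:<10}')
print(repr(row))  # '0.1       -0.25     1.5       '
print(timestamp_to_seconds(2, 500000000))  # 2.5
```

The SDK method uses a width of 30 (`'{:<30}'`) for keyframe rows and 60 for the header columns.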
+
    def execute_choreography(self, choreography_name, client_start_time,
+                             choreography_starting_slice, lease=None, **kwargs):
+ """Execute the current choreography sequence loaded on the robot by name.
+
+ Args:
+ choreography_name (string): The name of the uploaded choreography to run. The robot
+ only stores a single choreography at a time, so this name should match the last
+ uploaded choreography.
+            client_start_time (float): The time (in seconds) at which the dance should start. This
+                time should be provided in the local clock's timeframe; the client will convert it
+                into the robot's clock timeframe.
+ choreography_starting_slice (int): Which slice to start the dance at when the start
+ time is reached. By default, it will start with the first slice.
+ lease (lease_pb2.Lease protobuf): A specific lease to use for the request. If nothing is
+ provided, the client will append the next lease sequence in this field by default.
+
+ Returns:
+ The full ExecuteChoreographyResponse message.
+ """
+        req = self.build_execute_choreography_request(choreography_name, client_start_time,
+                                                      choreography_starting_slice, lease)
+
+        return self.call(
+            self._stub.ExecuteChoreography,
+            req,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=_execute_choreography_errors,
+            **kwargs)
+
+
    def execute_choreography_async(self, choreography_name, client_start_time,
+                                   choreography_starting_slice, lease=None, **kwargs):
+        """Async version of execute_choreography()."""
+        req = self.build_execute_choreography_request(choreography_name, client_start_time,
+                                                      choreography_starting_slice, lease)
+        return self.call_async(
+            self._stub.ExecuteChoreography,
+            req,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=_execute_choreography_errors,
+            **kwargs)
+
+
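As the `execute_choreography` docstring notes, `client_start_time` lives in the client's clock and gets shifted into the robot's clock via the timesync skew. A toy illustration of that conversion (the skew value is made up; the real client obtains it from its timesync endpoint):

```python
def client_to_robot_time(client_time_secs, clock_skew_secs):
    """Convert a client-clock timestamp to the robot's clock by adding the timesync skew."""
    # skew = robot_clock - client_clock, as estimated by the timesync handshake.
    return client_time_secs + clock_skew_secs

# Example: the robot's clock runs 3.2 s ahead of the client's clock.
print(client_to_robot_time(1000.0, 3.2))  # 1003.2
```

This is why the sequence can be scheduled "5 seconds from now" on the client side and still start at the right instant on the robot.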
    def start_recording_state(self, duration_secs, continue_session_id=0, **kwargs):
+ """Start (or continue) a manually recorded robot state log.
+
+ Args:
+            duration_secs (float): The duration of the recording request in seconds, measured
+                from when the StartRecording RPC is received.
+ continue_session_id (int): If provided, the RPC will continue the recording
+ session associated with that ID.
+
+ Returns:
+ The full StartRecordingStateResponse proto.
+ """
+        request = self.build_start_recording_state_request(duration_secs, continue_session_id)
+        return self.call(
+            self._stub.StartRecordingState,
+            request,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=_start_recording_state_errors,
+            **kwargs)
+
+
    def start_recording_state_async(self, duration_secs, continue_session_id=0, **kwargs):
+        """Async version of start_recording_state()."""
+        request = self.build_start_recording_state_request(duration_secs, continue_session_id)
+        return self.call_async(
+            self._stub.StartRecordingState,
+            request,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=_start_recording_state_errors,
+            **kwargs)
+
+
    def stop_recording_state(self, **kwargs):
+        """Stop recording a manual choreography log.
+
+        Returns:
+            The full StopRecordingStateResponse proto.
+        """
+        request = choreography_sequence_pb2.StopRecordingStateRequest()
+        return self.call(
+            self._stub.StopRecordingState,
+            request,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=common_header_errors,
+            **kwargs)
+
+
    def stop_recording_state_async(self, **kwargs):
+        """Async version of stop_recording_state()."""
+        request = choreography_sequence_pb2.StopRecordingStateRequest()
+        return self.call_async(
+            self._stub.StopRecordingState,
+            request,
+            value_from_response=None,  # Return the complete response message
+            error_from_response=common_header_errors,
+            **kwargs)
+
+
    @staticmethod
+    def build_start_recording_state_request(duration_seconds=None, continue_session_id=0):
+ """Generate a StartRecordingStateRequest proto.
+
+ Args:
+            duration_seconds (float): The duration of the recording request in seconds, measured
+                from when the StartRecording RPC is received.
+ continue_session_id (int): If provided, the RPC will continue the recording
+ session associated with that ID.
+
+ Returns:
+ The full StartRecordingStateRequest proto with fields populated based on the input arguments.
+ """
+        request = choreography_sequence_pb2.StartRecordingStateRequest()
+        request.recording_session_id = continue_session_id
+        if duration_seconds is not None:
+            request.continue_recording_duration.CopyFrom(seconds_to_duration(duration_seconds))
+        return request
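`seconds_to_duration` above packs a float into a Duration proto's integer `seconds` and `nanos` fields. A plain-Python sketch of that split, assuming non-negative durations (field names mirror `google.protobuf.Duration`; the helper name here is illustrative):

```python
def seconds_to_duration_fields(duration_secs):
    """Split non-negative float seconds into the (seconds, nanos) pair of a Duration proto."""
    seconds = int(duration_secs)                           # whole-second part
    nanos = int(round((duration_secs - seconds) * 1e9))    # fractional part in nanoseconds
    return seconds, nanos

print(seconds_to_duration_fields(2.5))  # (2, 500000000)
```

Negative durations need sign handling that this sketch omits; the SDK's real helper lives in `bosdyn.util`.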
-
    def execute_choreography(self, choreography_name, client_start_time,
                             choreography_starting_slice, lease=None, **kwargs):
-        """Execute the current choreography sequence loaded on the robot by name."""
-        req = self.build_execute_choreography_request(choreography_name, client_start_time,
-                                                      choreography_starting_slice, lease)
-        return self.call(self._stub.ExecuteChoreography, req,
-                         value_from_response=None,  # Return the complete response message
-                         error_from_response=_execute_choreography_errors,
-                         **kwargs)
+
    def download_robot_state_log(self, log_type, **kwargs):
+        """Download the manual or automatically collected logs for choreography robot state.
+
+        Args:
+            log_type (choreography_sequence_pb2.LogType): Type of log, either manual or the
+                automatically generated log for the latest choreography.
+
+        Returns:
+            A tuple containing the response status
+            (choreography_sequence_pb2.DownloadRobotStateLogResponse.Status) and the
+            choreography_sequence_pb2.ChoreographyStateLog constructed from the streaming
+            response message.
+        """
+        request = choreography_sequence_pb2.DownloadRobotStateLogRequest(log_type=log_type)
+        return self.call(
+            self._stub.DownloadRobotStateLog,
+            request,
+            value_from_response=_get_streamed_choreography_state_log,  # Parses streamed response
+            error_from_response=_download_robot_state_log_stream_errors,
+            **kwargs)
-
    def execute_choreography_async(self, choreography_name, client_start_time,
-                                   choreography_starting_slice, lease=None, **kwargs):
-        """Async version of execute_choreography()."""
-        req = self.build_execute_choreography_request(choreography_name, client_start_time,
-                                                      choreography_starting_slice, lease)
-        return self.call_async(self._stub.ExecuteChoreography, req,
-                               value_from_response=None,  # Return the complete response message
-                               error_from_response=_execute_choreography_errors, **kwargs)
-
-    def build_execute_choreography_request(self, choreography_name, client_start_time,
-                                           choreography_starting_slice, lease=None):
+    def build_execute_choreography_request(self, choreography_name, client_start_time,
+                                           choreography_starting_slice, lease=None):
+        """Generate the ExecuteChoreographyRequest rpc with the timestamp converted into robot time."""
+        # Note the client_start_time is a time expressed in the client's clock for when the
+        # choreography sequence should begin.
-        request = choreography_sequence_pb2.ExecuteChoreographyRequest(
-            choreography_sequence_name=choreography_name,
-            choreography_starting_slice=float(choreography_starting_slice),
-            lease=lease)
+        request = choreography_sequence_pb2.ExecuteChoreographyRequest(
+            choreography_sequence_name=choreography_name,
+            choreography_starting_slice=float(choreography_starting_slice), lease=lease)
+        if client_start_time is not None:
+            request.start_time.CopyFrom(
+                self._update_timestamp_filter(client_start_time, self.timesync_endpoint))
@@ -660,39 +1050,245 @@
class AnimationUploadHelper:
+ """Helper class to reduce reuploading animations to a robot multiple times.
+
+ This class will generate a hash (unique ID built from the animation protobuf
+ message's contents) for each animation proto, and include this hash when initially
+ uploading animations. It will track the animations sent to the robot and the hashes, and
+ only sends RPCs to upload an animation when the incoming animation proto is different
+ from the one on robot.
+
+ It initializes the set of known animations on robot already by using the ListAllMoves
+ RPC and reading the existing animation names and hashes.
+
+    The hash function is deterministic, so it stays consistent even when the program restarts.
+    The hash is built from the serialized protobuf, and protobuf guarantees that, within a single
+    language, serialization of the same message is consistent.
+
+ Args:
+ robot (Robot sdk instance): The robot to upload animations to.
+ """
+
+    ANIMATION_MOVE_PREFIX = "animation::"
+
+    def __init__(self, robot):
+        self.robot = robot
+        self.choreography_client = robot.ensure_client(ChoreographyClient.default_service_name)
+
+        # Track animation name and current hash on robot.
+        self.animation_name_to_generated_id = {}
+
+        # Initialize the list of known animations and their hashes based on the robot's
+        # ListAllMoves RPC response.
+        self.initialize()
+
+
    def initialize(self):
+        """Determine which animations are already uploaded on robot."""
+        # Get a list of all the existing animations on robot.
+        initial_move_list = self.choreography_client.list_all_moves()
+
+        # Iterate over the list of moves the robot currently has. Track any animation moves
+        # by name and current animation hash. Any moves uploaded using this helper class will
+        # save the animation's hash in this dictionary and compare new animation hashes to
+        # determine whether the animation needs to be re-uploaded.
+        for move in initial_move_list.moves:
+            if AnimationUploadHelper.ANIMATION_MOVE_PREFIX in move.name:
+                # Use the move name without the prefix so that subsequent attempts to upload
+                # that move will still match correctly.
+                move_name = move.name.split(AnimationUploadHelper.ANIMATION_MOVE_PREFIX)[1]
+                gen_id = move.animated_move_generated_id.value
+                self.animation_name_to_generated_id[move_name] = gen_id
+
+
+
    def upload_animated_move(self, animation, **kwargs):
+ """Uploads the animation to robot if the animation protobuf has changed.
+
+ This will only send an UploadAnimatedMove RPC if the incoming animation
+ has a new hash that differs from the current hash for this animation on robot, which
+ indicates that the animation protobuf has changed since the last one uploaded to robot.
+
+ Args:
+            animation (choreography_sequence_pb2.Animation): Animation to maybe upload.
+
+        Returns:
+            The UploadAnimatedMoveResponse protobuf message if the animation is actually sent.
+            If the animation protobuf has not changed and is not sent to the robot, then this
+            function returns None.
+ """
+        generated_id = self.generate_animation_id(animation)
+        if animation.name in self.animation_name_to_generated_id:
+            gen_id_on_robot = self.animation_name_to_generated_id[animation.name]
+            if gen_id_on_robot == generated_id:
+                # Exit early without uploading the animation since it already exists on robot.
+                return None
+        result = self.choreography_client.upload_animated_move(animation, generated_id, **kwargs)
+        if result.status == choreography_sequence_pb2.UploadAnimatedMoveResponse.STATUS_OK:
+            # Add the move name to the tracked list.
+            self.animation_name_to_generated_id[animation.name] = generated_id
+        return result
+
+
+
    def generate_animation_id(self, animation_proto):
+ """Serialize an Animation protobuf message and create a hash from the binary string.
+
+ NOTE: Protobuf's serialization will not be consistent across protobuf versions or
+ even just different languages serializing the same protobuf message. This means that for a
+        single protobuf message, there could be multiple different serializations. This is ok for
+        the use-case of the AnimationUploadHelper, since the IDs are only used within a specific
+        "session" of Choreographer and the robot's boot session. Because of the potential
+        inconsistencies mentioned, the IDs are not meant to be stable indefinitely and should not
+        be used with that expectation.
+
+        Further, if a single animation proto does not generate the same ID within one "session",
+        it will simply be re-uploaded and processed by the robot again.
+
+        Args:
+            animation_proto (choreography_sequence_pb2.Animation): Animation to generate a hash for.
+
+ Returns:
+ A string representing a unique hash built from the animation proto.
+ """
+        return hashlib.sha1(animation_proto.SerializeToString()).hexdigest()
+
+
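The upload-dedup pattern `AnimationUploadHelper` implements can be reproduced with nothing but `hashlib` and a dict. This sketch replaces the robot with an in-memory list so the skip behavior is visible; the class and method names here are illustrative, not SDK API:

```python
import hashlib

class DedupUploader:
    """Skip re-sending payloads whose content hash has not changed."""

    def __init__(self):
        self.name_to_hash = {}   # name -> hash of the last uploaded content
        self.uploads = []        # stands in for RPCs actually sent

    def upload(self, name, payload_bytes):
        """Upload only if the content hash differs from the last upload of this name."""
        digest = hashlib.sha1(payload_bytes).hexdigest()
        if self.name_to_hash.get(name) == digest:
            return None  # unchanged content: no RPC needed
        self.uploads.append(name)
        self.name_to_hash[name] = digest
        return digest

u = DedupUploader()
u.upload("wave", b"v1")
u.upload("wave", b"v1")   # skipped, same hash
u.upload("wave", b"v2")   # re-sent, content changed
print(u.uploads)  # ['wave', 'wave']
```

The helper does the same with `Animation.SerializeToString()` as the payload and the `animated_move_generated_id` field as the hash the robot echoes back.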
class InvalidUploadedChoreographyError(ResponseError):
    """The uploaded choreography is invalid and unable to be performed."""
+
class RobotCommandIssuesError(ResponseError):
    """A problem occurred when issuing the robot command containing the dance."""
+
class LeaseError(ResponseError):
    """Incorrect or invalid leases for data acquisition. Check the lease use results."""
+
+
class AnimationValidationFailedError(ResponseError):
+    """The uploaded animation file is invalid and cannot be used in choreography sequences."""
+
+
+
class NoRecordedInformation(ResponseError):
+    """The choreography service has no logged robot state data."""
+
class RecordingBufferFull(ResponseError):
+    """The recording buffer is full and the current manual log will be truncated."""
+
+
+
class IncompleteData(ResponseError):
+    """The recording buffer filled up; the returned log will be truncated."""
+
+
_EXECUTE_CHOREOGRAPHY_STATUS_TO_ERROR = collections.defaultdict(lambda: (ResponseError, None))
_EXECUTE_CHOREOGRAPHY_STATUS_TO_ERROR.update({
    choreography_sequence_pb2.ExecuteChoreographyResponse.STATUS_OK: (None, None),
-    choreography_sequence_pb2.ExecuteChoreographyResponse.STATUS_INVALID_UPLOADED_CHOREOGRAPHY: (InvalidUploadedChoreographyError,
-                                                                                                InvalidUploadedChoreographyError.__doc__),
+    choreography_sequence_pb2.ExecuteChoreographyResponse.STATUS_INVALID_UPLOADED_CHOREOGRAPHY:
+        (InvalidUploadedChoreographyError, InvalidUploadedChoreographyError.__doc__),
    choreography_sequence_pb2.ExecuteChoreographyResponse.STATUS_ROBOT_COMMAND_ISSUES:
-        (RobotCommandIssuesError, RobotCommandIssuesError.__doc__),
+        (RobotCommandIssuesError, RobotCommandIssuesError.__doc__),
+    choreography_sequence_pb2.ExecuteChoreographyResponse.STATUS_LEASE_ERROR:
+        (LeaseError, LeaseError.__doc__),
+})
+
@handle_common_header_errors
@handle_lease_use_result_errors
@handle_unset_status_error(unset='STATUS_UNKNOWN')
def _execute_choreography_errors(response):
    """Return an exception based on response from ExecuteChoreography RPC, None if no error."""
-    return error_factory(response, response.status,
-                         status_to_string=choreography_sequence_pb2.ExecuteChoreographyResponse.Status.Name,
-                         status_to_error=_EXECUTE_CHOREOGRAPHY_STATUS_TO_ERROR)
+    return error_factory(
+        response, response.status,
+        status_to_string=choreography_sequence_pb2.ExecuteChoreographyResponse.Status.Name,
+        status_to_error=_EXECUTE_CHOREOGRAPHY_STATUS_TO_ERROR)
+
+
+_UPLOAD_ANIMATED_MOVE_STATUS_TO_ERROR = collections.defaultdict(lambda: (ResponseError, None))
+_UPLOAD_ANIMATED_MOVE_STATUS_TO_ERROR.update({
+    choreography_sequence_pb2.UploadAnimatedMoveResponse.STATUS_OK: (None, None),
+    choreography_sequence_pb2.UploadAnimatedMoveResponse.STATUS_ANIMATION_VALIDATION_FAILED:
+        (AnimationValidationFailedError, AnimationValidationFailedError.__doc__),
+})
+
+
+@handle_common_header_errors
+@handle_unset_status_error(unset='STATUS_UNKNOWN')
+def _upload_animated_move_errors(response):
+    """Return an exception based on response from UploadAnimatedMove RPC, None if no error."""
+    return error_factory(
+        response, response.status,
+        status_to_string=choreography_sequence_pb2.UploadAnimatedMoveResponse.Status.Name,
+        status_to_error=_UPLOAD_ANIMATED_MOVE_STATUS_TO_ERROR)
+
+
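The `defaultdict(lambda: (ResponseError, None))` idiom above maps every known status code to an (exception class, message) pair while any unknown status falls back to a generic error. A self-contained sketch of that lookup, with plain exception classes standing in for the SDK's `ResponseError` hierarchy (all names here are illustrative):

```python
import collections

class GenericError(Exception):
    """Fallback for unrecognized status codes."""

class ValidationFailed(GenericError):
    """The uploaded data failed validation."""

# Stand-ins for proto enum values.
STATUS_OK, STATUS_VALIDATION_FAILED = 1, 2

STATUS_TO_ERROR = collections.defaultdict(lambda: (GenericError, None))
STATUS_TO_ERROR.update({
    STATUS_OK: (None, None),  # success: no exception
    STATUS_VALIDATION_FAILED: (ValidationFailed, ValidationFailed.__doc__),
})

def error_from_status(status):
    """Return an exception instance for a status code, or None on success."""
    error_cls, message = STATUS_TO_ERROR[status]
    return None if error_cls is None else error_cls(message)

print(error_from_status(STATUS_OK))          # None
print(type(error_from_status(99)).__name__)  # GenericError
```

Using the class docstring as the error message is exactly what the `(Error, Error.__doc__)` pairs in the tables above do.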
+_START_RECORDING_STATE_STATUS_TO_ERROR = collections.defaultdict(lambda: (ResponseError, None))
+_START_RECORDING_STATE_STATUS_TO_ERROR.update({
+    choreography_sequence_pb2.StartRecordingStateResponse.STATUS_OK: (None, None),
+    choreography_sequence_pb2.StartRecordingStateResponse.STATUS_UNKNOWN_RECORDING_SESSION_ID:
+        (UnknownRecordingSessionId, UnknownRecordingSessionId.__doc__),
+    choreography_sequence_pb2.StartRecordingStateResponse.STATUS_RECORDING_BUFFER_FULL:
+        (RecordingBufferFull, RecordingBufferFull.__doc__),
+})
+
+
+@handle_common_header_errors
+@handle_unset_status_error(unset='STATUS_UNKNOWN')
+def _start_recording_state_errors(response):
+    """Return an exception based on response from StartRecordingState RPC, None if no error."""
+    return error_factory(
+        response, response.status,
+        status_to_string=choreography_sequence_pb2.StartRecordingStateResponse.Status.Name,
+        status_to_error=_START_RECORDING_STATE_STATUS_TO_ERROR)
+
+
+@handle_common_header_errors
+@handle_unset_status_error(unset='STATUS_UNKNOWN')
+def _download_robot_state_log_stream_errors(response):
+    """Return a custom exception based on download robot state log streaming response, None if no error."""
+    # Iterate through the response since the download request responds with a stream.
+    for resp in response:
+        # Handle error statuses from the request.
+        if (resp.status == choreography_sequence_pb2.DownloadRobotStateLogResponse.
+                STATUS_NO_RECORDED_INFORMATION):
+            return NoRecordedInformation(response=resp, error_message=NoRecordedInformation.__doc__)
+    # All responses (in the iterator) had status ok.
+    return None
+
'''Static helper methods.'''
+def _get_streamed_choreography_state_log(response):
+    """Reads a streamed response to recreate a ChoreographyStateLog proto.
+
+    Args:
+        response (choreography_sequence_pb2.DownloadRobotStateLogResponse): Streamed response
+            with the choreography log.
+
+    Returns:
+        A tuple containing the response status
+        (choreography_sequence_pb2.DownloadRobotStateLogResponse.Status) and the
+        choreography_sequence_pb2.ChoreographyStateLog constructed from the streaming
+        response message.
+    """
+    data = b''
+    num_chunks = 0
+    initial_status = None
+    for resp in response:
+        if num_chunks == 0:
+            initial_status = resp.status
+            data = resp.chunk.data
+        else:
+            data += resp.chunk.data
+        num_chunks += 1
+    choreography_log = choreography_sequence_pb2.ChoreographyStateLog()
+    if num_chunks > 0:
+        choreography_log.ParseFromString(data)
+    return (initial_status, choreography_log)
+
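`_get_streamed_choreography_state_log` concatenates the `chunk.data` payloads from a streaming RPC before parsing the full proto. Stripped of protobuf, the reassembly loop looks like this (the namedtuples are fakes standing in for streamed response messages):

```python
from collections import namedtuple

# Minimal stand-ins for the streamed response messages.
Chunk = namedtuple("Chunk", ["data"])
Response = namedtuple("Response", ["status", "chunk"])

def reassemble(stream):
    """Concatenate chunk payloads from a response stream; keep the first response's status."""
    data = b''
    initial_status = None
    for i, resp in enumerate(stream):
        if i == 0:
            initial_status = resp.status
        data += resp.chunk.data
    return initial_status, data

stream = [Response("OK", Chunk(b"hel")), Response("OK", Chunk(b"lo"))]
print(reassemble(stream))  # ('OK', b'hello')
```

In the real function the reassembled bytes are then fed to `ChoreographyStateLog.ParseFromString`.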
def load_choreography_sequence_from_binary_file(file_path):
    """Read a choreography sequence file into a protobuf ChoreographySequence message."""
    if not os.path.exists(file_path):
@@ -706,6 +1302,7 @@
return choreography_sequence
+
def load_choreography_sequence_from_txt_file(file_path):
    if not os.path.exists(file_path):
        LOGGER.error("File not found at: %s" % file_path)
@@ -718,6 +1315,7 @@
return choreography_sequence
+
def save_choreography_sequence_to_file(file_path, file_name, choreography):
    """Saves a choreography sequence to a file."""
    if (file_name is None or len(file_name) == 0):
diff --git a/docs/html/_modules/bosdyn/client/arm_surface_contact.html b/docs/html/_modules/bosdyn/client/arm_surface_contact.html
index f3f4ea989..8d286c7f8 100644
--- a/docs/html/_modules/bosdyn/client/arm_surface_contact.html
+++ b/docs/html/_modules/bosdyn/client/arm_surface_contact.html
@@ -7,7 +7,7 @@
- bosdyn.client.arm_surface_contact — Spot 2.3.5 documentation
+ bosdyn.client.arm_surface_contact — Spot 3.0.0 documentation
@@ -66,7 +66,7 @@
from .lease import add_lease_wallet_processors
from bosdyn.client.robot_command import NoTimeSyncError, _TimeConverter, _edit_proto
+
class ArmSurfaceContactClient(BaseClient):
    """Client for the ArmSurfaceContact service."""
    default_service_name = 'arm-surface-contact'
@@ -623,6 +665,7 @@
+
# Tree of proto fields leading to Timestamp protos which need to be converted from
# client clock to robot clock values using timesync information from the robot.
# Note, the "@" sign indicates a oneof field. The "None" indicates the field which
@@ -632,7 +675,7 @@
+# Copyright (c) 2021 Boston Dynamics, Inc. All rights reserved.
+#
+# Downloading, reproducing, distributing or otherwise using the SDK Software
+# is subject to the terms and conditions of the Boston Dynamics Software
+# Development Kit License (20191101-BDSDK-SL).
+
+"""Client implementation of the AutoReturn service."""
+
+from __future__ import print_function
+
+import collections
+
+from bosdyn.client.exceptions import ResponseError
+from bosdyn.client.common import (error_factory, handle_common_header_errors,
+                                  handle_unset_status_error, error_pair, BaseClient)
+from bosdyn.api.auto_return import auto_return_pb2
+from bosdyn.api.auto_return import auto_return_service_pb2_grpc
+
+
+
class AutoReturnResponseError(ResponseError):
+    """Error in Auto Return RPC."""
+
+
+
class InvalidParameterError(AutoReturnResponseError):
+    """One or more parameters were invalid."""
    def configure(self, params, leases, **kwargs):
+ """Set the configuration of the AutoReturn system.
+
+ Args:
+ params (bosdyn.api.auto_return.auto_return_pb2.Params): Parameters to use.
+ leases (list of bosdyn.client.Lease)
+
+ Raises:
+ AutoReturnResponseError: An invalid request was received by the service.
+ RpcError: Problem communicating with the service.
+
+ Returns:
+ The bosdyn.api.auto_return_pb2.ConfigureResponse.
+ """
+
+        request = self._configure_request(params, leases)
+        return self.call(self._stub.Configure, request, None, configure_error, **kwargs)
+
+
    def configure_async(self, params, leases, **kwargs):
+        """Async version of the configure() RPC."""
+        request = self._configure_request(params, leases)
+        return self.call_async(self._stub.Configure, request, None, configure_error, **kwargs)
+
+
    def get_configuration(self, **kwargs):
+ """Get the configuration of the AutoReturn system.
+
+ Raises:
+ RpcError: Problem communicating with the service.
+
+ Returns:
+ The bosdyn.api.auto_return_pb2.GetConfigurationResponse.
+ """
+        request = auto_return_pb2.GetConfigurationRequest()
+        return self.call(self._stub.GetConfiguration, request, None, None, **kwargs)
+
+
    def get_configuration_async(self, **kwargs):
+        """Async version of the get_configuration() RPC."""
+        request = auto_return_pb2.GetConfigurationRequest()
+        return self.call_async(self._stub.GetConfiguration, request, None, None, **kwargs)
+
+
    def start(self, **kwargs):
+        """Start AutoReturn now.
+
+        Raises:
+            RpcError: Problem communicating with the service.
+
+        Returns:
+            The bosdyn.api.auto_return_pb2.StartResponse.
+        """
+        request = auto_return_pb2.StartRequest()
+        return self.call(self._stub.Start, request, None, None, **kwargs)
+
+
    def start_async(self, **kwargs):
+        """Async version of the start() RPC."""
+        request = auto_return_pb2.StartRequest()
+        return self.call_async(self._stub.Start, request, None, None, **kwargs)
@handle_common_header_errors
+@handle_unset_status_error(unset='STATUS_UNKNOWN')
+def configure_error(response):
+    """Return a custom exception based on the Configure response, None if no error."""
+    return error_factory(response, response.status,
+                         status_to_string=auto_return_pb2.ConfigureResponse.Status.Name,
+                         status_to_error=_CONFIGURE_STATUS_TO_ERROR)
"""Code for downloading robot data in bddf format."""importloggingimportre
+importsslimportsys
-importrequests
-fromurllib3.exceptionsimportInsecureRequestWarning
+fromurllib.requestimportRequest,urlopen
+fromurllib.parseimporturlencodefrombosdyn.client.time_syncimport(TimeSyncEndpoint,TimeSyncClient,NotEstablishedError,robot_time_range_from_nanoseconds,timespec_to_robot_timespan)
@@ -566,6 +608,7 @@
Source code for bosdyn.client.bddf_download
LOGGER = logging.getLogger()

REQUEST_CHUNK_SIZE = 10 * (1024**2)  # This value is not guaranteed.
+REQUEST_TIMEOUT = 20  # Seconds.

DEFAULT_OUTPUT = "./download.bddf"
@@ -630,11 +673,6 @@
    Returns: output filename, or None on error
    """
-    # pylint: disable=no-member
-    # Suppress only the single warning from urllib3 needed.
-    requests.packages.urllib3.disable_warnings(category=InsecureRequestWarning)
-    # pylint: enable=no-member
-
    time_sync_endpoint = None
    if not robot_time:
        # Establish time sync with robot to obtain skew.
@@ -660,16 +698,20 @@
# Set default max message length for sending and receiving to 100MB. This value is used when
# creating channels in the bosdyn.client.Robot class.
DEFAULT_MAX_MESSAGE_LENGTH = 100 * (1024**2)

        token_cb: Callable that returns a tuple of (app_token, user_token)
        add_app_token (bool): Whether to include an app token in the metadata. This is necessary
            for compatibility with old robot software.
    """
+
    def __init__(self, token_cb, add_app_token):
        self._token_cb = token_cb
        self._add_app_token = add_app_token
@@ -672,6 +714,8 @@
        elif 'Connect Failed' in debug or 'Failed to pick subchannel' in debug:
            # This error should be checked last because a lot of grpc errors contain said
            # substrings.
            return UnableToConnectToRobotError(rpc_error, UnableToConnectToRobotError.__doc__)

    if code is grpc.StatusCode.UNAVAILABLE:
-        if 'Socket closed' in debug:
+        if 'Socket closed' in debug or 'Connection reset by peer' in debug:
            return RetryableUnavailableError(rpc_error, RetryableUnavailableError.__doc__)
+        if str(502) in details:
+            return ServiceUnavailableError(rpc_error, ServiceUnavailableError.__doc__)

    _LOGGER.warning('Unclassified exception: %s', rpc_error)
    return RpcError(rpc_error, RpcError.__doc__)

def generate_channel_options(max_send_message_length=None, max_receive_message_length=None):
    """Generate the array of options to specify in the creation of a client channel or server.

    The list contains the values for max allowed message length for both sending and
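The branch above classifies a failed RPC by inspecting its status code and debug/details strings for known substrings, separating retryable transient failures from hard failures. Stripped of the gRPC types, the dispatch looks roughly like this (status and marker strings are illustrative):

```python
# Hypothetical, simplified version of the substring-based classification above.
RETRYABLE_MARKERS = ('Socket closed', 'Connection reset by peer')

def classify(status, debug):
    """Return a coarse category for a failed RPC, mirroring the dispatch above."""
    if status == 'UNAVAILABLE':
        if any(marker in debug for marker in RETRYABLE_MARKERS):
            return 'retryable'          # transient; safe to retry the request
        if '502' in debug:
            return 'service-unavailable'  # proxy could not reach the service
    return 'unclassified'               # fall through to the generic RpcError

assert classify('UNAVAILABLE', 'Socket closed by remote') == 'retryable'
assert classify('UNAVAILABLE', 'HTTP 502 from proxy') == 'service-unavailable'
assert classify('INTERNAL', 'oops') == 'unclassified'
```

Ordering matters here just as in the real code: the most specific substrings are checked first, and the catch-all classification is only used when nothing matches.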
@@ -732,8 +779,8 @@
    NAME = 'events'

+    def __init__(self, subparsers, command_dict):
+        """Get events from the robot data buffer.
+
+        Args:
+            subparsers: List of argument parsers.
+            command_dict: Dictionary of command names which take parsed options.
+        """
+        super(GetDataBufferEventsCommand, self).__init__(subparsers, command_dict)
+        self._parser.add_argument('--type', help='query for only the given event-type')
+        # pylint: disable=no-member
+        self._parser.add_argument(
+            '--level',
+            choices=Event.Level.keys()[1:],  # slice skips UNSET
+            help='limit level to this and above')
    def _run(self, robot, options):
        """Implementation of the command.
@@ -1477,7 +1533,11 @@
        super(LicenseCommand, self).__init__(subparsers, command_dict)
        self._parser.add_argument('--proto', action='store_true',
                                  help='print listing in proto format')
+        self._parser.add_argument('-f', '--feature-codes', nargs='+',
+                                  help='Optional feature list for GetFeatureEnabled API.')

    def _run(self, robot, options):
        """Implementation of the command.
@@ -1842,13 +1904,27 @@
def common_header_errors(response):
    """Return an exception based on common response header. None if no error."""
    if response.header.error.code == response.header.error.CODE_UNSPECIFIED:
@@ -734,6 +779,10 @@
Source code for bosdyn.client.common
    return wrapper

+def maybe_raise(exc):
+    """Raise the provided exception if it is not None."""
+    if exc is not None:
+        raise exc

def print_response(func):
    """Decorate "error from response" functions to print for debugging specific messages."""
@@ -852,7 +901,9 @@
        # be thrown for streaming rpcs if any are going to occur.
        # Here we make sure that they're translated to our more meaningful exceptions.
        # Any ResponseErrors or other exception types can be let through untranslated.
-        raise translate_exception(e)
+        # Use the "raise from None" pattern to reset the exception's context, which otherwise
+        # produces confusing stack traces.
+        six.raise_from(translate_exception(e), None)
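`six.raise_from(translate_exception(e), None)` is the Python 2/3-compatible spelling of `raise translated from None`, which suppresses implicit exception chaining so callers see only the translated error, not the low-level transport traceback. A small plain-Python 3 sketch of the effect:

```python
def translate_exception(exc):
    """Stand-in for the SDK's gRPC-to-client-error translation."""
    return RuntimeError('translated: {}'.format(exc))

def fetch():
    try:
        raise ValueError('low-level transport failure')
    except ValueError as err:
        # Equivalent to six.raise_from(translate_exception(err), None):
        # the ValueError is not chained into the traceback shown to callers.
        raise translate_exception(err) from None

try:
    fetch()
except RuntimeError as exc:
    caught = exc

assert isinstance(caught, RuntimeError)
assert caught.__suppress_context__  # chaining suppressed by "from None"
assert caught.__cause__ is None
```

Without `from None`, the traceback would print "During handling of the above exception, another exception occurred" followed by both stacks, which is the confusion the comment refers to.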
            timeout = kwargs.pop('timeout', DEFAULT_RPC_TIMEOUT)
            response = rpc_method(request, timeout=timeout, **kwargs)
        except TransportError as e:
-            raise translate_exception(e)
+            # Use the "raise from None" pattern to reset the exception's context, which otherwise
+            # produces confusing stack traces.
+            six.raise_from(translate_exception(e), None)

        if isinstance(rpc_method, grpc.UnaryStreamMultiCallable) or isinstance(
                rpc_method, grpc.StreamStreamMultiCallable):
            # The outgoing response is a streaming response.
            response = self.update_response_iterator(response, logger, rpc_method,
                                                     is_blocking=True)
-            return self.handle_response_streaming(
-                list(response), error_from_response, value_from_response)
+            return self.handle_response_streaming(list(response), error_from_response,
+                                                  value_from_response)
        else:
            response = self._apply_response_processors(response)
            logger.debug('response: %s%s', rpc_method._method,
@@ -897,7 +950,7 @@
        method_name_short = str(method_name).split(BaseClient._SPLIT_METHOD)[-1]
        # This returns the same instance if it's been created before.
        return self.logger.getChild(method_name_short)
    return self.logger

+    @staticmethod
+    def chunk_message(message, data_chunk_byte_size):
+        """Take a message, and split it into data chunks.
+
+        Args:
+            message: Protobuf message to be chunked.
+            data_chunk_byte_size: Max size of each streamed message.
+        """
+        serialized = message.SerializeToString()
+        total_bytes_size = len(serialized)
+        num_chunks = math.ceil(total_bytes_size / data_chunk_byte_size)
+        for i in range(num_chunks):
+            start_index = i * data_chunk_byte_size
+            end_index = min(total_bytes_size, (i + 1) * data_chunk_byte_size)
+            chunk = data_chunk_pb2.DataChunk(total_size=total_bytes_size)
+            chunk.data = serialized[start_index:end_index]
+            yield chunk
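`chunk_message` slices one serialized message into bounded pieces for streaming; a receiver concatenates the `data` fields and can verify the result against `total_size`. The slicing arithmetic in isolation, using plain bytes instead of protobuf:

```python
import math

def chunk_bytes(serialized, chunk_size):
    """Yield successive chunk_size-byte slices of a serialized payload."""
    total = len(serialized)
    for i in range(math.ceil(total / chunk_size)):
        yield serialized[i * chunk_size:min(total, (i + 1) * chunk_size)]

payload = b'\x00\x01\x02' * 1000           # 3000 bytes
chunks = list(chunk_bytes(payload, 1024))  # last chunk carries the remainder
assert len(chunks) == 3
assert [len(c) for c in chunks] == [1024, 1024, 952]
assert b''.join(chunks) == payload         # lossless reassembly
```

In the real client each slice is wrapped in a `DataChunk` proto; every chunk repeats `total_size` so the receiver can pre-allocate and detect truncated streams.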
    def acquire_data(self, acquisition_requests, action_name, group_name, data_timestamp=None,
                     metadata=None, **kwargs):
        """Trigger a data acquisition to save data and metadata to the data buffer.
@@ -625,48 +679,40 @@
Source code for bosdyn.client.data_acquisition
        Raises:
            RpcError: Problem communicating with the robot.
+            ValueError: Metadata is not in the right format.

        Returns:
            If the RPC is successful, then it will return the acquire data request id, which can
            be used to check the status of the acquisition and get feedback.
        """
-        if data_timestamp is None:
-            if not self._timesync_endpoint:
-                data_timestamp = now_timestamp()
-            else:
-                data_timestamp = self._timesync_endpoint.robot_timestamp_from_local_secs(
-                    time.time())
-        action_id = data_acquisition.CaptureActionId(action_name=action_name,
-                                                     group_name=group_name,
-                                                     timestamp=data_timestamp)
-
-        metadata_proto = metadata_to_proto(metadata)
-        request = data_acquisition.AcquireDataRequest(metadata=metadata_proto,
-                                                      acquisition_requests=acquisition_requests,
-                                                      action_id=action_id)
+        request = self.make_acquire_data_request(acquisition_requests, action_name, group_name,
+                                                 data_timestamp, metadata)
        return self.call(self._stub.AcquireData, request, value_from_response=get_request_id,
                         error_from_response=acquire_data_error, **kwargs)
    def acquire_data_from_request(self, request, **kwargs):
+        """Alternate version of acquire_data() that takes an AcquireDataRequest directly.
+
+        Returns:
+            If the RPC is successful, then it will return the AcquireDataResponse.
+        """
+        return self.call(self._stub.AcquireData, request,
+                         error_from_response=acquire_data_error, **kwargs)

    def acquire_data_from_request_async(self, request, **kwargs):
+        """Async version of acquire_data_from_request()."""
        return self.call_async(self._stub.AcquireData, request,
-                               value_from_response=get_request_id,
                               error_from_response=acquire_data_error, **kwargs)
    def get_status(self, request_id, **kwargs):
        """Check the status of a data acquisition based on the request id.
@@ -688,8 +734,8 @@
    def get_status_async(self, request_id, **kwargs):
        """Async version of the get_status() RPC."""
        request = data_acquisition.GetStatusRequest(request_id=request_id)
        return self.call_async(self._stub.GetStatus, request,
                               error_from_response=_get_status_error, **kwargs)

    def get_service_info(self, **kwargs):
        """Get information from a DAQ service to list its capabilities - which data, metadata,
@@ -727,8 +773,8 @@
        status as well as other information about any possible errors.
        """
        request = data_acquisition.CancelAcquisitionRequest(request_id=request_id)
        return self.call(self._stub.CancelAcquisition, request,
                         error_from_response=_cancel_acquisition_error, **kwargs)

def metadata_to_proto(metadata):
    """Checks the type to determine if a conversion is required to create a bosdyn.api.Metadata
    proto message.
@@ -769,16 +816,23 @@
        with the data returned by the DataAcquisitionService when logged in the data buffer
        service.

+    Raises:
+        ValueError: Metadata is not in the right format.
+
    Returns:
        If metadata is provided, this will return a protobuf Metadata message. Otherwise it will
        return None.
    """
+    if metadata is None:
+        return None
    metadata_proto = None
    if isinstance(metadata, data_acquisition.Metadata):
        metadata_proto = metadata
    elif isinstance(metadata, dict):
        metadata_proto = data_acquisition.Metadata()
        metadata_proto.data.update(metadata)
+    else:
+        raise ValueError('Invalid metadata, not a dict or data_acquisition.Metadata')
    return metadata_proto
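`metadata_to_proto` accepts `None`, an already-built `Metadata` proto, or a plain dict, and rejects anything else with `ValueError`. The same type-dispatch pattern sketched without the protobuf dependency (the `Metadata` class below is a stand-in, not the SDK type):

```python
class Metadata:
    """Stand-in for bosdyn.api.data_acquisition_pb2.Metadata."""
    def __init__(self, data=None):
        self.data = dict(data or {})

def metadata_to_proto(metadata):
    """Normalize caller-supplied metadata to a Metadata instance (or None)."""
    if metadata is None:
        return None                 # metadata is optional
    if isinstance(metadata, Metadata):
        return metadata             # already the right type; pass through
    if isinstance(metadata, dict):
        return Metadata(metadata)   # wrap plain dicts for convenience
    raise ValueError('Invalid metadata, not a dict or Metadata')

assert metadata_to_proto(None) is None
assert metadata_to_proto({'operator': 'demo'}).data == {'operator': 'demo'}
try:
    metadata_to_proto(42)
except ValueError:
    pass
else:
    raise AssertionError('expected ValueError for bad metadata type')
```

Raising early here gives callers a clear error at request-build time instead of a confusing serialization failure inside the RPC.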
@@ -787,8 +841,8 @@
def acquire_data_error(response):
    """Return a custom exception based on the AcquireData response, None if no error."""
    return error_factory(response, response.status,
                         status_to_string=data_acquisition.AcquireDataResponse.Status.Name,
                         status_to_error=_ACQUIRE_DATA_STATUS_TO_ERROR)

def _get_status_error(response):
    """Return a custom exception based on the GetStatus response, None if no error."""
    return error_factory(response, response.status,
+                         status_to_string=data_acquisition.GetStatusResponse.Status.Name,
+                         status_to_error=_GET_STATUS_STATUS_TO_ERROR)
@handle_common_header_errors
@handle_unset_status_error(unset='STATUS_UNKNOWN')
def _cancel_acquisition_error(response):
    """Return a custom exception based on the CancelAcquisition response, None if no error."""
    return error_factory(response, response.status,
+                         status_to_string=data_acquisition.CancelAcquisitionResponse.Status.Name,
+                         status_to_error=_CANCEL_ACQUISITION_STATUS_TO_ERROR)

def _get_service_info_capabilities(response):
    return response.capabilities
+
def issue_acquire_data_request(data_acq_client, acquisition_requests, group_name, action_name,
+                               metadata=None):
    """Sends the data acquisition request without blocking until the acquisition completes.

    Args:
@@ -589,19 +631,21 @@
Source code for bosdyn.client.data_acquisition_helpers
        indicates the AcquireData rpc failed.
    """
    # Create action id for the query for this request.
-    action_id = data_acquisition_pb2.CaptureActionId(action_name=action_name,
-                                                     group_name=group_name)
+    action_id = data_acquisition_pb2.CaptureActionId(action_name=action_name, group_name=group_name)

    # Send an AcquireData request
    request_id = None
    try:
        request_id = data_acq_client.acquire_data(acquisition_requests=acquisition_requests,
+                                                  action_name=action_name,
+                                                  group_name=action_id.group_name,
+                                                  metadata=metadata)
    except ResponseError as err:
        print("Exception raised by issue_acquire_data_request: " + str(err))
    return request_id, action_id
+
def acquire_and_process_request(data_acquisition_client, acquisition_requests, group_name,
                                action_name, metadata=None, block_until_complete=True):
    """Send acquisition request and optionally block until the acquisition completes.
@@ -621,8 +665,9 @@
        Boolean indicating if the acquisition completed successfully or not.
    """
    # Make the acquire data request. This will return our current request id.
-    request_id, action_id = issue_acquire_data_request(data_acquisition_client, acquisition_requests,
-                                                       group_name, action_name, metadata)
+    request_id, action_id = issue_acquire_data_request(data_acquisition_client,
+                                                       acquisition_requests, group_name,
+                                                       action_name, metadata)

    if not request_id:
        # The AcquireData request failed for some reason. No need to attempt to
@@ -642,7 +687,7 @@
print("Exception: %s"%str(err))returnFalseprint("Current status is: %s"%
- data_acquisition_pb2.GetStatusResponse.Status.Name(get_status_response.status))
+ data_acquisition_pb2.GetStatusResponse.Status.Name(get_status_response.status))ifget_status_response.status==data_acquisition_pb2.GetStatusResponse.STATUS_COMPLETE:returnTrueifget_status_response.status==data_acquisition_pb2.GetStatusResponse.STATUS_TIMEDOUT:
@@ -652,11 +697,13 @@
print("Data error was received: %s"%get_status_response)returnFalseifget_status_response.status==data_acquisition_pb2.GetStatusResponse.STATUS_REQUEST_ID_DOES_NOT_EXIST:
- print("The acquisition request id %s is unknown: %s"%(request_id,get_status_response))
+ print(
+ "The acquisition request id %s is unknown: %s"%(request_id,get_status_response))returnFalsetime.sleep(0.2)returnTrue
+
def cancel_acquisition_request(data_acq_client, request_id):
    """Cancels an acquisition request based on the request id.
@@ -675,9 +722,10 @@
    try:
        is_cancelled_response = data_acq_client.cancel_acquisition(request_id)
        print("Status of the request to cancel the data-acquisition in progress: " +
              data_acquisition_pb2.CancelAcquisitionResponse.Status.Name(
                  is_cancelled_response.status))
    except ResponseError as err:
        print("ResponseError raised when cancelling: " + str(err))
        # Don't attempt to wait for the cancellation success status.
        return
@@ -691,11 +739,12 @@
def make_time_query_params_from_group_name(group_name, data_store_client):
    """Create time-based query params for the download request using the group name.
@@ -745,7 +796,8 @@
    try:
        saved_capture_actions = data_store_client.list_capture_actions(query_params)
    except Exception as err:
        _LOGGER.error("Failed to list the capture action ids for group_name %s: %s", group_name,
                      err)
        return None

    # Filter all the CaptureActionIds for the start/end time. These end times are already in
@@ -765,7 +817,8 @@
            end_time = (time_secs, timestamp)

    if not (start_time and end_time):
        _LOGGER.error("Could not find a start/end time from the list of capture action ids: %s",
                      saved_capture_actions)
        return None

    # Ensure the timestamps are ordered correctly and the
@@ -775,14 +828,16 @@
    start_time[1].seconds -= 3
    end_time[1].seconds += 3
    _LOGGER.info("Downloading data with a start time of %s seconds and end time of %s seconds.",
                 start_time[0], end_time[0])

    # Make the download data request with a time query parameter.
    query_params = data_acquisition_store_pb2.DataQueryParams(
        time_range=data_acquisition_store_pb2.TimeRangeQuery(from_timestamp=start_time[1],
                                                             to_timestamp=end_time[1]))
    return query_params
+
def download_data_REST(query_params, hostname, token, destination_folder='.',
                       additional_params=None):
    """Retrieve all data for a query from the DataBuffer REST API and write it to files.
@@ -808,18 +863,20 @@
headers ={"Authorization":"Bearer {}".format(token)}get_params=additional_paramsor{}ifquery_params.HasField('time_range'):
- get_params.update({'from_nsec':query_params.time_range.from_timestamp.ToNanoseconds(),
- 'to_nsec':query_params.time_range.to_timestamp.ToNanoseconds()})
- chunk_size=10*(1024**2)# This value is not guaranteed.
+ get_params.update({
+ 'from_nsec':query_params.time_range.from_timestamp.ToNanoseconds(),
+ 'to_nsec':query_params.time_range.to_timestamp.ToNanoseconds()
+ })
+ chunk_size=10*(1024**2)# This value is not guaranteed.withrequests.get(url,verify=False,stream=True,headers=headers,
- params=get_params)asresp:
+ params=get_params)asresp:print("Download request HTTPS status code: %s"%resp.status_code)# This is the default file name used to download data, updated from response.ifresp.status_code==204:print("No content available for the specified download time range (in seconds): "
- "[%d, %d]"%(query_params.time_range.from_timestamp.ToNanoseconds()/1.0e9,
- query_params.time_range.to_timestamp.ToNanoseconds()/1.0e9))
+ "[%d, %d]"%(query_params.time_range.from_timestamp.ToNanoseconds()/1.0e9,
+ query_params.time_range.to_timestamp.ToNanoseconds()/1.0e9))returnFalsedownload_file=Path(folder,"download.zip")content=resp.headers['Content-Disposition']
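After a successful response, the helper reads the server's `Content-Disposition` header to recover the real download filename, falling back to a default. One common way to extract it (a sketch; real-world headers may also use the RFC 5987 `filename*=` form, which this regex ignores):

```python
import re

def filename_from_content_disposition(header, default='download.zip'):
    """Pull filename="..." out of a Content-Disposition header, or fall back."""
    match = re.search(r'filename="?([^";]+)"?', header or '')
    return match.group(1) if match else default

assert filename_from_content_disposition(
    'attachment; filename="download_2021.zip"') == 'download_2021.zip'
assert filename_from_content_disposition('inline') == 'download.zip'
```

Using a fallback name keeps the download working even when a proxy strips or rewrites the header.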
@@ -837,7 +894,7 @@
Source code for bosdyn.client.data_acquisition_plugin
    Raises:
        RpcError: Problem communicating with the robot.
+        ValueError: Metadata is not in the right format.

    Returns:
        If the RPC is successful, then it will return the acquire data response, which can be
@@ -613,7 +655,7 @@
Source code for bosdyn.client.data_acquisition_plugin
Source code for bosdyn.client.data_acquisition_plugin_service
from bosdyn.client import Robot
from bosdyn.client.data_acquisition_store import DataAcquisitionStoreClient
from bosdyn.client.data_buffer import DataBufferClient
-from bosdyn.client.util import populate_response_header
-from bosdyn.client.server_util import ResponseContext
+from bosdyn.client.server_util import ResponseContext, populate_response_header

_LOGGER = logging.getLogger(__name__)
@@ -813,8 +853,8 @@
        if future.exception() is None:
            self.state.add_saved([data_id])
        else:
-            self.state.add_errors([
-                make_error(data_id, 'Failed to store data: {}'.format(future.exception()))])
+            self.state.add_errors(
+                [make_error(data_id, 'Failed to store data: {}'.format(future.exception()))])
        return not self.state.has_data_errors()
    def store_metadata_async(self, associated_metadata, data_id, **kwargs):
        """Async version of the store_metadata() RPC."""
        request = data_acquisition_store.StoreMetadataRequest(metadata=associated_metadata,
+                                                              data_id=data_id)
        return self.call_async(self._stub.StoreMetadata, request,
                               error_from_response=common_header_errors, **kwargs)

    def store_data_async(self, data, data_id, file_extension=None, **kwargs):
        """Async version of the store_data() RPC."""
        request = data_acquisition_store.StoreDataRequest(data=data, data_id=data_id,
+                                                          file_extension=file_extension)
        return self.call_async(self._stub.StoreData, request,
                               error_from_response=common_header_errors, **kwargs)
@@ -763,12 +793,15 @@
Source code for bosdyn.client.data_acquisition_store
def log_event(  # pylint: disable=too-many-arguments,no-member
+        robot, event_type, level, description, start_timestamp_secs,
+        end_timestamp_secs=None, id_str=None, parameters=None,
+        log_preserve_hint=data_buffer_protos.Event.LOG_PRESERVE_HINT_NORMAL):
+ """Add an Event to the Data Buffer.
+
+ Args:
+ robot: A Robot object.
+ event_type (string): The type of event.
+ level (bosdyn.api.Event.Level): The relative importance of the event.
+ description (string): A human-readable description of the event.
+ start_timestamp_secs (float): Start of the event, in local time.
+ end_timestamp_secs (float): End of the event. start_timestamp_secs is used if None.
+ id_str (string): Unique id for event. A uuid is generated if None.
+ parameters ([bosdyn.api.Parameter]): Parameters to attach to the event.
+ log_preserve_hint (bosdyn.api.LogPreserveHint): Whether event should try to preserve log data.
+ """
+
+    data_buffer_client = robot.ensure_client(DataBufferClient.default_service_name)
+
+    if not id_str:
+        id_str = str(uuid.uuid1())
+
+    robot.time_sync.wait_for_sync()
+    robot_start_timestamp = robot.time_sync.robot_timestamp_from_local_secs(start_timestamp_secs)
+    if end_timestamp_secs:
+        robot_end_timestamp = robot.time_sync.robot_timestamp_from_local_secs(end_timestamp_secs)
+    else:
+        robot_end_timestamp = robot_start_timestamp
+
+    # pylint: disable=no-member
+    if isinstance(log_preserve_hint, bool):
+        if log_preserve_hint:
+            log_preserve_hint = data_buffer_protos.Event.LOG_PRESERVE_HINT_PRESERVE
+        else:
+            log_preserve_hint = data_buffer_protos.Event.LOG_PRESERVE_HINT_NORMAL
+
+    event = data_buffer_protos.Event(
+        type=event_type, description=description, source=robot.client_name,
+        id=id_str, start_time=robot_start_timestamp, end_time=robot_end_timestamp,
+        level=level, log_preserve_hint=log_preserve_hint)
+
+    if parameters:
+        for parameter in parameters:
+            proto = event.parameters.add()
+            proto.CopyFrom(parameter)
+
+    data_buffer_client.add_events([event])
+
+
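`log_event` accepts `log_preserve_hint` either as the proto enum or as a plain bool and normalizes the bool before building the `Event`. That convenience pattern in isolation; the integer constants below stand in for the proto enum values:

```python
LOG_PRESERVE_HINT_NORMAL = 1    # stand-ins for data_buffer_protos.Event values
LOG_PRESERVE_HINT_PRESERVE = 2

def normalize_preserve_hint(hint):
    """Map True/False onto the preserve/normal enum; pass enum values through."""
    if isinstance(hint, bool):
        return LOG_PRESERVE_HINT_PRESERVE if hint else LOG_PRESERVE_HINT_NORMAL
    return hint

assert normalize_preserve_hint(True) == LOG_PRESERVE_HINT_PRESERVE
assert normalize_preserve_hint(False) == LOG_PRESERVE_HINT_NORMAL
assert normalize_preserve_hint(LOG_PRESERVE_HINT_PRESERVE) == LOG_PRESERVE_HINT_PRESERVE
```

Checking `isinstance(hint, bool)` before treating the argument as an enum matters because in Python `bool` is a subclass of `int`, so a bare truthiness test would silently misread enum value 0.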
class DataBufferClient(BaseClient):
    """A client for adding to robot data buffer."""
@@ -612,6 +704,7 @@
        Args:
            data (bytes): Binary data of one blob.
-            type_id (string): Type of binary data of blob. For example, this could be the full
-                name of a protobuf message type.
-            channel (string): The name by which messages are typically queried: often the same
-                as type_id, or of the form '{prefix}/{type_id}'.
+            type_id (string): Type of binary data of blob. For example, this could
+                be the full name of a protobuf message type.
+            channel (string): The name by which messages are typically queried:
+                often the same as type_id, or of the form
+                '{prefix}/{type_id}'.
            robot_timestamp (google.protobuf.Timestamp): Time of messages, in *robot time*.

        Raises:
@@ -675,8 +770,11 @@
"""Log signal schema to the robot. Args:
- variables (List[SignalSchema.Variable]): List of SignalSchema variables defining what is in tick.
+ variables (List[SignalSchema.Variable]): List of SignalSchema variables
+ defining what is in tick. schema_name (string): Name of schema (defined previously by client). Raises:
@@ -755,7 +854,7 @@
Source code for bosdyn.client.data_buffer
"""Internal register stub call."""tick_schema=data_buffer_protos.SignalSchema(vars=variables,schema_name=schema_name)request=data_buffer_protos.RegisterSignalSchemaRequest()
- request.schema.CopyFrom(tick_schema)
+ request.schema.CopyFrom(tick_schema)# pylint: disable=no-member# Schemas are saved internally, according to their schema ID. We need to wait for the# response from the server to get the schema id. The response does not include the schema# itself so use a partial to process the response appropriately.
@@ -764,8 +863,9 @@
    def add_signal_tick(  # pylint: disable=too-many-arguments,no-member
+            self, data, schema_id, encoding=data_buffer_protos.SignalTick.ENCODING_RAW,
+            sequence_id=0, source="client", **kwargs):
        """Log signal data to the robot data buffer. Schema should be sent before any ticks.
@@ -784,21 +884,23 @@
    def add_signal_tick_async(  # pylint: disable=too-many-arguments,no-member
+            self, data, schema_id, encoding=data_buffer_protos.SignalTick.ENCODING_RAW,
+            sequence_id=0, source="client", **kwargs):
        """Async version of add_signal_tick."""
        return self._do_add_signal_tick(self.call_async, data, schema_id, encoding, sequence_id,
                                        source, **kwargs)
    def docking_command_full(self, station_id, clock_identifier, end_time, prep_pose_behavior=None,
+                             lease=None, **kwargs):
+        """Identical to docking_command(), except it returns the full DockingCommandResponse."""
+        req = self._docking_command_request(lease, station_id, clock_identifier, end_time,
+                                            prep_pose_behavior)
+        return self.call(self._stub.DockingCommand, req,
+                         error_from_response=_docking_command_error_from_response, **kwargs)

    def docking_command_full_async(self, station_id, clock_identifier, end_time,
+                                   prep_pose_behavior=None, lease=None, **kwargs):
+        """Identical to docking_command_async(), except it returns the full
+        DockingCommandResponse."""
+        req = self._docking_command_request(lease, station_id, clock_identifier, end_time,
+                                            prep_pose_behavior)
+        return self.call_async(self._stub.DockingCommand, req,
+                               error_from_response=_docking_command_error_from_response, **kwargs)
+
+
+
    def docking_command_feedback_full(self, command_id, **kwargs):
+        """Check the status of a previously issued docking command.
+
+        Args:
+            command_id: The ID returned from a previous docking_command call.
+
+        Raises:
+            RpcError: Problem communicating with the robot.
+
+        Returns:
+            DockingCommandFeedbackResponse
+        """
+        req = self._docking_command_feedback_request(command_id)
+        return self.call(self._stub.DockingCommandFeedback, req,
+                         error_from_response=common_header_errors, **kwargs)

    def docking_command_feedback_full_async(self, command_id, **kwargs):
+        """Async version of docking_command_feedback_full()."""
+        req = self._docking_command_feedback_request(command_id)
+        return self.call_async(self._stub.DockingCommandFeedback, req,
+                               error_from_response=common_header_errors, **kwargs)
    @deprecated(
+        reason='This function can raise LeaseErrors even when the feedback was successfully '
+        'retrieved. Use docking_command_feedback_full instead.', version='3.0.0', action='always')
+    def docking_command_feedback(self, command_id, **kwargs):
        """Check the status of a previously issued docking command.

        Args:
            command_id: The ID returned from a previous docking_command call.

        Returns:
-            Status of type DockingCommandResponse.Status
+            Status of type DockingCommandFeedbackResponse.Status
        """
        req = self._docking_command_feedback_request(command_id)
        return self.call(self._stub.DockingCommandFeedback, req,
                         self._docking_status_from_response,
                         _docking_feedback_error_from_response, **kwargs)
    @deprecated(
+        reason='This function can raise LeaseErrors even when the feedback was successfully '
+        'retrieved. Use docking_command_feedback_full_async instead.', version='3.0.0',
+        action='always')
+    def docking_command_feedback_async(self, command_id, **kwargs):
        """Async version of docking_command_feedback()."""
        req = self._docking_command_feedback_request(command_id)
        return self.call_async(self._stub.DockingCommandFeedback, req,
@@ -727,7 +816,7 @@
def blocking_dock_robot(robot, dock_id, num_retries=4, timeout=30):
    """Blocking helper that takes control of the robot and docks it.

    Args:
@@ -749,18 +838,21 @@
Source code for bosdyn.client.docking
    # Try to dock the robot
    while attempt_number < num_retries and not docking_success:
        attempt_number += 1
-        cmd_end_time = time.time() + 30  # expect to finish in 30 seconds
+        converter = robot.time_sync.get_robot_time_converter()
+        start_time = converter.robot_seconds_from_local_seconds(now_sec())
+        cmd_end_time = start_time + timeout
        cmd_timeout = cmd_end_time + 10  # client side buffer

        prep_pose = (docking_pb2.PREP_POSE_USE_POSE if
                     (attempt_number % 2) else docking_pb2.PREP_POSE_SKIP_POSE)

-        cmd_id = docking_client.docking_command(
-            dock_id, robot.time_sync.endpoint.clock_identifier,
-            robot.time_sync.robot_timestamp_from_local_secs(cmd_end_time), prep_pose)
+        cmd_id = docking_client.docking_command(dock_id, robot.time_sync.endpoint.clock_identifier,
+                                                seconds_to_timestamp(cmd_end_time), prep_pose)

-        while time.time() < cmd_timeout:
-            status = docking_client.docking_command_feedback(cmd_id)
+        while converter.robot_seconds_from_local_seconds(now_sec()) < cmd_timeout:
+            feedback = docking_client.docking_command_feedback_full(cmd_id)
+            maybe_raise(common_lease_errors(feedback))
+            status = feedback.status
            if status == docking_pb2.DockingCommandFeedbackResponse.STATUS_IN_PROGRESS:
                # keep waiting/trying
                time.sleep(1)
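`blocking_dock_robot` retries up to `num_retries` times, using the pre-dock prep pose on odd attempts and skipping it on even ones, and gives each attempt a command deadline plus a client-side buffer. The per-attempt bookkeeping sketched with simplified stand-ins for the enum values and clock:

```python
PREP_POSE_USE_POSE = 'use_pose'    # stand-ins for docking_pb2 enum values
PREP_POSE_SKIP_POSE = 'skip_pose'

def plan_attempts(num_retries, timeout, start_time):
    """Yield (prep_pose, command_end_time, client_timeout) per docking attempt."""
    for attempt_number in range(1, num_retries + 1):
        # Odd attempts drive to the prep pose first; even attempts skip it.
        prep_pose = PREP_POSE_USE_POSE if (attempt_number % 2) else PREP_POSE_SKIP_POSE
        cmd_end_time = start_time + timeout
        cmd_timeout = cmd_end_time + 10  # client-side buffer past the command deadline
        yield prep_pose, cmd_end_time, cmd_timeout

attempts = list(plan_attempts(num_retries=4, timeout=30, start_time=100.0))
assert [a[0] for a in attempts] == [
    PREP_POSE_USE_POSE, PREP_POSE_SKIP_POSE, PREP_POSE_USE_POSE, PREP_POSE_SKIP_POSE]
assert attempts[0][1:] == (130.0, 140.0)
```

In the real helper the deadlines are computed in robot time via the time-sync converter, so clock skew between client and robot cannot silently shorten or extend the command window.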
@@ -806,17 +898,19 @@
"""docking_client=robot.ensure_client(DockingClient.default_service_name)
- # Try and put the robot in a safe position
- cmd_end_time=time.time()+timeout
+ converter=robot.time_sync.get_robot_time_converter()
+ start_time=converter.robot_seconds_from_local_seconds(now_sec())
+ cmd_end_time=start_time+timeoutcmd_timeout=cmd_end_time+10# client side buffer
- cmd_id=docking_client.docking_command(
- dock_id,robot.time_sync.endpoint.clock_identifier,
- robot.time_sync.robot_timestamp_from_local_secs(cmd_end_time),
- docking_pb2.PREP_POSE_ONLY_POSE)
+ cmd_id=docking_client.docking_command(dock_id,robot.time_sync.endpoint.clock_identifier,
+ seconds_to_timestamp(cmd_end_time),
+ docking_pb2.PREP_POSE_ONLY_POSE)
- whiletime.time()<cmd_timeout:
- status=docking_client.docking_command_feedback(cmd_id)
+ whileconverter.robot_seconds_from_local_seconds(now_sec())<cmd_timeout:
+ feedback=docking_client.docking_command_feedback_full(cmd_id)
+ maybe_raise(common_lease_errors(feedback))
+ status=feedback.statusifstatus==docking_pb2.DockingCommandFeedbackResponse.STATUS_IN_PROGRESS:# keep waiting/tryingtime.sleep(1)
@@ -843,16 +937,19 @@
"""docking_client=robot.ensure_client(DockingClient.default_service_name)
- # Try and put the robot in a safe position
- cmd_end_time=time.time()+timeout
+ converter=robot.time_sync.get_robot_time_converter()
+ start_time=converter.robot_seconds_from_local_seconds(now_sec())
+ cmd_end_time=start_time+timeoutcmd_timeout=cmd_end_time+10# client side buffer
- cmd_id=docking_client.docking_command(
- 0,robot.time_sync.endpoint.clock_identifier,
- robot.time_sync.robot_timestamp_from_local_secs(cmd_end_time),docking_pb2.PREP_POSE_UNDOCK)
+ cmd_id=docking_client.docking_command(0,robot.time_sync.endpoint.clock_identifier,
+ seconds_to_timestamp(cmd_end_time),
+ docking_pb2.PREP_POSE_UNDOCK)
- whiletime.time()<cmd_timeout:
- status=docking_client.docking_command_feedback(cmd_id)
+ whileconverter.robot_seconds_from_local_seconds(now_sec())<cmd_timeout:
+ feedback=docking_client.docking_command_feedback_full(cmd_id)
+ maybe_raise(common_lease_errors(feedback))
+ status=feedback.statusifstatus==docking_pb2.DockingCommandFeedbackResponse.STATUS_IN_PROGRESS:# keep waiting/tryingtime.sleep(1)
diff --git a/docs/html/_modules/bosdyn/client/door.html b/docs/html/_modules/bosdyn/client/door.html
index f5a2a9f4b..f42584b8a 100644
--- a/docs/html/_modules/bosdyn/client/door.html
+++ b/docs/html/_modules/bosdyn/client/door.html
@@ -7,7 +7,7 @@
- bosdyn.client.door — Spot 2.3.5 documentation
+ bosdyn.client.door — Spot 3.0.0 documentation
@@ -66,7 +66,7 @@
def_response(self):"""Generate a response for self._challenge."""
@@ -1123,8 +1163,9 @@
Source code for bosdyn.client.estop
            self._error('RPC took longer than {:.2f} seconds'.format(self._rpc_timeout),
                        exception=exc)
        except RpcError as exc:
-            self._error('Transport exception during check-in:\n{}\n'
-                        '    (resuming check-in)'.format(exc), exception=exc)
+            self._error(
+                'Transport exception during check-in:\n{}\n'
+                '    (resuming check-in)'.format(exc), exception=exc)
        except EndpointUnknownError as exc:
            # Disable ourself to show we cannot estop any longer.
            self._error(str(exc), exception=exc, disable=True)
@@ -1164,6 +1205,7 @@
    ERROR = 1
    DISABLED = 2

+
def is_estopped(estop_client, **kwargs):
    """Returns true if robot is estopped, false otherwise.
@@ -1173,6 +1215,7 @@
class InvalidRequestError(ResponseError):
-    """The provided request arguments are ill-formed or invalid.
-
-    This is programmer error, hence it should be fixed and not ignored.
-    This error does not depend on the state of the system."""
+    """The provided request arguments are ill-formed or invalid, independent of the system
+    state."""

class LeaseUseError(ResponseError):
-    """Request was rejected due to using an invalid lease.
-
-    This is thrown by services outside of LeaseService."""
+    """Request was rejected due to using an invalid lease."""

class LicenseError(ResponseError):
    """Request was rejected due to using an invalid license."""

class UnableToConnectToRobotError(RetryableRpcError):
    """The robot may be offline or otherwise unreachable."""

class RetryableUnavailableError(UnableToConnectToRobotError):
-    """gRPC service unavailable. Likely transient and can be resolved by retrying the request."""
+    """Service unavailable or channel reset. Likely transient and can be resolved by retrying."""
def express_se2_velocity_in_new_frame(frame_tree_snapshot, frame_b, frame_c, vel_of_a_in_b,
+                                      validate=True):
    """Convert the SE2 Velocity in frame b to a SE2 Velocity in frame c using the frame tree snapshot.
@@ -757,7 +799,9 @@
def express_se3_velocity_in_new_frame(frame_tree_snapshot, frame_b, frame_c, vel_of_a_in_b,
+                                      validate=True):
    """Convert the SE(3) Velocity in frame b to an SE(3) Velocity in frame c using the frame tree snapshot.
@@ -782,6 +826,7 @@
def get_odom_tform_body(frame_tree_snapshot):
    """Get the transformation between "odom" frame and "body" frame from the FrameTreeSnapshot."""
    return get_a_tform_b(frame_tree_snapshot, ODOM_FRAME_NAME, BODY_FRAME_NAME)
def navigate_route(self, route, cmd_duration, route_follow_params=None, travel_params=None,
+                   leases=None, timesync_endpoint=None, command_id=None,
+                   destination_waypoint_tform_body_goal=None, **kwargs):
    """Navigate the given route.

    Args:
        route: Route protobuf of the route to follow.
+       route_follow_params: What the robot should do if it is not at the expected point in
+           the route, or the route is blocked.
        travel_params: API TravelParams for the route.
        cmd_duration: Number of seconds the command can run for.
        leases: Leases to show ownership of necessary resources. Will use the client's
            leases by default.
        timesync_endpoint: Use this endpoint for timesync fields. Will use the client's
            endpoint by default.
        command_id: If not None, this continues an existing navigate_route command with the
            given ID. If None, a new command_id will be used.
+       destination_waypoint_tform_body_goal: SE2Pose protobuf of an offset relative to the
+           destination waypoint.
        kwargs: Passed to underlying RPC. Example: timeout=5 to cancel the RPC after 5 seconds.

    Returns:
        Command ID to use in feedback lookup.
@@ -697,8 +743,9 @@
Source code for bosdyn.client.graph_nav
        graph_nav.TooDistantError: Time too far in the future.
        graph_nav.RobotImpairedError: Robot cannot travel a route.
        graph_nav.IsRecordingError: Robot cannot navigate while recording.
-       graph_nav.UnkownRouteElementsError: Unknown edges or waypoints.
+       graph_nav.UnknownRouteElementsError: Unknown edges or waypoints.
        graph_nav.InvalidEdgeError: Mismatch between edges and waypoints.
+       graph_nav.NoPathError: No path to the specified route.
        graph_nav.RobotNotLocalizedToRouteError: The robot is localized somewhere else.
        graph_nav.ConstraintFaultError: The route involves invalid constraints.
        graph_nav.RouteNavigationError: A subclass detailing trouble navigating the route.
@@ -706,23 +753,56 @@
def navigate_to_anchor(self, seed_tform_goal, cmd_duration, route_params=None,
+                       travel_params=None, leases=None, timesync_endpoint=None,
+                       goal_waypoint_rt_seed_ewrt_seed_tolerance=None, command_id=None,
+                       **kwargs):
+ """Navigate to a pose in seed frame along a route chosen by the GraphNav service.
+
+ Args:
+ seed_tform_goal: SE3Pose protobuf of the goal pose in seed frame.
+ cmd_duration: Number of seconds the command can run for.
+ route_params: API RouteGenParams for the route.
+ travel_params: API TravelParams for the route.
+ leases: Leases to show ownership of necessary resources. Will use the client's leases by default.
+ timesync_endpoint: Use this endpoint for timesync fields. Will use the client's endpoint by default.
+ goal_waypoint_rt_seed_ewrt_seed_tolerance: Vec3 protobuf of the tolerances for goal waypoint selection.
+ command_id: If not None, this continues an existing navigate_to command with the given ID. If None,
+ a new command_id will be used.
+ Returns:
+ int: Command ID to use in feedback lookup.
+ Raises:
+ RpcError: Problem communicating with the robot.
+ LeaseUseError: Error using provided leases.
+ graph_nav.NoTimeSyncError: Missing clock identifier.
+ graph_nav.CommandExpiredError: Command already expired.
+ graph_nav.TooDistantError: Time too far in the future.
+ graph_nav.RobotImpairedError: Robot cannot travel a route.
+ graph_nav.IsRecordingError: Robot cannot navigate while recording.
+ graph_nav.NoAnchoringError: There is no anchoring.
+ graph_nav.NoPathError: No route to goal waypoint, or no goal waypoint found.
+ graph_nav.InvalidPoseError: The requested pose is invalid, or known to be unachievable.
+ graph_nav.RobotNotLocalizedToRouteError: The robot not correctly localized.
+ graph_nav.RouteNavigationError: A subclass detailing trouble navigating the route.
+ """
+        used_endpoint = timesync_endpoint or self._timesync_endpoint
+        if not used_endpoint:
+            raise GraphNavServiceResponseError(response=None, error_message='No timesync endpoint!')
+        request = self._build_navigate_to_anchor_request(seed_tform_goal, travel_params,
+                                                         route_params, cmd_duration, leases,
+                                                         used_endpoint, command_id,
+                                                         goal_waypoint_rt_seed_ewrt_seed_tolerance)
+        return self.call(self._stub.NavigateToAnchor, request,
+                         value_from_response=_command_id_from_navigate_route_response,
+                         error_from_response=_navigate_to_anchor_error, **kwargs)
def navigation_feedback(self, command_id=0, **kwargs):
    """Returns the feedback corresponding to the active route follow command.
@@ -810,35 +975,38 @@
def clear_graph_async(self, lease=None, **kwargs):
    """Async version of clear_graph()."""
    request = self._build_clear_graph_request(lease)
    return self.call_async(self._stub.ClearGraph, request, value_from_response=None,
-                          error_from_response=handle_common_header_errors(common_lease_errors), **kwargs)

def upload_graph(self, lease=None, graph=None, generate_new_anchoring=False, **kwargs):
    """Uploads a graph to the server and appends to the existing graph.

    Args:
        leases: Leases to show ownership of necessary resources. Will use the client's
            leases by default.
        graph: Graph protobuf that represents the map with waypoints and edges.
+       generate_new_anchoring: Whether to generate a new anchoring (overwriting the
+           existing one) on upload.

    Returns:
        The response, which includes waypoint and edge ids sorted by whether they were cached.

    Raises:
        RpcError: Problem communicating with the robot.
        LeaseUseError: Error using provided lease.
    """
-    request = self._build_upload_graph_request(lease, graph)
+    request = self._build_upload_graph_request(lease, graph, generate_new_anchoring)
    return self.call(self._stub.UploadGraph, request, value_from_response=_get_response,
-                     error_from_response=handle_common_header_errors(common_lease_errors), **kwargs)

def upload_graph_async(self, lease=None, graph=None, generate_new_anchoring=False, **kwargs):
    """Async version of upload_graph()."""
-    request = self._build_upload_graph_request(lease, graph)
+    request = self._build_upload_graph_request(lease, graph, generate_new_anchoring)
    return self.call_async(self._stub.UploadGraph, request, value_from_response=_get_response,
-                          error_from_response=handle_common_header_errors(common_lease_errors), **kwargs)
def upload_waypoint_snapshot(self, waypoint_snapshot, lease=None, **kwargs):
    """Uploads large waypoint snapshot as a stream for a particular waypoint.
@@ -853,10 +1021,11 @@
def upload_edge_snapshot(self, edge_snapshot, lease=None, **kwargs):
    """Uploads large edge snapshot as a stream for a particular edge.
@@ -871,10 +1040,12 @@
def download_waypoint_snapshot(
+        self,
+        waypoint_snapshot_id,
+        download_images=False,
+        do_not_download_point_cloud=False,
+        **kwargs):
    """Download a specific waypoint snapshot with streaming from the server.

    Args:
        waypoint_snapshot_id: WaypointSnapshot string ID for which snapshot to download from robot.
        download_images: Boolean indicating whether or not to include images in the download.
+       do_not_download_point_cloud: Boolean indicating if point cloud data should not be downloaded.

    Returns:
        The WaypointSnapshot protobuf from the robot's current map.

    Raises:
        RpcError: Problem communicating with the robot.
        UnknownMapInformationError: Snapshot id not found.
    """
-    request = self._build_download_waypoint_snapshot_request(waypoint_snapshot_id,
-                                                             download_images)
+    request = self._build_download_waypoint_snapshot_request(
+        waypoint_snapshot_id,
+        download_images,
+        do_not_download_point_cloud)
    return self.call(self._stub.DownloadWaypointSnapshot, request,
                     value_from_response=_get_streamed_waypoint_snapshot,
                     error_from_response=_download_waypoint_snapshot_stream_errors, **kwargs)
+
def download_edge_snapshot(self, edge_snapshot_id, **kwargs):
    """Downloads a specific edge snapshot with streaming from the server.
@@ -946,11 +1122,15 @@
@deprecated(reason='Use UnknownRouteElementsError instead', version='3.0.0', action='ignore')
+class UnkownRouteElementsError(RouteError):
+    """One or more waypoints/edges are not in the map."""
+
+
class UnknownRouteElementsError(UnkownRouteElementsError):
+    """One or more waypoints/edges are not in the map."""
+
+
class NoPathError(RouteError):
    """There is no path to the specified waypoint."""
+
+
class UnknownWaypointError(RouteError):
    """One or more waypoints are not in the map."""
+
class NoAnchoringError(RouteError):
+    """There is no anchoring."""
+
+
class InvalidPoseError(RouteError):
+    """The requested pose is invalid, or known to be unachievable."""
+
+
class RouteNavigationError(GraphNavServiceResponseError):
    """Errors related to how the robot navigates the route."""
+
+
class FeatureDesertError(RouteNavigationError):
    """Route contained too many waypoints with low-quality features."""
+
+
class RouteNotUpdatingError(RouteNavigationError):
    """Graph nav was unable to update and follow the specified route."""
+
+
class RobotLostError(RouteNavigationError):
    """Cannot issue a navigation request when the robot is already lost."""
+
+
class RobotNotLocalizedToRouteError(RouteNavigationError):
    """The current localization doesn't refer to any waypoint in the route (possibly uninitialized localization)."""
+
+
class RobotStuckError(RouteNavigationError):
    """The robot is stuck or unable to find a way forward. Resend the command with a new ID, or send a different command to try again."""
+
+
class UnrecongizedCommandError(RouteNavigationError):
    """Happens when you try to continue a command that was either expired, or had an unrecognized id."""
+

def _localization_from_response(response):
    """Return the localization state from the response."""
    return response.localization
@@ -1258,6 +1522,24 @@
Source code for bosdyn.client.graph_nav
returnedge_snapshot
+_UPLOAD_GRAPH_STATUS_TO_ERROR=collections.defaultdict(lambda:(ResponseError,None))
+_UPLOAD_GRAPH_STATUS_TO_ERROR.update({
+ graph_nav_pb2.UploadGraphResponse.STATUS_OK:(None,None),
+ graph_nav_pb2.UploadGraphResponse.STATUS_MAP_TOO_LARGE_LICENSE:error_pair(MapTooLargeLicenseError),
+ graph_nav_pb2.UploadGraphResponse.STATUS_INVALID_GRAPH:error_pair(InvalidGraphError),
+})
+
+
+@handle_common_header_errors
+@handle_lease_use_result_errors
+@handle_unset_status_error(unset='STATUS_UNKNOWN')
+def _upload_graph_error(response):
+    """Return a custom exception based on upload graph response, None if no error."""
+    return error_factory(response, response.status,
+                         status_to_string=graph_nav_pb2.UploadGraphResponse.Status.Name,
+                         status_to_error=_UPLOAD_GRAPH_STATUS_TO_ERROR)
+
+
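The `_UPLOAD_GRAPH_STATUS_TO_ERROR` table above uses a `defaultdict` so that any status not listed explicitly falls back to a generic error pair. A minimal, self-contained sketch of the same dispatch pattern (all names here are illustrative stand-ins, not the SDK's types):

```python
import collections

class ResponseError(Exception):
    """Generic fallback error for unrecognized statuses."""

class MapTooLargeError(ResponseError):
    """Specific error for one known status."""

# Unknown statuses map to the generic (ResponseError, None) pair.
STATUS_TO_ERROR = collections.defaultdict(lambda: (ResponseError, None))
STATUS_TO_ERROR.update({
    'STATUS_OK': (None, None),                        # success: no error
    'STATUS_MAP_TOO_LARGE': (MapTooLargeError, None),
})

def error_from_status(status):
    """Return an exception instance for a status, or None on success."""
    error_type, _ = STATUS_TO_ERROR[status]
    if error_type is None:
        return None
    return error_type('status was {}'.format(status))
```

The benefit of the `defaultdict` is forward compatibility: a server that starts returning a new status code produces a generic `ResponseError` instead of a `KeyError` in the client.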
_SET_LOCALIZATION_STATUS_TO_ERROR = collections.defaultdict(lambda: (ResponseError, None))
_SET_LOCALIZATION_STATUS_TO_ERROR.update({
    graph_nav_pb2.SetLocalizationResponse.STATUS_OK: (None, None),
@@ -1294,9 +1576,11 @@
def build_image_request(image_source_name, quality_percent=75, image_format=None):
    """Helper function which builds an ImageRequest from an image source name.

    By default the robot will choose an appropriate format when no image format
@@ -727,10 +775,13 @@
Source code for bosdyn.client.image
        filepath (string): The directory to save the image.
    """
    # Determine the data type to decode the image.
-    if image_response.shot.image.pixel_format == image_pb2.Image.PIXEL_FORMAT_DEPTH_U16:
+    if image_response.shot.image.pixel_format in (image_pb2.Image.PIXEL_FORMAT_DEPTH_U16,
+                                                  image_pb2.Image.PIXEL_FORMAT_GREYSCALE_U16):
        dtype = np.uint16
+        max_val = 2 ** 16 - 1
    else:
        dtype = np.uint8
+        max_val = 2 ** 8 - 1
    num_channels = 1
    pgm_header_number = 'P5'
@@ -749,7 +800,7 @@
Source code for bosdyn.client.image
        num_channels = 1
    else:
        print("Unsupported pixel format for PGM/PPM: %s." %
-              image_pb2.Image.PixelFormat.Name(image_response.shot.image.pixel_format))
+              image_pb2.Image.PixelFormat.Name(image_response.shot.image.pixel_format))
+        return

    img = np.frombuffer(image_response.shot.image.data, dtype=dtype)
@@ -758,7 +809,9 @@
Source code for bosdyn.client.image
    try:
        img = img.reshape((height, width, num_channels))
    except ValueError as err:
-        print("Cannot convert raw image into expected shape (rows %d, cols %d, color channels %d)." % (height, width, num_channels))
+        print(
+            "Cannot convert raw image into expected shape (rows %d, cols %d, color channels %d)." %
+            (height, width, num_channels))
        print(err)
        return
    if not filename:
@@ -771,12 +824,13 @@
Source code for bosdyn.client.image
        print("Cannot open file %s. Exception thrown: %s" % (filename, err))
        return

-    max_val = np.amax(img)
-    pgm_header = pgm_header_number + ' ' + str(width) + ' ' + str(height) + ' ' + str(max_val) + '\n'
+    pgm_header = pgm_header_number + ' ' + str(width) + ' ' + str(height) + ' ' + str(
+        max_val) + '\n'
    fd_out.write(pgm_header)
    img.tofile(fd_out)
-    print('Saved matrix with pixel values from camera "%s" to file "%s".' % (
-        image_response.source.name, filename))
+    print('Saved matrix with pixel values from camera "%s" to file "%s".' %
+          (image_response.source.name, filename))
+
def write_image_data(image_response, filename="", filepath="."):
    """Write image data from image_response to a file.
@@ -799,6 +853,7 @@
Source code for bosdyn.client.image
        print('Failed to save "{}".'.format(image_response.source.name))
        print(err)
+
def save_images_as_files(image_responses, filename="", filepath="."):
    """Write image responses to files.
@@ -823,6 +878,31 @@
Source code for bosdyn.client.image
    else:
        # Save jpeg format as a jpeg image.
        write_image_data(image, save_file_name, filepath)
+
+
def pixel_to_camera_space(image_proto, pixel_x, pixel_y, depth=1.0):
+ """Using the camera intrinsics, determine the (x,y,z) point in the camera frame for
+ the (u,v) pixel coordinates.
+
+ Args:
+ image_proto (image_pb2.Image): The image in which the pixel coordinates are from
+ pixel_x (int): x-coordinate.
+ pixel_y (int): y-coordinate.
+ depth (double): The depth from the camera to the point of interest.
+
+ Returns:
+ An (x,y,z) tuple representing the pixel point of interest now described as a point
+ in the camera frame.
+ """
+    focal_x = image_proto.source.pinhole.intrinsics.focal_length.x
+    principal_x = image_proto.source.pinhole.intrinsics.principal_point.x
+
+    focal_y = image_proto.source.pinhole.intrinsics.focal_length.y
+    principal_y = image_proto.source.pinhole.intrinsics.principal_point.y
+
+    x_rt_camera = depth * (pixel_x - principal_x) / focal_x
+    y_rt_camera = depth * (pixel_y - principal_y) / focal_y
+    return (x_rt_camera, y_rt_camera, depth)
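The back-projection in `pixel_to_camera_space` is plain pinhole-camera arithmetic, so it can be checked without the SDK or a robot. A standalone version of the same math (the function and parameter names here are illustrative, not the SDK API):

```python
def pixel_to_camera_frame(pixel_x, pixel_y, focal, principal, depth=1.0):
    """Back-project a (u, v) pixel to an (x, y, z) point in the camera frame.

    Args:
        focal: (fx, fy) focal lengths in pixels.
        principal: (cx, cy) principal point in pixels.
        depth: Distance along the camera z-axis to the point of interest.
    """
    fx, fy = focal
    cx, cy = principal
    # Invert the pinhole projection u = fx * x / z + cx (and likewise for v).
    x = depth * (pixel_x - cx) / fx
    y = depth * (pixel_y - cy) / fy
    return (x, y, depth)
```

Note that `depth` defaults to 1.0, so without a depth measurement the result is a ray direction (scaled to unit z) rather than a metric point.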
+# Copyright (c) 2021 Boston Dynamics, Inc. All rights reserved.
+#
+# Downloading, reproducing, distributing or otherwise using the SDK Software
+# is subject to the terms and conditions of the Boston Dynamics Software
+# Development Kit License (20191101-BDSDK-SL).
+
+"""A client for the ir-enable-disable service."""
+from bosdyn.client.common import common_header_errors
+from bosdyn.client.common import BaseClient
+from bosdyn.api import ir_enable_disable_pb2
+from bosdyn.api import ir_enable_disable_service_pb2_grpc
+
+
+
class IREnableDisableServiceClient(BaseClient):
+ """Client to enable and/or disable the robot's IR light emitters in the body and hand sensors."""
+
+ # Name of the service in the robot's directory listing.
+ default_service_name='ir-enable-disable-service'
+ # gRPC service proto definition implemented by this service
+ service_type='bosdyn.api.IREnableDisableService'
+
+    def __init__(self):
+        super(IREnableDisableServiceClient,
+              self).__init__(ir_enable_disable_service_pb2_grpc.IREnableDisableServiceStub)
+
+
def set_ir_enabled(self, enable, **kwargs):
+ """ Enable and/or disable the robot's IR light emitters.
+
+ Args:
+ enable (bool): Whether or not to enable the emitters.
+ """
+        if enable:
+            request = ir_enable_disable_pb2.IREnableDisableRequest.REQUEST_ON
+        else:
+            request = ir_enable_disable_pb2.IREnableDisableRequest.REQUEST_OFF
+        return self.call(self._stub.IREnableDisable,
+                         ir_enable_disable_pb2.IREnableDisableRequest(request=request),
+                         error_from_response=common_header_errors, **kwargs)
+
+
def set_ir_enabled_async(self, enable, **kwargs):
+        """Async version of set_ir_enabled()."""
+        if enable:
+            request = ir_enable_disable_pb2.IREnableDisableRequest.REQUEST_ON
+        else:
+            request = ir_enable_disable_pb2.IREnableDisableRequest.REQUEST_OFF
+        return self.call_async(self._stub.IREnableDisable,
+                               ir_enable_disable_pb2.IREnableDisableRequest(request=request),
+                               error_from_response=common_header_errors, **kwargs)
class NoSuchLease(Error):
    """The requested lease does not exist."""
+
    def __init__(self, resource):
        self.resource = resource
@@ -629,6 +669,7 @@
Source code for bosdyn.client.lease
class LeaseNotOwnedByWallet(Error):
    """The lease is not owned by the wallet."""
+
    def __init__(self, resource, lease_state):
        self.resource = resource
        self.lease_state = lease_state
@@ -644,30 +685,30 @@
def compare(self, other_lease, ignore_resources=False):
    """Compare two different lease objects.

    Args:
@@ -723,10 +764,10 @@
Source code for bosdyn.client.lease
* CompareResult.DIFFERENT_EPOCHS if this lease is for a different epoch than other_lease. There is no way to compare recency/time of Leases for two different epochs.
-       * CompareResult.INVALID if either this or other_lease is invalid.
    """
    # Sequences are only valid for leases with the same resource and epoch.
-    if not (self.lease_proto.resource == other_lease.lease_proto.resource):
+    if not (self.lease_proto.resource
+            == other_lease.lease_proto.resource) and not ignore_resources:
        return self.CompareResult.DIFFERENT_RESOURCES
    if not (self.lease_proto.epoch == other_lease.lease_proto.epoch):
        return self.CompareResult.DIFFERENT_EPOCHS
@@ -783,6 +824,9 @@
    @staticmethod
+    def compare_result_to_lease_use_result_status(compare_result, allow_super_leases):
+        """Determine the LeaseUseResult.Status enum value corresponding to a CompareResult enum value.
+
+ Args:
+ allow_super_leases(boolean): If true, a super lease will still be considered as "ok"/
+ newer when compared to the active lease.
+
+ Raises:
+ bosdyn.client.lease.Error: Raised if there is an unknown compare result enum value.
+
+ Returns:
+ The corresponding LeaseUseResult.Status enum value.
+ """
+        if compare_result == Lease.CompareResult.DIFFERENT_EPOCHS:
+            return LeaseUseResult.STATUS_WRONG_EPOCH
+        elif compare_result == Lease.CompareResult.DIFFERENT_RESOURCES:
+            # If the incoming lease's resource doesn't match the active lease resource,
+            # then mark it as unmanaged.
+            return LeaseUseResult.STATUS_UNMANAGED
+        elif compare_result == Lease.CompareResult.SUPER_LEASE:
+            # In some cases we may want to allow a super lease, so check an optional boolean
+            # to see if that is the case.
+            if allow_super_leases:
+                return LeaseUseResult.STATUS_OK
+            # In the normal case, a super lease is considered older.
+            return LeaseUseResult.STATUS_OLDER
+        elif compare_result == Lease.CompareResult.OLDER:
+            return LeaseUseResult.STATUS_OLDER
+        elif (compare_result == Lease.CompareResult.SAME or
+              compare_result == Lease.CompareResult.SUB_LEASE or
+              compare_result == Lease.CompareResult.NEWER):
+            return LeaseUseResult.STATUS_OK
+        else:
+            # Shouldn't hit here. The above set of checks should be exhaustive.
+            raise Error("The comparison result of the leases is unknown/unaccounted for.")
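The branch structure above is a pure mapping from one enum to another, which makes it easy to exercise in isolation. A standalone sketch with stand-in enums (not the SDK's actual types or values):

```python
from enum import Enum

class CompareResult(Enum):
    SAME = 1
    SUPER_LEASE = 2
    SUB_LEASE = 3
    OLDER = 4
    NEWER = 5
    DIFFERENT_RESOURCES = 6
    DIFFERENT_EPOCHS = 7

class Status(Enum):
    OK = 1
    OLDER = 2
    WRONG_EPOCH = 3
    UNMANAGED = 4

def status_for(compare_result, allow_super_leases=False):
    """Map a lease comparison result to a lease-use status."""
    if compare_result is CompareResult.DIFFERENT_EPOCHS:
        return Status.WRONG_EPOCH
    if compare_result is CompareResult.DIFFERENT_RESOURCES:
        return Status.UNMANAGED
    if compare_result is CompareResult.SUPER_LEASE:
        # A super lease only counts as current when explicitly allowed.
        return Status.OK if allow_super_leases else Status.OLDER
    if compare_result is CompareResult.OLDER:
        return Status.OLDER
    return Status.OK  # SAME, SUB_LEASE, NEWER
```

The `allow_super_leases` flag is the only behavioral switch: it decides whether a super lease is treated as current or as stale.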
    STATUS_OTHER_OWNER = Status.OTHER_OWNER
    STATUS_NOT_MANAGED = Status.NOT_MANAGED

-    def __init__(self, lease_status, lease_owner=None, lease=None, lease_current=None, client_name=None):
+    def __init__(self, lease_status, lease_owner=None, lease=None, lease_current=None,
+                 client_name=None):
        """
        Args:
            lease_status (LeaseState.Status): The ownership status of the lease.
@@ -889,8 +971,12 @@
Source code for bosdyn.client.lease
            lease: Lease to add in the wallet.
        """
        with self._lock:
- self._lease_state_map[lease.lease_proto.resource]=LeaseState(
- LeaseState.Status.SELF_OWNER,lease=lease,client_name=self.client_name)
def remove(self, lease):
    """Remove lease from the wallet.
@@ -1066,7 +1152,7 @@
Source code for bosdyn.client.lease
"""Return an acquired lease. Args:
- lease: Lease to return.
+            lease (Lease object): Lease to return. This should be a Lease class object, and not the proto.

        Raises:
            InvalidResourceError: Resource is not known to the LeaseService.
@@ -1105,6 +1191,7 @@
def list_leases_full(self, include_full_lease_info=False, **kwargs):
+ """Get a list of the leases.
+
+ Args:
+ include_full_lease_info: Whether the returned list of LeaseResources should include
+ all of the available information about the last lease used.
+ Defaults to False.
+
+ Returns:
+ The complete ListLeasesResponse message.
+
+ Raises:
+ InternalServerError: Service experienced an unexpected error state.
+ LeaseUseError: Request was rejected due to using an invalid lease.
+ """
+        req = self._make_list_leases_request(include_full_lease_info)
+        return self.call(self._stub.ListLeases, req, None,
+                         common.common_header_errors, **kwargs)
+
+
def list_leases_full_async(self, include_full_lease_info=False, **kwargs):
+        """Async version of the list_leases_full() function."""
+        req = self._make_list_leases_request(include_full_lease_info)
+        return self.call_async(self._stub.ListLeases, req, None,
+                               common.common_header_errors, **kwargs)
+
+
    @staticmethod
    def _make_acquire_request(resource):
        """Return AcquireLeaseRequest message with the given resource."""
@@ -1322,11 +1435,20 @@
Source code for bosdyn.client.lease
False if the UI thread is wedged, which prevents the application from continuing to keep the Lease alive when it is no longer in a good state.
+ on_failure_callback: If specified, this should be a callable function object
+ which takes the error/exception as an input. The function does not
+ need to return anything. This function can be used to action on any
+ failures during the keepalive from the RetainLease RPC.
+        warnings: Boolean used to determine if the _periodic_check_in function will print
+            lease check-in errors.
    """

    def __init__(self, lease_client, lease_wallet=None, resource=_RESOURCE_BODY,
-                 rpc_interval_seconds=2, keep_running_cb=None):
+                 rpc_interval_seconds=2, keep_running_cb=None, host_name="",
+                 on_failure_callback=None, warnings=True, return_at_exit=False):
        """Create a new LeaseKeepAlive object."""
+        self.host_name = host_name
+        self.print_warnings = warnings
+        self._return_at_exit = return_at_exit
        if not lease_client:
            raise ValueError("lease_client must be set")
        self._lease_client = lease_client
@@ -1351,6 +1473,9 @@
Source code for bosdyn.client.lease
self._end_check_in_signal=threading.Event()
+ # If the on_failure_callback is not provided, then set the default as a no-op function.
+        self._retain_lease_failed_cb = on_failure_callback or (lambda err: None)
+
        # Configure the thread to do check-ins, and begin checking in.
        self._thread = threading.Thread(target=self._periodic_check_in)
        self._thread.daemon = True
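Defaulting a missing `on_failure_callback` to a no-op, as the keepalive does, keeps the check-in loop free of `if callback is not None` branches. A minimal illustration of the pattern (the names here are illustrative, not the SDK's):

```python
def make_keepalive(on_failure=None):
    """Return a check-in function that reports failures via on_failure."""
    # A missing callback becomes a no-op, so the loop body never branches on None.
    notify_failure = on_failure or (lambda err: None)

    def check_in(healthy):
        """Simulated check-in: report an error via the callback on failure."""
        if not healthy:
            notify_failure(RuntimeError("check-in failed"))
            return False
        return True

    return check_in
```

One caveat of `on_failure or default`: a falsy-but-callable value would also be replaced, so `if on_failure is None` is the stricter spelling when that matters.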
@@ -1393,6 +1518,8 @@
def test_active_lease(incoming_lease_proto, active_lease, sublease_name=None,
+                      allow_super_leases=False):
+    """Check if an incoming lease is newer than the current lease.
+ """Check if an incoming lease is newer than the current lease.
+
+ Args:
+ incoming_lease_proto(lease_pb2.Lease): The incoming lease proto.
+ active_lease(Lease): A lease object representing the most recent/newest known lease
+ that the incoming lease should be compared against.
+ sublease_name(string): If not NoneType, a sublease of the incoming lease will be
+ created (with sublease_name as the client name) and used to compare to
+ the active lease.
+ allow_super_leases(boolean): If true, a super lease will still be considered as "ok"/
+ newer when compared to the active lease.
+
+ Returns:
+ A tuple containing the lease use result from comparing the incoming and active leases, and
+ then a Lease object made from the incoming lease proto. The lease object will be None if
+ the incoming lease proto was invalid. The lease object will be a sublease of the incoming
+ lease proto if sublease_name is not NoneType.
+ """
+    lease_use_result = LeaseUseResult()
+    lease_use_result.attempted_lease.CopyFrom(incoming_lease_proto)
+
+    # Ensure the incoming lease is valid. If it is valid, create a sublease for the
+    # incoming lease and test with that.
+    try:
+        incoming_lease = Lease(incoming_lease_proto)
+        if sublease_name is not None:
+            incoming_lease = incoming_lease.create_sublease(client_name=sublease_name)
+    except ValueError:
+        # Invalid lease proto will throw this error.
+        lease_use_result.status = LeaseUseResult.STATUS_INVALID_LEASE
+        return lease_use_result, None
+
+    if active_lease is None:
+        # No active lease means that the incoming lease is good!
+        # Set the incoming lease proto as the latest known lease. There will be no previous
+        # lease since this is the first one for the resource.
+        lease_use_result.latest_known_lease.CopyFrom(incoming_lease.lease_proto)
+        lease_use_result.status = LeaseUseResult.STATUS_OK
+        return lease_use_result, incoming_lease
+
+    # Ensure the active lease is also valid.
+    if not active_lease.is_valid_lease():
+        # Raise an exception since the active lease is invalid.
+        raise Error("The active lease object is invalid.")
+
+ # Set the previous lease as the active lease.
+ lease_use_result.previous_lease.CopyFrom(active_lease.lease_proto)
+
+ # Set the latest known lease as the active lease. This will be overwritten if the incoming
+ # lease is found to be newer.
+ lease_use_result.latest_known_lease.CopyFrom(active_lease.lease_proto)
+
+ # Compare the incoming lease's sublease to the latest known/most recent lease (active lease).
+    compare_result = incoming_lease.compare(active_lease)
+    lease_use_result.status = Lease.compare_result_to_lease_use_result_status(
+        compare_result, allow_super_leases)
+    if lease_use_result.status == LeaseUseResult.STATUS_OK and compare_result != Lease.CompareResult.SUPER_LEASE:
+ # Only update the latest known lease if the incoming lease was found as status ok (newer/the same
+ # as the existing lease). Also prevents a "super-lease" from getting set as the
+ # latest known lease (when the allow_super_lease boolean is True) when there have been newer
+ # subleases seen.
+ lease_use_result.latest_known_lease.CopyFrom(incoming_lease.lease_proto)
+    return lease_use_result, incoming_lease
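`test_active_lease` ultimately reduces to comparing lease sequences: a lease is newer when its sequence compares greater element-by-element, and a lease whose sequence is a prefix of another's acts as a "super lease" of it. A self-contained sketch of that comparison (a simplification for illustration, not the SDK's full resource/epoch handling):

```python
def compare_sequences(incoming, active):
    """Compare two lease sequences (lists of ints).

    Returns 'same', 'newer', 'older', 'sub_lease' (incoming extends active),
    or 'super_lease' (incoming is a prefix of active).
    """
    # Compare element-by-element over the shared prefix.
    for a, b in zip(incoming, active):
        if a > b:
            return 'newer'
        if a < b:
            return 'older'
    # Shared prefix is equal: the longer sequence is the more specific sublease.
    if len(incoming) == len(active):
        return 'same'
    return 'sub_lease' if len(incoming) > len(active) else 'super_lease'
```

Under this model, creating a sublease appends an element to the sequence, which is why `test_active_lease` can safely compare a freshly created sublease of the incoming lease against the active lease.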
def get_local_grid_types_async(self, **kwargs):
    """Async version of get_local_grid_types()."""
-    return self.call_async(self._stub.GetLocalGridTypes, local_grid_pb2.GetLocalGridTypesRequest(),
+    return self.call_async(self._stub.GetLocalGridTypes,
+                           local_grid_pb2.GetLocalGridTypesRequest(),
                           value_from_response=lambda res: res.local_grid_type,
                           error_from_response=common_header_errors, **kwargs)
class InvalidArgument(Error):
    """A given argument could not be used."""
-

@deprecated(
-    reason='The log-annotation client and service have been replaced by data_buffer.',
-    version='2.1.0',
-    action="always")
+
+
+@deprecated(reason='The log-annotation client and service have been replaced by data_buffer.',
+            version='2.1.0', action="always")
class LogAnnotationClient(BaseClient):
    """A client for adding annotations to robot logs."""
@@ -703,19 +744,17 @@
Source code for bosdyn.client.log_annotation
            converter = self._timesync_endpoint.get_robot_time_converter()
        except time_sync.NotEstablishedError:
            # No timesync. That's OK -- the receiving host will provide the timestamp.
-            self.logger.debug('Could not timestamp message of type %s',
-                              (msg_type if msg_type is not None
-                               else (proto.DESCRIPTOR.full_name if proto is not None
-                                     else 'Unknown')))
+            self.logger.debug(
+                'Could not timestamp message of type %s',
+                (msg_type if msg_type is not None else
+                 (proto.DESCRIPTOR.full_name if proto is not None else 'Unknown')))
        else:
            return converter.robot_timestamp_from_local_secs(time.time())
        return None
-
@deprecated(
-    reason='The log-annotation client and service have been replaced by data_buffer.',
-    version='2.1.0',
-    action="always")
+
+@deprecated(reason='The log-annotation client and service have been replaced by data_buffer.',
+            version='2.1.0', action="always")
class LogAnnotationHandler(logging.Handler):
    """A logging system Handler that will publish text to a bosdyn.api.LogAnnotationService.
diff --git a/docs/html/_modules/bosdyn/client/manipulation_api_client.html b/docs/html/_modules/bosdyn/client/manipulation_api_client.html
index 9d87ea05b..bcb07081c 100644
--- a/docs/html/_modules/bosdyn/client/manipulation_api_client.html
+++ b/docs/html/_modules/bosdyn/client/manipulation_api_client.html
@@ -7,7 +7,7 @@
- bosdyn.client.manipulation_api_client — Spot 2.3.5 documentation
+ bosdyn.client.manipulation_api_client — Spot 3.0.0 documentation
@@ -66,7 +66,7 @@
Source code for bosdyn.client.manipulation_api_client
"""For clients to the Manipulation API service."""
-from bosdyn.client.common import BaseClient
+from bosdyn.client.common import (BaseClient, handle_common_header_errors,
+                                  handle_lease_use_result_errors)
from bosdyn.api import manipulation_api_service_pb2_grpc
from .lease import add_lease_wallet_processors
+

class ManipulationApiClient(BaseClient):
    """Client for the ManipulationAPI service."""

    default_service_name = 'manipulation'
@@ -588,7 +631,9 @@
Source code for bosdyn.client.manipulation_api_client
Returns:
        The full ManipulationApiResponse message, which includes a command id for feedback.
    """
-    return self.call(self._stub.ManipulationApi, manipulation_api_request, **kwargs)

def manipulation_api_command_async(self, manipulation_api_request, **kwargs):
    """Async version of the manipulation_api_command()."""
@@ -604,11 +649,47 @@
Source code for bosdyn.client.manipulation_api_client
Returns:
        The full ManipulationApiFeedbackResponse message.
    """
-    return self.call(self._stub.ManipulationApiFeedback, manipulation_api_feedback_request, **kwargs)

def manipulation_api_feedback_command_async(self, manipulation_api_feedback_request, **kwargs):
    """Async version of the manipulation_api_feedback_command()."""
-    return self.call_async(self._stub.ManipulationApiFeedback, manipulation_api_feedback_request, **kwargs)
def grasp_override_command(self, grasp_override_request, **kwargs):
+ """Issue a grasp override command to the robot.
+
+ Args:
+ grasp_override_request (manipulation_api_pb2.ApiGraspOverrideRequest): The command request
+ for a grasp override.
+
+ Returns:
+ An ApiGraspOverrideResponse message.
+ """
+        return self.call(self._stub.OverrideGrasp, grasp_override_request,
+                         error_from_response=_grasp_override_command_error_from_response, **kwargs)
+
+
def grasp_override_command_async(self, grasp_override_request, **kwargs):
+        """Async version of the grasp_override_command()."""
+        return self.call_async(self._stub.OverrideGrasp, grasp_override_request, **kwargs)
+# Copyright (c) 2021 Boston Dynamics, Inc. All rights reserved.
+#
+# Downloading, reproducing, distributing or otherwise using the SDK Software
+# is subject to the terms and conditions of the Boston Dynamics Software
+# Development Kit License (20191101-BDSDK-SL).
+
+"""For clients of the graph_nav map processing service."""
+from __future__ import print_function
+import collections
+from enum import Enum
+
+from bosdyn.api.graph_nav import map_pb2
+from bosdyn.api.graph_nav import map_processing_pb2
+from bosdyn.api.graph_nav import map_processing_service_pb2
+from bosdyn.api.graph_nav import map_processing_service_pb2_grpc as map_processing
+from bosdyn.client.common import (BaseClient, error_factory, handle_unset_status_error,
+                                  handle_common_header_errors, handle_lease_use_result_errors,
+                                  common_header_errors)
+from bosdyn.client.exceptions import ResponseError
+
+
+
[docs] class MapProcessingServiceResponseError(ResponseError):
+    """General class of errors for the GraphNav map processing service."""
+
+
+
[docs] class MissingSnapshotsError(MapProcessingServiceResponseError):
+    """The uploaded map has missing waypoint snapshots."""
+
+
[docs] class InvalidGraphError(MapProcessingServiceResponseError):
+    """The graph is topologically invalid, for example edges reference waypoints that are missing."""
+
+
+
[docs] class InvalidParamsError(MapProcessingServiceResponseError):
+    """The parameters passed to the optimizer do not make sense (e.g. negative weights)."""
+
+
+
[docs] class MaxIterationsError(MapProcessingServiceResponseError):
+    """The optimizer reached the maximum number of iterations before converging."""
+
+
[docs] class MaxTimeError(MapProcessingServiceResponseError):
+    """The optimizer timed out before converging."""
+
+
[docs] class InvalidHintsError(MapProcessingServiceResponseError):
+    """One or more of the hints passed in to the optimizer are invalid (do not correspond to real waypoints or objects)."""
+
+
[docs] class ConstraintViolationError(MapProcessingServiceResponseError):
+    """One or more anchors were moved outside of the desired constraints."""
+
+
[docs] class MapModifiedError(MapProcessingServiceResponseError):
+    """The map was modified on the server by another client during processing. Please try again."""
+
+
[docs] class OptimizationFailureError(MapProcessingServiceResponseError):
+    """The anchoring optimization failed."""
+
+
+@handle_common_header_errors
+@handle_unset_status_error(unset='STATUS_UNKNOWN')
+def _process_topology_common_errors(response):
+    # Handle error statuses from the request.
+    if (response.status == map_processing_pb2.ProcessTopologyResponse.STATUS_INVALID_GRAPH):
+        return InvalidGraphError(response=response, error_message=InvalidGraphError.__doc__)
+    elif (response.status ==
+          map_processing_pb2.ProcessTopologyResponse.STATUS_MISSING_WAYPOINT_SNAPSHOTS):
+        return MissingSnapshotsError(response=response, error_message=MissingSnapshotsError.__doc__)
+    elif (response.status ==
+          map_processing_pb2.ProcessTopologyResponse.STATUS_MAP_MODIFIED_DURING_PROCESSING):
+        return MapModifiedError(response=response, error_message=MapModifiedError.__doc__)
+    return None
+
+
+def _process_topology_streamed_errors(responses):
+    """Return a custom exception based on process topology streaming response, None if no error."""
+    # Iterate through the responses since the request responds with a stream.
+    for resp in responses:
+        exception = _process_topology_common_errors(resp)
+        if exception:
+            return exception
+
+    # All responses (in the iterator) had status ok.
+    return None
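The streamed-error helpers above share one pattern: walk the response stream and surface the first mapped error, returning `None` only when every response is OK. A minimal stdlib sketch of that pattern (the status codes and error mapping below are illustrative stand-ins, not SDK names):

```python
# Illustrative "first error wins" scan over a stream of status codes.
# STATUS_* values and the mapping are invented for this example.
STATUS_OK = 0
STATUS_INVALID_GRAPH = 1

_STATUS_TO_ERROR = {STATUS_INVALID_GRAPH: ValueError}

def first_error(responses):
    """Return an exception for the first non-OK response in the stream, or None."""
    for resp in responses:
        error_type = _STATUS_TO_ERROR.get(resp)
        if error_type is not None:
            return error_type("stream reported status %d" % resp)
    return None  # every response in the stream was OK
```

Returning (rather than raising) the exception lets the caller decide whether the error is fatal, which matches how `error_from_response` hooks are consumed by `BaseClient.call`.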
+
+
+__ANCHORING_COMMON_ERRORS={
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_MISSING_WAYPOINT_SNAPSHOTS:
+ MissingSnapshotsError,
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_OPTIMIZATION_FAILURE:
+ OptimizationFailureError,
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_INVALID_GRAPH:
+ InvalidGraphError,
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_INVALID_PARAMS:
+ InvalidParamsError,
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_CONSTRAINT_VIOLATION:
+ ConstraintViolationError,
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_MAX_ITERATIONS:
+ MaxIterationsError,
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_MAX_TIME:
+ MaxTimeError,
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_INVALID_HINTS:
+ InvalidHintsError,
+ map_processing_pb2.ProcessAnchoringResponse.STATUS_MAP_MODIFIED_DURING_PROCESSING:
+ MapModifiedError
+}
+
+
+@handle_common_header_errors
+@handle_unset_status_error(unset='STATUS_UNKNOWN')
+def _process_anchoring_common_errors(response):
+    # Handle error statuses from the request.
+    if response.status in __ANCHORING_COMMON_ERRORS:
+        type_name = __ANCHORING_COMMON_ERRORS[response.status]
+        return type_name(response=response, error_message=type_name.__doc__)
+    return None
+
+
+def _process_anchoring_streamed_errors(responses):
+    """Return a custom exception based on process anchoring streaming response, None if no error."""
+    # Iterate through the responses since the request responds with a stream.
+    for resp in responses:
+        exception = _process_anchoring_common_errors(resp)
+        if exception:
+            return exception
+
+    # All responses (in the iterator) had status ok.
+    return None
+
+
+def _get_streamed_topology_response(response):
+    """Reads a streamed response to recreate a merged topology response."""
+    merged_response = map_processing_pb2.ProcessTopologyResponse()
+
+    for resp in response:
+        merged_response.MergeFrom(resp)
+    return merged_response
+
+
+def _get_streamed_anchoring_response(response):
+    """Reads a streamed response to recreate a merged anchoring response."""
+    merged_response = map_processing_pb2.ProcessAnchoringResponse()
+
+    for resp in response:
+        merged_response.MergeFrom(resp)
+    return merged_response
+
+
+
[docs] def process_topology(self, params, modify_map_on_server, **kwargs):
+ """Process the topology of the map on the server, closing loops and producing a
+ consistent topology.
+
+ Args:
+ params: a ProcessTopologyRequest.Params object
+ modify_map_on_server: if true, the map will be modified on the server. If false,
+ the subgraph returned by this function should be uploaded back to the server if it
+ is to be reused.
+
+ Returns:
+ The ProcessTopologyResponse containing new edges to add to the map.
+
+ Raises:
+ RpcError: Problem communicating with the robot
+ """
+        request = self._build_process_topology_request(params, modify_map_on_server)
+        return self.call(self._stub.ProcessTopology, request,
+                         value_from_response=_get_streamed_topology_response,
+                         error_from_response=_process_topology_streamed_errors, **kwargs)
+
+
[docs] def process_anchoring(self, params, modify_anchoring_on_server, stream_intermediate_results,
+                          initial_hint=None, **kwargs):
+ """Process the anchoring of the map on the server, producing a metrically consistent anchoring.
+
+ Args:
+ params: a ProcessAnchoringRequest.Params object
+ modify_anchoring_on_server: if true, the map will be modified on the server. If false,
+ the anchoring returned by this function should be uploaded back to the server if it
+ is to be reused.
+ stream_intermediate_results: if true, anchorings from earlier optimizer
+ iterations may be included in the response. If false, only the last iteration will be returned.
+ initial_hint: Initial guess at some number of waypoints and world objects and their anchorings.
+ This field is an AnchoringHint object (see map_processing.proto)
+ Returns:
+ The ProcessAnchoringResponse containing a new optimized anchoring.
+ Raises:
+ RpcError: Problem communicating with the robot
+ """
+        request = self._build_process_anchoring_request(params, modify_anchoring_on_server,
+                                                        stream_intermediate_results, initial_hint)
+
+        return self.call(self._stub.ProcessAnchoring, request,
+                         value_from_response=_get_streamed_anchoring_response,
+                         error_from_response=_process_anchoring_streamed_errors, **kwargs)
[docs] class SE2Pose(object):
    """Class representing an SE2Pose with position and angle."""
@@ -611,8 +654,8 @@
Source code for bosdyn.client.math_helpers
[docs] def to_proto(self):
    """Converts the math_helpers.SE2Pose into an output of the protobuf geometry_pb2.SE2Pose."""
-        return geometry_pb2.SE2Pose(
-            position=geometry_pb2.Vec2(x=self.x, y=self.y), angle=self.angle)
# the correct height is passed into the function as height_z.
return SE3Pose(x=self.x, y=self.y, z=height_z, rot=Quat.from_yaw(self.angle))
+
[docs] class SE2Velocity(object):
    """Class representing an SE2Velocity with linear velocity and angular velocity."""

    def __init__(self, x, y, angular):
-        self.linear_velocity_x = x
-        self.linear_velocity_y = y
-        self.angular_velocity = angular
+        self.linear_velocity_x = float(x)
+        self.linear_velocity_y = float(y)
+        self.angular_velocity = float(angular)

    def __str__(self):
        return 'Linear velocity -- X: %0.3f Y: %0.3f Angular velocity -- %0.3f ' % (
@@ -733,7 +777,8 @@
[docs] def to_vector(self):
    """Creates a 3x1 velocity vector as a numpy array."""
    # Create a 3x1 vector of [x_d, y_d, r_d]
-        return numpy.array([self.linear_velocity_x, self.linear_velocity_y, self.angular_velocity]).reshape((3, 1))
if type(se2_vel_vector) == list:
    if len(se2_vel_vector) != 3:
        # Must have 3 elements to be a complete SE2Velocity
-        print(
-            "Velocity list must have 3 elements. The input has the wrong dimension of: "
-            + str(len(se2_vel_vector)))
+        print("Velocity list must have 3 elements. The input has the wrong dimension of: " +
+              str(len(se2_vel_vector)))
        return None
    else:
-        return SE2Velocity(x=se2_vel_vector[0], y=se2_vel_vector[1], angular=se2_vel_vector[2])
+        return SE2Velocity(x=se2_vel_vector[0], y=se2_vel_vector[1],
+                           angular=se2_vel_vector[2])
if type(se2_vel_vector) == numpy.ndarray:
    if se2_vel_vector.shape[0] != 3:
        # Must have 3 elements to be a complete SE2Velocity
@@ -780,12 +825,12 @@
if type(se3_vel_vector) == list:
    if len(se3_vel_vector) != 6:
        # Must have 6 elements to be a complete SE3Velocity
-        print(
-            "Velocity list must have 6 elements. The input has the wrong dimension of: "
-            + str(len(se3_vel_vector)))
+        print("Velocity list must have 6 elements. The input has the wrong dimension of: " +
+              str(len(se3_vel_vector)))
        return None
    else:
        return SE3Velocity(lin_x=se3_vel_vector[0], lin_y=se3_vel_vector[1],
@@ -1021,6 +1065,7 @@
# aligned frame, either "vision", "odom", or "flat_body" to be safely converted from
# an SE3Pose to an SE2Pose with minimal loss of height information.
se2_angle = (self.rot.closest_yaw_only_quaternion()).to_yaw()
-        return SE2Pose(x=self.x, y=self.y, angle=se2_angle)

def _from_matrix_x(rot):
    """Computes the value of the quaternion for the x-axis from a numpy 3x3 rotation matrix."""
    x = math.sqrt(1 + rot[0, 0] - rot[1, 1] - rot[2, 2]) * 0.5
-        return Quat(
-            w=(rot[2, 1] - rot[1, 2]) / (4.0 * x), x=x, y=(rot[0, 1] + rot[1, 0]) / (4.0 * x),
-            z=(rot[0, 2] + rot[2, 0]) / (4.0 * x))
+        return Quat(w=(rot[2, 1] - rot[1, 2]) / (4.0 * x), x=x,
+                    y=(rot[0, 1] + rot[1, 0]) / (4.0 * x), z=(rot[0, 2] + rot[2, 0]) / (4.0 * x))

@staticmethod
def _from_matrix_y(rot):
    """Computes the value of the quaternion for the y-axis from a numpy 3x3 rotation matrix."""
    y = math.sqrt(1 - rot[0, 0] + rot[1, 1] - rot[2, 2]) * 0.5
-        return Quat(
-            w=(rot[0, 2] - rot[2, 0]) / (4.0 * y), x=(rot[0, 1] + rot[1, 0]) / (4.0 * y), y=y,
-            z=(rot[1, 2] + rot[2, 1]) / (4.0 * y))
+        return Quat(w=(rot[0, 2] - rot[2, 0]) / (4.0 * y), x=(rot[0, 1] + rot[1, 0]) / (4.0 * y),
+                    y=y, z=(rot[1, 2] + rot[2, 1]) / (4.0 * y))

@staticmethod
def _from_matrix_z(rot):
    """Computes the value of the quaternion for the z-axis from a numpy 3x3 rotation matrix."""
    z = math.sqrt(1 - rot[0, 0] - rot[1, 1] + rot[2, 2]) * 0.5
-        return Quat(
-            w=(rot[1, 0] - rot[0, 1]) / (4.0 * z), x=(rot[0, 2] + rot[2, 0]) / (4.0 * z),
-            y=(rot[1, 2] + rot[2, 1]) / (4.0 * z), z=z)
+        return Quat(w=(rot[1, 0] - rot[0, 1]) / (4.0 * z), x=(rot[0, 2] + rot[2, 0]) / (4.0 * z),
+                    y=(rot[1, 2] + rot[2, 1]) / (4.0 * z), z=z)
[docs] def mult(self, other_quat):
    """Computes the multiplication of two math_helpers.Quats."""
-        return Quat(self.w * other_quat.w - self.x * other_quat.x - self.y * other_quat.y - self.z * other_quat.z,
-                    self.w * other_quat.x + self.x * other_quat.w + self.y * other_quat.z - self.z * other_quat.y,
-                    self.w * other_quat.y - self.x * other_quat.z + self.y * other_quat.w + self.z * other_quat.x,
-                    self.w * other_quat.z + self.x * other_quat.y - self.y * other_quat.x + self.z * other_quat.w)

def __mul__(self, other_quat):
    """Overrides the '*' symbol to compute the multiplication between two math_helpers.Quats."""
-        return Quat(self.w * other_quat.w - self.x * other_quat.x - self.y * other_quat.y - self.z * other_quat.z,
-                    self.w * other_quat.x + self.x * other_quat.w + self.y * other_quat.z - self.z * other_quat.y,
-                    self.w * other_quat.y - self.x * other_quat.z + self.y * other_quat.w + self.z * other_quat.x,
-                    self.w * other_quat.z + self.x * other_quat.y - self.y * other_quat.x + self.z * other_quat.w)
+        return Quat(
+            self.w * other_quat.w - self.x * other_quat.x - self.y * other_quat.y -
+            self.z * other_quat.z, self.w * other_quat.x + self.x * other_quat.w +
+            self.y * other_quat.z - self.z * other_quat.y, self.w * other_quat.y -
+            self.x * other_quat.z + self.y * other_quat.w + self.z * other_quat.x,
+            self.w * other_quat.z + self.x * other_quat.y - self.y * other_quat.x +
+            self.z * other_quat.w)
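The Hamilton product that `mult` and `__mul__` implement can be checked in isolation. The tiny standalone version below (plain tuples, not the SDK `Quat` class) composes two 90-degree yaw rotations into a single 180-degree yaw:

```python
import math

def quat_mult(a, b):
    """Hamilton product of two quaternions given as (w, x, y, z) tuples."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

# A yaw of angle t is (cos(t/2), 0, 0, sin(t/2)); two 90-degree yaws
# should compose into a 180-degree yaw, i.e. (0, 0, 0, 1).
s, c = math.sin(math.pi / 4), math.cos(math.pi / 4)
yaw90 = (c, 0.0, 0.0, s)
composed = quat_mult(yaw90, yaw90)
```

The term ordering matches the four sum-of-products rows in the method above, so this sketch doubles as a unit test for the sign conventions.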
[docs] def normalize(self):
    """Normalizes the quaternion."""
@@ -1262,11 +1328,44 @@
"""Computes a yaw-only math_helpers.Quat from the current roll/pitch/yaw math_helpers.Quat"""
mag = math.sqrt(self.w * self.w + self.z * self.z)
if mag > 0:
-            return Quat(w=self.w / mag, x=0, y=0, z=self.z / mag)
+            return Quat(w=self.w / mag, x=0, y=0, z=self.z / mag)
else:
    # If the problem is ill posed (i.e. z-axis of quaternion is [0, 0, -1]), then preserve old
    # behavior and always rotate 180 degrees around the y-axis.
-            return Quat(w=0, x=0, y=1, z=0)
+            return Quat(w=0, x=0, y=1, z=0)
+
+
[docs] @staticmethod
+    def slerp(a, b, fraction):
+        """Spherically interpolates between quaternions a and b by the given fraction."""
+        v0 = numpy.array([a.w, a.x, a.y, a.z])
+        v1 = numpy.array([b.w, b.x, b.y, b.z])
+        dot = numpy.dot(v0.transpose(), v1)
+        # If the dot product is negative, slerp will not take
+        # the shorter path. Note that v1 and -v1 are equivalent when
+        # the negation is applied to all four components. Fix by
+        # reversing one quaternion.
+        if dot < 0.0:
+            v0 *= -1.0
+            dot = -dot
+
+        DOT_THRESHOLD = 1.0 - 1e-4
+        if dot > DOT_THRESHOLD:
+            # If the inputs are too close for comfort, linearly interpolate
+            # and normalize the result.
+            result = v0 + fraction * (v1 - v0)
+            result /= numpy.sqrt(numpy.dot(result.transpose(), result))
+        else:
+            # Since dot is in range [0, DOT_THRESHOLD], acos is safe.
+            theta_0 = math.acos(dot)  # theta_0 = angle between input vectors
+            theta = theta_0 * fraction  # theta = angle between v0 and result
+            sin_theta = math.sin(theta)  # compute this value only once
+            sin_theta_0 = math.sin(theta_0)  # compute this value only once
+
+            s0 = math.cos(theta) - dot * sin_theta / sin_theta_0  # == sin(theta_0 - theta) / sin(theta_0)
+            s1 = sin_theta / sin_theta_0
+
+            result = (s0 * v0) + (s1 * v1)
+        return Quat(result[0], result[1], result[2], result[3])
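A stdlib-only sketch of the same slerp algorithm (tuples and `math` instead of the SDK `Quat` and numpy) makes the geometry easy to verify: halfway between the identity and a 90-degree yaw should be a 45-degree yaw.

```python
import math

def slerp(a, b, fraction):
    """Spherical linear interpolation between unit quaternions given as (w, x, y, z) tuples."""
    v0, v1 = list(a), list(b)
    dot = sum(p * q for p, q in zip(v0, v1))
    if dot < 0.0:  # negate one input so we interpolate along the shorter arc
        v0 = [-p for p in v0]
        dot = -dot
    if dot > 1.0 - 1e-4:  # nearly parallel: lerp and renormalize
        result = [p + fraction * (q - p) for p, q in zip(v0, v1)]
        norm = math.sqrt(sum(p * p for p in result))
        return tuple(p / norm for p in result)
    theta_0 = math.acos(dot)          # angle between the input quaternions
    theta = theta_0 * fraction        # angle between v0 and the result
    s0 = math.cos(theta) - dot * math.sin(theta) / math.sin(theta_0)
    s1 = math.sin(theta) / math.sin(theta_0)
    return tuple(s0 * p + s1 * q for p, q in zip(v0, v1))

identity = (1.0, 0.0, 0.0, 0.0)
yaw90 = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
halfway = slerp(identity, yaw90, 0.5)  # expect a 45-degree yaw
```

The dot-product sign flip and the near-parallel lerp fallback mirror the two branches of the method above.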
[docs] def skew_matrix_3d(vec3_proto):
    """Creates a 3x3 numpy matrix representing the skew symmetric matrix for a vec3. These are used to create the adjoint matrices for SE3Pose's, among other things."""
-    return numpy.array([[0, -vec3_proto.z, vec3_proto.y],
-                        [vec3_proto.z, 0, -vec3_proto.x],
-                        [-vec3_proto.y, vec3_proto.x, 0]])
[docs] def skew_matrix_2d(vec2_proto):
    """Creates a 2x1 numpy matrix representing the skew symmetric matrix for a vec2. These are used to create the adjoint matrices for SE2Pose's, among other things."""
    return numpy.array([[vec2_proto.y, -vec2_proto.x]])
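The 3x3 skew-symmetric matrix built by `skew_matrix_3d` encodes the cross product: multiplying it against a vector `u` yields `v × u`. A pure-Python check of that identity (plain lists standing in for numpy arrays and protobuf vecs):

```python
def skew_3d(v):
    """3x3 skew-symmetric matrix for v = [x, y, z], so that skew(v) @ u == v x u."""
    x, y, z = v
    return [[0, -z, y],
            [z, 0, -x],
            [-y, x, 0]]

def mat_vec(m, u):
    """Multiply a 3x3 matrix (list of rows) by a length-3 vector."""
    return [sum(m[i][j] * u[j] for j in range(3)) for i in range(3)]

def cross(v, u):
    """Cross product v x u."""
    return [v[1] * u[2] - v[2] * u[1],
            v[2] * u[0] - v[0] * u[2],
            v[0] * u[1] - v[1] * u[0]]

v, u = [1.0, 2.0, 3.0], [4.0, 5.0, 6.0]
```

This identity is exactly why skew matrices appear in the adjoint maps mentioned in the docstring: the adjoint of an SE(3) pose needs the cross product of the translation with rotated velocities expressed as a matrix.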
+
[docs] def transform_se2velocity(a_adjoint_b_matrix, se2_velocity_in_b):
    """Changes the frame that the SE(2) Velocity is expressed in. More specifically, it converts the
@@ -1321,10 +1425,12 @@
Source code for bosdyn.client.network_compute_bridge_client
from bosdyn.api import network_compute_bridge_service_pb2
from bosdyn.api import network_compute_bridge_service_pb2_grpc
+
[docs] class ExternalServiceNotFoundError(ResponseError):
    """The requested service for external computation was not found in the directory."""
+
[docs] class ExternalServerError(ResponseError):
    """The call to the external server did not complete successfully."""
+
[docs] class NetworkComputeRotationError(ResponseError):
    """The robot failed to rotate the image as requested."""
+
[docs] class NetworkComputeBridgeClient(BaseClient):
    """Client to either the NetworkComputeBridgeService or the NetworkComputeBridgeWorkerService."""
@@ -599,11 +644,13 @@
ExternalServerError: Either the service or worker service threw an error when responding with
the set of all models. """
-        return self.call(self._stub.ListAvailableModels, list_request, None, _list_available_models_error, **kwargs)
[docs] def list_available_models_command_async(self, list_request, **kwargs):
    """Async version of list_available_models_command()."""
-        return self.call_async(self._stub.ListAvailableModels, list_request, None, _list_available_models_error, **kwargs)
[docs] def network_compute_bridge_command(self, network_compute_request, **kwargs):
    """Issue the main network compute bridge request to run a model on specific, requested data.
@@ -625,11 +672,14 @@
image as requested.
"""
-        return self.call(self._stub.NetworkCompute, network_compute_request, None, _network_compute_error, **kwargs)
[docs] def network_compute_bridge_command_async(self, network_compute_request, **kwargs):
    """Async version of network_compute_bridge_command()."""
-        return self.call_async(self._stub.NetworkCompute, network_compute_request, None, _network_compute_error, **kwargs)
[docs] def attach_payload(self, guid, secret, **kw_args):
+ """Attach a payload to the robot.
+
+ Args:
+ guid: The GUID of the payload to attach.
+ secret: Secret of the payload to attach.
+ kw_args: Extra arguments to pass to grpc call invocation.
+
+ Raises:
+ RpcError: Problem communicating with the robot.
+ PayloadDoesNotExistError: A payload with the provided GUID does not exist.
+ InvalidPayloadCredentialsError: The GUID + secret does not match an existing payload.
+ PayloadNotAuthorizedError: The payload you've requested to change is not yet authorized.
+ PayloadRegistrationResponseError: Something went wrong during the payload registration.
+ """
+        request = payload_registration_protos.UpdatePayloadAttachedRequest()
+        request.payload_credentials.guid = guid
+        request.payload_credentials.secret = secret
+        request.request = payload_registration_protos.UpdatePayloadAttachedRequest.REQUEST_ATTACH
+        return self.call(self._stub.UpdatePayloadAttached, request,
+                         error_from_response=_update_payload_attached_error, **kw_args)
+
+
[docs] def attach_payload_async(self, guid, secret, **kw_args):
+ """Attach a payload to the robot.
+
+ Args:
+ guid: The GUID of the payload to attach.
+ secret: Secret of the payload to attach.
+ kw_args: Extra arguments to pass to grpc call invocation.
+
+ Raises:
+ RpcError: Problem communicating with the robot.
+ PayloadDoesNotExistError: A payload with the provided GUID does not exist.
+ InvalidPayloadCredentialsError: The GUID + secret does not match an existing payload.
+ PayloadNotAuthorizedError: The payload you've requested to change is not yet authorized.
+ PayloadRegistrationResponseError: Something went wrong during the payload registration.
+ """
+        request = payload_registration_protos.UpdatePayloadAttachedRequest()
+        request.payload_credentials.guid = guid
+        request.payload_credentials.secret = secret
+        request.request = payload_registration_protos.UpdatePayloadAttachedRequest.REQUEST_ATTACH
+        return self.call_async(self._stub.UpdatePayloadAttached, request,
+                               error_from_response=_update_payload_attached_error, **kw_args)
+
+
[docs] def detach_payload(self, guid, secret, **kw_args):
+ """Detach a payload from the robot.
+
+ Args:
+ guid: The GUID of the payload to detach.
+ secret: Secret of the payload to detach.
+ kw_args: Extra arguments to pass to grpc call invocation.
+
+ Raises:
+ RpcError: Problem communicating with the robot.
+ PayloadDoesNotExistError: A payload with the provided GUID does not exist.
+ InvalidPayloadCredentialsError: The GUID + secret does not match an existing payload.
+ PayloadNotAuthorizedError: The payload you've requested to change is not yet authorized.
+ PayloadRegistrationResponseError: Something went wrong during the payload registration.
+ """
+        request = payload_registration_protos.UpdatePayloadAttachedRequest()
+        request.payload_credentials.guid = guid
+        request.payload_credentials.secret = secret
+        request.request = payload_registration_protos.UpdatePayloadAttachedRequest.REQUEST_DETACH
+        return self.call(self._stub.UpdatePayloadAttached, request,
+                         error_from_response=_update_payload_attached_error, **kw_args)
+
+
[docs] def detach_payload_async(self, guid, secret, **kw_args):
+ """Detach a payload from the robot.
+
+ Args:
+ guid: The GUID of the payload to detach.
+ secret: Secret of the payload to detach.
+ kw_args: Extra arguments to pass to grpc call invocation.
+
+ Raises:
+ RpcError: Problem communicating with the robot.
+ PayloadDoesNotExistError: A payload with the provided GUID does not exist.
+ InvalidPayloadCredentialsError: The GUID + secret does not match an existing payload.
+ PayloadNotAuthorizedError: The payload you've requested to change is not yet authorized.
+ PayloadRegistrationResponseError: Something went wrong during the payload registration.
+ """
+        request = payload_registration_protos.UpdatePayloadAttachedRequest()
+        request.payload_credentials.guid = guid
+        request.payload_credentials.secret = secret
+        request.request = payload_registration_protos.UpdatePayloadAttachedRequest.REQUEST_DETACH
+        return self.call_async(self._stub.UpdatePayloadAttached, request,
+                               error_from_response=_update_payload_attached_error, **kw_args)
# Associate proto status errors to python client errors for RegisterPayload
@@ -725,7 +868,7 @@
Source code for bosdyn.client.payload_registration
status_to_string=payload_registration_protos.GetPayloadAuthTokenResponse.Status.Name,
status_to_error=_GET_PAYLOAD_AUTH_TOKEN_STATUS_TO_ERROR)
+# Associate proto status errors to python client errors for UpdatePayloadAttachedResponse
+_UPDATE_PAYLOAD_ATTACHED_STATUS_TO_ERROR = collections.defaultdict(lambda: (ResponseError, None))
+_UPDATE_PAYLOAD_ATTACHED_STATUS_TO_ERROR.update({
+ payload_registration_protos.UpdatePayloadAttachedResponse.STATUS_OK:(None,None),
+ payload_registration_protos.UpdatePayloadAttachedResponse.STATUS_DOES_NOT_EXIST:
+ (PayloadDoesNotExistError,PayloadDoesNotExistError.__doc__),
+ payload_registration_protos.UpdatePayloadAttachedResponse.STATUS_INVALID_CREDENTIALS:
+ (InvalidPayloadCredentialsError,InvalidPayloadCredentialsError.__doc__),
+ payload_registration_protos.UpdatePayloadAttachedResponse.STATUS_PAYLOAD_NOT_AUTHORIZED:
+ (PayloadNotAuthorizedError,PayloadNotAuthorizedError.__doc__),
+})
+
+
+# Function to parse all types of errors from update payload attached request
+@handle_common_header_errors
+@handle_unset_status_error(unset='STATUS_UNKNOWN')
+def _update_payload_attached_error(response):
+    """Return a custom exception based on response, None if no error."""
+    return error_factory(
+        response, response.status,
+        status_to_string=payload_registration_protos.UpdatePayloadAttachedResponse.Status.Name,
+        status_to_error=_UPDATE_PAYLOAD_ATTACHED_STATUS_TO_ERROR)
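The status table above is a `collections.defaultdict`, so any status code without an explicit entry falls back to a generic `(ResponseError, None)` pair rather than raising `KeyError`. That fallback behavior can be sketched with stdlib types only (the statuses and error classes below are stand-ins, not SDK names):

```python
import collections

class GenericError(Exception):
    """Fallback for unmapped status codes."""

class DoesNotExistError(GenericError):
    """A payload with the provided GUID does not exist."""

STATUS_OK, STATUS_DOES_NOT_EXIST, STATUS_ADDED_IN_A_NEWER_RELEASE = 1, 2, 99

# Any status missing from the table maps to (GenericError, None), so a client
# built against an older proto still surfaces a usable error.
STATUS_TO_ERROR = collections.defaultdict(lambda: (GenericError, None))
STATUS_TO_ERROR.update({
    STATUS_OK: (None, None),
    STATUS_DOES_NOT_EXIST: (DoesNotExistError, DoesNotExistError.__doc__),
})

def error_for(status):
    """Instantiate the mapped error for a status, or None when the status is OK."""
    error_type, message = STATUS_TO_ERROR[status]
    return None if error_type is None else error_type(message)
```

This is why new statuses added in later robot releases degrade gracefully on older clients instead of crashing the error-handling path.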
+
[docs] class PayloadRegistrationKeepAlive(object):
    """Helper class to keep a payload entry registered.
@@ -838,7 +1004,8 @@
self.pay_reg_client.register_payload(self.payload, self.secret)
except PayloadAlreadyExistsError as exc:
    # If the payload exists, log a warning and continue.
-            self.logger.warning('Got a "payload already exists" error: %s\nContinuing to start thread.', str(exc))
+            self.logger.warning(
+                'Got a "payload already exists" error: %s\nContinuing to start thread.', str(exc))
else:
    self.logger.info('Payload registered.')
diff --git a/docs/html/_modules/bosdyn/client/point_cloud.html b/docs/html/_modules/bosdyn/client/point_cloud.html
index bb76176f6..cce23895d 100644
--- a/docs/html/_modules/bosdyn/client/point_cloud.html
+++ b/docs/html/_modules/bosdyn/client/point_cloud.html
@@ -7,7 +7,7 @@
- bosdyn.client.point_cloud — Spot 2.3.5 documentation
+ bosdyn.client.point_cloud — Spot 3.0.0 documentation
@@ -66,7 +66,7 @@
[docs] class PointCloudResponseError(ResponseError):
    """General class of errors for PointCloud service."""
@@ -588,12 +631,17 @@
Source code for bosdyn.client.point_cloud
_STATUS_TO_ERROR = collections.defaultdict(lambda: (PointCloudResponseError, None))
_STATUS_TO_ERROR.update({
    point_cloud_protos.PointCloudResponse.STATUS_OK: (None, None),
-    point_cloud_protos.PointCloudResponse.STATUS_UNKNOWN_SOURCE: error_pair(UnknownPointCloudSourceError),
-    point_cloud_protos.PointCloudResponse.STATUS_SOURCE_DATA_ERROR: error_pair(SourceDataError),
-    point_cloud_protos.PointCloudResponse.STATUS_UNKNOWN: error_pair(UnsetStatusError),
-    point_cloud_protos.PointCloudResponse.STATUS_POINT_CLOUD_DATA_ERROR: error_pair(PointCloudDataError),
+    point_cloud_protos.PointCloudResponse.STATUS_UNKNOWN_SOURCE:
+        error_pair(UnknownPointCloudSourceError),
+    point_cloud_protos.PointCloudResponse.STATUS_SOURCE_DATA_ERROR:
+        error_pair(SourceDataError),
+    point_cloud_protos.PointCloudResponse.STATUS_UNKNOWN:
+        error_pair(UnsetStatusError),
+    point_cloud_protos.PointCloudResponse.STATUS_POINT_CLOUD_DATA_ERROR:
+        error_pair(PointCloudDataError),
+})
+
@handle_common_header_errors
def _error_from_response(response):
    """Return a custom exception based on the first invalid point_cloud response, None if no error."""
@@ -606,6 +654,7 @@
            return result
    return None
+
[docs] class PointCloudClient(BaseClient):
    """A client handling point clouds."""
@@ -631,8 +680,8 @@
[docs] def list_point_cloud_sources_async(self, **kwargs):
    """Async version of list_point_cloud_sources()"""
    req = self._get_list_point_cloud_source_request()
-        return self.call_async(self._stub.ListPointCloudSources, req, _list_point_cloud_sources_value,
-                               common_header_errors, **kwargs)
[docs] def get_point_cloud_from_sources(self, point_cloud_sources, **kwargs):
    """Obtain point clouds from sources using default parameters.
@@ -651,12 +700,13 @@
UnsetStatusError: An internal PointCloudService issue has happened.
PointCloudDataError: Problem with the point cloud data. Only PointCloudSource is filled.
"""
-        return self.get_point_cloud([build_pc_request(src) for src in point_cloud_sources], **kwargs)
[docs] def get_point_cloud_from_sources_async(self, point_cloud_sources, **kwargs):
    """Obtain point clouds from sources using default parameters."""
    return self.get_point_cloud_async([build_pc_request(src) for src in point_cloud_sources],
-                                      **kwargs)
+                                      **kwargs)
[docs] def get_point_cloud(self, point_cloud_requests, **kw_args):
    """Get the most recent point cloud
@@ -714,6 +764,7 @@
[docs] def build_pc_request(point_cloud_source_name):
    """Helper function which builds a PointCloudRequest from a point cloud source name.
@@ -725,6 +776,7 @@
[docs] class EstoppedError(PowerResponseError):
-    """Cannot power on while estopped.
+    """Cannot power on while estopped; inspect EStopState for more info."""
-    Inspect EStopState for more info."""
+
[docs] class OverriddenError(PowerResponseError):
+    """The command was overridden and is no longer valid."""
-
[docs] class FaultedError(PowerResponseError):
-    """Cannot power on due to a fault.
-    Inspect FaultState for more info."""
+
[docs] class FaultedError(PowerResponseError):
+    """Cannot power on due to a fault; inspect FaultState for more info."""
[docs] @deprecated(reason='Replaced by the less ambiguous safe_power_off_motors function.', version='3.0.0',
+            action="ignore")
+def safe_power_off(command_client, state_client, timeout_sec=30, update_frequency=1.0, **kwargs):
+    """Safely power off motors. See safe_power_off_motors()."""
+    safe_power_off_motors(command_client, state_client, timeout_sec, update_frequency, **kwargs)
+
+
+
[docs] def safe_power_off_motors(command_client, state_client, timeout_sec=30, update_frequency=1.0, **kwargs):
    """Power off robot motors safely. This function blocks until the robot safely powers off. This means the robot will attempt to sit before powering motors off.
@@ -813,9 +862,34 @@
[docs] def safe_power_off_robot(command_client, state_client, power_client, timeout_sec=30,
+                             update_frequency=1.0, **kwargs):
+ """Power off the robot motors and then the robot computers safely. This function blocks until
+ robot safely powers off. This means the robot will attempt to sit before powering motors off.
+
+ Args:
+ command_client (RobotCommandClient): client for calling RobotCommandService safe power off.
+ state_client (RobotStateClient): client for monitoring power state.
+ power_client (bosdyn.api.PowerClient): client for calling power service.
+ timeout_sec (float): Max time this function will block for.
+ update_frequency (float): The frequency with which the robot should check if the command
+ has succeeded.
+
+ Raises:
+ RpcError: Problem communicating with the robot.
+ power.CommandTimedOutError: Did not power off within timeout_sec
+ RobotCommandResponseError: Something went wrong with the safe power off.
+ """
+    end_time = time.time() + timeout_sec
+    safe_power_off_motors(command_client, state_client, timeout_sec=end_time - time.time(),
+                          update_frequency=update_frequency, **kwargs)
+    power_off_robot(power_client, timeout_sec=end_time - time.time(),
+                    update_frequency=update_frequency, **kwargs)
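`safe_power_off_robot` (and `safe_power_cycle_robot` below) share a deadline pattern worth noting: compute an absolute `end_time` once, then pass each sequential step whatever budget remains, so the two steps together never exceed the caller's `timeout_sec`. A standalone sketch of that bookkeeping, with dummy step functions standing in for the SDK calls:

```python
import time

def run_with_shared_deadline(steps, timeout_sec):
    """Run steps sequentially; each receives the time remaining in the shared budget."""
    end_time = time.time() + timeout_sec  # absolute deadline, computed once
    budgets = []
    for step in steps:
        remaining = end_time - time.time()  # shrinks as earlier steps consume time
        budgets.append(remaining)
        step(timeout_sec=remaining)
    return budgets

def slow_step(timeout_sec):
    time.sleep(0.05)  # stand-in for "sit, then power off motors"

def fast_step(timeout_sec):
    pass  # stand-in for "power off the robot computers"

budgets = run_with_shared_deadline([slow_step, fast_step], timeout_sec=1.0)
```

Recomputing `end_time - time.time()` per step (rather than passing `timeout_sec` to each call) is what keeps the overall blocking time bounded by the single caller-supplied timeout.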
+
+
[docs] def power_off_robot(power_client, timeout_sec=30, update_frequency=1.0, **kwargs):
    """Fully power off the robot. Powering off the robot will stop API comms.
-
+
    Args:
        power_client (bosdyn.api.PowerClient): client for calling power service.
        timeout_sec (float): Max time this function will block for.
@@ -831,9 +905,34 @@
Source code for bosdyn.client.power
**kwargs)
+
[docs] def safe_power_cycle_robot(command_client, state_client, power_client, timeout_sec=30,
+                               update_frequency=1.0, **kwargs):
+    """Power cycle the robot safely. This function blocks until the robot safely powers off. The
+    robot will attempt to sit before power cycling.
+
+ Args:
+ command_client (RobotCommandClient): client for calling RobotCommandService safe power off.
+ state_client (RobotStateClient): client for monitoring power state.
+ power_client (bosdyn.api.PowerClient): client for calling power service.
+ timeout_sec (float): Max time this function will block for.
+ update_frequency (float): The frequency with which the robot should check if the command
+ has succeeded.
+
+ Raises:
+ RpcError: Problem communicating with the robot.
+ power.CommandTimedOutError: Did not power off within timeout_sec
+ RobotCommandResponseError: Something went wrong with the safe power off.
+ """
+    end_time = time.time() + timeout_sec
+    safe_power_off_motors(command_client, state_client, timeout_sec=end_time - time.time(),
+                          update_frequency=update_frequency, **kwargs)
+    power_cycle_robot(power_client, timeout_sec=end_time - time.time(),
+                      update_frequency=update_frequency, **kwargs)
+
+
[docs] def power_cycle_robot(power_client, timeout_sec=30, update_frequency=1.0, **kwargs):
    """Power cycle the robot. Power cycling the robot will stop API comms.
-
+
    Args:
        power_client (bosdyn.api.PowerClient): client for calling power service.
        timeout_sec (float): Max time this function will block for.
@@ -851,7 +950,7 @@
[docs] def power_off_payload_ports(power_client, timeout_sec=30, update_frequency=1.0, **kwargs):
    """Power off the robot payload ports.
-
+
    Args:
        power_client (bosdyn.api.PowerClient): client for calling power service.
        timeout_sec (float): Max time this function will block for.
@@ -868,7 +967,7 @@
[docs] def power_on_payload_ports(power_client, timeout_sec=30, update_frequency=1.0, **kwargs):
    """Power on the robot payload ports.
-
+
    Args:
        power_client (bosdyn.api.PowerClient): client for calling power service.
        timeout_sec (float): Max time this function will block for.
@@ -885,7 +984,7 @@
[docs] def power_off_wifi_radio(power_client, timeout_sec=30, update_frequency=1.0, **kwargs):
    """Power off the robot Wi-Fi radio.
-
+
    Args:
        power_client (bosdyn.api.PowerClient): client for calling power service.
        timeout_sec (float): Max time this function will block for.
@@ -902,7 +1001,7 @@
[docs] def power_on_wifi_radio(power_client, timeout_sec=30, update_frequency=1.0, **kwargs):
    """Power on the robot Wi-Fi radio.
-
+
    Args:
        power_client (bosdyn.api.PowerClient): client for calling power service.
        timeout_sec (float): Max time this function will block for.
@@ -924,7 +1023,7 @@
def _power_command(power_client, request, timeout_sec=30, update_frequency=1.0,
                   expect_grpc_timeout=False, **kwargs):
    """Helper function to issue command to power client.
-
+
    Args:
        power_client (bosdyn.api.PowerClient): Client for calling power service.
        request (bosdyn.api.PowerCommandRequest): Request to make to power service.
diff --git a/docs/html/_modules/bosdyn/client/processors.html b/docs/html/_modules/bosdyn/client/processors.html
index 85d4bfa0f..3e87ac280 100644
--- a/docs/html/_modules/bosdyn/client/processors.html
+++ b/docs/html/_modules/bosdyn/client/processors.html
@@ -7,7 +7,7 @@
- bosdyn.client.processors — Spot 2.3.5 documentation
+ bosdyn.client.processors — Spot 3.0.0 documentation
@@ -66,7 +66,7 @@