Niryo One Python API

To use the Python API:

  1. Connect to Niryo One via SSH

    On Linux, open a terminal and type ssh niryo@(ip_address). When asked for a password, type "robotics". This is the default password; you can change it if you want.

    On Windows, open an SSH client such as PuTTY (username: "niryo", password: "robotics").

    You can find the robot's IP address by clicking on the "Search for robots in network" button in Niryo One Studio. On Linux, you can also use nmap to find hosts on your network.

    For more information, please refer to: https://niryo.com/docs/niryo-one/developer-tutorials/connect-to-the-raspberry-pi-3-via-ssh/

  2. Create a Python file

    touch test_python_api.py
    chmod +x test_python_api.py
    
  3. Use this template to write your code

    You can write directly over SSH on the Raspberry Pi with the nano text editor:

    nano test_python_api.py
    

    Follow this Python template:

    #!/usr/bin/env python
    from niryo_one_python_api.niryo_one_api import *
    import rospy
    import time
    
    rospy.init_node('niryo_one_example_python_api')
    
    n = NiryoOne()
    
    try:
        # Your code here
        pass
    except NiryoOneException as e:
        print(e)
        # Handle errors here

    You can find some examples here.

    You can also write the Python script on your own computer with your favorite text editor, then send it to the Raspberry Pi with scp:

    scp -p SOURCE_FOLDER_PATH/FILE.py niryo@ROBOT_IP:DESTINATION_FOLDER_PATH
    

    Ex: This command will send the test.txt file to the /home/niryo folder (an empty destination path after the colon defaults to the user's home directory):

    scp -p ./test.txt niryo@ROBOT_IP:
    
  4. Run your code on RPi

```
python test_python_api.py
```

Documentation

Constants

You can use these predefined constants (instead of raw numbers) in the Python API methods. Please read the examples to see how to use them.

Tools IDs
  • TOOL_NONE
  • TOOL_GRIPPER_1_ID
  • TOOL_GRIPPER_2_ID
  • TOOL_GRIPPER_3_ID
  • TOOL_ELECTROMAGNET_1_ID
  • TOOL_VACUUM_PUMP_1_ID
Digital pins
  • PIN_MODE_OUTPUT
  • PIN_MODE_INPUT
  • PIN_HIGH
  • PIN_LOW
  • GPIO_1A
  • GPIO_1B
  • GPIO_1C
  • GPIO_2A
  • GPIO_2B
  • GPIO_2C
  • SW_1
  • SW_2
Status
  • OK
  • KO
For shift_pose function
  • AXIS_X
  • AXIS_Y
  • AXIS_Z
  • ROT_ROLL
  • ROT_PITCH
  • ROT_YAW
For Conveyor
  • CONVEYOR_DIRECTION_BACKWARD
  • CONVEYOR_DIRECTION_FORWARD
  • CONVEYOR_ID_ONE
  • CONVEYOR_ID_TWO
Vision Color
  • COLOR_RED
  • COLOR_GREEN
  • COLOR_BLUE
  • COLOR_ANY
Vision Shape
  • SHAPE_CIRCLE
  • SHAPE_SQUARE
  • SHAPE_ANY
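
For instance, a complete minimal script using some of these constants could look like the sketch below; the node name and the chosen tool and pin are arbitrary examples:

#!/usr/bin/env python
from niryo_one_python_api.niryo_one_api import *
import rospy

rospy.init_node('niryo_one_constants_example')

n = NiryoOne()

try:
    # Constants are passed directly as arguments, instead of raw numbers
    n.change_tool(TOOL_GRIPPER_1_ID)
    n.pin_mode(GPIO_1A, PIN_MODE_INPUT)
    print(n.digital_read(GPIO_1A))
except NiryoOneException as e:
    print(e)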

Custom type

RobotState
position:
    x: (float)
    y: (float)
    z: (float)
rpy:
    roll: (float)
    pitch: (float)
    yaw: (float)

Class methods

Calibration

calibrate_auto()

Calibrate robot motors automatically (by moving the axes). If calibration is not needed, this method will do nothing.

calibrate_manual()

Calibrate robot motors manually (the robot just needs to be in 'home' position and to have been auto calibrated at least once). If calibration is not needed, this method will do nothing.
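
A minimal sketch, assuming the setup from the template above (rospy node initialized and n = NiryoOne()):

# Setup as in the template above: rospy node initialized, n = NiryoOne()
try:
    n.calibrate_auto()  # does nothing if the robot is already calibrated
except NiryoOneException as e:
    print(e)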

Learning mode

activate_learning_mode(set_bool)

Parameters:

  • set_bool (0 or 1)

Activate or deactivate learning mode (= disable/enable torque on motors).

get_learning_mode()

Returns a boolean that indicates whether learning mode is activated or not.
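
For example, assuming the setup from the template above:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
n.activate_learning_mode(1)    # disable torque: the arm can be moved by hand
print(n.get_learning_mode())   # True
n.activate_learning_mode(0)    # re-enable torque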

Move

move_joints(joints)

Parameters:

  • joints: array of 6 joint values (rad), as floats

Move the arm with a joint command.

move_pose(x, y, z, roll, pitch, yaw)

Parameters:

  • x: value of x position (m) in float
  • y: value of y position (m) in float
  • z: value of z position (m) in float
  • roll: value of roll rotation (rad) in float
  • pitch: value of pitch rotation (rad) in float
  • yaw: value of yaw rotation (rad) in float

Move the arm with a pose command.

shift_pose(axis, value)

Parameters:

  • axis: the axis to shift (AXIS_X, AXIS_Y, AXIS_Z, ROT_ROLL, ROT_PITCH or ROT_YAW)
  • value: shift value (m or rad) in float

Move the arm by shifting the current pose along the given axis by the given value.
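
A short sketch combining these move commands, assuming the setup from the template above; the pose values are arbitrary examples, adapt them to your setup:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
try:
    n.calibrate_auto()
    n.move_joints([0.0, 0.0, 0.0, 0.0, 0.0, 0.0])   # joint command (rad)
    n.move_pose(0.2, 0.0, 0.25, 0.0, 0.0, 0.0)      # pose command (m / rad), example values
    n.shift_pose(AXIS_Z, 0.05)                      # shift the current pose 5 cm up
except NiryoOneException as e:
    print(e)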

set_arm_max_velocity(percentage)

Parameters:

  • percentage (1-100)

Set the arm max velocity scaling factor.

enable_joystick(enable)

Parameters:

  • enable (0 or 1)

Enable or disable joystick mode (control the robot with a joystick controller).

Robot positions

get_saved_position_list()

Get all saved positions on the robot.

get_joints()

Returns an array containing the current angles of all 6 axes (in radians).

get_arm_pose()

Returns a RobotState object (see the niryo_one_msgs package) containing the pose (position in meters + orientation in radians) of the end effector tool.
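
For example, reading back the current state, assuming the setup from the template above:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
joints = n.get_joints()    # [j1, j2, j3, j4, j5, j6] in rad
pose = n.get_arm_pose()    # RobotState
print(joints)
print(pose.position.x, pose.position.y, pose.position.z)
print(pose.rpy.roll, pose.rpy.pitch, pose.rpy.yaw)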

I/O

pin_mode(pin_id, pin_mode)

Parameters:

  • pin_id (Digital pins constant, e.g. GPIO_1A)
  • pin_mode (PIN_MODE_INPUT or PIN_MODE_OUTPUT)

Set a digital I/O pin to INPUT or OUTPUT mode.

digital_write(pin_id, pin_state)

Parameters:

  • pin_id (Digital pins constant, e.g. GPIO_1A)
  • pin_state (PIN_LOW or PIN_HIGH)

Set a digital I/O pin to LOW or HIGH. Note that the pin must have been previously set to OUTPUT mode.

digital_read(pin_id)

Parameters:

  • pin_id (Digital pins constant, e.g. GPIO_1A)

Returns the current pin state (0: LOW, 1: HIGH).

get_digital_io_state()

Returns a DigitalIOState object (see the niryo_one_msgs package) containing information (mode: input or output + state: high or low) for all six 5V digital pins and the two 12V switches.
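
A short digital I/O sketch, assuming the setup from the template above; GPIO_1A and GPIO_1B are arbitrary pin choices:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
n.pin_mode(GPIO_1A, PIN_MODE_OUTPUT)
n.digital_write(GPIO_1A, PIN_HIGH)   # the pin must be in OUTPUT mode first
n.digital_write(GPIO_1A, PIN_LOW)

n.pin_mode(GPIO_1B, PIN_MODE_INPUT)
print(n.digital_read(GPIO_1B))       # 0: LOW, 1: HIGH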

End effectors

change_tool(tool_id)

Parameters:

  • tool_id (Tools IDs)

Change the currently attached tool. Before you execute any action on a tool, you have to select it with this method.

get_current_tool_id()

Returns the current tool id.

open_gripper(tool_id, speed)

Parameters:

  • tool_id (Tools IDs)
  • speed: open speed (between 0 and 1000; recommended: 100 to 500)

Open gripper at selected speed.

close_gripper(tool_id, speed)

Parameters:

  • tool_id (Tools IDs)
  • speed: close speed (between 0 and 1000; recommended: 100 to 500)

Close gripper at selected speed. The gripper will stop when it detects the object to grab.

pull_air_vacuum_pump(tool_id)

Parameters:

  • tool_id (TOOL_VACUUM_PUMP_1_ID)

Activate the vacuum pump (pick object).

push_air_vacuum_pump(tool_id)

Parameters:

  • tool_id (TOOL_VACUUM_PUMP_1_ID)

Deactivate the vacuum pump (place object).

setup_electromagnet(electromagnet_id, pin)

Parameters:

  • electromagnet_id (TOOL_ELECTROMAGNET_1_ID)
  • pin (Digital pins constant, e.g. GPIO_1A)

Set up the electromagnet on a digital I/O pin (sets the pin mode to OUTPUT). You need to select and set up the electromagnet before using it.

activate_electromagnet(electromagnet_id, pin)

Parameters:

  • electromagnet_id (TOOL_ELECTROMAGNET_1_ID)
  • pin (the digital I/O pin used in setup_electromagnet)

Activate the electromagnet on the given digital I/O pin (pick object). This will set the pin state to HIGH.

deactivate_electromagnet(electromagnet_id, pin)

Parameters:

  • electromagnet_id (TOOL_ELECTROMAGNET_1_ID)
  • pin (the digital I/O pin used in setup_electromagnet)

Deactivate the electromagnet on the given digital I/O pin (place object). This will set the pin state to LOW.

grab_with_tool(tool_id)

Parameters:

  • tool_id (Tools IDs)

Call the grab function corresponding to tool_id. Equivalent to close_gripper, activate_electromagnet or pull_air_vacuum_pump, depending on the tool.

release_tool(tool_id)

Parameters:

  • tool_id (Tools IDs)

Call the release function corresponding to tool_id. Equivalent to open_gripper, deactivate_electromagnet or push_air_vacuum_pump, depending on the tool.
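
For example, with gripper 1 attached, assuming the setup from the template above; the speed values follow the recommended range:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
n.change_tool(TOOL_GRIPPER_1_ID)    # select the tool before using it
n.open_gripper(TOOL_GRIPPER_1_ID, 300)
n.close_gripper(TOOL_GRIPPER_1_ID, 300)

# Tool-agnostic equivalent:
n.release_tool(TOOL_GRIPPER_1_ID)
n.grab_with_tool(TOOL_GRIPPER_1_ID)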

Others

get_hardware_status()

Returns a HardwareStatus object (see in niryo_one_msgs package) containing useful info about the motors state, connection, temperature, etc. (temperature unit: °C)

wait(time)

Parameters:

  • time (seconds)

Blocks and waits for the given number of seconds.

Workspace

create_workspace(name, pose_origin, pose_1, pose_2, pose_3)

Parameters:

  • name of the workspace: WORKSPACE_NAME
    • Notes: Needs to be one of the available workspaces
  • robot pose when pointing at marker 0 with the pointing tool: POSE_0 (e.g. [0.2, 0.2, 0.0, 0.0, 0.0, 0.0] corresponds to [x, y, z, roll, pitch, yaw])
  • robot pose when pointing at marker 1 with the pointing tool: POSE_1 (e.g. [0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
  • robot pose when pointing at marker 2 with the pointing tool: POSE_2 (e.g. [0.1, 0.1, 0.0, 0.0, 0.0, 0.0])
  • robot pose when pointing at marker 3 with the pointing tool: POSE_3 (e.g. [0.1, 0.2, 0.0, 0.0, 0.0, 0.0])

Marker 0 is the "special" one with a black dot in the center; markers 1, 2 and 3 follow in clockwise order.

Creates or overwrites the workspace with the specified name. The pose arguments have to be such that the calibration tip touches the markers.
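
For example, using the four poses from the parameter list above; the workspace name "workspace_1" is an arbitrary example, and the setup from the template above is assumed:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
n.create_workspace('workspace_1',
                   [0.2, 0.2, 0.0, 0.0, 0.0, 0.0],   # pose at marker 0
                   [0.2, 0.1, 0.0, 0.0, 0.0, 0.0],   # pose at marker 1
                   [0.1, 0.1, 0.0, 0.0, 0.0, 0.0],   # pose at marker 2
                   [0.1, 0.2, 0.0, 0.0, 0.0, 0.0])   # pose at marker 3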

remove_workspace(name)

Parameters:

  • name of the workspace: WORKSPACE_NAME

Removes the workspace with the specified name.

get_workspace_ratio(name)

Parameters:

  • name of the workspace: WORKSPACE_NAME

Returns the aspect ratio of the specified workspace. The result will be width/height assuming that the special marker is in the top left corner.

get_workspace_list()

Returns the names of all existing workspaces.

Compute relative pose

get_target_pose_from_rel(workspace, height_offset, x_rel, y_rel, yaw_rel)

Parameters:

  • name of the workspace: WORKSPACE_NAME
  • height offset: HEIGHT_OFFSET
    • expressed in meters
  • workspace relative pose: X_REL (in [0,1]), Y_REL (in [0,1]), YAW_REL (in [-π,π])

Given a pose (x_rel, y_rel, yaw_rel) relative to a workspace, this function returns the robot pose in which the current tool will be able to pick an object at this pose. The height_offset argument (in m) defines how high the tool will hover over the workspace. If height_offset = 0, the tool will nearly touch the workspace. Returns: target_pose

Make sure to call change_tool before!
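
A minimal sketch, assuming the setup from the template above, an existing workspace named "workspace_1" (hypothetical name) and that the returned target_pose is a RobotState as described above:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
n.change_tool(TOOL_GRIPPER_1_ID)
# Pose hovering 5 cm above the center of the workspace (x_rel = y_rel = 0.5)
target_pose = n.get_target_pose_from_rel('workspace_1', 0.05, 0.5, 0.5, 0.0)
n.move_pose(target_pose.position.x, target_pose.position.y, target_pose.position.z,
            target_pose.rpy.roll, target_pose.rpy.pitch, target_pose.rpy.yaw)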

get_target_pose_from_cam(workspace, height_offset, shape, color)

Parameters:

  • name of the workspace: WORKSPACE_NAME
  • height offset: HEIGHT_OFFSET
    • expressed in meters
  • object shape to detect: SHAPE
  • object color to detect: COLOR

First detects the specified object using the camera and then returns the robot pose in which the object can be picked with the current tool. Returns: [object found] (bool), object pose, object shape, object color

Make sure to call change_tool before!

Vision Pick and place functions

vision_pick(workspace, height_offset, shape, color)

Parameters:

  • name of the workspace: WORKSPACE_NAME
  • height offset for picking: HEIGHT_OFFSET
    • expressed in m
  • object shape to detect: SHAPE
  • object color to detect: COLOR

Picks the specified object from the workspace. This function has multiple phases:

  1. detect object using the camera
  2. prepare the current tool for picking
  3. approach the object
  4. move down to the correct picking pose
  5. actuate the current tool
  6. lift the object

Please ensure that the camera is parallel to the workspace and that the 4 markers are in the field of view.

Returns: [object found] (bool), object shape, object color

Make sure to call change_tool before!
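
For example, assuming the setup from the template above, a workspace named "workspace_1" (hypothetical name) and a gripper attached:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
n.change_tool(TOOL_GRIPPER_1_ID)
obj_found, shape, color = n.vision_pick('workspace_1', 0.0, SHAPE_ANY, COLOR_ANY)
if obj_found:
    print(shape, color)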

move_to_object(workspace, height_offset, shape, color)

Same as get_target_pose_from_cam but directly moves to this position. Returns: [object found] (bool), object shape, object color

Make sure to call change_tool before!

pick_from_pose(x, y, z, roll, pitch, yaw)

Parameters:

  • x: value of x position (m) in float
  • y: value of y position (m) in float
  • z: value of z position (m) in float
  • roll: value of roll rotation (rad) in float
  • pitch: value of pitch rotation (rad) in float
  • yaw: value of yaw rotation (rad) in float

Pick at the specified robot pose. This function has multiple phases:

  1. move to the specified pose with an offset in z
  2. move down to the specified pose
  3. actuate (close) the current tool
  4. lift the object

Make sure to call change_tool before!

place_from_pose(x, y, z, roll, pitch, yaw)

Parameters:

  • x: value of x position (m) in float
  • y: value of y position (m) in float
  • z: value of z position (m) in float
  • roll: value of roll rotation (rad) in float
  • pitch: value of pitch rotation (rad) in float
  • yaw: value of yaw rotation (rad) in float

Place at the specified robot pose. This function has multiple phases:

  1. move to the specified pose with an offset in z
  2. move down to the specified pose
  3. actuate (open) the current tool
  4. move up again

Make sure to call change_tool before!
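
A short pick-and-place sketch, assuming the setup from the template above; the poses are arbitrary examples, adapt them to your scene:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
n.change_tool(TOOL_GRIPPER_1_ID)
n.pick_from_pose(0.25, 0.0, 0.15, 0.0, 1.57, 0.0)    # example pose, tool roughly pointing down
n.place_from_pose(0.0, 0.25, 0.15, 0.0, 1.57, 0.0)   # place at another example pose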

detect_object(workspace, shape, color)

Parameters:

  • name of the workspace: WORKSPACE_NAME
  • object shape to detect: SHAPE
  • object color to detect: COLOR

Please ensure that the camera is parallel to the workspace and that the 4 markers are in the field of view.

Detects the specified object using the camera. Returns [object found] (bool), relative object pose, object shape, object color

Camera image

get_compressed_image()

Returns the most recent camera image in compressed format. Uncompress it, e.g. using the following code snippet:

import numpy as np
import cv2


def uncompress_image(compressed_image):
    # Convert the compressed byte string to a numpy array
    # (use np.frombuffer instead of np.fromstring on recent numpy versions)
    np_arr = np.fromstring(compressed_image, np.uint8)
    # Decode the compressed data into a BGR OpenCV image
    img = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
    return img

get_calibration_object()

Returns the camera intrinsic matrix, the distortion coefficients and the new intrinsic matrix.

Conveyor

set_conveyor(conveyor_id, activate)

Parameters:

  • conveyor_id (int) id of the conveyor (Conveyor Enum)
  • activate (bool) equip / unequip the conveyor

Set the given conveyor (by ID) as equipped or unequipped (similar to change_tool).

control_conveyor(conveyor_id, control_on, speed, direction)

Parameters:

  • conveyor_id (int) id of the conveyor (Conveyor Enum)
  • control_on (bool) on/off conveyor
  • speed (int) speed as percentage
  • direction (int) (Conveyor Enum)

Control the given conveyor with on/off state, speed and direction.

update_conveyor_id(old_id, new_id)

Parameters:

  • old_id (int): current id of the conveyor (Conveyor Enum)
  • new_id (int): new id to assign to the conveyor (Conveyor Enum)

Update the current conveyor id (old_id) to the desired conveyor id (new_id).

get_conveyor_1_feedback()

Return the current conveyor_1 feedback (id / is_equipped / is_running / speed / direction)

get_conveyor_2_feedback()

Return the current conveyor_2 feedback (id / is_equipped / is_running / speed / direction)
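
For example, assuming the setup from the template above and a conveyor connected as conveyor 1:

# Setup as in the template above: rospy node initialized, n = NiryoOne()
n.set_conveyor(CONVEYOR_ID_ONE, True)                                        # equip the conveyor
n.control_conveyor(CONVEYOR_ID_ONE, True, 50, CONVEYOR_DIRECTION_FORWARD)    # run at 50% speed
n.wait(3)
n.control_conveyor(CONVEYOR_ID_ONE, False, 0, CONVEYOR_DIRECTION_FORWARD)    # stop
print(n.get_conveyor_1_feedback())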

Some useful functions

list_to_robot_state_msg(list_pos)

Parameters:

  • list_pos (list of floats) representing a pose as [x, y, z, roll, pitch, yaw]

Returns the corresponding RobotState.

robot_state_msg_to_list(robot_state)

Parameters:

  • robot_state (RobotState)

Returns a list of floats representing the pose as [x, y, z, roll, pitch, yaw].