To use the Python API:
- Connect to Niryo One via SSH.
  On Linux, open a terminal and type ssh niryo@(ip_address). When asked for a password, type "robotics". This is the default password; you can change it if you want.
  On Windows, open an SSH connection with PuTTY (username: "niryo", password: "robotics").
  You can find the robot's IP address by clicking the "Search for robots in network" button in Niryo One Studio. On Linux, you can also use nmap to find hosts on your network.
  For more information, please refer to: https://niryo.com/docs/niryo-one/developer-tutorials/connect-to-the-raspberry-pi-3-via-ssh/
- Create a Python file:

```
touch test_python_api.py
chmod +x test_python_api.py
```
- Use this template to write your code.

You can write the file directly over SSH on the Raspberry Pi with the nano text editor:

```
nano test_python_api.py
```

Follow this Python template:
```
#!/usr/bin/env python

from niryo_one_python_api.niryo_one_api import *
import rospy
import time

rospy.init_node('niryo_one_example_python_api')

n = NiryoOne()

try:
    # Your code here
    pass
except NiryoOneException as e:
    print e
    # Handle errors here
```
You can find some examples here.
You can also write the Python script on your own computer with your favorite text editor, then send it to the Raspberry Pi with scp:
```
scp -p SOURCE_FOLDER_PATH/FILE.py niryo@ROBOT_IP:DESTINATION_FOLDER_PATH
```
Example: this command will send the test.txt file to the /home/niryo folder:

```
scp -p ./test.txt niryo@ROBOT_IP:
```
- Run your code on the Raspberry Pi:
```
python test_python_api.py
```
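As a quick end-to-end illustration, here is a minimal sketch based on the template above. The method names calibrate_auto and move_joints follow the standard Niryo One Python API examples and are assumptions here, since this reference does not list the method names explicitly:

```
#!/usr/bin/env python

from niryo_one_python_api.niryo_one_api import *
import rospy

rospy.init_node('niryo_one_example_python_api')
n = NiryoOne()

try:
    # Auto-calibrate if needed (does nothing if the robot is already calibrated)
    n.calibrate_auto()
    # Move all 6 joints (values in radians) - assumed method name: move_joints
    n.move_joints([0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
except NiryoOneException as e:
    print e
```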
You can use these predefined constants (instead of raw numbers) in the Python API methods. Please read the examples to see how to use them; a short usage sketch also follows the list below.
- TOOL_NONE
- TOOL_GRIPPER_1_ID
- TOOL_GRIPPER_2_ID
- TOOL_GRIPPER_3_ID
- TOOL_ELECTROMAGNET_1_ID
- TOOL_VACUUM_PUMP_1_ID
- PIN_MODE_OUTPUT
- PIN_MODE_INPUT
- PIN_HIGH
- PIN_LOW
- GPIO_1A
- GPIO_1B
- GPIO_1C
- GPIO_2A
- GPIO_2B
- GPIO_2C
- SW_1
- SW_2
- OK
- KO
- AXIS_X
- AXIS_Y
- AXIS_Z
- ROT_ROLL
- ROT_PITCH
- ROT_YAW
- CONVEYOR_DIRECTION_BACKWARD
- CONVEYOR_DIRECTION_FORWARD
- CONVEYOR_ID_ONE
- CONVEYOR_ID_TWO
- COLOR_RED
- COLOR_GREEN
- COLOR_BLUE
- COLOR_ANY
- SHAPE_CIRCLE
- SHAPE_SQUARE
- SHAPE_ANY
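As a minimal illustration, these constants are passed directly to the API methods in place of raw numbers (change_tool is described later in this reference):

```
# Assumes the template setup above: n = NiryoOne()
# Select the first gripper by its named constant rather than a raw integer ID
n.change_tool(TOOL_GRIPPER_1_ID)
```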
```
position:
    x: (float)
    y: (float)
    z: (float)
rpy:
    roll: (float)
    pitch: (float)
    yaw: (float)
```
Calibrate robot motors automatically (by moving axis). If calibration is not needed, this method will do nothing.
Calibrate robot motors manually (the robot just needs to be in 'home' position and to have been auto calibrated at least once). If calibration is not needed, this method will do nothing.
Parameters:
- activate (0 or 1)
Activate or deactivate learning mode (= disable/enable torque on motors).
Returns a boolean that indicates whether learning mode is activated or not.
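A short sketch of toggling learning mode (activate_learning_mode is an assumed method name based on the standard API; the 0/1 argument matches the parameter above):

```
# Assumes the template setup above: n = NiryoOne()
n.activate_learning_mode(1)   # disable torque: the arm can be moved by hand
n.activate_learning_mode(0)   # re-enable torque
```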
Parameters:
- joints (array of 6 joint values, float, in radians)
Move the arm with a joint command.
Parameters:
- x: value of x position (m) in float
- y: value of y position (m) in float
- z: value of z position (m) in float
- roll: value of roll rotation (rad) in float
- pitch: value of pitch rotation (rad) in float
- yaw: value of yaw rotation (rad) in float
Move the arm with a pose command.
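A hedged sketch of a pose command (move_pose is an assumed method name; a joint command is shown in the end-to-end example above):

```
# Assumes the template setup above: n = NiryoOne()
# x, y, z in meters; roll, pitch, yaw in radians
n.move_pose(0.20, 0.0, 0.25, 0.0, 1.57, 0.0)
```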
Parameters:
- axis (Shift Pose Enum)
- value (m/rad)
Move the arm by shifting the current pose along the given axis by the given value.
Parameters:
- percentage (1-100)
Set the arm max velocity scaling factor.
Parameters:
- enable (0 or 1)
Enable or disable joystick mode (control the robot with a joystick controller).
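A hedged sketch of the velocity and shift commands (set_arm_max_velocity and shift_pose are assumed method names; the axis constants come from the list above):

```
# Assumes the template setup above: n = NiryoOne()
n.set_arm_max_velocity(50)    # limit arm speed to 50%
n.shift_pose(AXIS_Z, 0.05)    # shift the current pose 5 cm up along Z
n.shift_pose(ROT_YAW, 0.1)    # rotate the end effector by 0.1 rad around yaw
```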
Get all saved positions on the robot
Returns an array containing the current angles of all 6 axes (in radians).
Returns a RobotState object (see the niryo_one_msgs package) containing the pose (position in meters + orientation in radians) of the end effector tool.
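A hedged sketch of reading the arm state (get_joints and get_arm_pose are assumed method names; the field layout follows the position/rpy structure shown earlier):

```
# Assumes the template setup above: n = NiryoOne()
joints = n.get_joints()       # list of 6 joint angles in radians
state = n.get_arm_pose()      # RobotState with position (m) and rpy (rad)
print joints
print state.position.x, state.rpy.yaw
```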
Parameters:
- pin (Digital pins)
- mode (0: OUTPUT, 1: INPUT)
Set a digital I/O pin to INPUT or OUTPUT mode.
Parameters:
- pin (Digital pins)
- state (0: LOW, 1: HIGH)
Set a digital I/O pin to LOW or HIGH. Note that the pin must have been previously set as OUTPUT.
Parameters:
- pin (Digital pins)
Returns the current pin state (0: LOW, 1: HIGH).
Returns a DigitalIOState object (see the niryo_one_msgs package) containing information (mode: input or output + state: high or low) for all 6 of the 5V digital pins and both 12V switches.
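A hedged digital I/O sketch (pin_mode, digital_write and digital_read are assumed method names matching the descriptions above; the pin and mode/state constants come from the list earlier):

```
# Assumes the template setup above: n = NiryoOne()
n.pin_mode(GPIO_1A, PIN_MODE_OUTPUT)   # configure pin 1A as an output
n.digital_write(GPIO_1A, PIN_HIGH)     # drive it HIGH
n.pin_mode(GPIO_2A, PIN_MODE_INPUT)    # configure pin 2A as an input
state = n.digital_read(GPIO_2A)        # 0 (LOW) or 1 (HIGH)
```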
Parameters:
- tool id (Tools IDs)
Change the currently attached tool. Before you execute any action on a tool, you have to select it with this method.
Returns the current tool id.
Parameters:
- tool id (Tools IDs)
- open speed (between 0 and 1000, recommended: between 100 and 500)
Open gripper at selected speed.
Parameters:
- tool id (Tools IDs)
- close speed (between 0 and 1000, recommended: between 100 and 500)
Close gripper at selected speed. The gripper will stop when it detects the object to grab.
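A gripper sketch using the methods named above (change_tool, open_gripper, close_gripper); the speed values are only examples within the recommended range:

```
# Assumes the template setup above: n = NiryoOne()
n.change_tool(TOOL_GRIPPER_2_ID)          # select the attached gripper first
n.open_gripper(TOOL_GRIPPER_2_ID, 400)    # open at speed 400
n.close_gripper(TOOL_GRIPPER_2_ID, 400)   # close; stops when an object is detected
```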
Parameters:
- tool id (Tools IDs)
Activate vacuum pump (pick object).
Parameters:
- tool id (Tools IDs)
Deactivate vacuum pump (place object).
Parameters:
- tool id (Tools IDs)
- pin (Digital pins)
Set up the electromagnet on a digital I/O pin (sets the pin mode to OUTPUT). You need to select and set up the electromagnet before using it.
Parameters:
- tool id (Tools IDs)
- pin (Digital pins)
Activate electromagnet on digital I/O (pick object). This will set the pin state to HIGH.
Parameters:
- tool id (Tools IDs)
- pin (Digital pins)
Deactivate electromagnet on digital I/O (place object). This will set the pin state to LOW.
Parameters:
- tool id (Tools IDs)
Call the grab function corresponding to tool_id. Equivalent to close_gripper, activate_electromagnet or pull_air_vacuum_pump, depending on the tool.
Parameters:
- tool id (Tools IDs)
Call the release function corresponding to tool_id. Equivalent to open_gripper, deactivate_electromagnet or push_air_vacuum_pump, depending on the tool.
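A hedged sketch of the electromagnet workflow and the generic grab/release helpers. activate_electromagnet and deactivate_electromagnet are named above; setup_electromagnet, grab_with_tool and release_with_tool are assumed names:

```
# Assumes the template setup above: n = NiryoOne()
n.change_tool(TOOL_ELECTROMAGNET_1_ID)
n.setup_electromagnet(TOOL_ELECTROMAGNET_1_ID, GPIO_1A)       # sets the pin to OUTPUT (assumed name)
n.activate_electromagnet(TOOL_ELECTROMAGNET_1_ID, GPIO_1A)    # pick: pin goes HIGH
n.deactivate_electromagnet(TOOL_ELECTROMAGNET_1_ID, GPIO_1A)  # place: pin goes LOW

# Generic helpers that dispatch on the selected tool (assumed names)
n.grab_with_tool(TOOL_ELECTROMAGNET_1_ID)
n.release_with_tool(TOOL_ELECTROMAGNET_1_ID)
```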
Returns a HardwareStatus object (see the niryo_one_msgs package) containing useful information about the motors' state, connection, temperature, etc. (temperature unit: °C).
Parameters:
- time (seconds)
Blocks and waits for the given number of seconds.
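A short sketch of the status and wait helpers (get_hardware_status and wait are assumed method names matching the descriptions above):

```
# Assumes the template setup above: n = NiryoOne()
status = n.get_hardware_status()   # motor state, connection, temperatures (degrees C), etc.
print status
n.wait(2)                          # block for 2 seconds
```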
Parameters:
- name of the workspace: WORKSPACE_NAME
- Notes: Needs to be one of the available workspaces
- robot pose when pointing at marker 0 with the pointing tool: POSE_0 (e.g. [0.2, 0.2, 0.0, 0.0, 0.0, 0.0], which corresponds to [x, y, z, roll, pitch, yaw])
- robot pose when pointing at marker 1 with the pointing tool: POSE_1 (e.g. [0.2, 0.1, 0.0, 0.0, 0.0, 0.0])
- robot pose when pointing at marker 2 with the pointing tool: POSE_2 (e.g. [0.1, 0.1, 0.0, 0.0, 0.0, 0.0])
- robot pose when pointing at marker 3 with the pointing tool: POSE_3 (e.g. [0.1, 0.2, 0.0, 0.0, 0.0, 0.0])
Marker number 0 is the "special" one with a black dot in its center; the other markers follow in clockwise order.
Creates or overwrites the workspace with the specified name. The pose arguments have to be such that the calibration tip touches the markers.
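A hedged sketch of workspace creation (create_workspace is an assumed method name; the four poses reuse the example values listed above):

```
# Assumes the template setup above: n = NiryoOne()
# Each pose is [x, y, z, roll, pitch, yaw], recorded while the calibration tip
# touches the corresponding marker (marker 0 is the one with the black dot).
n.create_workspace('my_workspace',
                   [0.2, 0.2, 0.0, 0.0, 0.0, 0.0],   # POSE_0
                   [0.2, 0.1, 0.0, 0.0, 0.0, 0.0],   # POSE_1
                   [0.1, 0.1, 0.0, 0.0, 0.0, 0.0],   # POSE_2
                   [0.1, 0.2, 0.0, 0.0, 0.0, 0.0])   # POSE_3
```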
Parameters:
- name of the workspace: WORKSPACE_NAME
Removes the workspace with the specified name.
Parameters:
- name of the workspace: WORKSPACE_NAME
Returns the aspect ratio of the specified workspace. The result will be width/height assuming that the special marker is in the top left corner.
Returns the names of all existing workspaces.
Parameters:
- name of the workspace: WORKSPACE_NAME
- height offset: HEIGHT_OFFSET
- expressed in meters
- workspace relative pose: X_REL (in [0,1]), Y_REL (in [0,1]), YAW_REL (in [-π,π])
Given a pose (x_rel, y_rel, yaw_rel) relative to a workspace, this function returns the robot pose at which the current tool will be able to pick an object at that relative pose. The height_offset argument (in m) defines how high the tool will hover over the workspace. If height_offset = 0, the tool will nearly touch the workspace. Returns: target_pose
Make sure to call change_tool before!
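A hedged sketch (get_target_pose_from_rel is an assumed name for the function described above, and the argument order simply follows the parameter list):

```
# Assumes the template setup above and an existing workspace called 'my_workspace'
n.change_tool(TOOL_GRIPPER_1_ID)
# Hover 5 cm above the center of the workspace, with no extra yaw
target_pose = n.get_target_pose_from_rel('my_workspace', 0.05, 0.5, 0.5, 0.0)
```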
Parameters:
- name of the workspace: WORKSPACE_NAME
- height offset: HEIGHT_OFFSET
- expressed in meters
- object shape to detect: SHAPE
- must be one of: Shape Enum
- object color to detect: COLOR
- must be one of: Color Enum
First detects the specified object using the camera and then returns the robot pose in which the object can be picked with the current tool. Returns: [object found] (bool), object pose, object shape, object color
Make sure to call change_tool before!
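A hedged sketch (get_target_pose_from_cam is named later in this reference; the argument order follows the parameter list above and is an assumption):

```
# Assumes the template setup above and an existing workspace called 'my_workspace'
n.change_tool(TOOL_GRIPPER_1_ID)
obj_found, obj_pose, shape, color = n.get_target_pose_from_cam(
    'my_workspace', 0.0, SHAPE_ANY, COLOR_ANY)
if obj_found:
    print shape, color
```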
Parameters:
- name of the workspace: WORKSPACE_NAME
- height offset for picking: HEIGHT_OFFSET
- expressed in m
- object shape to detect: SHAPE
- must be one of: Shape Enum
- object color to detect: COLOR
- must be one of: Color Enum
Picks the specified object from the workspace. This function has multiple phases:
- detect object using the camera
- prepare the current tool for picking
- approach the object
- move down to the correct picking pose
- actuate the current tool
- lift the object
Please ensure that the camera is parallel to the workspace and that the 4 markers are in the field of view.
Returns: [object found] (bool), object shape, object color
Make sure to call change_tool before!
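A hedged sketch of the full vision pick (vision_pick is an assumed name for the function described above; the argument order follows the parameter list):

```
# Assumes the template setup above and an existing workspace called 'my_workspace'
n.change_tool(TOOL_GRIPPER_1_ID)
obj_found, shape, color = n.vision_pick('my_workspace', 0.0, SHAPE_CIRCLE, COLOR_RED)
if not obj_found:
    print 'No red circle found in the workspace'
```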
Same as get_target_pose_from_cam, but directly moves to this position. Returns: [object found] (bool), object shape, object color
Make sure to call change_tool before!
Parameters:
- x: value of x position (m) in float
- y: value of y position (m) in float
- z: value of z position (m) in float
- roll: value of roll rotation (rad) in float
- pitch: value of pitch rotation (rad) in float
- yaw: value of yaw rotation (rad) in float
Pick at the specified robot pose. This function has multiple phases:
- move to the specified pose with an offset in z
- move down to the specified pose
- actuate (close) the current tool
- lift the object
Make sure to call change_tool before!
Parameters:
- x: value of x position (m) in float
- y: value of y position (m) in float
- z: value of z position (m) in float
- roll: value of roll rotation (rad) in float
- pitch: value of pitch rotation (rad) in float
- yaw: value of yaw rotation (rad) in float
Place at the specified robot pose. This function has multiple phases:
- move to the specified pose with an offset in z
- move down to the specified pose
- actuate (open) the current tool
- move up again
Make sure to call change_tool before!
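A hedged pick-and-place sketch (pick_from_pose and place_from_pose are assumed names for the two functions described above):

```
# Assumes the template setup above: n = NiryoOne()
n.change_tool(TOOL_GRIPPER_1_ID)
# x, y, z in meters; roll, pitch, yaw in radians
n.pick_from_pose(0.25, 0.0, 0.15, 0.0, 1.57, 0.0)
n.place_from_pose(0.0, 0.25, 0.15, 0.0, 1.57, 0.0)
```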
Parameters:
- name of the workspace: WORKSPACE_NAME
- object shape to detect: SHAPE
- must be one of: Shape Enum
- object color to detect: COLOR
- must be one of: Color Enum
Please ensure that the camera is parallel to the workspace and that the 4 markers are in the field of view.
Detects the specified object using the camera. Returns [object found] (bool), relative object pose, object shape, object color
Returns the most recent camera image in compressed format. You can decompress it, e.g., with the following code snippet:
```
import numpy as np
import cv2

def uncompress_image(compressed_image):
    # Convert the compressed byte string to a numpy array, then decode it with OpenCV
    np_arr = np.fromstring(compressed_image, np.uint8)
    img = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
    return img
```
Returns the camera intrinsics matrix, the distortion coefficients and the new intrinsics matrix.
Parameters:
- conveyor_id (int) id of the conveyor (Conveyor Enum)
- activate (bool) equip / unequip the conveyor
Set the given conveyor (by ID) as equipped or unequipped (similar to change_tool).
Parameters:
- conveyor_id (int) id of the conveyor (Conveyor Enum)
- control_on (bool) on/off conveyor
- speed (int) speed as percentage
- direction (int) (Conveyor Enum)
Control the given conveyor: on/off state, speed and direction.
Parameters:
- old_id (int) id of the current conveyor_id (Conveyor Enum)
- new_id (int) id to set as current conveyor_id (Conveyor Enum)
Update the current conveyor id from old_id to the desired new_id.
Return the current conveyor_1 feedback (id / is_equipped / is_running / speed / direction)
Return the current conveyor_2 feedback (id / is_equipped / is_running / speed / direction)
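A hedged conveyor sketch (set_conveyor, control_conveyor and get_conveyor_1_feedback are assumed names matching the descriptions above; the constants come from the Conveyor Enum list earlier):

```
# Assumes the template setup above: n = NiryoOne()
n.set_conveyor(CONVEYOR_ID_ONE, True)             # equip conveyor 1
n.control_conveyor(CONVEYOR_ID_ONE, True, 50,     # run at 50% speed
                   CONVEYOR_DIRECTION_FORWARD)
print n.get_conveyor_1_feedback()                 # id / is_equipped / is_running / speed / direction
n.control_conveyor(CONVEYOR_ID_ONE, False, 0,
                   CONVEYOR_DIRECTION_FORWARD)    # stop it
```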
Parameters:
- list_pos (float array) representing a pose as [x, y, z, roll, pitch, yaw]
Returns the corresponding RobotState.
Parameters:
- robot_states: RobotState
Returns a float array representing the pose as [x, y, z, roll, pitch, yaw].