Home
- Page Overview
- What is UM FingerVision?
- Tactile Sensors: A Background
- Manufacturing
- Assembly
- Hardware
- Algorithmic Operating Principles
- Usage Guide
- Appendix
The UM FingerVision project is a concept for a low-cost, camera-based sensor used to detect shear and normal forces on a robotic gripper. The project at the University of Michigan is an extension and redesign of the original FingerVision module designed by Akihiko Yamaguchi and Chris Atkeson at Carnegie Mellon University [1].
The fingertip sensor, shown in figure 1, senses the shear and normal forces applied to the front face of the fingertip. A major utility of such a mechanism is the ability to detect failure in grasps of target objects.
Figure 1: Completed Fingertip Sensor
By comparing the shear force transduced by the sensor to the shear force expected from the target object, the sensor can determine whether the object is still held by the manipulator. This is of chief importance for the manipulation of deformable objects, wherein visual verification of a grasp does not imply force closure, as the object may not have sufficient rigidity in the given grasp configuration. Additionally, the sensor is capable of localizing the object on the front panel of the fingertip, though this capability has yet to be reliably modeled.
This sensor is designed for a Robotiq three-finger gripper and a Raspberry Pi 3. The repository includes the CAD files of the fingertip and the files required to run the sensor algorithm.
Applied-force sensing is useful for robotic end effectors. The ability to detect how much force is being applied to a grasped object allows fine control over the force exerted on deformable objects, as well as a stable, controlled release or transfer of the object to another robot or person.
However, the current sensors on the Robotiq gripper provide limited data, and they are incapable of detecting shear force. Without accurate shear force data, the required grip force, which is primarily a function of the friction between the gripper and the object, cannot be exactly determined, adding uncertainty to the success of the grasp.
The construction of sensors for robotic end effectors is a well-developed field, and several solutions have been proposed, as summarized by Saudabayev and Varol [2]. Chief among these are resistive, strain gauge, piezoelectric, and optical sensor technologies. An optical tactile sensor provides more types of data (normal and shear forces, plus supplementary image data) for perception and manipulation algorithms to utilize. Optical sensors also have the advantage of a long history of prior work: Yamaguchi and Atkeson [1] provide the basis for the mechanical design of such sensors, Ferrier and Brockett [3] provide the mathematical foundation, and Ohka et al. [4] detail similar tactile sensors with varying geometries, acting as robust benchmarks for the project.
The sensor is constructed out of four components:
- Camera module
- Fingertip housing
- Fisheye lens
- Fingertip front panel
With the exception of the camera module, which is used as purchased, each component requires some preprocessing before assembly. These processing steps are described in the subsections below.
The fingertip housing is a 3D-printed structure that houses the rest of the sensor's components. The design is printed on the Formlabs Form 2 printer using Durable V2 resin. It interfaces with the Robotiq 3-finger gripper via two 8-32 x 3/8 in socket head cap screws, and it contains two snap-fit attachment features that connect to the fingertip front panel.
The snap-fit attachments eliminate fasteners at the component interface, reducing manufacturing complexity. However, due to the fracture toughness required of a snap-fit cantilever, the design must be printed in a resin with sufficient mechanical properties. Thus, if the design is to be built in any other medium, that material must have an ultimate tensile strength and Young's modulus equal to or greater than those of the Durable resin. The mechanical properties of the resin are shown in figure 2 below.
Figure 2: Mechanical properties of durable resin used in manufacture of fingertip housing
In addition, any manufacturing method for the fingertip must have a resolution of 0.05 mm or finer. The fingertip housing is provided as both STL and PRT files.
After printing, the fingertip housing is sprayed with matte black paint to shield the module from external lighting.
The fisheye lens is a 130108MPF-M7 lens. While the lens is designed for the form factor of the listed camera module, the camera's lens mount is threaded for an M6 screw, which, accounting for thread pitch, has an inner diameter of 6.23 mm. By comparison, the M7-threaded lens has an experimentally measured outer diameter of 6.67 mm. As such, the outer diameter of the lens must be turned down to fit into the camera housing. This was done on a lathe with a standard 10 mm collet.
It should be noted that although the lens ships with a matching M7 mounting module, removing the camera's existing lens mount is a delicate process with an extremely high likelihood of destroying the camera. As such, altering the lens is the preferred method of manufacture.
The fingertip front panel is a composite constructed from an acrylic plate connected to a deformable membrane of silicone. This composite is shown in figure 3.
Figure 3: Fingertip front panel composite structure
The fingertip front panel requires significant manufacturing time. The materials required in the process are listed below:
- 2 mm thick acrylic plate
- Silicones, Inc. XP-565, transparent, Shore hardness A-16
- ComposiMold
- Black American Craft 1mm diameter Microbeads
- White silicone pigment
The 2 mm acrylic plate is cut with a laser cutter according to the dimensions given in the STL and PRT files. The plate is cut into a rectangle measuring, after machine tolerance, 1.15748 in by 1.30 in. The first of these dimensions, hereafter called the main axis, is the key dimension of the panel, as it engages with the snap-fit features on the housing.
The silicone layers must be cast in a mold. To do this, three components are first 3D printed, herein named the inner blank, the outer blank, and the weighting panel, shown in figure 4. The mechanical properties of these components are not critical.
Figure 4: Inner Blank Component (left), Outer blank component (center), Weighting Panel (right)
The first two of these components are used to construct the negatives of the silicone membrane: the molds into which the silicone will be poured. To do this, fasten the blanks into two separate containers with a strong adhesive such as double-sided tape. Then heat the ComposiMold material in a microwave (heating time varies with volume) and pour it over the blanks. Cool the mold rapidly in a freezer, or at room temperature overnight. The resulting molds can then be removed from the containers, yielding the two mold structures shown in figure 5.
Figure 5: Mold in container (left), Mold of outer blank (right)
Note that the molds will contract over time, and as such must be recast periodically once they contract past the initial tolerance of the design.
The silicone layup is the process of casting the silicone into the aforementioned molds. The first step in this process is to use a set of tweezers to place a single 1 mm diameter bead into each of the dimples on the surface of the inner mold. After the beads are placed, the cut acrylic plate should be set on the ridges of the inner mold, leaving two large holes on either side of the plate. This process is shown in figure 6.
Figure 6: Process of putting beads and acrylic plate in mold in preparation of silicone layer
Once the mold is assembled, the two component agents of the XP-565 silicone should be mixed in a 10:1 mass ratio, as dictated by the documentation printed on the resin. Each panel requires approximately 15 grams of clear resin. Once sufficiently mixed, the mixture should be degassed in a vacuum chamber at 26-28 psi. Hold the mixture under vacuum for around 1 hour, the time required for all air introduced during mixing to escape and for the mixture to turn clear. Once this is complete, the vacuum can be released and the silicone poured into the mold through the two holes. This must be done slowly so as not to dislodge the beads; the working time of the silicone is approximately 4 hours, so the process need not be rushed. Silicone may be poured over the acrylic or over the mold as well: it is not difficult to remove afterward.
Once the silicone is poured into the mold, the weighting panel should be placed on the plate so that its two ridges align with the main axis of the panel. Finally, weight should be placed on the panel to prevent it from rising while the silicone cures. The completed structure is shown in figure 7.
Figure 7: Silicone while curing in mold
Allow the silicone to partially cure (approximately 36 hours), then remove the part by flexing the mold and prying the silicone out. After removal, the beads should have adhered to the silicone, and the silicone to the acrylic front panel.
The second and final layer of silicone is the opaque layer, which blocks external light from entering the camera's view. Prepare the silicone as before, including the degassing step, but add a small amount (approximately 20 mg) of white silicone pigment while mixing; this creates an opaque white gel. After degassing, pour this silicone onto the outer mold, place the previously constructed acrylic/clear-silicone/bead composite on top of it, and apply firm downward pressure with the weighting panel. After 48 hours, remove the panel. The completed panel is shown in figure 8.
Figure 8: Completed panel with silicone
Any excess silicone in the mold will adhere to the panel and should be removed with a small blade. Ensure that there is no silicone on the back of the acrylic panel or on the front of the panel near the snap-fit attachment points.
Note that if at any time during operation the silicone membrane separates from the acrylic, it can be readily re-glued using any clear-drying adhesive. To ensure this does not alter the deformation of the membrane, apply only a small amount of adhesive to the edges of the silicone, and never directly atop or near any beads.
The camera should initially be connected to a Raspberry Pi (details are given in the following sections). Once its operation is verified, the lens already attached to the camera should be removed with small pliers. The camera should then be placed in the mounting location shown in figure 9 and fastened with a non-expanding adhesive.
Figure 9: Mounting location for camera
The prepared fisheye lens should then be placed in the lens hole of the camera. Once the lens is loosely seated, snap the fingertip front panel onto the fingertip using the provided snap fits. Then, using small tweezers, adjust the height of the fisheye lens in the camera housing to bring the beads into focus, and fasten the lens to the camera with quick-dry epoxy. The assembly is now complete. The front panel can be removed and replaced as needed during operation.
The software for computing the forces on the fingertip sensor runs on a standard Raspberry Pi 3 running the Ubuntu MATE operating system with ROS Kinetic installed. The sensor is currently controlled, and its output monitored, through a keyboard/mouse and an HDMI display. Although data output from the sensor is currently monitored directly on the Raspberry Pi, a ROS publish/subscribe framework is in place for sending displacement and force data over the network to another computer for further processing.
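As a simple illustration of that publish/subscribe path, the minimal rospy listener below could run on the remote computer. The topic name /fv_standalone/force and the geometry_msgs/WrenchStamped message type are assumptions for illustration; the actual names used by the repository may differ.

```python
#!/usr/bin/env python
# Minimal sketch of a remote force listener. The topic name and message
# type below are hypothetical placeholders, not the repository's
# documented interface.
import rospy
from geometry_msgs.msg import WrenchStamped

def on_force(msg):
    # Log the shear (x, z) and normal (y) force estimates as they arrive.
    f = msg.wrench.force
    rospy.loginfo("shear: (%.3f, %.3f) N, normal: %.3f N", f.x, f.z, f.y)

if __name__ == '__main__':
    rospy.init_node('fv_force_listener')
    rospy.Subscriber('/fv_standalone/force', WrenchStamped, on_force)
    rospy.spin()  # process callbacks until shutdown
```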
In addition to the peripherals for controlling and monitoring the sensor, LEDs are attached to the edges of the fingertip to aid visibility of the tracking dots in a variety of lighting conditions. These LEDs are connected in parallel to the Raspberry Pi's 5 V supply pin. Lastly, the camera module connects via the camera connector specific to the Raspberry Pi 3.
The sensor operates by detecting the initial positions of the beads embedded in the fingertip front panel and then monitoring their displacement as the silicone surface deforms while the fingertip applies force to an object. This displacement is then converted into a force estimate.
To track the beads and thereby compute their displacement, OpenCV's blob detection class, SimpleBlobDetector, is used. This module finds blobs in a sample image by generating a set of binary images from a set of intensity thresholds. The centers of the blobs in each binary image are averaged across the images to derive a set of keypoints, which are then filtered on several parameters, notably convexity, area, and circularity, yielding keypoints corresponding to the locations of the beads in the image.
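The snippet below is a minimal sketch of this detection step using OpenCV's Python bindings; the threshold and filter values are illustrative guesses, not the tuned parameters used by simple_blob_tracker4.

```python
# Sketch of bead detection with OpenCV's SimpleBlobDetector.
# All parameter values here are illustrative, not the project's tuning.
import cv2

params = cv2.SimpleBlobDetector_Params()
params.minThreshold = 10        # first intensity threshold for binarization
params.maxThreshold = 200       # last intensity threshold
params.thresholdStep = 10       # spacing between successive thresholds
params.filterByArea = True
params.minArea = 20             # reject specks smaller than a bead
params.filterByCircularity = True
params.minCircularity = 0.7     # beads project as near-circular blobs
params.filterByConvexity = True
params.minConvexity = 0.8

detector = cv2.SimpleBlobDetector_create(params)  # OpenCV 3 API

frame = cv2.imread('fingertip_frame.png', cv2.IMREAD_GRAYSCALE)
keypoints = detector.detect(frame)                # one keypoint per bead
centers = [kp.pt for kp in keypoints]             # (x, y) bead centers in pixels
```

Bead displacements are then the differences between these centers and the centers recorded at calibration time.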
The current algorithm assumes a linear dependence between the average displacement of the beads and the average force applied to the fingertip panel. As such, the calculated displacement is multiplied by a constant gain, determined experimentally, to derive an estimated force. These constants are determined using the calibration procedure described in the calibration section.
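As a concrete (purely illustrative) example: if the beads' mean x-displacement is 4 px and the calibrated gain were c_x = 0.05 N/px, the estimated shear force would be F_x = 0.05 × 4 = 0.2 N. The actual gains depend on the silicone and are obtained from calibration.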
The algorithm assumes independent deformation along each axis. Using the coordinate system provided in figure 10 below, the detected shear force corresponds to force in the x-z plane; the normal force is normal to this plane.
Figure 10: Coordinate system of fingertip sensor
In theory, shear forces (in the x and z directions) can be inferred from the average displacement of the dots in the x or z direction, and the normal force can be inferred from changes in the size of the dots (a larger dot would indicate a closer bead and thus a normal force). In practice, however, the size change of the dots is so minute that any normal force measurement based on this variable would be too noisy. This issue was experienced in Yamaguchi and Atkeson's experiments as well as our own [1].
Thus, to extract normal forces reliably, two algorithms were implemented. The first is the same approach used by Yamaguchi and Atkeson for deriving normal forces: computing the Euclidean norm of the two shear displacements. The approach works in the following way (a code sketch follows the list):
- A local “displacement” in the y direction is calculated for each dot: dy = sqrt(dx^2 + dz^2)
- This displacement is multiplied by a predetermined constant to get a local normal force: fy = cy * dy
- The local normal forces are averaged to get a global normal force
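A minimal sketch of this first method is shown below; dx and dz are per-dot displacements in pixels, and the gains kx, kz, and cy stand in for the calibrated fscale[] coefficients.

```python
# Sketch of the norm-based force estimate (method "norm").
# Gains are hypothetical placeholders for the calibrated coefficients.
import numpy as np

def forces_norm(dx, dz, kx=0.05, kz=0.05, cy=0.10):
    dx, dz = np.asarray(dx, dtype=float), np.asarray(dz, dtype=float)
    fx = kx * dx.mean()              # shear: linear in mean displacement
    fz = kz * dz.mean()
    dy = np.sqrt(dx**2 + dz**2)      # local "y displacement" per dot
    fy = cy * dy.mean()              # average the local normal forces
    return fx, fy, fz
```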
This approach works particularly well when pure normal forces are applied (figure 11, left): a positive normal force is registered, while the measured shear forces are zero because the average displacements cancel out. However, the normal force measurement is less accurate, or even erroneous, when shear forces are present. If, for example, a pure shear is applied (figure 11, center), the sensor will register a positive normal force even though none is present.
Figure 11: Four beads under pure normal force (left), Four beads under pure shear (center), Four beads under shear and normal force (right)
In an attempt to combat this issue, a second algorithm was implemented to better isolate measurements in the shear and normal directions. The algorithm exploits the fact that normal forces on the sensor cause the dots to diverge from the center of the sensor; this divergence can be captured by the standard deviation of the dots about the shear center. The algorithm works as follows (a code sketch follows the list):
- The means of dx and dz are computed to determine the direction of the applied shear force (and thus the center (cx, cz) of the dot movements)
- The squared Euclidean distance of each dot from this center is computed: (cx - dx)^2 + (cz - dz)^2
- All squared distances are averaged and the square root of the result is taken, giving the standard deviation of the dot displacements about the shear center
- This standard deviation is correlated with the normal force on the sensor
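A sketch of this second method, under the same assumptions as before:

```python
# Sketch of the shear-center standard-deviation estimate (method "dist").
# Gains are hypothetical placeholders for the calibrated coefficients.
import numpy as np

def forces_dist(dx, dz, kx=0.05, kz=0.05, ky=0.10):
    dx, dz = np.asarray(dx, dtype=float), np.asarray(dz, dtype=float)
    cx, cz = dx.mean(), dz.mean()          # shear center of the dot field
    fx, fz = kx * cx, kz * cz              # shear forces, as in the first method
    d2 = (cx - dx)**2 + (cz - dz)**2       # squared distance of each dot from the center
    fy = ky * np.sqrt(d2.mean())           # std. dev. correlates with normal force
    return fx, fy, fz
```

Under a pure shear, every dot moves by roughly the same amount, so the standard deviation about the shear center stays near zero and no spurious normal force is reported.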
A more detailed description of the function of the code can be found in this repository's readme: https://github.com/mpatankar/UM-FingerVision
The following software dependencies are required to run the UM version of FingerVision:
Table 1: Dependencies required for tactile sensing algorithm
| Dependency/Library | Usage |
|---|---|
| OpenCV 3 | Image processing and blob detection |
| ROS Kinetic | Communication between the blob detection and force computation nodes |
| Python 2.7 | Required to run the provided scripts |
The software consists of a ROS publisher node that performs the dot tracking (a lightly modified version of simple_blob_tracker4 from Yamaguchi and Atkeson's FingerVision repository) and a ROS subscriber node that translates dot movements into forces. The publisher node can be started with:
roscd fv_standalone
rosrun fv_standalone fv_node
Currently, the publisher node must be run on the Raspberry Pi 3 with a monitor attached, as no form of video streaming has yet been implemented for performing the dot tracking elsewhere. The publisher node also has a minimal user interface. Pressing the 'c' key brings up a series of sliders for adjusting the intensity thresholds and noise filtering of raw images; pressing 'c' while the sliders are present re-calibrates the tracked dots and zeroes the displacements. Pressing the 'p' key saves the current parameters of the simple_blob_tracker4.cpp program to a YAML file, and pressing the 'l' key loads the parameter file.
The subscriber node can be run with the following command:
cd fv_standalone/src
python vs_test_force.py -y [norm|dist]
where the "norm" option will calculate normal forces based of the norm of the x and z displacements and the "dist" option will calculate normal forces based of the standard deviation of the distance of dots from the shear center. This program currently print the results of the force calculations. Although a separate ROS message could be added if the transmission of this data is desired.
In order to calibrate the sensor, a holding block and a set of tilted holders must be manufactured. These calibration blocks were manufactured on the Form 2. The holding block mounts to the fingertip via two 8-32 x 3/8 in socket head cap screws and fits into the set of tilted holding blocks, which angle the fingertip at 15, 30, and 45 degrees about either one or two axes. Along with rotation of the holding block, this yields 13 possible configurations of the calibration set, shown in figure 12.
Figure 12: Set of Calibration blocks used to angle the fingertip
In order to calibrate, several masses should be selected and weighed. A calibration test is constructed by choosing one such mass and one calibration block; the data for each test is then entered into the config file test_config.json. Each calibration configuration is represented by a JSON list of 4 floats: the mass of the weight in grams, the rotation of the sensor in the x direction (in degrees), the rotation in the y direction (in degrees), and the torque on the sensor. Although torque is included in the configuration file, no calibration is currently performed on torque measurements. The following is an example of the JSON file for a small test set:
{"tests":[
[135.6, 0, 0, 0],
[584.2, 0, 0, 0],
[685.2, 0, 0, 0],
[135.6, 15, 0, 0],
[584.2, 15, 0, 0],
[685.2, 15, 0, 0]
]}
Once the tests are saved in the config file, a test set can be generated:
cd fv_standalone/src
python gentests.py -j test_config.json
This command will generate a test set consisting of json files containing expected shear and normal forces and recorded blob displacements. The script also saves images of the dots under certain configurations.
After the generation of a test set, linear coefficients for each force direction can be determined by running the following python script:
python optimize.py -y [norm|dist]
This script scans through each JSON file in the test set and fits a linear least-squares curve to the displacement vs. expected force data. The resulting coefficients can then be placed in the fscale[] list in the vs_test_force.py and vs_force_pca.py scripts. The "norm" and "dist" options indicate which method to use when fitting the curve for the normal force.
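The fit itself reduces to a one-parameter least-squares problem per axis. Below is a minimal sketch, assuming each test contributes a mean displacement and an expected force for a given axis (the exact data layout of the generated JSON files may differ):

```python
# Sketch of a per-axis linear least-squares fit through the origin,
# minimizing sum_i (k * d_i - F_i)^2. Data values are hypothetical.
import numpy as np

def fit_gain(displacements, forces):
    d = np.asarray(displacements, dtype=float)
    F = np.asarray(forces, dtype=float)
    return float(np.dot(d, F) / np.dot(d, d))  # closed-form slope

# Hypothetical calibration pairs for one axis (pixels vs. newtons):
k = fit_gain([2.1, 8.8, 10.4], [0.40, 1.72, 2.01])
```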
[1] A. Yamaguchi and C. G. Atkeson, “Implementing tactile behaviors using FingerVision,” 2017 IEEE-RAS 17th International Conference on Humanoid Robotics (Humanoids), 2017.
[2] A. Saudabayev and H. A. Varol, "Sensors for Robotic Hands: A Survey of State of the Art," in IEEE Access, vol. 3, pp. 1765-1782, 2015.
[3] N. J. Ferrier and R. W. Brockett, “Reconstructing the Shape of a Deformable Membrane from Image Data,” The International Journal of Robotics Research, vol. 19, no. 9, pp. 795–816, 2000.
[4] M. Ohka, H. Kobayashi and Y. Mitsuya, "Sensing characteristics of an optical three-axis tactile sensor mounted on a multi-fingered robotic hand," 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, 2005, pp. 493-498.