
In this research, six brands of soft drinks are to be picked up by a robot with a monocular Red Green Blue (RGB) camera. The bottles need to be located and classified by brand before being picked up. The Mask Regional Convolutional Neural Network (Mask R-CNN), a mask generation network improved from Faster R-CNN, is trained on the Common Objects in Context (COCO) dataset to detect and generate masks on the bottles in the image. Inception v3 is selected for the brand classification task. Around 200 images per brand are taken or collected at first; the images are then augmented to 1,500 images per brand by using random cropping and perspective transforms. The result shows that a masked image can be labeled with its brand name with at least 85% accuracy in the experiment.

With a declining birth rate and an aging society, the cost of human labor is rising. In a warehouse, the picking task for goods sorting accounts for more than half of the total cost [1]. During festivals and special events, drinks are often placed randomly in a big box or a cooler filled with water and ice. Existing picking robots can hardly handle overlapping objects without prior models, or objects of identical appearance [2]. In this paper, the image processing for such a robot picking task is discussed.

Random picking is a challenging problem in the robotics and computer vision fields. The aim of this task is to pick up objects that are not arranged in any structured layout by using a robot arm's end effector. Bin picking has been actively studied since Amazon started its picking challenge; by using a 3D image sensor, the position and pose of the object to be picked up are calculated [3].

On the other hand, for industrial random picking robots, FANUC, YASKAWA, and others have developed bin picking robots that use structured light or binocular cameras.

In some special applications, such as bottles placed in ice water, a normal 3D sensor cannot obtain correct depth information. In this paper, a deep-learning-based image processing method is proposed to detect and segment randomly ordered Polyethylene Terephthalate (PET) bottles by using a monocular Red Green Blue (RGB) camera instead of a depth sensor.

Additionally, this research discusses brand recognition under overlapping conditions by using Inception v3 [4] without prior knowledge of the target.

In this research, randomly piled-up drinking bottles of different sizes and brands are required to be picked up. The bottles are not limited to one type, so a deep-learning-based detection method is used to solve this problem. The whole process is divided into two stages: the training stage and the detection stage. The training stage trains the network to obtain the corresponding kernel and bias values. The detection stage detects and generates a mask on each bottle and identifies the brand of the bottle.

The network training is divided into five steps as shown in Figure 1. First, the Mask Regional Convolutional Neural Network (Mask R-CNN) [5] is pretrained on the Microsoft Common Objects in Context (COCO) dataset, which has a large number of images with labels and segmentation outlines. To prevent overfitting, the Mask R-CNN is trained with all 80 classes of the COCO dataset. Second, around 200 photos are taken or collected for each brand of bottle. Next, the bottle dataset is used to fine-tune the Mask R-CNN. Then, all the images are augmented with random cropping and perspective transforms to increase the dataset size to 1,500 images per brand. Finally, the augmented images are used to train the brand recognition network.

The training of Mask R-CNN takes 160 epochs in total: 40 epochs for the classification head and 120 epochs for the ResNet-101 backbone.

Around 80% of the images are randomly selected for training, and the remaining 20% are used for validation. The training is performed with a learning rate of 0.01 and is stopped when the validation accuracy no longer rises along with the training accuracy.

The detection stage contains four steps as shown in Figure 2. First, the Region of Interest (ROI) box and the mask are generated by the Mask R-CNN. Next, the mask is combined with the original image by a bitwise AND. Then, using the ROI generated in the first step, each bottle is cut out of the image onto a black background. Finally, each image with only one bottle visible is sent to the Inception v3 network for brand recognition.

The output of Inception v3 is a vector with six elements that indicate the confidence of each class. The evaluation of brand recognition is based on a comparison between the ground truth, the human label, and the network output with the highest confidence, so that the accuracy of brand recognition by Inception v3 can be compared to that of human labelers.

Assume S is the set of objects detected in the images, with size N. The objects correctly classified by a human and by Inception v3 form the subsets $S_{human}$ and $S_{inception}$, with $S = S_{human} \cup S_{inception}$, and the objects that cannot be classified by Inception v3 or by a human form $S_{failed}$, as shown in Figure 3. The accuracy of brand recognition P is calculated by formula (1).

$$P_{inception}=\frac{N_{inception}}{N}\times 100\% $$

$$P_{human}=\frac{N_{human}}{N}\times 100\% $$

Based on the method described in the previous section, the experiment is performed. To run the different networks on the same machine, a library called "Protocol Buffers" is used for data exchange.

The bottles are the primary detection target in this research. However, the number of images available to retrain the whole network is limited. The COCO dataset comes with 80 classes for object detection plus 1 class for the background, so all 81 classes are used to train the whole network at first. Then the bottles taken from the test subjects are labeled with a class name and a mask as shown in Figure 4.

The learning rate in this step is set to 0.01, and only the mask and classification parts of the network are trained.

The brand recognition is implemented with Inception v3. Retraining the whole network would take too much time and easily overfit, so the initial weights of Inception v3 are transferred from the object recognition network. In this research, six kinds of drinks on the Japanese market, including Oiocha, Coca-Cola, Calpis, Afternoon tea, Irohasu, and Namacha, are selected as test subjects. For each brand, around 150-200 images are collected from the Internet or taken directly. Then, the images are processed randomly with cropping, perspective transforms, rotation, and zooming to increase the number of images up to 2,000 for each brand as shown in Figure 5.

During the training stage, 20% of all the images in the dataset are selected as the validation dataset. The accuracy of the training is recorded at the end of each batch. The training of Inception v3 stops when the accuracy converges to around 0.85 as shown in Figure 6; the validation accuracy stops increasing after around 3,500 training steps.

Figure 7 shows the result of the mask and ROI generation. The evaluation is based on real images taken with a normal monocular camera as shown in Figure 7a. Using the mask and ROI generated by the network shown in Figure 7b, the original image can be masked and cropped as shown in Figures 7c and 7d.

As the result shows, colored bottles can be correctly detected in the image. However, bottles with a transparent appearance have a lower detection rate.

The brand recognition is based on the images cropped out in Figure 7d. These images are resized to 299 × 299 and sent to Inception v3 for brand recognition one by one. The output with the highest score is selected as the result. Here, we select one more group of test data besides the images in Figure 7, and the result of the brand recognition is shown in Table 1.

| Ground truth | Machine labeled | Machine confidence | Human labeled |
|---|---|---|---|
| Oiocha | Oiocha | 0.995 | Oiocha |
| Calpis | Calpis | 0.996 | Calpis |
| Coca-Cola | Coca-Cola | 0.977 | Coca-Cola |
| Namacha | Namacha | 0.984 | Namacha |
| Namacha | Namacha | 0.971 | Namacha |
| Coca-Cola | Coca-Cola | 0.900 | Coca-Cola |
| Irohasu | Irohasu | 0.994 | Irohasu |
| Afternoon tea | Afternoon tea | 0.557 | Afternoon tea |
| Namacha | Namacha | 0.986 | Namacha |
| Calpis | Calpis | 0.995 | Calpis |
| Namacha | Namacha | 0.999 | Namacha |
| Coca-Cola | Coca-Cola | 0.882 | Coca-Cola |
| Coca-Cola | Coca-Cola | 0.986 | Coca-Cola |
| Irohasu | Irohasu | 0.693 | Unknown |
| Coca-Cola | Irohasu | 0.989 | Unknown |
| Number of correct answers | 13 of 15 results | | |
| Total accuracy | 86% | | |

As the result shows, although the network gives the correct result in most cases, the output score is not satisfactory in some of them; a confidence under 0.6 is treated as an unacceptable result.

As the data in Table 1 shows, after filtering out results with confidence lower than 0.6, the total accuracy is around 85%, which is close to the human labeling.

Inception v3 can partially handle transparent objects in the image.

The combination of Mask R-CNN and Inception v3 can detect overlapping bottles and recognize their brands with at least 85% accuracy, a result close to that of human beings.

https://www.tensorflow.org/tutorials/images/classification

https://github.com/matterport/Mask_RCNN

You can adjust the time threshold on line 34 as you like (default: 5 min).

The dependency DLLs include VTK, librealsense, Tesseract, and HDF.

The robot structure is shown in the figure below:

*The general solution for the forward kinematics of the manipulator can be found here.*

Suppose that in the initial state of the manipulator, all joints lie on the same line, as shown in the following figure.

We define all the joints to rotate only around the Z axis or the Y axis, so that in this initial state every rotation matrix ${\mathbf{B}}_{i}$ is the identity matrix.

Define the base frame ${\mathbf{P}}_{0}$. The length from the base to the first joint is ${l_0}$, the link length from ${J_1}$ to ${J_2}$ is ${l_1}$, the link length from ${J_2}$ to ${J_3}$ is ${l_2}$, and so on.

$${\mathbf{A}}_{i} = {\begin{bmatrix} {\mathbf{X}}_{i} & {\mathbf{Y}}_{i} & {\mathbf{Z}}_{i} \end{bmatrix}} $$

According to the method of forward kinematics, we can get:

$$ {\mathbf{P}}_{1} = {\mathbf{P}}_{0}+ {{\mathbf{A}}_{0}}{\begin{bmatrix} 0\\0\\{l_0} \end{bmatrix}} $$

$$ {\mathbf{P}}_{2} = {\mathbf{P}}_{1}+ {{\mathbf{A}}_{0}}{{\mathbf{A}}_{1}}{\begin{bmatrix} 0\\0\\{l_1} \end{bmatrix}} $$

$$ {\mathbf{P}}_{3} = {\mathbf{P}}_{2}+ {{\mathbf{A}}_{0}}{{\mathbf{A}}_{1}}{{\mathbf{A}}_{2}}{\begin{bmatrix} 0\\0\\{l_2} \end{bmatrix}} $$

$$ {\mathbf{P}}_{4} = {\mathbf{P}}_{3}+ {{\mathbf{A}}_{0}}{{\mathbf{A}}_{1}}{{\mathbf{A}}_{2}}{{\mathbf{A}}_{3}}{\begin{bmatrix} 0\\0\\{l_3} \end{bmatrix}} $$

$$ {\mathbf{P}}_{5} = {\mathbf{P}}_{4}+ {{\mathbf{A}}_{0}}{{\mathbf{A}}_{1}}{{\mathbf{A}}_{2}}{{\mathbf{A}}_{3}}{{\mathbf{A}}_{4}}{\begin{bmatrix} 0\\0\\{l_4} \end{bmatrix}} $$

$$ {\mathbf{P}}_{6} = {\mathbf{P}}_{5}+ {{\mathbf{A}}_{0}}{{\mathbf{A}}_{1}}{{\mathbf{A}}_{2}}{{\mathbf{A}}_{3}}{{\mathbf{A}}_{4}}{{\mathbf{A}}_{5}}{\begin{bmatrix} 0\\0\\{l_5} \end{bmatrix}} $$

$$ {\mathbf{P}}_{E} = {\mathbf{P}}_{6}+ {{\mathbf{A}}_{0}}{{\mathbf{A}}_{1}}{{\mathbf{A}}_{2}}{{\mathbf{A}}_{3}}{{\mathbf{A}}_{4}}{{\mathbf{A}}_{5}}{{\mathbf{A}}_{6}}{\begin{bmatrix} 0\\0\\{l_6} \end{bmatrix}} $$

The inverse kinematics of the 6-DOF manipulator requires a complete end-effector position and pose.

Before solving, the Euler angles must be transformed into a rotation matrix. For more information about the transformation between Euler angles and rotation matrices, please refer to this article.

First, assume the position of the manipulator is ${\mathbf{P}}_E$ and the rotation matrix is ${\begin{bmatrix} {\mathbf{X_6}} & {\mathbf{Y_6}} & {\mathbf{Z_6}} \end{bmatrix}}$

The range of joints 2, 3, and 5 is $\left [0,\frac{\pi}{2} \right ]$; the range of joints 1, 4, and 6 is $\left [-\pi,\pi \right ]$.

The method to solve the rotation angle of each joint is as follows.

Given the manipulator rotation matrix ${\begin{bmatrix} {\mathbf{X_6}} & {\mathbf{Y_6}} & {\mathbf{Z_6}} \end{bmatrix}}$, and since the rotation angle of $J_6$ does not affect the position of $J_5$, we can construct the vector

$${\vec{{J_5}E}} = ({l_5}+{l_6}){\mathbf{Z_6}}$$

The vector is shown in the simplified diagram of the mechanism below.

To calculate the position ${\mathbf{P}}_5$ of $J_5$, we only need to subtract ${\vec{{J_5}E}}$ from the endpoint position:

$${\mathbf{P}_5} = {{\mathbf{P}}_E} - {\vec{{J_5}E}} $$

From the mechanical arm structure, $J_1$ to $J_5$ lie in the same plane. The projection of the line ${J_1}{J_5}$ onto the x-y plane is shown in the figure below. The angle between the projection line and the X axis is ${\theta_1}$.

Here we assume ${\mathbf{P}_5} = {\begin{bmatrix} {x_5} \\ {y_5} \\{z_5} \end{bmatrix}}$

So,

$${\theta_1} = \operatorname{atan2}({y_5},{x_5}) $$

Because we know the position of ${J_5}$, and the position ${\mathbf{P}_2}$ of ${J_2}$ is fixed, connect ${J_2}$ and ${J_5}$ to form a triangle, as shown in the figure below.

We can calculate $\vec{{J_2}{J_5}} = {\mathbf{P}_5} - {\mathbf{P}_2}$.

By the law of cosines:

$${\cos{\theta_3}} = \frac{{\left \| \vec{{J_2}{J_5}} \right \|}^2 - {l_2^2} -{l_3^2}}{2{l_2}{l_3}} $$

So:

$${\theta_3} = \arccos{\left ( \frac{{\left \| \vec{{J_2}{J_5}} \right \|}^2 - {l_2^2} -{l_3^2}}{2{l_2}{l_3}} \right )} $$

As shown in the figure above, draw a line through ${J_5}$ perpendicular to the extension of ${J_2}{J_3}$; the foot of this perpendicular on the extension line is $S$.

From ${J_2}$, in the plane of ${\bigtriangleup} {J_2} {J_3} {J_5}$, draw a line perpendicular to the Z axis, and drop a perpendicular from ${J_5}$ onto this line, meeting it at $T$.

Connect ${J_2}{J_5}$ and name this line $A$.

In ${\bigtriangleup}{J_2}{J_3}{J_5}$, applying the Pythagorean theorem, we get:

$$A = \sqrt{(({l_3}+{l_4})\sin{\theta_3})^2 + (({l_3}+{l_4})\cos{\theta_3}+{l_2})^2}$$

$$\beta = \arcsin{\left ( \frac{{\mathbf{P}_5}\cdot{\mathbf{Z}_0} - {l_1}}{A} \right )} $$

$$\alpha = \arccos{\left ( \frac{{l_2^2}+{A^2}-{(l_3+l_4)}^2}{2A{l_2}}\right )}$$

So:

$${\theta_2} = \frac{ \pi}{2} – \alpha – \beta$$

So far, the rotations of joints 1, 2, and 3 have been calculated. We can use forward kinematics to calculate the position ${\mathbf{P}}_3$ of ${J_3}$ and its rotation matrix $\begin{bmatrix} {{\mathbf{X}}_3} & {{\mathbf{Y}}_3} &{{\mathbf{Z}}_3} \end{bmatrix}$.

From the forward solution of the rotation matrix for ${\theta_5}$, we know:

$${\mathbf{Z}}_5 = \sin{\theta_5}{\mathbf{X}_4} + \cos{\theta_5}{\mathbf{Z}}_4 $$

And ${\mathbf{X}_4} \perp {\mathbf{Z}_4}$, $\left \| {\mathbf{Z}_4} \right \| =1$. Multiplying both sides of the above equation by ${\mathbf{Z}_4}$, we get:

$$\cos{\theta_5} = {{\mathbf{Z}}_5}\cdot{\mathbf{Z}_4}$$

Because joints 4 and 6 rotate around the Z axis,

$${\mathbf{Z}}_5 = {\mathbf{Z}}_6 , {\mathbf{Z}_4} ={\mathbf{Z}}_3 $$

$$\cos{\theta_5} = {\mathbf{Z}_3}\cdot{\mathbf{Z}_6}$$

$${\theta_5} = \arccos{\left ({\mathbf{Z}_3}\cdot{\mathbf{Z}_6}\right )}$$

From the forward solution of the rotation matrix of ${\theta_4}$:

$${\mathbf{X}}_4 = \cos{\theta_4}{\mathbf{X}}_3 + \sin{\theta_4}{\mathbf{Y}_3} $$

Substituting the above equation into the expression for ${\mathbf{Z}}_5$:

$${\mathbf{Z}}_5 = \sin{\theta_5}(\cos{\theta_4}{\mathbf{X}}_3 + \sin{\theta_4}{\mathbf{Y}_3}) + \cos{\theta_5}{\mathbf{Z}}_3$$

And ${\mathbf{X}_3} \perp {{\mathbf{Y}}_3}$, ${\mathbf{X}_3} \perp {\mathbf{Z}_3}$, $\left \| {\mathbf{X}_3} \right \| =1$.

Multiply both sides of the above equation by ${\mathbf{X}_3}$:

$${{\mathbf{Z}}_5}\cdot {{\mathbf{X}}_3} = \sin{\theta_5} \cos{\theta_4}$$

Multiply both sides of the above equation by ${\mathbf{Y}_3}$:

$${{\mathbf{Z}}_5}\cdot {{\mathbf{Y}}_3} = \sin{\theta_5} \sin{\theta_4}$$

Dividing the two equations (and using ${\mathbf{Z}}_5 = {\mathbf{Z}}_6$):

$${\theta_4} = \operatorname{atan2}{\left ( {{{\mathbf{Z}}_6} \cdot {{\mathbf{Y}}_3}},{{{\mathbf{Z}}_6} \cdot {{\mathbf{X}}_3}} \right )} $$

At this point, the rotation matrix after the fifth joint, $\begin{bmatrix} {{\mathbf{X}}_5} & {{\mathbf{Y}}_5} &{{\mathbf{Z}}_5} \end{bmatrix}$, can be obtained from the rotation angles of joints 1 to 5.

From the forward solution of the rotation matrix of ${\theta_6}$:

$${\mathbf{X}}_6 = \cos{\theta_6}{\mathbf{X}}_5 + \sin{\theta_6}{\mathbf{Y}_5} $$

Multiply both sides of the above equation by ${\mathbf{Y}_5}$ and by ${\mathbf{X}_5}$, noting $\left \| {\mathbf{X}_5} \right \| = \left \| {\mathbf{Y}_5} \right \| =1$.

So:

$${\theta_6} = \operatorname{atan2}{({{\mathbf{X}}_6}\cdot {{\mathbf{Y}}_5} , {{\mathbf{X}}_6}\cdot {{\mathbf{X}}_5} )} $$

In linear algebra, a rotation matrix is used to rotate a vector in Euclidean space.

In robotics, the rotation matrix is used to describe the posture (orientation) of a robot joint. In three-dimensional space, a rotation matrix always has an eigenvalue equal to 1.

When calculating a rotation matrix, it is usually decomposed into three rotation matrices about the X, Y, and Z axes. In right-handed Cartesian coordinates, the rotation direction is assumed to follow the right-hand rule. The rotation matrix for each axis is shown as follows.

- X-rotation (Roll)

\({\mathcal {R}}_{x}({\theta} _{x})= {\begin{bmatrix} 1&0&0 \\0 & \cos {\theta _{x}}&-\sin {\theta _{x}} \\ 0&\sin {\theta _{x}}&\cos {\theta _{x}}\end{bmatrix}}=\exp \left({\begin{bmatrix}0&0&0\\0&0&-\theta _{x}\\0&\theta _{x} & 0 \end{bmatrix}}\right)\)

- Y-rotation (Pitch)

\({\mathcal {R}}_{y}(\theta _{y})={\begin{bmatrix}\cos {\theta _{y}}&0&\sin {\theta _{y}}\\0&1&0\\-\sin {\theta _{y}}&0&\cos {\theta _{y}}\end{bmatrix}}=\exp \left({\begin{bmatrix}0&0&\theta _{y}\\0&0&0\\-\theta _{y}&0&0\end{bmatrix}}\right)\)

- Z-rotation (Yaw)

\({\mathcal {R}}_{z}(\theta _{z})={\begin{bmatrix}\cos {\theta _{z}}&-\sin {\theta _{z}}&0\\\sin {\theta _{z}}&\cos {\theta _{z}}&0\\0&0&1\end{bmatrix}}=\exp \left({\begin{bmatrix}0&-\theta _{z}&0\\\theta _{z}&0&0\\0&0&0\end{bmatrix}}\right)\)

To calculate the final rotation matrix of the rigid body, multiply the three rotation matrices. Then we get \({\mathcal {R}}={\mathcal {R}}_{z}(\theta _{z})\,{\mathcal {R}}_{y}(\theta _{y})\,{\mathcal {R}}_{x}(\theta _{x})\).

The Euler angles here refer to the x-y-z order (Tait–Bryan angles) instead of the classical Euler angle order z-x-z. To distinguish them from classical Euler angles, here we use \({\theta}_{x},{\theta}_{y},{\theta}_{z}\) to denote the angles.

**Convert Euler Angle to the rotation matrix**

As mentioned in the previous section, the rotation matrix can be obtained by multiplying the three rotation matrices, so the equation from Euler angles to the rotation matrix is

\({\mathbf{R}} = {\begin{bmatrix}\cos {\theta _{z}}&-\sin {\theta _{z}}&0\\\sin {\theta _{z}}&\cos {\theta _{z}}&0\\0&0&1\end{bmatrix}}{\begin{bmatrix}\cos {\theta _{y}}&0&\sin {\theta _{y}}\\0&1&0\\-\sin {\theta _{y}}&0&\cos {\theta _{y}}\end{bmatrix}}{\begin{bmatrix} 1&0&0 \\0 & \cos {\theta _{x}}&-\sin {\theta _{x}} \\ 0&\sin {\theta _{x}}&\cos {\theta _{x}}\end{bmatrix}}\)

Here we define

\({s}_{x} = \sin {{\theta}_{x}}; {c}_{x} = \cos {{\theta}_{x}}; {s}_{y} = \sin {{\theta}_{y}}; {c}_{y} = \cos {{\theta}_{y}}; {s}_{z} = \sin {{\theta}_{z}}; {c}_{z} = \cos {{\theta}_{z}}\)

As a result

\({\mathbf{R}} = {\begin{bmatrix} {c_y}{c_z} & {c_z}{s_x}{s_y} - {c_x}{s_z} & {s_x}{s_z} + {c_x}{c_z} {s_y} \\ {c_y}{s_z} & {c_x}{c_z}+{s_x}{s_y}{s_z} & {c_x}{s_y}{s_z}-{s_x}{c_z} \\ -{s_y} & {c_y}{s_x} & {c_x}{c_y} \end{bmatrix}}\)

[codesyntax lang="csharp" lines="normal" title="View the C# code here" blockstate="collapsed"]

/// <summary>
/// Convert the Euler Angle to the Rotation Matrix
/// </summary>
/// <param name="euler">Euler Angle Vector</param>
/// <returns>Rotation Matrix</returns>
public static Matrix3F Euler2RMatrix(Vector3F euler)
{
    double sx = Math.Sin(euler.X), cx = Math.Cos(euler.X),
           sy = Math.Sin(euler.Y), cy = Math.Cos(euler.Y),
           sz = Math.Sin(euler.Z), cz = Math.Cos(euler.Z);
    return new Matrix3F(
        cy * cz, cz * sx * sy - cx * sz, sx * sz + cx * cz * sy,
        cy * sz, cx * cz + sx * sy * sz, cx * sy * sz - sx * cz,
        -sy,     cy * sx,                cx * cy
    );
}

[/codesyntax]

**Convert the rotation matrix to the Euler Angle**

Assume the rotation matrix is

\(\mathbf{R} = {\begin{bmatrix} {r_{11}} & {r_{12}} & {r_{13}} \\ {r_{21}} & {r_{22}} & {r_{23}} \\ {r_{31}} & {r_{32}} & {r_{33}} \end{bmatrix}} \)

Solving the equations mentioned in the previous section, we get:

\({{\theta}_{x}} = \operatorname{atan2}({r_{32}},{r_{33}})\)

\({{\theta}_{y}} = \operatorname{atan2}(-{r_{31}},\sqrt{{r_{32}^2}+{r_{33}^2}})\)

\({{\theta}_{z}} = \operatorname{atan2}({r_{21}},{r_{11}})\)

atan2 is the function whose value range is $\left ( -\pi,\pi \right ]$, with the definition below:

\(
\operatorname{atan2}(y, x) = \begin{cases}
\arctan\left(\frac y x\right) & \qquad x > 0 \\
\arctan\left(\frac y x\right) + \pi & \qquad y \ge 0 , x < 0 \\
\arctan\left(\frac y x\right) - \pi & \qquad y < 0 , x < 0 \\
+\frac{\pi}{2} & \qquad y > 0 , x = 0 \\
-\frac{\pi}{2} & \qquad y < 0 , x = 0 \\
\text{undefined} & \qquad y = 0, x = 0
\end{cases}\)

See Wikipedia for detailed information.

[codesyntax lang="csharp" lines="normal" title="View the C# code for this section" blockstate="collapsed"]

/// <summary>
/// Convert the rotation matrix to the Euler angle
/// </summary>
/// <param name="rmat">The rotation matrix</param>
/// <returns>The Euler Angle vector</returns>
public static Vector3F RMatrix2Euler(Matrix3F rmat)
{
    return new Vector3F(
        Math.Atan2(rmat[2, 1], rmat[2, 2]),
        Math.Atan2(-rmat[2, 0], Math.Sqrt(rmat[2, 1] * rmat[2, 1] + rmat[2, 2] * rmat[2, 2])),
        Math.Atan2(rmat[1, 0], rmat[0, 0]));
}

[/codesyntax]

In robotics, a vector is usually rotated several times in order to solve the position of the manipulator. In general, each rotation is relative to the previous one. In this case, the new rotation matrix can be obtained by right-multiplying the original rotation matrix by the relative rotation matrix.

Assume the rotation matrix relative to the previous joint is $\mathbf{R}$ and the rotation matrix of the previous node relative to the base is ${\mathbf{R}}_{i-1}$; then the rotation matrix ${\mathbf{R}}_{i}$ of the current node relative to the base frame is:

$$ {\mathbf{R}}_{i} = {{\mathbf{R}}_{i-1}}{\mathbf{R}} $$

To solve the position of the manipulator, the model of the manipulator is simplified to an addition of vectors. The end position can be obtained by summing the link vectors from the base upward.

Assume the position of each joint is $ {\mathbf{P}}_{i} = {\begin{bmatrix} x \\ y \\ z \end{bmatrix}} $ and the link length is ${L}_{i}$.

For each joint the pose can be calculated as follows:

- Define the initial state of the manipulator.
- Calculate the rotation matrix ${\mathbf{B}}_{i}$ in the initial state. If all the joints face the same direction, then ${\mathbf{B}}_{i} = {\begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \end{bmatrix}}$.
- Calculate the relative rotation matrix ${\mathbf{A}}_{i}$ during movement.
- Use the following formula to calculate the rotation matrix of the current joint relative to the base:

$$ {\mathbf{R}}_{i} = ({\mathbf{X}}_{i},{\mathbf{Y}}_{i},{\mathbf{Z}}_{i}) = {\mathbf{B}_{0}}{\mathbf{A}_{1}}{\mathbf{B}_{1}}{\mathbf{A}_{2}} \cdots {\mathbf{B}_{i-1}} {\mathbf{A}_{i}} $$

- The current joint position is

$$ {\mathbf{P}}_{i} = {\mathbf{P}}_{i-1} + {\mathbf{R}}_{i} \, ({{L}_{i}}\,{{\mathbf{Z}}_{0}}) $$

- The Euler angles of the endpoint can be obtained through the formula of the first section.

- Wikipedia, "Rotation matrix," 18 Feb. 2018. https://en.wikipedia.org/wiki/Rotation_matrix
- 高濑国克, "マニピュレータの基礎理論" (Fundamental Theory of Manipulators), 1983.
- "旋转矩阵、欧拉角、四元数理论及其转换关系" (Rotation Matrices, Euler Angles, Quaternions, and Their Conversions), 21 May 2017. http://blog.csdn.net/lql0716/article/details/72597719

x86 (32 bits) Version:

sed-4.4-win32-vs141

x64 (64 bits) Version:

sed-4.4-win64-vs141

Compiler: Microsoft Visual Studio 2017

Windows SDK Version: 10.0.16299.0

Refer to https://github.com/mbuilov/sed-windows on how to compile


[codesyntax lang="c"]

/*
 * Function:    str_replace
 * Parameters:  char *search, char *replace, char *str
 * Call:        char *str_replace(char *search, char *replace, char *str);
 * Return:      string
 * Requires:    stdio.h, stdlib.h, string.h
 * Description: replace every occurrence of a substring in a string
 */
char *str_replace(char *search, char *replace, char *str)
{
    int lstr, lse, lre;
    char *r, *p, *nptr;

    lse = strlen(search);
    lre = strlen(replace);
    lstr = strlen(str);
    if (lse == 0 || lse > lstr)
    {
        return NULL;
    }
    r = (char *)malloc(lstr + 1);
    if (r == NULL)
    {
        printf("Failed to allocate memory");
        exit(-2);
    }
    strcpy(r, str);                      /* Copy the string to new memory */
    p = strstr(r, search);
    while (p != NULL)
    {
        if (lse == lre)
        {
            memcpy(p, replace, lre);     /* Same length: just copy */
        }
        else if (lse > lre)              /* Shrinking: no allocation required */
        {
            memcpy(p, replace, lre);     /* Copy the replacement */
            memmove(p + lre, p + lse, lstr - (p - r + lse) + 1); /* Shift the tail left, incl. '\0' */
            lstr -= lse - lre;           /* Keep the length up to date */
        }
        else                             /* Growing: expand the buffer first */
        {
            nptr = realloc(r, lstr + (lre - lse) + 1);
            if (nptr == NULL)
            {
                printf("Failed to allocate memory");
                free(r);
                exit(-2);
            }
            p = nptr + (p - r);          /* realloc may have moved the block */
            r = nptr;
            memmove(p + lre, p + lse, lstr - (p - r + lse) + 1); /* Shift the tail right, incl. '\0' */
            memcpy(p, replace, lre);     /* Copy the replacement */
            lstr += lre - lse;
        }
        p = strstr(p + lre, search);     /* Continue past the replacement so a
                                            replace string containing the search
                                            string is not matched again */
    }
    return r;
}

[/codesyntax]

In this part, we start to prepare the tool chains.

0. Before Starting

0.1. Brief Introduction to C Language

The C language is a computer language widely used in desktop software, hardware, operating systems, drivers, and microcontrollers (MCUs). The C language is over 40 years old. At present, it has several widely used relatives, including C++ and C#. As the widely used C language has different compilers, this topic will be covered in the next section.

0.2. Compiler

There are many compilers for the C language that can be applied to different platforms and systems. Here only two kinds of compilers are introduced.

a) Microsoft Visual C++

The compiler developed by Microsoft for developing software under the Windows system. The latest version is 2013 (major version number: 12).

b) GCC

A C/C++ compiler for Linux/Unix systems, open source and pre-installed in most Linux distributions. To use this compiler on Windows, use MinGW or Cygwin.

NOTICE: In this article, Visual C++ 2013 is used as the example. Normally it **doesn't** support the Windows XP system.

0.3. Source Code Editor

Normally Visual C++ is sufficient, but if you need to view or edit highlighted code on a computer where Visual C++ is not installed, Notepad++ is recommended.

The advantage of code highlighting, by comparison:

not highlighted:

void ShowMainCmdMenu(){
    HANDLE hStd=GetStdHandle(STD_OUTPUT_HANDLE);
    SetConsoleTextAttribute(hStd,FOREGROUND_INTENSITY|FOREGROUND_BLUE|FOREGROUND_GREEN);
    printf("\n\nMain menu:\n");
    printf("-------------------------------------------------------------\n");
    printf(
        " [F]ind       Open[T]oken   LED[O]n       D[A]taMenu\n"
        " GetS[N]      GenP[I]D      GenRando[M]   Cr[Y]ptMenu\n"
        " User[P]IN    [S]OPIN       [R]eset       Set[U]pMenu\n"
        " LE[D]Off     [C]lose       E[X]it\n");
    SetConsoleTextAttribute(hStd,FOREGROUND_GREEN|FOREGROUND_BLUE|FOREGROUND_RED);
}

highlighted:

[codesyntax lang="c"]

void ShowMainCmdMenu(){
    HANDLE hStd=GetStdHandle(STD_OUTPUT_HANDLE);
    SetConsoleTextAttribute(hStd,FOREGROUND_INTENSITY|FOREGROUND_BLUE|FOREGROUND_GREEN);
    printf("\n\nMain menu:\n");
    printf("-------------------------------------------------------------\n");
    printf(
        " [F]ind       Open[T]oken   LED[O]n       D[A]taMenu\n"
        " GetS[N]      GenP[I]D      GenRando[M]   Cr[Y]ptMenu\n"
        " User[P]IN    [S]OPIN       [R]eset       Set[U]pMenu\n"
        " LE[D]Off     [C]lose       E[X]it\n");
    SetConsoleTextAttribute(hStd,FOREGROUND_GREEN|FOREGROUND_BLUE|FOREGROUND_RED);
}

[/codesyntax]

A code highlighting plugin is used on this webpage; normally Visual C++ will highlight even more of the code, as well as point out mistakes in it.

0.4. The Coding Standard

Code written in a standard way helps others understand it better.

In the C language, a comment either begins with two slashes (//) and runs to the end of the line (a single-line comment), or begins with /* and ends with */, with everything in between treated as a comment.

*According to the Sonar specification, the opening brace '{' should be placed at the end of the line, and the closing brace '}' on its own line.*

0.5. Attachment

a) Microsoft Visual Studio Express 2013 (Visual C++ Express Included)

English Version: ed2k://|file|en_visual_studio_express_2013_for_windows_desktop_x86_dvd_3009419.iso|828051456|6AEF0A01DCD74E7958606AE6D5CF259E|/

b) MinGW/GCC

Source Forge Link

For Advanced Users:

Please view Visual Studio Premium or Ultimate Version