
2023-09-03

What are Pick-and-Place Mechanisms, Systems and Robotics?

In the industrial world, pick-and-place plays a vital role: picking an object from one location and placing it in another, predetermined location. [1]

Depending on its kinematics and engineering, a pick-and-place mechanism can vary widely in nature: it may be a complex robot that combines a vision system with sophisticated programming, or a relatively simple arrangement of mechanical and pneumatic components that works as a fixed automation system.

END-OF-LINE WITH SCARA ROBOT
A Pick-and-Place Robot for Square-Shaped Objects
Credit: ELITER Packaging Machinery

In this article, we discuss:

  • the different approaches to industrial pick-and-place mechanisms and their applications;
  • an advanced, complementary case study on how to design your own pick-and-place robot for locating square objects.

* Note: The content involves engineering, programming, and mathematical details that may require some professional background to fully understand.


Introduction: How Pick-and-Place Automation Has Been Evolving

Modern industry, with its processes of assembly, manufacturing, and packaging, frequently involves picking specific objects in one place and placing them in another. The traditional approach to the pick-and-place mechanism usually comes in the form of fixed automation that only works for the product it is programmed for, follows a single given trajectory, and does repetitive work.

ROTARY FEEDER - PICK AND PLACE
The rotary feeder is a form of pick-and-place mechanism
Credit: ELITER Packaging Machinery

Automation technology has evolved to such a level that pick-and-place can now be either a combination of mechanical mechanisms, as it traditionally was, or a complex robotic system with sophisticated programming.

What are the Common Forms of Pick-and-Place Systems or Mechanisms?

A pick-and-place mechanism, depending on its purpose of use, can be developed into either a separate machine or a system to be integrated into a whole end-of-line or automated production facility.

Pick-and-Place Mechanisms Built as Fixed Automation

PICK-AND-PLACE MECHANISM AS FIXED AUTOMATION
Pneumatic Pick-and-Place Actuator
Credit: ELITER Packaging Machinery

The simplest approach to designing a pick-and-place system is on the basis of fixed automation: a mechanical transmission and kinematics joined to an end actuator that catches the target objects, such as bottles, plastic flow-wrapped packages, etc.

Without the need for motion control, a visual detection system, or other advanced technologies, such a pick-and-place mechanism (not referred to as a robot) usually serves only a target object of a given form and can hardly be refitted for other purposes.

In most cases, the motion and trajectory are fixed, developed only for the purpose originally intended.

Pick-and-Place Machines as Standalone Systems

A common example of this class of pick-and-place system is the Surface Mount Technology (SMT) pick-and-place machine.[2]

SMT pick-and-place machines are widely used in electronics manufacturing, as the majority of electronic devices today utilize SMT components. These machines automate the process of accurately picking up SMT components from feeders, typically in the form of reels, and placing them onto the correct locations on the PCB.

Robotics Pick-and-Place Systems

A pick-and-place robot is an industrial robot employed to handle and position products along a production line. Primarily utilized in high-volume manufacturing, these robots excel at swiftly and precisely placing products onto conveyor belts or other machinery.

Pick-and-place robots are usually flexible in application and play a crucial role in automating repetitive tasks, increasing production efficiency, and reducing the need for manual labor.[4]

A CASE PACKING END-OF-LINE WITH SCARA ROBOT
A Case Packing End-of-Line with ESTUN [3] SCARA Robots
Credit: ELITER Packaging Machinery

 

What Are the Situations Where a Pick-and-Place System Is Needed?

Pick-and-place is usually preferred as a complementary option in situations where the normal ways of indexing articles into a system, or of connecting the flow between two sections, must be handled delicately, or where a conventional mechanical connection is not feasible: for example, picking up bags or stick-shaped pieces that arrive irregularly on a conveyor and must be placed into the next piece of equipment in an organized orientation.

In Automated Warehousing Systems: Robotic Palletizers

ABB IRB-4600 ROBOT
Palletizing requires moving heavy objects and stacking them into regular piles, a task that manual work can hardly handle; even where it can, relying on labor to move such large and heavy corrugated cardboard cases exposes the workers to the risk of severe injury.

As an alternative, robots are usually integrated into automatic palletizers to handle the corrugated cardboard cases that come, in most cases, from the case packer upstream in the packaging line.

 

 

On a Packaging End-of-Line for Handling Delicate Objects

The traditional engineering of integrating several packaging systems relies on a hard connection of the product flow between the equipment: products may rush along a conveyor and then arrive at the receiving unit of the next machine, where they are stopped all of a sudden and sent off in another direction.

However, this is not an ideal pattern for handling the flow of products along the packaging line, especially for fragile and delicate objects: glass jars, syringes, bulbs, and some cosmetics products where appearance matters and which must be handled with care to avoid packaging defects.

An example is the handling of flow-wrapped medical needles in one of our past projects, where the plastic packages of needles coming from the flow wrapper had to be carefully placed onto the bucket conveyor of the cartoner. The pick-and-place system was integrated with Bernoulli grippers to provide a gentle handling solution.

delicate handling of medical needles with pick and place machines
Pick-and-Place System with Bernoulli Grippers
Credit: ELITER Packaging Machinery

 

Case Study: Design and Error Test of a Pick and Place Robot

This case study is an advanced, step-by-step guide to designing a pick-and-place mechanism built around ABB robots to pick and place square objects, followed by a quantitative, mathematical analysis to track the error and displacement during the operation of the system.

The system, its design, and the test involve the following parts and components:

  • End-Effector
  • Measurement Frame
  • ABB Robot
  • Target Object (Square Shape)

CASE STUDY - DESIGN AND ERROR TEST OF A PICK AND PLACE ROBOT
Pick and Place Robot
Credit: ELITER Packaging Machinery

Structure Design of the Pick-and-Place Robot

The pick-and-place robot in this case study manipulates square objects such as a square-shaped piece of glass or an aluminum plate. The design also takes the measurement into account, and a mathematical analysis will be carried out afterward to inspect and test the error and displacement of the system.

End Effector

The EF (End Effector) is the component fitted with suction plates and CCD sensors; it directly picks the TO (target object) and, at the same time, measures the displacement between the TO and itself.

The EF is designed on the basis of a square framework, as shown in the figure below. The frame carries 8 CCD sensors (CCD 1 ~ CCD 8) that detect displacement with lasers, two of them mounted on each of the 4 sides of the EF. The spacing between the two parallel CCD sensors on each side is defined as \(d\), and the distance between the two CCD sensors that face each other symmetrically across the EF is defined as \(h\).

A CS (Coordination System) is built at the central point of the EF, with one axis defined as \(x_e\) and another as \(y_e\). The axis perpendicular to the EF’s surface is defined as \(z_e\).

The initial readings of CCD 1 ~ CCD 8 reflect the ideal position of the EF expressed in the CS of the frame (introduced in the following section) and are defined as \(c^\text{ini}_1\) ~ \(c^\text{ini}_8\).

END EFFECTOR of pick and place robot
End Effector of the Pick and Place Robot
Credit: ELITER Packaging Machinery

Measurement Frame

The measurement frame is a structure that surrounds the end effector, and it also carries 8 CCD sensors on its 4 sides. These sensors, marked CCD 9 ~ CCD 16, are arranged symmetrically around the frame’s structure, whose side length is \(H\) and whose spacing between parallel sensors is \(D\).

The measurement frame has its own Coordination System with an axis defined as \(x_f\) and another axis defined as \(y_f\).

The frame works as a medium: the deviation of the TO is measured in the CS of the frame, and the frame’s own position is measured in the CS of the EF, so that the pose of the TO in the CS of the EF can be measured indirectly.

The initial readings of CCD 9 ~ CCD 16 reflect the initial pose of the TO (target object) and are defined as \(c^\text{ini}_9\) ~ \(c^\text{ini}_{16}\).

 

MEASUREMENT FRAME AND OVERVIEW OF THE PICK AND PLACE SYSTEM
Measurement Frame, EF and TO
Credit: ELITER Packaging Machinery

Target Object

The TO is not otherwise specified in this test; it is only referred to as a square-shaped object whose length and width are defined as \(T\).

The Fundamental Concept of the Design for Displacement Measurement

deviation and error expressed in coordination system of pick and place system
3 DOFs expressed in the CSs of TO and EF
Credit: ELITER Packaging Machinery

The pick-and-place system above is designed for the purpose of measuring the deviation between the TO and the EF. This can be explained simply by overlapping the two components’ coordination systems, which turns it into a 2D problem to solve.

The analysis involves 3 critical DOFs (Degrees of Freedom), as follows:

  • The deviation of the TO with regard to the EF’s coordination system along the \(x_e\) axis, defined as \( \Delta x_\text{te}\);
  • The deviation of the TO with regard to the EF’s coordination system along the \(y_e\) axis, defined as \( \Delta y_\text{te}\);
  • The deviation of the TO around the \(z_e\)-axis of the EF’s coordination system, defined as \( \Delta \theta_\text{te}\).

While this approach tests the relative deviation between the TO and the EF, it is not a direct measure of the EF’s own deviation. To solve this issue, the design introduces the measurement frame, which works as a medium to check the deviation between the EF and the frame, and as a tool to transfer the expression of the TO’s deviation into the CS of the EF.

 

Modeling of the Pick and Place System

The modeling of the pick-and-place robot designed above serves the ultimate goal of calculating the deviation of the target object in the coordination system of the end effector.

The above-mentioned indirect measurement is carried out in the following 3 parts:

  • Calculating the deviation of TO in the Frame’s CS
  • Calculating the deviation of Frame in the EF’s CS
  • Calculating the deviation of TO in the EF’s CS

A step-by-step modeling of the above parts is explained hereafter.

i. Calculating the deviation of TO in the Frame’s CS

The following figure helps to understand this step. The terms involved in this calculation are explained below:

  • The dashed blue square represents the original pose of the TO
  • The actual pose of the TO is represented by the solid blue square
  • The laser beams from CCD 9 ~ CCD 16 are represented by solid red lines
  • The frame is represented by the solid black square
  • The frame’s CS is represented by \( \left\{ O_f \rightarrow x_f \,\,\, y_f \,\,\,  z_f \right\} \)
  • The TO’s CS in the ideal pose is represented by \( \left\{ O'_t \rightarrow x'_t  \,\,\, y'_t \,\,\, z'_t \right\} \)
  • The TO’s CS in the actual pose is represented by \( \left\{ O_t \rightarrow x_t  \,\,\, y_t \,\,\,  z_t \right\} \)

We can also define the deviation as a vector \( \mathscr e _\text{tf} = [   \Delta x_\text{tf}   \,\,\, \Delta y_\text{tf} \,\,\,  \Delta \theta_\text{tf}  ]^T \)

Calculating the deviation of TO in the Frame's CS
Relationship between TO and Frame’s CS
Credit: ELITER Packaging Machinery

According to the above figure, the actual pose of the TO detected by CCD 9 ~ CCD 16, expressed in the CS of the measurement frame \( \left\{ O_f \rightarrow x_f \,\,\, y_f \,\,\,  z_f \right\} \) as \( (x_\text{ti} , \, y_\text{ti}), \,\,\, i = 9 \sim 16\), is marked by the red dots and can be expressed as follows:

$$
x_\text{ti} =
\begin{cases}
T/2 - (c_i - c^\text{ini}_i) & (i=9,10) \\
-T/2 - (c_i + c^\text{ini}_i) & (i=13,14) \\
D/2  & (i=11,16) \\
-D/2  & (i=12,15) \\
\end{cases}
$$

$$
y_\text{ti} =
\begin{cases}
T/2 - (c_i - c^\text{ini}_i) & (i=11,12) \\
-T/2 - (c_i + c^\text{ini}_i) & (i=15,16) \\
D/2  & (i=10,13) \\
-D/2  & (i=9,14) \\
\end{cases}
$$

 

The 4 sides of the TO, \( (P_\text{t15} P_\text{t9}, \,\,\, P_\text{t9} P_\text{t11}, \,\,\,P_\text{t11} P_\text{t13}, \,\,\,P_\text{t13} P_\text{t15})\), expressed by \( l_\text{ti} \,\, (i=9,11,13,15) \), can be derived from the laser points as:

 

$$ l_\text{ti} : ( y_\text{t(i+1)} - y_\text{ti})x - ( x_\text{t(i+1)} - x_\text{ti})y - ( y_\text{t(i+1)} - y_\text{ti})x_\text{ti} + ( x_\text{t(i+1)} - x_\text{ti})y_\text{ti} = 0 $$

 

The 4 corner points of the TO, which are the intersections of each 2 adjacent side lines, expressed as \( P_\text{t9}(x_\text{pt9}\,,y_\text{pt9}) \,\,,  P_\text{t11}(x_\text{pt11}\,,y_\text{pt11}) \,\,,P_\text{t13}(x_\text{pt13}\,,y_\text{pt13}) \,\,, P_\text{t15}(x_\text{pt15}\,,y_\text{pt15})  \), can be acquired by combining the equations of each two lines, as follows:

 

$$
\begin{bmatrix}
x_\text{pti}\\
y_\text{pti} \\
\end{bmatrix}=
\begin{cases}
\begin{bmatrix}
y_\text{t(i+1)} - y_\text{ti} & -(x_\text{t(i+1)} - x_\text{ti}) \\
y_\text{t(i+3)} - y_\text{t(i+2)} & -(x_\text{t(i+3)} - x_\text{t(i+2)}) \\
\end{bmatrix}^{-1}\begin{bmatrix}
(y_\text{t(i+1)} - y_\text{ti})x_\text{ti} - (x_\text{t(i+1)} - x_\text{ti})y_\text{ti} \\
(y_\text{t(i+3)} - y_\text{t(i+2)})x_\text{t(i+2)} - (x_\text{t(i+3)} - x_\text{t(i+2)})y_\text{t(i+2)} \\
\end{bmatrix}(i=9,11,13)\\
\begin{bmatrix}
y_\text{t(i+1)} - y_\text{ti} & -(x_\text{t(i+1)} - x_\text{ti})\\
y_\text{t(i-5)} - y_\text{t(i-6)} & -(x_\text{t(i-5)} - x_\text{t(i-6)}) \\
\end{bmatrix}^{-1}\begin{bmatrix}
(y_\text{t(i+1)} - y_\text{ti})x_\text{ti} - (x_\text{t(i+1)} - x_\text{ti})y_\text{ti}\\
(y_\text{t(i-5)} - y_\text{t(i-6)})x_\text{t(i-6)} - (x_\text{t(i-5)} - x_\text{t(i-6)})y_\text{t(i-6)} \\
\end{bmatrix}(i=15)
\end{cases}
$$

 

At this stage, we proceed to the equation for the central point of the TO defined in the measurement frame’s CS \( \left\{ O_f \rightarrow x_f \,\,\, y_f \,\,\,  z_f \right\} \), represented by \( O_\text{t}(\Delta x_\text{tf},\,\,\Delta y_\text{tf}) \). It is the average of the 4 corner points obtained from the intersected lines:

 

$$
[\Delta x_\text{tf}  \,\,\,\,\,\,\,  \Delta y_\text{tf}]^T =\frac {1}{4} \cdot \sum_\text{k=1}^4 [x_\text{pt(2k+7)} \,\,\,\,\,\,\, y_\text{pt(2k+7)}]^T
$$

 

Similarly, we can also calculate the rotational deviation of the TO relative to its original pose, expressed in the measurement frame’s CS \( \left\{ O_f \rightarrow x_f \,\,\, y_f \,\,\,  z_f \right\} \), which is:

 

$$
\Delta \theta_\text{tf} = \frac {1}{4} \cdot \sum_\text{k=1}^4 arctan(\frac{c_\text{2k+8}-c_\text{2k+8}^\text{ini} - c_\text{2k+7} + c_\text{2k+7}^\text{ini} }{D})
$$
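
To make this step concrete, below is a minimal Python sketch (using NumPy) of the computation just described: it takes the eight laser points \( (x_\text{ti}, y_\text{ti}) \) obtained from the case equations above, fits the four side lines, intersects adjacent sides to get the corners, averages them into \( (\Delta x_\text{tf}, \Delta y_\text{tf}) \), and evaluates the arctan formula for \( \Delta \theta_\text{tf} \). The names `points`, `c`, `c_ini`, and `D` are illustrative assumptions rather than part of the original design.

```python
import numpy as np

def side_line(p1, p2):
    """Line a*x + b*y = c through laser points p1 and p2, in the two-point
    form used above: a = y2 - y1, b = -(x2 - x1), c = a*x1 + b*y1."""
    a = p2[1] - p1[1]
    b = -(p2[0] - p1[0])
    return a, b, a * p1[0] + b * p1[1]

def corner(l1, l2):
    """Corner point P_t as the intersection of two adjacent side lines (2x2 solve)."""
    a1, b1, c1 = l1
    a2, b2, c2 = l2
    return np.linalg.solve(np.array([[a1, b1], [a2, b2]]), np.array([c1, c2]))

def to_deviation_in_frame(points, c, c_ini, D):
    """Deviation (dx_tf, dy_tf, dtheta_tf) of the TO in the frame's CS.
    `points` maps sensor index 9..16 to its laser point (x_ti, y_ti);
    `c` and `c_ini` map the same indices to current and initial readings."""
    # Side lines l_t9, l_t11, l_t13, l_t15, each through one pair of laser points.
    sides = {i: side_line(points[i], points[i + 1]) for i in (9, 11, 13, 15)}
    # Corners P_t9, P_t11, P_t13, P_t15 as intersections of adjacent sides.
    corners = [corner(sides[9], sides[11]), corner(sides[11], sides[13]),
               corner(sides[13], sides[15]), corner(sides[15], sides[9])]
    dx_tf, dy_tf = np.mean(corners, axis=0)   # centre O_t = mean of the 4 corners
    # Rotational deviation, averaged over the four sensor pairs (9,10) .. (15,16).
    dtheta_tf = np.mean([np.arctan(((c[2 * k + 8] - c_ini[2 * k + 8])
                                    - (c[2 * k + 7] - c_ini[2 * k + 7])) / D)
                         for k in range(1, 5)])
    return dx_tf, dy_tf, dtheta_tf
```

Feeding in the laser points of an undeviated square, with readings equal to their initial values, returns zero for all three components, which is a quick sanity check of the step.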

 

ii. Calculating the deviation of the measurement frame in the EF’s CS

Calculating the deviation of the measurement frame in the EF's CS
Relationship between the Coordination Systems
of the measurement frame and the end effector
Credit: ELITER Packaging Machinery

With a similar figure, we can interpret how the deviation between the measurement frame’s and the end-effector’s coordination systems is expressed. This is a more complex calculation step, because the results are calculated and transformed indirectly.

  • The actual pose of the EF is represented by the solid black square
  • The CCDs’ actual laser beams are represented by solid red lines (e.g., \(P_\text{e7}P_\text{f7} \))
  • The Coordination System of the End-Effector in its original pose is represented by \( \left\{ O'_e \rightarrow x'_e  \,\,\, y'_e \,\,\,  z'_e \right\} \)
  • The Coordination System of the Frame is represented by \( \left\{ O_f \rightarrow x_f  \,\,\, y_f \,\,\,  z_f \right\} \) and coincides with that of the End-Effector in its original pose
  • The actual pose of the EF is expressed by \( \left\{ O_e \rightarrow x_e  \,\,\, y_e \,\,\,  z_e \right\} \)
  • The vector expressing the deviation of the CS of the Measurement Frame in the CS of the End-Effector is \( \mathscr e _\text{fe} = [   \Delta x_\text{fe}   \,\,\, \Delta y_\text{fe} \,\,\,  \Delta \theta_\text{fe}  ]^T \)

The vector \( \mathscr e _\text{fe} \) that represents the deviation between these 2 coordination systems can be written as a Homogeneous Transformation Matrix (HTM):

$$ ^\text{e}T_f =
\begin{bmatrix}
cos(\Delta \theta_\text{fe}) & -sin(\Delta \theta_\text{fe}) & \Delta x_\text{fe} \\
sin(\Delta \theta_\text{fe}) & cos(\Delta \theta_\text{fe}) & \Delta y_\text{fe} \\
0 & 0 & 1
\end{bmatrix}
$$

which expresses and transforms a result in the coordination system of the measurement frame into the coordination system of the end-effector.

Vice versa, a result in the latter CS can be transformed into the former with the inverse matrix of \( ^\text{e}T_f \), defined as \( ^\text{f}T_e \), which is as follows:

$$ ^\text{f}T_e =
\begin{bmatrix}
cos(\Delta \theta_\text{fe}) & sin(\Delta \theta_\text{fe}) & -cos(\Delta \theta_\text{fe})\cdot \Delta x_\text{fe}   -sin(\Delta \theta_\text{fe})\cdot \Delta y_\text{fe} \\
-sin(\Delta \theta_\text{fe}) & cos(\Delta \theta_\text{fe}) & sin(\Delta \theta_\text{fe})\cdot \Delta x_\text{fe}   -cos(\Delta \theta_\text{fe})\cdot \Delta y_\text{fe} \\
0 & 0 & 1
\end{bmatrix}
$$
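
As a quick numerical check, the HTM and its closed-form inverse can be built and compared with NumPy; this is only a sketch, and the deviation values used here are assumed for illustration.

```python
import numpy as np

def htm(dtheta, dx, dy):
    """Planar homogeneous transformation matrix eT_f for a rotation dtheta
    and a translation (dx, dy), as defined above."""
    c, s = np.cos(dtheta), np.sin(dtheta)
    return np.array([[c, -s, dx],
                     [s,  c, dy],
                     [0,  0,  1]])

# Assumed example deviation of the frame in the EF's CS.
dtheta_fe, dx_fe, dy_fe = 0.02, 1.5, -0.8
eT_f = htm(dtheta_fe, dx_fe, dy_fe)

# The closed-form inverse fT_e given above ...
c, s = np.cos(dtheta_fe), np.sin(dtheta_fe)
fT_e = np.array([[ c, s, -c * dx_fe - s * dy_fe],
                 [-s, c,  s * dx_fe - c * dy_fe],
                 [ 0, 0,  1]])

# ... agrees with the numerical inverse.
assert np.allclose(fT_e, np.linalg.inv(eT_f))

# Transform a point from the EF's CS to the frame's CS and back.
p_e = np.array([10.0, 5.0, 1.0])          # homogeneous coordinates
p_f = fT_e @ p_e
assert np.allclose(eT_f @ p_f, p_e)
```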

With regard to the figure, we can acquire the positions corresponding to the original pose of the CCD sensors, expressed as \( (x'_\text{ei} ,\,\,\,  y'_\text{ei} ) \):

$$
x'_\text{ei} =
\begin{cases}
H/2 - (c_i - c^\text{ini}_i) & (i=1,2) \\
-H/2 - (c_i + c^\text{ini}_i) & (i=5,6) \\
d/2  & (i=3,8) \\
-d/2  & (i=4,7) \\
\end{cases}
$$

$$
y'_\text{ei} =
\begin{cases}
H/2 - (c_i - c^\text{ini}_i) & (i=3,4) \\
-H/2 - (c_i + c^\text{ini}_i) & (i=7,8) \\
d/2  & (i=2,5) \\
-d/2  & (i=1,6) \\
\end{cases}
$$

They can then be transformed into the coordination system of the measurement frame, defined as \( P_\text{ei} ( x_\text{ei}, \,\,\, y_\text{ei} ) \) and marked by the black points in the figure, with the following left-multiplication:

$$ [ x_\text{ei} \,\,\,\,  y_\text{ei} \,\,\,\, 1]^T =\, ^\text{f}T_e \cdot [ x'_\text{ei}\,\,\,\,  y'_\text{ei} \,\,\,\, 1]^T  $$

The laser beams from the CCD sensors, represented by the solid red lines such as \( P_\text{e1} P_\text{f1} \) with length \( c_i \), can be expressed as lines \( l'_i \) in the EF’s original CS as follows:

$$
l'_\text{i} :
\begin{cases}
x - d/2 = 0   & (i=3,8) \\
x + d/2 = 0   & (i=4,7) \\
y - d/2 = 0   & (i=2,5) \\
y + d/2 = 0   & (i=1,6) \\
\end{cases}
$$

We can utilize the HTM calculated above to transform these laser-beam lines into the coordination system of the measurement frame:

$$
l_\text{i} :
\begin{cases}
cos(\Delta \theta_\text{fe})\cdot x - sin(\Delta \theta_\text{fe})\cdot y + \Delta x_\text{fe} - d/2 = 0    & (i=3,8) \\
cos(\Delta \theta_\text{fe})\cdot x - sin(\Delta \theta_\text{fe})\cdot y + \Delta x_\text{fe} + d/2 = 0    & (i=4,7) \\
sin(\Delta \theta_\text{fe})\cdot x + cos(\Delta \theta_\text{fe})\cdot y + \Delta y_\text{fe} - d/2 = 0    & (i=2,5) \\
sin(\Delta \theta_\text{fe})\cdot x + cos(\Delta \theta_\text{fe})\cdot y + \Delta y_\text{fe} + d/2 = 0    & (i=1,6) \\
\end{cases}
$$

where \( cos(\Delta \theta_\text{fe})\cdot x - sin(\Delta \theta_\text{fe})\cdot y + \Delta x_\text{fe} \) and  \(  sin(\Delta \theta_\text{fe})\cdot x + cos(\Delta \theta_\text{fe})\cdot y + \Delta y_\text{fe} \) are acquired by left-multiplying \( [x \,\,\,\, y \,\,\,\, 1 ]^T  \) with \( ^\text{e}T_f \).

And the 4 sides of the measurement frame, \( l_\text{fk} \,\,\, (k=1,2,3,4) \), can be represented by the following equations:

$$
l_\text{fk} :
\begin{cases}
x - H/2 = 0   & (k=1) \\
y - H/2 = 0   & (k=2) \\
x + H/2 = 0   & (k=3) \\
y + H/2 = 0   & (k=4) \\
\end{cases}
$$

Hereby, we can now calculate the points on the measurement frame intersected by the laser beams from the CCD sensors on the pick-and-place robot’s end-effector. Expressed in the frame’s coordination system \( \left\{ O_f \rightarrow x_f  \,\,\, y_f \,\,\,  z_f \right\} \) and defined as \( P_\text{fi} (x_\text{fi},\,\,\, y_\text{fi}) \), they are found by combining the laser-beam equations \( l_i \,\, (i=1 \sim 8) \) with the equations of the measurement frame’s sides \( l_\text{fk} \,\, (k=1 \sim 4) \):

$$
[x_\text{fi},\,\,\, y_\text{fi} ]^T =
\begin{cases}
\text{intersection} (l_i,l_\text{f1})  & (i=1,2) \\
\text{intersection} (l_i,l_\text{f2})  & (i=3,4) \\
\text{intersection} (l_i,l_\text{f3})  & (i=5,6) \\
\text{intersection} (l_i,l_\text{f4})  & (i=7,8) \\
\end{cases}
$$

The length of the laser beam from each CCD sensor to the measurement frame, represented by the solid red lines, can be written as a vector between \( P_\text{ei}(x_\text{ei},\,\,\,y_\text{ei}) \) and \( P_\text{fi} ( x_\text{fi},\,\,\, y_\text{fi}) \), expressed in the coordination system of the frame with components along the \(x_f\)-axis and \(y_f\)-axis as follows:

$$
\begin{cases}
x_\text{fi} - x_\text{ei} = cos(\Delta \theta _\text{fe}) \cdot c_i , & y_\text{fi} - y_\text{ei} = -sin(\Delta \theta _\text{fe}) \cdot c_i & (i=1,2) \\
x_\text{fi} - x_\text{ei} = sin(\Delta \theta _\text{fe}) \cdot c_i , & y_\text{fi} - y_\text{ei} = cos(\Delta \theta _\text{fe}) \cdot c_i & (i=3,4) \\
x_\text{fi} - x_\text{ei} = -cos(\Delta \theta _\text{fe}) \cdot c_i , & y_\text{fi} - y_\text{ei} = sin(\Delta \theta _\text{fe}) \cdot c_i & (i=5,6) \\
x_\text{fi} - x_\text{ei} = -sin(\Delta \theta _\text{fe}) \cdot c_i , & y_\text{fi} - y_\text{ei} = -cos(\Delta \theta _\text{fe}) \cdot c_i & (i=7,8) \\
\end{cases}
$$

The full set of equations for CCD 1 ~ CCD 8 can be assembled into the following matrix form:

$$ \mathbf A_{16\times2} \cdot [\Delta x_\text{fe} \,\,\,\, \Delta y_\text{fe}]^T = \mathbf B_{16\times1}  $$

which can be solved in the least-squares sense as:

$$ [\Delta x_\text{fe} \,\,\,\, \Delta y_\text{fe}]^T = (\mathbf A^T  \mathbf A )^{-1} \mathbf A ^ T \mathbf B $$

Finally, the rotational deviation of the measurement frame in the coordination system of the end-effector can be expressed as:

$$
\Delta \theta_\text{fe} = \frac {1}{4} \cdot \sum_\text{k=1}^4 arctan(\frac{c_\text{2k}^\text{ini}-c_\text{2k} - c_\text{2k-1}^\text{ini} + c_\text{2k-1} }{d})
$$

With the last two equations, the deviation of the measurement frame in the EF’s CS is acquired and written as \( \mathscr e _\text{fe} = [   \Delta x_\text{fe}   \,\,\, \Delta y_\text{fe} \,\,\,  \Delta \theta_\text{fe}  ]^T \).
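
A brief sketch of how this solve might look in NumPy, under the assumption that the 16×2 coefficient matrix \( \mathbf A \) and the 16×1 right-hand side \( \mathbf B \) have already been assembled from the beam equations above (random placeholders stand in for them here), together with the averaged arctan formula for \( \Delta \theta_\text{fe} \):

```python
import numpy as np

# Placeholder A (16x2) and B (16x1); in practice they are assembled from the
# component-wise laser-beam equations for CCD 1 ~ CCD 8 above.
rng = np.random.default_rng(0)
A = rng.normal(size=(16, 2))
B = rng.normal(size=(16, 1))

# Least-squares solution equivalent to (A^T A)^-1 A^T B; lstsq is numerically
# preferable to forming the normal equations explicitly.
dx_fe, dy_fe = np.linalg.lstsq(A, B, rcond=None)[0].ravel()

def dtheta_fe(c, c_ini, d):
    """Rotational deviation of the frame in the EF's CS, averaged over the four
    sensor pairs of CCD 1 ~ CCD 8, following the arctan formula above.
    `c` and `c_ini` map sensor index 1..8 to current and initial readings."""
    return np.mean([np.arctan(((c_ini[2 * k] - c[2 * k])
                               - (c_ini[2 * k - 1] - c[2 * k - 1])) / d)
                    for k in range(1, 5)])
```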

iii. Calculating the deviation of TO in the EF’s CS

The above steps have obtained:

  • the deviation of the target object in the coordination system of the measurement frame, \( \mathscr e _\text{tf} = [   \Delta x_\text{tf}   \,\,\, \Delta y_\text{tf} \,\,\,  \Delta \theta_\text{tf}  ]^T \)
  • the deviation of the measurement frame in the coordination system of the end effector, \( \mathscr e _\text{fe} = [   \Delta x_\text{fe}   \,\,\, \Delta y_\text{fe} \,\,\,  \Delta \theta_\text{fe}  ]^T \)

The deviation of the target object in the coordination system of the end-effector can be obtained with the figure shown below, which expresses the relationship between the 3 CSs:

  • \( \left\{ O_e \rightarrow x_e  \,\,\, y_e \,\,\,  z_e \right\} \)
  • \( \left\{ O_f \rightarrow x_f  \,\,\, y_f \,\,\,  z_f \right\} \)
  • \( \left\{ O_t \rightarrow x_t  \,\,\, y_t \,\,\,  z_t \right\} \)

Calculating the deviation of TO in the EF's CS
The relationship between the 3 components’ coordination system
Credit: ELITER Packaging Machinery

The Homogeneous Transformation Matrix that transforms between \( \left\{ O_e \rightarrow x_e  \,\,\, y_e \,\,\,  z_e \right\} \) and \( \left\{ O_f \rightarrow x_f  \,\,\, y_f \,\,\,  z_f \right\} \) has been given previously and is as follows:

 

$$^\text{e}T_f =
\begin{bmatrix}
cos(\Delta \theta_\text{fe}) & -sin(\Delta \theta_\text{fe}) & \Delta x_\text{fe} \\
sin(\Delta \theta_\text{fe}) & cos(\Delta \theta_\text{fe}) & \Delta y_\text{fe} \\
0 & 0 & 1
\end{bmatrix}
$$

 

while the similar transformation matrix for \( \left\{ O_f \rightarrow x_f  \,\,\, y_f \,\,\,  z_f \right\} \) and \( \left\{ O_t \rightarrow x_t  \,\,\, y_t \,\,\,  z_t \right\} \) can be calculated as follows:

 

$$^\text{f}T_t =
\begin{bmatrix}
cos(\Delta \theta_\text{tf}) & -sin(\Delta \theta_\text{tf}) & \Delta x_\text{tf} \\
sin(\Delta \theta_\text{tf}) & cos(\Delta \theta_\text{tf}) & \Delta y_\text{tf} \\
0 & 0 & 1
\end{bmatrix}
$$

 

Combining the above 2 HTMs, the one between \( \left\{ O_e \rightarrow x_e  \,\,\, y_e \,\,\,  z_e \right\} \) and \( \left\{ O_t \rightarrow x_t  \,\,\, y_t \,\,\,  z_t \right\} \) can be obtained as:

 

$$^\text{e}T_t = \,^\text{e}T_f   \,^\text{f}T_t = \\
\begin{bmatrix}
cos(\Delta \theta _\text{fe} + \Delta \theta _\text{tf}) &  -sin(\Delta \theta _\text{fe} + \Delta \theta _\text{tf})  & cos(\Delta \theta _\text{fe}) \cdot \Delta x_\text{tf} - sin(\Delta \theta_\text{fe}) \cdot \Delta y_\text{tf} +  \Delta x_\text{fe}\\
sin(\Delta \theta _\text{fe} + \Delta \theta _\text{tf}) &  cos(\Delta \theta _\text{fe} + \Delta \theta _\text{tf})  & sin(\Delta \theta _\text{fe}) \cdot \Delta x_\text{tf} + cos(\Delta \theta_\text{fe}) \cdot \Delta y_\text{tf} +  \Delta y_\text{fe}\\
0 & 0 & 1
\end{bmatrix}
$$

 

With this equation, the final result for inspecting the deviation of such a pick-and-place robot when handling the square object (the TO), expressed in the coordination system of the end-effector, is defined as:

 

$$ \mathscr e _\text{te} = [   \Delta x_\text{te}   \,\,\, \Delta y_\text{te} \,\,\,  \Delta \theta_\text{te}  ]^T  =
\begin{cases}
\Delta x_\text{te} = cos(\Delta \theta _\text{fe}) \cdot \Delta x_\text{tf} - sin(\Delta \theta _\text{fe}) \cdot \Delta y_\text{tf} + \Delta x_\text{fe} \\
\Delta y_\text{te} = sin(\Delta \theta _\text{fe}) \cdot \Delta x_\text{tf} + cos(\Delta \theta _\text{fe}) \cdot \Delta y_\text{tf} + \Delta y_\text{fe} \\
\Delta \theta _\text{te} = \Delta \theta _\text{fe} + \Delta \theta _\text{tf}
\end{cases}
$$

 

Bibliography

 

Appendix

DOF (Degree of Freedom): In the context of robotics or mechanical systems, a degree of freedom is one of the independent ways in which a system can move or change its configuration. The number of DOFs is the number of variables or parameters needed to describe the system’s position or state.

In robotics, DOF describes the number of independent axes or joints that a robot or robotic arm has. Each joint provides a specific degree of freedom, allowing the arm to move in a particular direction or rotate around an axis. For example, a robotic arm with three joints would be described as having three degrees of freedom.

CS (Coordination System): A coordination system is a defined framework used to describe the positions and orientations of objects in a specific space. It provides a reference point or origin, axes, and units of measurement to establish a common coordinate framework for applications such as robotics, computer graphics, computer-aided design (CAD), and spatial analysis.

CCD Sensors: CCD stands for Charge-Coupled Device. CCD sensors are electronic devices used to capture and convert optical images into digital signals. CCD sensors consist of an array of tiny light-sensitive elements called pixels. Each pixel converts incoming light into an electrical charge proportional to the intensity of the light.

EF (End Effector): End Effector refers to the tool or device placed at the end of a robotic arm or manipulator. The end effector is the part of the robot that directly interacts with the objects or materials being manipulated or worked upon.

                               
About the Authors

Zixin Yuan
Digital Marketing Coordinator
zixin.yuan@eliter-packaging.com
Zixin Yuan - LinkedIn



Zhiwei Bao
Company Owner
zhiwei.bao@eliter-packaging.com
Zhiwei Bao - LinkedIn


About the Company
ÉLITER Packaging Machinery Co., Ltd
No.1088, Jing Ye Rd, Economic Development Zone, Dong Shan District, Ruian Wenzhou Zhejiang, China 325200
+86 (0577) 6668 2128
info@Eliter-Packaging.com
ÉLITER Packaging Machinery