Visual Servoing Control for Robot Arm

Mr. Atthaphan Paksakunnee 6230350483

Mr. Tathipan Chaiwattanapan 6230340038

Mr. Bantoon Saengpairao 6230340054

03607499 Engineering Project for Robotics and Automation System III
This project is part of the Bachelor of Engineering curriculum.
Program in Robotic and Automation Systems Engineering (International), Faculty of Engineering, Kasetsart University Sriracha Campus
Academic Year 2023
June 30, 2023
Engineering project certificate
Robotic and Automation System Engineering (International)
Project name: Visual Servoing Controller for Robot Arm
By: Mr. Bantoon Saengpairao 6230340054
Mr. Atthaphan Paksakunnee 6230305089
Mr. Tathipan Chaiwattanapan 6230340038
Degree: Bachelor of Engineering
Major: Robotic and Automation System Engineering (International)
B.E. 2023
Project Advisor: Asst. Prof. Dr. Kittipong Yaovaja

Faculty of Engineering, Kasetsart University Sriracha Campus. This project was approved as part of the Bachelor of Engineering degree program.
…………………………………………….. Advisor
Asst. Prof. Dr. Kittipong Yaovaja


Acknowledgment
This project was completed thanks to the contributions of many benefactors. The project team would like to express their gratitude to the faculty members who provided advice, procured various equipment, and helped troubleshoot any defects that arose, which made it possible for the project to be completed successfully.
We would like to thank the Faculty of Engineering, Kasetsart University Sriracha Campus, and all personnel who kindly provided the location and facilities necessary for the project to be carried out smoothly and successfully.
Finally, we would like to express our deepest gratitude to our parents, relatives, and all those who have been a constant source of encouragement. We would also like to extend our thanks to fellow students and all those who provided assistance and advice throughout the project.
Regards
Bantoon Saengpairao
Atthaphan Paksakunnee
Tathipan Chaiwattanapan

Project name: Visual Servoing Controller for Robot Arm
By: Mr. Bantoon Saengpairao 6230340054
Mr. Atthaphan Paksakunnee 6230305089
Mr. Tathipan Chaiwattanapan 6230340038
Degree: Bachelor of Engineering
Major: Robotic and Automation System Engineering (International)
B.E. 2023
Project Advisor: Asst. Prof. Dr. Kittipong Yaovaja


Abstract
In this project, we developed a visual servoing controller for a robotic arm. The goal was to create a controller that allows the robot arm to accurately track a moving target using visual feedback. The controller was designed to work in real time, using images captured by a camera mounted on the robot arm.
The visual servo controller consisted of a feature extractor, which uses computer vision techniques to detect and track the target in the camera image, and a control system, which uses the MoveIt package in ROS to adjust the robot arm's position in real time based on the target's location. Its performance was evaluated through a series of experiments in a simulated environment, using an ABB IRB120 robot communicating with the ROS program.
Overall, the results demonstrate the feasibility and effectiveness of using visual servo control for robotic arm tracking. This technology has potential applications in a range of fields, such as manufacturing, logistics, and medical robotics, where precise and efficient tracking of moving targets is important. Further research is needed to optimize the controller’s performance and explore its potential applications.

TABLE OF CONTENTS
CONTENTS
INTRODUCTION
1.1 Introduction
1.2 Objective
1.3 Scope of the project
1.4 Assigned duties
1.5 Research Schedule and Detailed Activity
1.6 Expected Benefits
CHAPTER 2
Theories
2.1 Ubuntu
2.1.1 Ubuntu 20.04 version
2.2 ROS
2.2.1 ROS Noetic Ninjemys
2.2.2 Basic ROS
2.2.3 MoveIt/RViz
2.3 RealSense D435i depth camera
2.3.1 RealSense SDK 2.0
2.4 Aruco marker
2.4.1 Marker Detection
2.4.2 Pose Estimation
2.5 Industrial robots
2.5.1 ABB
2.6 Robotic eye-in-hand
CHAPTER 3
Materials and Methods
3.1 VirtualBox
3.1.1 Install and setup Ubuntu
3.2 ROS
3.2.1 Install and setup ROS
3.3 ROS with ABB
3.3.1 Install and setup Package Summary
3.3.2 Simulation ABB with RViz and Gazebo
3.3.3 Simulation ABB with RViz and Robot Studio
3.3.4 Simulation ABB with RViz and Real robot IRB120
3.4 RealSense D435i depth camera
3.5 Aruco marker detection
3.6 Visual servoing
3.8 Collision Safety
3.9 Combine program
CHAPTER 4
Results
4.1 Flow chart
4.2 Results Visual servoing
CHAPTER 5
Conclusion
5.1 Performance Summary
5.2 Problems and suggestions
References

LIST OF TABLES
Table 1 Create Tasks
Table 2 Create Signals
Table 3 Tie Signals to the System Outputs
Table 4 Load Modules to Tasks

LIST OF FIGURES
Figure 1 Ubuntu logo
Figure 2 Ubuntu website
Figure 3 Ubuntu 20.04 version
Figure 4 Ubuntu 20.04.6 LTS
Figure 5 ROS logo
Figure 6 ROS website
Figure 7 ROS Community
Figure 8 ROS Noetic Ninjemys
Figure 9 ROS Noetic installation
Figure 10 MoveIt logo
Figure 11 RViz interface
Figure 12 Transform tree
Figure 13 RealSense D435i depth camera
Figure 14 RealSense D435i depth camera structure
Figure 15 Example of Aruco markers
Figure 16 Aruco detection
Figure 17 Aruco pose estimate
Figure 18 Industrial robot
Figure 19 ABB logo
Figure 20 ABB IRB120
Figure 21 Robotic eye-in-hand
Figure 22 VirtualBox logo
Figure 23 VirtualBox 7.0.8 platform packages download Windows hosts
Figure 24 Oracle VM VirtualBox Manager
Figure 25 Create VM
Figure 26 Choose the base memory VM
Figure 27 Virtual Hard Disk Size
Figure 28 Virtual summary
Figure 29 Setting VM
Figure 30 Storage Ubuntu
Figure 31 Click Start Ubuntu
Figure 32 Ubuntu 20.04 Linux
Figure 33 check rosversion
Figure 34 catkin workspace
Figure 35 ROS-Industrial Overview
Figure 36 ABB Experimental files
Figure 37 ABB IRB 120 manipulator files
Figure 38 ABB with RViz and Gazebo
Figure 39 OMPL RRTConnectkConfigDefault.
Figure 40 Planning box
Figure 41 RobotStudio IRB120
Figure 42 Add controller RobotStudio
Figure 43 Controller options
Figure 44 Copy the ROS files to the folder RobotStudio created
Figure 45 Network Connection windows
Figure 46 Ipconfig command prompt
Figure 47 Setting the VM network
Figure 48 ROS_socket.sys RAPID
Figure 49 Open the FlexPendant
Figure 50 FlexPendant running in MANUAL
Figure 51 Create Tasks on FlexPendant
Figure 52 Create Tasks on RobotStudio
Figure 53 Create Signals on FlexPendant
Figure 54 Create Signals on RobotStudio
Figure 55 Signals to the System Outputs on FlexPendant
Figure 56 Signals to the System Outputs on RobotStudio
Figure 57 Load Modules to Tasks on FlexPendant
Figure 58 Load Modules to Tasks on RobotStudio
Figure 59 Editor select Tasks and programs set modules
Figure 60 AUTO mode FlexPendant
Figure 61 FlexPendant waiting for connection
Figure 62 Ping 192.168.56.1 on Ubuntu
Figure 63 abb_irb120_moveit
Figure 64 FlexPendant connection with ROS
Figure 65 RobotStudio and MoveIt
Figure 66 RobotStudio and MoveIt communication
Figure 67 IRC5 controller
Figure 68 IP server socket "192.168.125.1"
Figure 69 Ethernet Status. IP 192.168.125.1
Figure 70 IP 192.168.125.2 Subnet mask 255.255.255.0
Figure 71 "Add controller" to connect IP
Figure 72 IRC5 controller port connected
Figure 73 Request write access FlexPendant
Figure 74 Create Relation
Figure 75 Create Relation type
Figure 76 Transfer all data
Figure 77 Transfer Summary
Figure 78 Reset RAPID (P-start).
Figure 79 FlexPendant notification: connection waiting
Figure 80 Network setting on Ubuntu
Figure 81 Set the IPv4 Method
Figure 82 Toggling the Ethernet connection on Ubuntu
Figure 83 Ethernet connected Ubuntu
Figure 84 FlexPendant connection
Figure 85 the IRC5 controller auto mode
Figure 86 Data points to FlexPendant
Figure 87 Robot Plan & Execute the position axis.
Figure 88 USB connect the camera
Figure 89 D435i list of topics
Figure 90 Aruco marker detection program
Figure 91 Collision Safety.
Figure 92 Collision object warning
Figure 93 Flow chart Visual servoing
Figure 94 Visual servoing working
Figure 95 Visual servoing working in program.
Figure 96 Visual servoing with program

CHAPTER 1
INTRODUCTION
1.1 Introduction
Robotics has become an important field of study with its vast applications in various industries. In recent years, there has been growing interest in developing visual servoing controllers for robotic arms to enhance their ability to interact with and manipulate objects in the environment. One approach to achieving this goal is to use RViz in ROS, which is a popular tool for robot visualization and control.
In this report, we present the design and implementation of a visual servo controller for a robotic arm controlled by RViz on ROS. The controller was developed using computer vision techniques to detect and track a moving target in real-time, based on images captured by a camera mounted on the robot arm. The control system used RViz in ROS to adjust the robot arm’s position in real-time based on the target’s location in the image.
The report will detail the development process of the visual servoing controller, including the feature extractor and control system components. We will also present the results of experiments conducted to evaluate the performance of the controller, using ABB robots in a simulated environment to test the RViz/ROS program.
1.2 Objective
1. To develop a visual servoing controller for robotic arms using RViz in ROS, suitable for industrial robots such as ABB's.
2. To develop a visual servoing controller that utilizes computer vision techniques to detect and track a moving target in real time.
3. To evaluate the performance of the visual servoing controller through experiments in a simulated environment and explore its potential applications in ROS Noetic.

1.3 Scope of the project

  1. To develop and test a visual servoing controller for a robotic arm using RViz with ROS Noetic in Ubuntu 20.04.
  2. The controller will be designed to track a moving target in real-time using a RealSense D435i depth camera.
  3. The project will focus on evaluating ArUco marker detection performance in a simulated environment and exploring the controller's potential applications in ROS Noetic.

1.4 Assigned duties
Mr. Tathipan Chaiwattanapan:
• Configure the controller and transfer data to the IRC5 controller.
• Set up the IP network for communication between the ABB IRB120 robot and ROS.
• Test the robot simulation programs in Gazebo and RobotStudio, and on the real IRB120 robot.
Mr. Atthaphan Paksakunnee:
• Detect ArUco markers using the RealSense D435i depth camera.
• Create a camera axis attached to the robot and retrieve the position values (XYZ) of the detected marker.
• Combine the robot code and camera code to work together.

1.5 Research Schedule and Detailed Activity
Table 1 shows the workflow process for each month, from January (month 1) to April (month 4). The activities were:
Find information about a program that can create a visual servoing controller for a robot arm.
Find and install the necessary study materials on Ubuntu and ROS systems
Begin learning and experimenting with the Dynamixel robotic arm and programming RViz to control its movement.
Study and experiment with both the NUBWO camera system with ROS, as well as the RealSense depth camera D435i system.
Learn and experiment with the AUBO robot and ROS.
Shift focus to studying the ABB IRB120 robot with ROS.
Learn how to connect the IP address from the IRC5 controller to Ubuntu.
Successfully detect ArUco markers using the RealSense depth camera D435i.
Create a camera axis attached to the robot body in the RViz program and read the position (XYZ) of the detected marker.
Use RViz to control a real ABB IRB120 robot through ROS.
Combine the robot code and camera code to create a visual servoing controller for the robot arm.

1.6 Expected Benefits

  1. The visual servoing controller developed using ROS and tested with ABB robots is expected to enhance robotic arm movements in diverse industries.
  2. Real-time tracking using a depth camera for the visual servoing controller helps robots respond efficiently to environment changes, improving performance and productivity.
  3. Local IP address connectivity allows integration with robot systems and devices.

CHAPTER 2
Theories
Chapter 2 covers the theory behind industrial robots and the software used in this project, such as Ubuntu, ROS, and the depth camera.

2.1 Ubuntu

Figure 1 Ubuntu logo
Ubuntu is a widely-used open-source operating system that is based on the Linux kernel. It was initially released in 2004 and has since gained popularity among users for its ease of use, stability, and security. The name “Ubuntu” is derived from a South African philosophy that emphasizes the interconnectedness of all things. One of the key features of Ubuntu is its package management system, which allows users to easily install, update, and remove software from their systems. Ubuntu also includes a graphical user interface (GUI) that is user-friendly and customizable.

Figure 2 Ubuntu website
Ubuntu is popular among developers, system administrators, and other technical users due to its command-line interface (CLI) and support for a wide range of programming languages and tools. Additionally, Ubuntu is used in many enterprise environments due to its stability and security features, including mandatory access control and full disk encryption. Ubuntu is constantly updated and improved by a large community of developers and users, and new versions are released every six months. These updates often include new features, security patches, and bug fixes, ensuring that Ubuntu remains a reliable and secure operating system for its users.
In conclusion, Ubuntu is a powerful and versatile operating system that is widely used in a variety of settings. Its ease of use, stability, and security make it an excellent choice for both personal and professional use, and its open-source nature ensures that it will continue to be updated and improved by its community of users and developers.

2.1.1 Ubuntu 20.04 version

Figure 3 Ubuntu 20.04 version
Ubuntu 20.04 is a Linux distribution that was released on April 23, 2020. It is the latest long-term support (LTS) version of the Ubuntu operating system and is expected to receive support and updates until 2025. Ubuntu is one of the most popular Linux distributions, known for its ease of use, stability, and community support.

Finally, Ubuntu 20.04 includes many new features and improvements for developers. The OS ships with the latest versions of popular programming languages, such as Python 3.8 and PHP 7.4. Ubuntu 20.04 also includes a built-in toolchain for developing and debugging applications for the Snapcraft package format. This makes it easier for developers to create and distribute their applications on Ubuntu.

Figure 4 Ubuntu 20.04.6 LTS
In conclusion, Ubuntu 20.04 is a significant update to the popular Linux distribution, offering improved performance, stability, and features. Its updated user interface, new kernel, and updated applications make it an excellent choice for both desktop and server use. Additionally, the OS provides a great environment for developers with the latest tools and features needed for creating and distributing applications.

2.2 ROS

Figure 5 ROS logo
ROS, which stands for Robot Operating System, is an open-source software framework for building robotic systems. It was first developed by Willow Garage in 2007 and has since become the standard for building robotic systems across a wide range of applications, including autonomous vehicles, drones, industrial automation, and more. At its core, ROS is a middleware that provides a set of libraries and tools that allow robots to communicate with each other and with their environments. It provides a set of standard interfaces for sensors, actuators, and other hardware components, as well as a set of tools for processing and analyzing sensor data, controlling robot motion, and managing system-level tasks.

Figure 6 ROS website
One of the key benefits of ROS is its modular architecture, which allows developers to build complex robotic systems by combining and reusing existing components. This modularity makes it easy to build and test individual components in isolation, simplifies system integration, and allows for easy sharing and collaboration among developers. ROS also includes a powerful visualization system that allows developers to visualize and interact with the data generated by their robots in real-time. This includes 2D and 3D visualization tools, as well as tools for debugging and tuning robot behavior.

Figure 7 ROS Community
Another key feature of ROS is its support for distributed computing. ROS provides a set of tools for running robotic systems across multiple computers, allowing developers to take advantage of the processing power of multiple machines and distribute the workload of complex applications. Finally, ROS has a large and active community of developers and users, who contribute to the development and improvement of the framework and share their knowledge and expertise with others. This community provides a wealth of resources, including documentation, tutorials, and support forums, making it easier for developers to get started with ROS and to overcome any challenges they may encounter.

2.2.1 ROS Noetic Ninjemys
ROS Noetic Ninjemys is the latest stable version of the Robot Operating System (ROS) framework, released in May 2020. It is the successor to ROS Melodic Morenia and is named after Ninjemys, an extinct genus of giant turtle. One of the most significant updates in ROS Noetic is its support for Python 3. While previous versions of ROS were based on Python 2, ROS Noetic is built using Python 3. This update provides improved performance and support for modern Python libraries, as well as the latest features and tools provided by the Python language.

Figure 8 ROS Noetic Ninjemys
Another major update in ROS Noetic is its support for the latest versions of the Gazebo simulator and the MoveIt! motion planning framework. These updates provide developers with a more powerful and flexible simulation environment for testing and developing robotic systems, as well as improved motion planning and control capabilities. ROS Noetic also includes updates and improvements to several core packages, including the rosbag package for recording and playing back ROS messages, the RViz visualization tool, and the navigation stack for autonomous navigation. These updates provide improved performance, stability, and features to these core components, making it easier for developers to build complex robotic systems.

Figure 9 ROS Noetic installation
Additionally, ROS Noetic includes a set of new packages and libraries that provide support for a range of new hardware and sensors, such as the Intel RealSense camera and the Velodyne LiDAR sensor. These new packages allow developers to take advantage of the latest hardware and sensors in their robotic systems, opening up new possibilities for applications such as autonomous driving and robotics research. Overall, ROS Noetic Ninjemys offers several updates and improvements to the ROS framework, including support for Python 3, updated versions of core packages and tools, and new packages for hardware and sensors. These updates make ROS Noetic a powerful and flexible tool for building complex robotic systems, with a modular architecture, powerful visualization tools, support for distributed computing, and a large and active community of developers and users.

2.2.2 Basic ROS
A ROS node is a process that performs a specific task and communicates with other nodes via messages. Nodes are the building blocks of ROS applications, and they can be written in various programming languages such as C++, Python, and Java. Nodes can publish messages to topics or subscribe to topics to receive messages from other nodes. They can also provide and use services to perform specific tasks. Nodes in ROS can run on a single machine or be distributed across multiple machines, providing a flexible and scalable framework for building robotic systems.
A ROS topic is a named bus over which nodes exchange messages. Topics enable communication between nodes in a publish-subscribe architecture, where nodes can publish messages to a topic or subscribe to receive messages from a topic. Messages can be of any type, such as sensor data, control commands, or status updates. Topics can also be visualized using tools such as rostopic and rqt_graph, making it easier to understand the communication between nodes in a ROS system.
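As a minimal sketch of this publish-subscribe model (the topic name "chatter" and the message contents are illustrative, not taken from this project's code):

#!/usr/bin/env python3
# Minimal ROS Noetic publisher: sends a String message on the "chatter" topic at 10 Hz.
import rospy
from std_msgs.msg import String

def talker():
    pub = rospy.Publisher('chatter', String, queue_size=10)
    rospy.init_node('talker', anonymous=True)
    rate = rospy.Rate(10)  # publish at 10 Hz
    while not rospy.is_shutdown():
        pub.publish(String(data='hello from the talker node'))
        rate.sleep()

if __name__ == '__main__':
    try:
        talker()
    except rospy.ROSInterruptException:
        pass

A matching subscriber node registers a callback with rospy.Subscriber('chatter', String, callback) and then calls rospy.spin() to process incoming messages.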
The ROS core system provides a set of tools and libraries that enable communication between nodes, as well as support for common functionality such as message passing, visualization, and hardware drivers. The core system includes components such as roscore, roslaunch, rostopic, and RViz, which are essential for building ROS applications. The core system is designed to be modular and extensible, allowing developers to add their own packages and libraries to build custom functionality on top of it.
ROS catkin workspace is a directory where ROS packages are built and installed. Catkin is the build system used in ROS, and it provides a unified way to build, test, and install ROS packages. A catkin workspace contains a src directory where the source code for ROS packages is stored, as well as a devel directory where the built packages are installed. Catkin workspaces enable developers to manage dependencies between packages and build custom ROS distributions. They also provide tools for managing and releasing packages, such as catkin_make, catkin_tools, and bloom. Overall, catkin workspaces are an essential tool for building and managing ROS applications.

2.2.3 MoveIt/RViz
In ROS, MoveIt! is a powerful motion planning framework that provides tools for planning, executing, and monitoring robotic motion. MoveIt! is designed to work with a wide range of robot platforms, making it a popular choice for building complex robotic systems. One of the key features of MoveIt! is its integration with RViz, a 3D visualization tool for ROS. RViz provides a graphical user interface for visualizing and interacting with robot models, planning trajectories, and monitoring the execution of motion plans. MoveIt! and RViz work together to provide a powerful and flexible framework for motion planning and control in ROS, enabling developers to build sophisticated robotic applications with ease.

Figure 10 MoveIt logo
Motion planning refers to the process of generating a sequence of movements for a robot to achieve a specific task or goal. This can include tasks such as navigating through an environment, manipulating objects, or performing a series of coordinated motions. ROS provides several motion planning frameworks, including MoveIt!, which provides a set of tools for planning, executing, and monitoring robotic motion. MoveIt! uses a variety of motion planning algorithms to generate feasible and optimized motion plans for robot systems. These algorithms include geometric planning, sampling-based planning, and optimization-based planning. ROS also provides tools for simulating robot motion, enabling developers to test and refine their motion plans before deploying them on a physical robot. Overall, ROS provides a powerful and flexible framework for motion planning in robotic systems, enabling developers to build sophisticated applications with ease.

Figure 11 RViz interface
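For reference, a minimal sketch of commanding such a motion plan through MoveIt's Python interface, moveit_commander (this assumes a running move_group node; the planning group name "manipulator" is an assumption, matching the usual name in ABB MoveIt configurations):

#!/usr/bin/env python3
# Minimal MoveIt sketch: plan and execute a small joint-space motion.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('moveit_sketch', anonymous=True)

group = moveit_commander.MoveGroupCommander('manipulator')  # assumed group name
joints = group.get_current_joint_values()
joints[0] += 0.2              # rotate the first joint by 0.2 rad
group.go(joints, wait=True)   # plan and execute in one call
group.stop()                  # guard against residual movement
moveit_commander.roscpp_shutdown()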
MoveIt also relies on TF (short for Transform), a powerful ROS library for managing coordinate frame transforms in 3D space. TF provides a way to define a hierarchy of coordinate frames that represent the position and orientation of objects in a robot system. These frames can represent physical components of the robot, such as joints and sensors, as well as objects in the robot's environment, such as tables and walls. TF provides tools for transforming points and vectors between coordinate frames, allowing developers to easily track the position and orientation of objects in a robot system. TF also provides tools for interpolating between frames, smoothing noisy sensor data, and managing multiple frames of reference. Overall, TF is an essential tool for managing coordinate frames in ROS, enabling developers to build complex robotic systems with ease.

Figure 12 Transform tree
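A minimal sketch of looking up one of these transforms at runtime (the frame names "base_link" and "camera_link" are illustrative assumptions):

#!/usr/bin/env python3
# Minimal TF sketch: look up the pose of the camera frame relative to the robot base.
import rospy
import tf

rospy.init_node('tf_sketch')
listener = tf.TransformListener()
listener.waitForTransform('base_link', 'camera_link',
                          rospy.Time(0), rospy.Duration(4.0))
(trans, rot) = listener.lookupTransform('base_link', 'camera_link', rospy.Time(0))
rospy.loginfo('translation: %s, quaternion: %s', str(trans), str(rot))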

2.3 RealSense D435i depth camera

Figure 13 RealSense D435i depth camera
The RealSense D435i is a depth camera developed by Intel. It is part of the Intel RealSense product line, which includes various cameras and depth sensing technologies.
The D435i is an upgraded version of the D435 camera and incorporates additional inertial measurement unit (IMU) sensors. These IMU sensors enable the camera to capture not only depth information but also data related to motion and orientation. This combination of depth sensing and inertial data allows for more advanced applications in robotics, augmented reality, virtual reality, and other fields that require precise spatial understanding.

Figure 14 RealSense D435i depth camera structure
The D435i camera uses active infrared stereo to capture depth information. It projects a pattern of infrared light onto the scene and computes depth by matching that pattern between its two infrared imagers; from the disparity between the two views, the camera calculates the depth of different points in the scene.
The D435i also includes an RGB (color) sensor. The RGB images can be combined with the depth data to create a complete 3D representation of the environment.

2.3.1 RealSense SDK 2.0
The RealSense SDK 2.0 (Software Development Kit) is a software package developed by Intel Corporation. It is designed to enable developers to incorporate Intel RealSense technology into their applications and projects. Intel RealSense technology includes a combination of depth sensing, motion tracking, and image processing capabilities.
The RealSense SDK 2.0 is a cross-platform library for Intel® RealSense™ depth cameras (D400 & L500 series and the SR300) and the T265 tracking camera.
The RealSense SDK 2.0 provides APIs (Application Programming Interfaces) and tools for accessing and utilizing the various features of Intel RealSense depth cameras. These depth cameras capture depth information along with color and infrared data, allowing for the creation of immersive augmented reality experiences, gesture recognition, 3D scanning, facial analysis, and more.
The RealSense SDK 2.0 supports multiple programming languages, including C++, C#, Python, and Java, making it accessible to a wide range of developers. It provides a range of functionalities and features, such as depth stream access, hand and finger tracking, face tracking and recognition, object scanning, and background segmentation.
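As a minimal sketch of reading depth data through the SDK's Python binding, pyrealsense2 (the stream settings here are common defaults, not this project's exact configuration):

# Minimal pyrealsense2 sketch: read one depth value from a connected D435i.
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # distance in meters at the center pixel of the depth image
    print('depth at center: %.3f m' % depth.get_distance(320, 240))
finally:
    pipeline.stop()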

2.4 Aruco marker
Aruco markers are a type of fiducial marker used in computer vision applications, particularly in augmented reality (AR) and robotics. They are square or rectangular markers that consist of a black and white grid pattern. The patterns are designed to be easily detectable and identifiable by computer vision algorithms.
The name "ArUco" comes from the augmented reality library developed at the University of Cordoba (Spain), from which the markers take their name. Aruco markers are widely used for camera pose estimation, object tracking, and localization in AR applications. They provide a way for a computer vision system to recognize and track the position and orientation of the marker in real-time.
This is a random selection of Aruco markers, the kind of markers which we shall endeavor to detect in images:

Figure 15 Example of Aruco markers
An Aruco marker refers to a synthetic square marker characterized by a prominent black border surrounding an inner binary matrix, which serves as its identifier (id). The black border serves the purpose of enabling swift detection within an image, while the binary encoding enables identification and facilitates the implementation of error detection and correction techniques. The dimensions of the marker correspond to the size of the internal matrix. For example, a marker size of 4×4 consists of 16 bits.
It is important to highlight that a marker can be detected in various orientations within the environment. However, the detection process must possess the ability to determine the marker’s original rotation, ensuring the unambiguous identification of each corner. This rotation determination is also achieved based on the marker’s binary codification.

In a specific application, a dictionary of markers comprises the collection of markers that are utilized. Essentially, it is a straightforward compilation of the binary codifications associated with each individual marker in the set.
The main properties of a dictionary are the dictionary size and the marker size.
• The dictionary size is the number of markers that compose the dictionary.
• The marker size is the size of those markers (the number of bits).
The Aruco module includes some predefined dictionaries covering a range of different dictionary sizes and marker sizes.
It may be initially assumed that the marker identifier corresponds to a decimal number obtained by converting the binary codification. However, this approach becomes impractical for markers with high sizes where the number of bits becomes excessively large. Managing such huge numbers becomes cumbersome and inefficient. Instead, the marker identifier is simply defined as the index of the marker within the dictionary to which it belongs.
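As a minimal sketch, a marker image can be generated from one of these predefined dictionaries with OpenCV's aruco module (this uses the pre-4.7 OpenCV API, which matches the ROS Noetic era; the dictionary, id, and pixel size are arbitrary examples):

# Minimal sketch: generate marker id 23 from the 4x4 dictionary of 50 markers.
# Requires opencv-contrib-python; newer OpenCV (4.7+) renamed these calls.
import cv2

dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
marker = cv2.aruco.drawMarker(dictionary, 23, 200)  # id 23, 200x200 pixels
cv2.imwrite('marker_23.png', marker)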
2.4.1 Marker Detection
Given an image containing Aruco markers, the detection process has to return a list of detected markers. Each detected marker includes:
• The position in the image
• The id of the marker.
The process of detecting markers involves two primary steps:
I. Detection of marker candidates: In this step, the image is analyzed to identify square-shaped objects that have the potential to be markers. The process begins with adaptive thresholding to segment the markers, followed by the extraction of contours from the thresholded image. Contours that are not convex or do not approximate a square shape are discarded. Additional filtering techniques are applied, such as removing contours that are too small or too large, or eliminating contours that are in close proximity to each other.
II. Verification of marker candidates: After the initial detection of marker candidates, it is necessary to determine if they are indeed markers by analyzing their inner codification. This step involves extracting the marker bits for each candidate. To achieve this, a perspective transformation is applied to bring the marker into its canonical form. Subsequently, the canonical image is thresholded using Otsu’s method to separate the black and white bits. The image is divided into different cells based on the marker size and border size. The number of black or white pixels in each cell is then counted to determine whether it represents a white or black bit. Finally, the bits are analyzed to ascertain whether the marker belongs to the specific dictionary. Error correction techniques are employed as necessary to enhance the accuracy of identification.

These are the detected markers (shown in green). Note that some markers are rotated; the small red square indicates the marker's top-left corner:

Figure 16 Aruco detection
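A minimal sketch of these two detection steps with OpenCV's aruco module (pre-4.7 API, opencv-contrib-python; the image path and dictionary are placeholders):

# Minimal sketch: detect markers in an image and draw the detections.
import cv2

image = cv2.imread('scene.png')
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
parameters = cv2.aruco.DetectorParameters_create()
corners, ids, rejected = cv2.aruco.detectMarkers(image, dictionary,
                                                 parameters=parameters)
if ids is not None:
    cv2.aruco.drawDetectedMarkers(image, corners, ids)  # draws the green outlines
cv2.imwrite('detected.png', image)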
2.4.2 Pose Estimation
To accomplish camera pose estimation, it is imperative to possess knowledge of the camera's calibration parameters. These parameters consist of the camera matrix and distortion coefficients.
• Camera Matrix: The camera matrix contains intrinsic parameters that define the internal properties of the camera. It includes focal length (expressed in pixels), principal point coordinates (representing the optical center of the camera), and skew coefficient (typically assumed to be zero). The camera matrix encapsulates the transformation from 3D world coordinates to 2D image coordinates.
• Distortion Coefficients: Distortion coefficients account for lens distortions that occur in real-world cameras. These distortions can be classified into radial distortion and tangential distortion. Radial distortion refers to the curvature of straight lines near the image corners, while tangential distortion accounts for the slight tilt or shift of the lens relative to the image plane.

The pose estimation function assumes a marker coordinate system whose origin is positioned either in the center (by default) or at the top-left corner of the marker. In this coordinate system, the Z-axis extends outward from the marker surface. The axis-color correspondences follow the convention of the X-axis represented by red, the Y-axis by green, and the Z-axis by blue.
It’s important to note that the image provided demonstrates the axis directions for rotated markers. These directions indicate how the X, Y, and Z axes align with the marker’s orientation in 3D space.

Figure 17 Aruco pose estimate
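Continuing the detection sketch above, pose estimation takes the detected corners plus the calibration parameters (the camera matrix and distortion coefficients below are placeholders; real values must come from a calibration, and the marker side length is an assumed 5 cm):

# Minimal sketch: estimate each detected marker's pose and draw its axes.
import numpy as np
import cv2

camera_matrix = np.array([[600.0, 0.0, 320.0],
                          [0.0, 600.0, 240.0],
                          [0.0, 0.0, 1.0]])  # placeholder intrinsics
dist_coeffs = np.zeros(5)                    # placeholder: assume no distortion
marker_length = 0.05                         # marker side in meters (assumed)

image = cv2.imread('scene.png')
dictionary = cv2.aruco.Dictionary_get(cv2.aruco.DICT_4X4_50)
corners, ids, _ = cv2.aruco.detectMarkers(image, dictionary)
if ids is not None:
    rvecs, tvecs, _ = cv2.aruco.estimatePoseSingleMarkers(
        corners, marker_length, camera_matrix, dist_coeffs)
    for rvec, tvec in zip(rvecs, tvecs):
        cv2.aruco.drawAxis(image, camera_matrix, dist_coeffs, rvec, tvec, 0.03)
cv2.imwrite('pose.png', image)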
2.5 Industrial robots
Industrial robots, as defined by the ISO standard, are programmable manipulators with three or more axes that are automatically controlled, reprogrammable, and multipurpose. They are used to automate processes in the industrial sector and can operate within collaborative environments with humans or within a security fence. Industrial robots typically have between three and seven axes and different degrees of freedom, making them suitable for various applications, including assembly lines and production lines. Overall, the use of industrial robots in the manufacturing industry has become increasingly prevalent and will likely continue to grow as technology advances.

Figure 18 Industrial robot
Collaborative robots (cobots), such as those from AUBO, are one category of industrial robot. However, there are potential drawbacks to consider when using AUBO cobots. One is that they may not be suitable for all types of tasks, particularly those that require a high degree of precision or dexterity. Additionally, the upfront cost of acquiring and integrating AUBO cobots into existing production processes can be a significant investment for companies. Nevertheless, the potential benefits of using AUBO cobots suggest that they are a promising technology for the future of industrial automation.
2.5.1 ABB
ABB robots are industrial robots manufactured by ABB for various applications such as welding, painting, material handling, and assembly. This report will explore the benefits and potential drawbacks of ABB robots, including improved production efficiency and reduced labor costs, and the potential for job displacement. The theories discussed will highlight the impact of ABB robots on industrial automation and the future of work.

Figure 19 ABB logo
The ABB IRB120 robot is a compact and versatile industrial robot designed for various applications, including assembly, material handling, and machine tending. One theory regarding the robot is that it can improve production efficiency in small and medium-sized enterprises (SMEs). Due to its compact size and flexible programming, the IRB120 robot can be easily integrated into existing production processes in SMEs, increasing productivity and reducing lead times. This can be particularly beneficial for SMEs that are looking to compete with larger companies by adopting advanced manufacturing technologies.

Figure 20 ABB IRB120
Despite the many benefits of using the ABB IRB120 robot, there are potential drawbacks to consider. One concern is the high initial investment cost associated with the robot, as well as the need for specialized training and programming. Additionally, the implementation of the robot may require changes to the existing production processes, which can be a complex and time-consuming process.
The maximum working range of the ABB IRB 120 can be expressed in centimeters as follows:
• Base rotation: ±180 degrees
The maximum distance the robot can reach in this axis is dependent on the location of the robot’s base and the length of its arm.
• Shoulder rotation: +155 to -90 degrees
The maximum vertical reach of the robot in this axis is approximately 42.5 cm.
• Elbow rotation: +45 to -90 degrees
The maximum horizontal reach of the robot in this axis is approximately 40.8 cm.
• Wrist rotation: ±270 degrees
The maximum horizontal reach of the robot in this axis is approximately 31.7 cm.
• Wrist bend: ±135 degrees
The maximum vertical reach of the robot in this axis is approximately 27.5 cm.
• Gripper rotation: ±360 degrees
The maximum distance the robot can reach in this axis is dependent on the length of the gripper.
To convert the maximum working range from degrees to centimeters, we need to know the length of the robot’s arm. Assuming the arm length is approximately 51.5 cm, we can estimate the maximum working range of each axis in centimeters as follows:
• Base rotation: ±51.5 cm (based on the length of the arm)
• Shoulder rotation: 0 to 42.5 cm (vertical reach)
• Elbow rotation: 0 to 40.8 cm (horizontal reach)
• Wrist rotation: ±31.7 cm (horizontal reach)
• Wrist bend: 0 to 27.5 cm (vertical reach)
• Gripper rotation: Dependent on the length of the gripper
It’s important to note that the actual maximum working range of the robot may be limited by other factors such as the size and shape of the objects being handled, the orientation of the gripper, and any obstacles in the environment. Therefore, it’s essential to perform a thorough analysis of the robot’s reach and workspace for each specific application to ensure that the robot can perform the required tasks accurately and safely.
2.6 Robotic eye-in-hand
Robotic eye-in-hand visual servoing is an important area of research in robotics and automation. It is a closed-loop control technique that uses visual feedback to adjust the robot’s motion. This technique involves using a camera mounted on the robot’s end effector to capture images of the object or scene being manipulated, and then using computer vision algorithms to extract information from the images to update the robot’s position and orientation in real-time. In this report, we will discuss the theoretical foundations and key concepts of robotic eye-in-hand visual servoing.

Figure 21 Robotic eye-in-hand
Robotic eye-in-hand visual servoing is a powerful technique that allows robots to adapt to changes in the environment and perform complex tasks with high accuracy and efficiency. The theoretical foundations and key concepts of visual servoing, eye-in-hand visual servoing, and robotics eye-in-hand visual servoing are essential to understanding this technique and its potential applications in robotics and automation. Further research in this area is necessary to develop more advanced algorithms and techniques for robotic eye-in-hand visual servoing.
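To make the closed loop concrete, the classical image-based formulation computes the camera velocity v from the image feature error e as v = -λ L⁺ e, where L⁺ is the pseudo-inverse of the interaction matrix L (the image Jacobian) and λ is a positive gain. A minimal numpy sketch under those standard assumptions (L and e are supplied by the vision pipeline; the values below are placeholders):

# Classical IBVS control-law sketch: v = -lambda * pinv(L) @ e.
import numpy as np

def ibvs_velocity(L, error, gain=0.5):
    """Return a 6-DOF camera velocity [vx, vy, vz, wx, wy, wz]."""
    return -gain * np.linalg.pinv(L) @ error

# Toy usage: two image-point features give a 4x6 interaction matrix.
L = np.random.rand(4, 6)              # placeholder interaction matrix
e = np.array([5.0, -3.0, 2.0, 1.0])   # placeholder feature error
print(ibvs_velocity(L, e))

The commanded camera velocity is then mapped through the robot Jacobian (or handed to a Cartesian controller such as MoveIt) to move the end effector, and the loop repeats as new images arrive.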

CHAPTER 3
Materials and Methods

This chapter discusses our software and the real robot hardware. The software consists of the ROS program and the depth camera.
3.1 VirtualBox
Before installing Ubuntu 20.04 to use ROS Noetic, we have to install Oracle VM VirtualBox.

Figure 22 VirtualBox logo
VirtualBox is a free and open-source virtualization software developed by Oracle. It allows you to create and run virtual machines on your computer, which are essentially emulated computer systems that can run their own operating system and applications.

Figure 23 VirtualBox 7.0.8 platform packages download Windows hosts

3.1.1 Install and setup Ubuntu
Open the Ubuntu 20.04 LTS release page at https://releases.ubuntu.com/focal/ and download the 64-bit PC (AMD64) desktop image, which gives access to the full program options so you do not have to download the server image separately.
Open Oracle VM VirtualBox Manager and click New in the upper-left corner of the VirtualBox window; this opens a pop-up menu.

Figure 24 Oracle VM VirtualBox Manager
Enter a name for the virtual machine (e.g., Ubuntu) in the "Name" text field near the top of the pop-up menu, select Linux as the "Type" value, select the Ubuntu version that matches the downloaded image as the "Version" value, and click Next.

Figure 25 Create VM

Choose the base memory and processor count based on your host system's RAM and CPU, then click Next.

Figure 26 Choose the base memory VM
Choose the Virtual Hard Disk Size and click Next.

Figure 27 Virtual Hard Disk Size

Finally, VirtualBox shows a summary of the setup; click Finish.

Figure 28 Virtual summary

Click Settings in the upper corner of the VirtualBox window.

Figure 29 Setting VM

Go to Storage, click Add hard disk, select the downloaded Ubuntu file, then click OK.

Figure 30 Storage Ubuntu

Click Start in the upper corner of the VirtualBox window to run the virtual machine and continue the Ubuntu setup.

Figure 31 Click Start Ubuntu
3.2 ROS
Ubuntu 20.04 supports only ROS Noetic. Go to http://wiki.ros.org/noetic/Installation/Ubuntu and follow the installation steps.

Figure 32 Ubuntu 20.04 Linux
3.2.1 Install and setup ROS
Configure Ubuntu repositories
Configure the Ubuntu repositories to allow "restricted," "universe," and "multiverse." You can follow the Ubuntu guide at https://help.ubuntu.com/community/Repositories/Ubuntu for instructions.
Set up sources.list
• Set up the computer to accept software from packages.ros.org. Open a terminal on Ubuntu and type:
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'

• Set up keys
sudo apt install curl  # if you haven't already installed curl
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -

Installation
• First, make sure the Debian package index is up to date:
sudo apt update

Now pick how much of ROS to install.
• Desktop-Full Install: (Recommended): Everything in Desktop plus 2D/3D simulators and 2D/3D perception packages
sudo apt install ros-noetic-desktop-full

Environment setup
You must source this script in every bash terminal in which you use ROS.
source /opt/ros/noetic/setup.bash
It can be convenient to automatically source this script every time a new shell is launched. These commands will do that.
• Bash
echo "source /opt/ros/noetic/setup.bash" >> ~/.bashrc
source ~/.bashrc

Dependencies for building packages
Up to now, you have installed what you need to run the core ROS packages. To create and manage your own ROS workspaces, there are various tools and requirements that are distributed separately.
• To install this tool and other dependencies for building ROS packages, run:
sudo apt install python3-rosdep python3-rosinstall python3-rosinstall-generator python3-wstool build-essential

Initialize rosdep
Before you can use many ROS tools, you need to initialize rosdep. rosdep enables you to easily install system dependencies for source code you want to compile and is required to run some core components in ROS.
• If rosdep is not yet installed, do so as follows.
sudo apt install python3-rosdep

• With the following, you can initialize rosdep.
sudo rosdep init
rosdep update

Check the ROS version.
• The rosversion command prints version information for ROS stacks and can also print the name of the active ROS distribution. Open a terminal and type:
rosversion -d

Figure 33 check rosversion

3.3 ROS with ABB
There are several repositories with experimental packages for ABB manipulators in ROS-Industrial. Specifically, this project uses https://github.com/ros-industrial/abb_experimental , which is based on the official ROS page http://wiki.ros.org/abb_experimental .
Below are the steps for creating the workspace, cloning that repository, installing all the necessary dependencies, and finally building the workspace.
3.3.1 Install and setup Package Summary
Creating a workspace for catkin
The catkin_make command is a convenience tool for working with catkin workspaces. The first time you run it in your workspace, it will create a CMakeLists.txt link in your 'src' folder.
$ source /opt/ros/noetic/setup.bash
$ mkdir -p ~/catkin_ws/src
$ cd ~/catkin_ws/
$ catkin_make

# or catkin build

Figure 34 catkin workspace

Industrial_core
Go to http://wiki.ros.org/industrial_core . The ROS-Industrial core stack contains packages and libraries for supporting industrial systems. This stack is part of the ROS-Industrial program. It currently contains core packages that provide nodes and libraries for communication with industrial robot controllers. It also includes utilities and tools that are useful for industrial robotics and automation applications.

Figure 35 ROS-Industrial Overview
The ROS-Industrial distribution contains metapackages for several industrial vendors. More information can be found at http://wiki.ros.org/Industrial/supported_hardware .
• ABB
• Fanuc
• Kuka (under development, experimental)
• Motoman
• Robotiq
• Universal Robots
• Open terminal and type this command to install ROS-Industrial ROS package.
$ sudo apt install ros-noetic-industrial-core
$ rosdep update
$ catkin build

Abb_driver
Go to http://wiki.ros.org/abb_driver . This package is part of the ROS-Industrial program and contains nodes for interfacing with ABB industrial robot controllers. It is a simple, RAPID-based ROS driver for ABB industrial robots connected to IRC5 controllers. The driver is largely manipulator-agnostic and is expected to work with any ABB manipulator compatible with an IRC5 controller.
• Open terminal and type this command to install abb_driver ROS package.
$ sudo apt install ros-noetic-abb-driver
$ rosdep update

• Install the abb_driver sources for building the package. Go to https://github.com/ros-industrial/abb_driver.

# change to the root of the Catkin workspace
$ cd $HOME/catkin_ws

$ git clone -b kinetic-devel https://github.com/ros-industrial/abb_driver.git src/abb_driver

# check for and install missing build dependencies:
# first, update the local database
$ rosdep update

# be sure to change 'noetic' to whichever ROS version you are using
$ rosdep install --from-paths src/ --ignore-src --rosdistro noetic

# build the workspace (using catkin_tools)
$ catkin build

# activate the workspace
$ source $HOME/catkin_ws/devel/setup.bash

ABB Experimental
Go to https://github.com/ros-industrial/abb_experimental . This repository is part of the ROS-Industrial program. It currently contains packages that provide nodes for communication with ABB industrial robot controllers, URDF models for supported manipulators, and associated MoveIt packages.
• It contains experimental packages that will be moved to the abb package once they have received sufficient testing and review. Open a terminal and type this command to install ABB Experimental:
$ sudo apt install ros-noetic-abb-experimental
$ rosdep update

• Install the ABB Experimental robot libraries for building the package:

# change to the root of the Catkin workspace
$ cd $HOME/catkin_ws

# retrieve the latest development version of the abb repository. If you'd rather
# use the latest released version, replace 'noetic-devel' with 'noetic'
$ git clone -b noetic-devel https://github.com/ros-industrial/abb.git src/abb

# retrieve the latest development version of abb_experimental
$ git clone -b noetic-devel https://github.com/ros-industrial/abb_experimental.git src/abb_experimental

# check build dependencies. Note: this may install additional packages,
# depending on the software installed on the machine
$ rosdep update

# be sure to change 'noetic' to whichever ROS release you are using
$ rosdep install --from-paths src/ --ignore-src --rosdistro noetic

# build the workspace (using catkin_tools)
$ catkin build

# activate the workspace
$ source $HOME/catkin_ws/devel/setup.bash

Figure 36 ABB Experimental files

3.3.2 Simulation ABB with RViz and Gazebo
This package contains the files required to simulate the ABB IRB 120 manipulator (and variants) in Gazebo.

Figure 37 ABB IRB 120 manipulator files
To use MoveIt! with the Gazebo simulator, open a terminal and type the following commands.
• Bring the robot model into gazebo and load the ros_control controllers:
roslaunch abb_irb120_gazebo irb120_3_58_gazebo.launch

• Open a new terminal and launch MoveIt!, ensuring that it is configured to run alongside Gazebo:
roslaunch abb_irb120_moveit_config moveit_planning_execution_gazebo.launch

Figure 38 ABB with RViz and Gazebo
To move the robot and check that the two tools are connected to each other, we go to RViz and look at the MotionPlanning window. In the first box, Context, we have the option to select the OMPL planner that we want to use. The Open Motion Planning Library is a powerful collection of state-of-the-art sampling-based motion planning algorithms and is the default planning library in MoveIt. In our case, select RRTConnectkConfigDefault.

Figure 39 OMPL RRTConnectkConfigDefault.
The robot is shown, by default, in its previously established initial position. We can move it manually by clicking on the interactive marker and dragging it with the mouse to the desired position. However, to use the MoveIt trajectory planner we must go to the Planning box, where we can send different positions to the robot. To do this, we go to the Query tab and select a goal, which can be either a valid or invalid random position generated by RViz, or a position that has been saved by default, such as the "home" position. Clicking the Update box sets the robot to the new position, and clicking the Plan and Execute button carries out the trajectory to that point in both RViz and Gazebo.

Figure 40 Planning box
In the figures above, the robot is shown in the final position in both Gazebo and RViz.
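The same "select a goal, then Plan and Execute" step can also be scripted against the running move_group node; a minimal sketch (the group name "manipulator" is assumed from the ABB MoveIt configuration):

#!/usr/bin/env python3
# Sketch: scripted equivalent of picking a random valid goal and clicking Plan and Execute.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node('plan_execute_sketch', anonymous=True)
group = moveit_commander.MoveGroupCommander('manipulator')  # assumed group name

group.set_random_target()   # analogous to a random valid goal in the Query tab
group.go(wait=True)         # Plan and Execute
group.stop()
moveit_commander.roscpp_shutdown()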

3.3.3 Simulation ABB with RViz and Robot Studio
Setting Up RobotStudio for Simulated ROS Control. This repository contains a simple, RAPID-based ROS driver for ABB industrial robots connected to IRC5 controllers. Go to https://github.com/ros-industrial/abb_driver (make sure to download and install it on Ubuntu, and also download it on Windows to set up RobotStudio).
First, open RobotStudio and select the IRB 120 robot.

Figure 41 RobotStudio IRB120

Then add a controller from the Layout tab and click Options.

Figure 42 Add controller RobotStudio
Select the following controller options, which are required:
• 623-1: Multitasking
• 672-1: Socket Messaging (in recent RobotWare versions, this option is included with 616-1: PC Interface)
The ABB ROS Server code is written in RAPID, using a socket interface and multiple parallel tasks.

Figure 43 Controller options

Copy the files downloaded from https://github.com/ros-industrial/abb_driver to the virtual controller folder that RobotStudio created.

Figure 44 Copy the ROS files to the folder RobotStudio created
File Overview
• Shared by all tasks
o ROS_common.sys — Global variables and data types shared by all files
o ROS_socket.sys — Socket handling and simple_message implementation
o ROS_messages.sys — Implementation of specific message types
• Specific task modules
o ROS_stateServer.mod — Broadcast joint position and state data
o ROS_motionServer.mod — Receive robot motion commands
o ROS_motion.mod — Issues motion commands to the robot
Then save the files and restart the controller. Go to the Controller tab → RAPID and check the file overview.

Socket-Server Tasks (GetSysInfo() patch)
As the GetSysInfo(..) function does not return a valid IP address when used in RobotStudio (it returns "VC" instead of the IP of your Windows machine), we need to change something in the ROS_socket.sys source file.

Make sure the Windows PC has a static IP configured. If the workstation does not have a static IP address, you will have to repeat the changes below each time the IP address changes.

Figure 45 Network Connection windows
Type "ipconfig" at the command prompt to check the IPv4 address. Change the IP address of the Ethernet adapter used for Ubuntu (in this setup, Ethernet 7) to 192.168.56.1.

Figure 46 Ipconfig command prompt
The Ethernet adapter IP address is 192.168.56.1 because it has to connect to the VirtualBox host-only Ethernet adapter. In the VM settings on VirtualBox, set the Ubuntu VM's network to Host-only Adapter.

Figure 47 Setting the VM network
• Now open ROS_socket.sys (in RobotStudio or Notepad) and change the following line:
IF (SocketGetStatus(server_socket) = SOCKET_CREATED) SocketBind server_socket, GetSysInfo(\LanIp), port;
• into:
IF (SocketGetStatus(server_socket) = SOCKET_CREATED) SocketBind server_socket, "192.168.56.1", port;

Figure 48 ROS_socket.sys RAPID
Configuring Controller Settings
All files in the abb_driver/rapid directory (Noetic and later) should be copied to the robot controller. This tutorial assumes the files are copied to a "ROS" subdirectory under the system's HOME directory (e.g. //HOME/ROS/*) or to the catkin workspace that we built.
Open the FlexPendant in RobotStudio, or go to the Controller tab, to set the configuration.

Figure 49 Open the FlexPendant
Run the FlexPendant in MANUAL mode to configure the controller.

Figure 50 FlexPendant running in MANUAL

Create Tasks
• Browse to Controller tab → Configuration Editor → Controller → Task, then right-click New Task
• (In RobotStudio 5, this is found under ABB → Control Panel → Configuration → Topics → Controller → Task)
• Create 3 tasks as follows:
Table 1 Create Tasks
Name | Type | Trust Level | Entry | Motion Task
ROS_StateServer | SEMISTATIC | NoSafety | main | NO
ROS_MotionServer | SEMISTATIC | SysStop | main | NO
T_ROB1 | NORMAL | | main | YES

Figure 51 Create Tasks on FlexPendant

Figure 52 Create Tasks on RobotStudio
Create Signals
• Browse to Controller tab → Configuration Editor → I/O System → Signal, then right-click New Signal
• (In RobotStudio 5, this is found under ABB → Control Panel → Configuration → Topics → I/O → Signal)
• Create 7 signals as follows
Table 2 Create Signals
Name | Type of Signal
signalExecutionError | Digital Output
signalMotionPossible | Digital Output
signalMotorOn | Digital Output
signalRobotActive | Digital Output
signalRobotEStop | Digital Output
signalRobotNotMoving | Digital Output
signalRosMotionTaskExecuting | Digital Output

Figure 53 Create Signals on FlexPendant

Figure 54 Create Signals on RobotStudio
Tie Signals to the System Outputs
• Browse to Controller tab → Configuration Editor → I/O System → System Output, then right-click New System Output
• (In RobotStudio 5, this is found under the ABB → Control Panel → Configuration → Topics → I/O → System Output)
• Add one entry per signal as follows:
Table 3 Tie Signals to the System Outputs
Signal Name | Status | Arg 1 | Arg 2 | Arg 3 | Arg 4
signalExecutionError | Execution Error | N/A | T_ROB1 | N/A | N/A
signalMotionPossible | Runchain OK | N/A | N/A | N/A | N/A
signalMotorOn | Motors On State | N/A | N/A | N/A | N/A
signalRobotActive | Mechanical Unit Active | ROB_1 | N/A | N/A | N/A
signalRobotEStop | Emergency Stop | N/A | N/A | N/A | N/A
signalRobotNotMoving | Mechanical Unit Not Moving | ROB_1 | N/A | N/A | N/A
signalRosMotionTaskExecuting | Task Executing | N/A | T_ROB1 | N/A | N/A

Figure 55 Signals to the System Outputs on FlexPendant

Figure 56 Signals to the System Outputs on RobotStudio
Load Modules to Tasks
• Browse to Controller tab → Configuration Editor → Controller → Automatic Loading of Modules, then right-click New Automatic Loading of Modules
• (In RobotStudio 5, this is found under ABB → Control Panel → Configuration → Topics → Controller → Automatic Loading of Modules)
• Add one entry for each server file as follows
Table 4 Load Modules to Tasks
File | Task | Installed | All Tasks | Hidden
HOME:/ROS/ROS_common.sys | | NO | YES | NO
HOME:/ROS/ROS_socket.sys | | NO | YES | NO
HOME:/ROS/ROS_messages.sys | | NO | YES | NO
HOME:/ROS/ROS_stateServer.mod | ROS_StateServer | NO | NO | NO
HOME:/ROS/ROS_motionServer.mod | ROS_MotionServer | NO | NO | NO
HOME:/ROS/ROS_motion.mod | T_ROB1 | NO | NO | NO

Figure 57 Load Modules to Tasks on FlexPendant

Figure 58 Load Modules to Tasks on RobotStudio
After the last change, select YES to restart the controller and apply the changes. Then go to the Program Editor, select Tasks and Programs, and set the modules as shown.

Figure 59 Editor select Tasks and programs set modules

Run in AUTO mode and set PP to Main to run the programs.

Figure 60 AUTO mode FlexPendant
Run the program
Click Play; the robot waits for a connection.

Figure 61 FlexPendant waiting for connection

Open a terminal on Ubuntu and type "ping 192.168.56.1" to check the connection with RobotStudio on Windows.

Figure 62 Ping 192.168.56.1 on Ubuntu
• Open a new terminal, run RViz for the ABB robot, and set the IP address to connect to the robot:
$ roslaunch abb_irb120_moveit_config moveit_planning_execution.launch sim:=false robot_ip:=192.168.56.1

Figure 63 abb_irb120_moveit

When the program open, Robotstudio will connection with RIVZ and can control robot IRB 120 selected main to pp in tap production window.

Figure 64 FlexPendant connection with ROS
We can move the robot by using the interactive marker or by setting the Goal State to a random robot position. After moving the robot, click Plan & Execute; RViz will send the data points of the motion task to RobotStudio on the FlexPendant.

Figure 65 RobotStudio and MoveIt

Click the play button on the FlexPendant; the robot will move through the received points and stop when the end of the trajectory is received.

Figure 66 RobotStudio and MoveIt communication
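For reference, the Plan & Execute button in RViz corresponds to a short sequence of MoveIt API calls, which this report uses again in section 3.6. The following is a minimal sketch, not the project's own code: the node name plan_and_execute_demo is hypothetical, while “manipulator” is the planning group used by the abb_irb120_moveit_config package.

#include <ros/ros.h>
#include <moveit/move_group_interface/move_group_interface.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "plan_and_execute_demo");
    ros::AsyncSpinner spinner(1); // MoveIt needs a spinning node
    spinner.start();

    // Same planning group that the RViz MotionPlanning panel controls
    moveit::planning_interface::MoveGroupInterface group("manipulator");

    // Equivalent of picking a random valid Goal State in RViz
    group.setRandomTarget();

    // Equivalent of pressing Plan & Execute: the planned trajectory is
    // streamed to the controller (RobotStudio or the real IRC5) by abb_driver
    bool ok = (group.move() == moveit::planning_interface::MoveItErrorCode::SUCCESS);
    ROS_INFO("Plan & Execute result: %s", ok ? "success" : "failed");

    ros::shutdown();
    return 0;
}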
3.3.4 Simulation of ABB with RViz and the Real IRB120 Robot
The connection between ROS and the real robot is made in almost the same way. First, the IRC5 controller of the robot has to be configured as in the simulation, following all the same steps but on the real FlexPendant, using port X2 on the controller for TCP/IP (make sure all controller settings are configured on the real FlexPendant).

Figure 67 IRC5 controller
Set the server socket IP “192.168.125.1” on the IRC5 robot controller. This IP address in the socket code acts as the server, so that the client port can match and connect with the IRC5 controller.

Figure 68 Server socket IP “192.168.125.1”
Set the Ethernet IP address 192.168.125.2 on the Windows computer to connect to the IRC5 controller. Go to Control Panel → Network and Internet → Network Connections → Ethernet Status.

Figure 69 Ethernet Status. IP 192.168.125.1

Open Details and set IP 192.168.125.2 with Subnet mask 255.255.255.0.

Figure 70 IP 192.168.125.2 Subnet mask 255.255.255.0
Click “Add controller” to connect to the IP address in RobotStudio.

Figure 71 “Add controller” to connect the IP
When connected to the IRC5 controller, the Controller tab will show the management port as connected.

Figure 72 IRC5 controller port connected.

Click “Request write access”; once access is granted on the real FlexPendant (the Revoke tab appears in RobotStudio), RobotStudio can upload and edit code on the real controller.

Figure 73 Request write access on FlexPendant
Click “Create Relation” to transfer all data and RAPID code.

Figure 74 Create Relation
In Create Relation, type the relation name, set the First controller to the RobotStudio station that was created, and set the Second controller to the IRC5 controller.

Figure 75 Create Relation type
Now all data and code can be transferred from the source to the target; click “Transfer now”. (The transfer direction between source and target can also be changed.)

Figure 76 Transfer all data
When “Transfer now” is clicked, the program will show the Transfer Summary; click Yes to continue the upload.

Figure 77 Transfer Summary
Finally, restart the real controller from RobotStudio: click on the real controller and, in the Controller section, select Restart → Reset RAPID (P-start).

Figure 78 Reset RAPID (P-start).
If everything has worked correctly, the FlexPendant will show a notification that the program is waiting for a connection.

Figure 79 FlexPendant notification: waiting for connection

Now the connection between the IRC5 robot controller and the ROS Noetic laptop can be established via the Ethernet cable. Go to the Network settings on Ubuntu and change the Ethernet connection.

Figure 80 Network setting on Ubuntu
Set the IPv4 Method to Manual, set the Address to “192.168.125.5” and the Netmask to “255.255.255.0”, then apply.

Figure 81 Set the IPv4 Method
Next, reset the Ethernet connection on Ubuntu by turning it off and on.

Figure 82 Ubuntu Ethernet connection toggled off/on

Go to the Ethernet connection settings and check the IPv4 configuration.

Figure 83 Ethernet connected Ubuntu
Now the IRB120 robot can be simulated and controlled with RViz and MoveIt! via ROS Noetic.
• Open a terminal on Ubuntu and type this command to ping the Ethernet connection:
ping 192.168.125.1

• Open a new terminal and type this command to run RViz with the ABB robot, setting the IP address to the robot's IP address:
roslaunch abb_irb120_moveit_config moveit_planning_execution.launch sim:=false robot_ip:=192.168.125.1

When the program opens, the FlexPendant will connect with RViz and the real IRB120 robot can be controlled.

Figure 84 FlexPendant connection
Set the IRC5 controller to AUTO mode to let the robot move automatically.

Figure 85 IRC5 controller in AUTO mode
Move the robot using the interactive marker or set the Goal State to a random robot position, then click Plan & Execute; RViz will send the data points for the motion task to the FlexPendant.

Figure 86 Data points sent to the FlexPendant
Then select PP to Main in the Production Window tab and click the play button on the FlexPendant to start the robot movement; further positions can be planned and executed continuously.

Figure 87 Robot Plan & Execute positions


3.4 RealSense D435i depth camera
The RealSense D435i depth camera is a versatile device that provides both depth and RGB imaging capabilities, along with integrated IMU sensors, making it suitable for a wide range of applications that require accurate spatial perception.
To connect the RealSense D435i depth camera in Ubuntu, you’ll need to follow these steps:
I. Install the RealSense SDK:
• Open a terminal in Ubuntu.
• Update the package list by running the command:
$ sudo apt-get update
• Install the RealSense SDK by running the following commands:
$ sudo apt-get install librealsense2-dkms
$ sudo apt-get install librealsense2-utils

II. Connect the camera:

Figure 88 USB connect the camera
• Plug the RealSense D435i camera into a USB 3.0 port on the Ubuntu machine.
• Ensure that the camera is powered on.


III. Verify camera detection:
• Open a terminal and run the command:
$ lsusb
• Look for an entry in the output that corresponds to the RealSense camera. It may be listed as “Intel Corp.” or something similar. This confirms that the camera is detected by Ubuntu.

IV. Check camera functionality:
• In the terminal, run the command:
$ realsense-viewer
• This will launch the RealSense Viewer application, which provides a graphical interface to interact with the camera.
• In the RealSense Viewer, you should see the camera feed, depth data, and various camera settings.
• If the camera is functioning properly, you should be able to see the camera feed in the RealSense Viewer.

V. Install the RealSense ROS Package:
• Navigate to the project's catkin workspace folder:
$ cd ~/catkin_ws/src
• Open a terminal in Ubuntu and run the following commands:
$ sudo apt-get update
$ sudo apt-get install ros-noetic-realsense2-camera
$ sudo apt-get install ros-noetic-realsense2-description
• These commands will install the RealSense SDK and the ROS packages necessary to use the RealSense camera in ROS.

VI. Publish the RealSense ROS node:
• The published topics differ according to the device and parameters. Run the following command to start the camera node in ROS:
$ roslaunch realsense2_camera rs_camera.launch
• This will launch the camera node, which will start publishing the camera feed and depth data to ROS topics.
• With the D435i attached, the following list of topics will be available:

Figure 89 D435i list of topics
• These published topics are used in the Aruco marker program to get parameters from the RealSense camera. A quick subscriber check is sketched below.
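As a quick programmatic check that the color stream and its calibration (which the Aruco node needs) are being published, a minimal subscriber can be used. This is a sketch assuming the default topic names from rs_camera.launch; the node name d435i_topic_check is hypothetical.

#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/Image.h>

// Print the resolution and encoding of the first color frame received
void imageCallback(const sensor_msgs::Image::ConstPtr& msg)
{
    ROS_INFO_ONCE("Color stream: %ux%u (%s)", msg->width, msg->height, msg->encoding.c_str());
}

// Print the intrinsics once; aruco_ros uses this calibration for pose estimation
void infoCallback(const sensor_msgs::CameraInfo::ConstPtr& msg)
{
    ROS_INFO_ONCE("Intrinsics: fx=%.1f fy=%.1f cx=%.1f cy=%.1f", msg->K[0], msg->K[4], msg->K[2], msg->K[5]);
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "d435i_topic_check");
    ros::NodeHandle nh;
    ros::Subscriber img_sub  = nh.subscribe("/camera/color/image_raw", 1, imageCallback);
    ros::Subscriber info_sub = nh.subscribe("/camera/color/camera_info", 1, infoCallback);
    ros::spin();
    return 0;
}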
3.5 Aruco marker detection
To utilize Aruco markers in Ubuntu with ROS Noetic, please adhere to the following steps:
I. Install the required packages: Open a terminal and install the necessary ROS packages for using Aruco markers by running the following command:
$ sudo apt-get install ros-noetic-aruco-ros

II. Cloning the aruco_ros package: Within catkin workspace directory, please proceed to the ‘src’ folder and clone the aruco_ros package from GitHub by executing the following steps:
$ cd ~/catkin_ws/src
$ git clone https://github.com/pal-robotics/aruco_ros.git
$ catkin build

III. Modify the Aruco code for use in the project.
• To modify the Aruco code and adjust the camera node and marker details in ROS Noetic, the configuration files of the aruco_ros package must be edited. Follow the steps below.
• Locating the Aruco package: within the catkin workspace, navigate to the ‘src’ directory where the aruco_ros package was cloned:
$ cd ~/catkin_ws/src/aruco_ros/aruco_ros/launch


• Modify the camera settings: open the launch file that corresponds to the camera you want to use. For example, to modify the settings for this camera, open the single.launch file:
$ gedit single.launch
• Within this file, you can modify parameters relevant to marker detection, such as the marker size, dictionary type, detection threshold, and other related settings. Review and adjust these parameters according to your requirements, or use the parameters chosen for this project:
o Set “markerSize” = 0.096
Defines the real-world size of the marker in meters (m).
o Set “markerId” = 701
Allows the program to detect this marker only.
o Set “camera_frame” = camera_frame
Defines the TF frame of the camera on the robot arm.
o Set “marker_frame” = aruco_marker_frame
Defines the name of the Aruco marker TF frame.
o Set “ref_frame” = base_link
Defines the reference TF frame for the Aruco marker pose.
o Set “/camera_info” = /camera/color/camera_info
Remaps to the RealSense RGB camera info topic.
o Set “/image” = /camera/color/image_raw
Remaps to the RealSense RGB image topic.


<arg name="markerId"        default="701"/>
<arg name="markerSize"      default="0.096"/>    <!-- in m -->
<arg name="eye"             default="left"/>
<arg name="camera_frame"    default="camera_frame"/>
<arg name="marker_frame"    default="aruco_marker_frame"/>
<arg name="ref_frame"       default="base_link"/>  <!-- leave empty and the pose will be published wrt param parent_name -->
<arg name="corner_refinement" default="LINES" /> <!-- NONE, HARRIS, LINES, SUBPIX -->

<!-- start ArUco -->
<node pkg="aruco_ros" type="single" name="aruco_single">
    <remap from="/camera_info" to="/camera/color/camera_info" />
    <remap from="/image" to="/camera/color/image_raw" />
    <param name="image_is_rectified" value="True"/>
    <param name="marker_size"        value="$(arg markerSize)"/>
    <param name="marker_id"          value="$(arg markerId)"/>
    <param name="reference_frame"    value="$(arg ref_frame)"/>   <!-- frame in which the marker pose will be refered $(arg ref_frame) --> 
<param name="camera_frame" value="$(arg camera_frame)"/>
<param name="marker_frame" value="$(arg marker_frame)" />
    <param name="corner_refinement"  value="$(arg corner_refinement)" />
</node>
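In addition to the pose topic, the aruco_single node broadcasts the marker pose on TF using the frames configured above. The following is a minimal sketch of looking up the marker relative to base_link; the node name marker_tf_check is hypothetical, while the frame names are the ones set in single.launch.

#include <ros/ros.h>
#include <tf2_ros/transform_listener.h>
#include <geometry_msgs/TransformStamped.h>

int main(int argc, char** argv)
{
    ros::init(argc, argv, "marker_tf_check");
    ros::NodeHandle nh;
    tf2_ros::Buffer buffer;
    tf2_ros::TransformListener listener(buffer); // listens on /tf in its own thread

    ros::Rate rate(1.0);
    while (ros::ok())
    {
        try
        {
            // aruco_ros publishes marker_frame relative to ref_frame (base_link here)
            geometry_msgs::TransformStamped tf =
                buffer.lookupTransform("base_link", "aruco_marker_frame", ros::Time(0));
            ROS_INFO("Marker at x=%.3f y=%.3f z=%.3f",
                     tf.transform.translation.x,
                     tf.transform.translation.y,
                     tf.transform.translation.z);
        }
        catch (const tf2::TransformException& ex)
        {
            ROS_WARN_THROTTLE(5.0, "%s", ex.what());
        }
        rate.sleep();
    }
    return 0;
}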

IV. Publish the Aruco ROS node:
• Launch the Aruco marker detection with the updated settings by running the appropriate launch file. For example, to launch single marker detection with the RealSense camera:
$ roslaunch aruco_ros single.launch
• This will launch the Aruco marker node, which will start publishing the pose estimate (position and orientation) to ROS topics.
• Note that the camera node must be running before the Aruco marker node is used.
• The list of available topics can be inspected by running the following command:
$ rqt
• Subscribe to the Aruco marker ROS topic to show the estimated position and orientation by running the following command:
$ rostopic echo /aruco_single/pose

Figure 90 Aruco marker detection program

3.6 Visual servoing
This section describes the creation of a visual servoing system using an ABB IRB120 robot and an Aruco marker. The objective is to track a specific marker and generate a path from the starting pose to the goal pose of the robot within the ROS Noetic framework.

I. Locate the project's catkin workspace folder and create a catkin package by running the following commands.
• Open a terminal and navigate to the catkin workspace directory.
• Create a new catkin package using the catkin_create_pkg command:
$ cd ~/catkin_ws/src
$ catkin_create_pkg visual_servoing roscpp std_msgs sensor_msgs geometry_msgs moveit_core moveit_ros_planning moveit_ros_planning_interface moveit_visual_tools
• After executing the catkin_create_pkg command, a new directory bearing the package name will appear. Inside it you will find the files and folders that constitute the package.
• Supplementary files can be added to the package, such as source code files, launch files, or configuration files.
• Build catkin_ws using the ‘catkin build’ command:
$ catkin build

II. Create a C++ program for the visual servoing system in the ‘src’ folder of the visual_servoing package by running the following commands:
$ cd ~/catkin_ws/src/visual_servoing/src
$ gedit robot_arm_controller.cpp


• Write the C++ program following the code below:

#include <ros/ros.h>
#include <geometry_msgs/Pose.h>
#include <geometry_msgs/PoseStamped.h>
#include <moveit/move_group_interface/move_group_interface.h>

static const std::string PLANNING_GROUP = "manipulator";
double x, y, z, qx, qy, qz, qw;

void poseCallback()
{
    // Return if no new pose has been received (values are still zeroed)
    if (x == 0.0 && y == 0.0 && z == 0.0 && qx == 0.0 && qy == 0.0 && qz == 0.0)
    {
        return;
    }
    ROS_INFO("Function poseCallback");
    ROS_INFO("Received Aruco_ros pose: x=%f, y=%f, z=%f, qx=%f, qy=%f, qz=%f, qw=%f", x, y, z, qx, qy, qz, qw);

    moveit::planning_interface::MoveGroupInterface move_group(PLANNING_GROUP);
    move_group.setPlanningTime(3.0);
    move_group.setNumPlanningAttempts(1);
    move_group.setMaxVelocityScalingFactor(0.8);        // Speed 80%
    move_group.setMaxAccelerationScalingFactor(0.8);    // Acceleration 80%
    move_group.setGoalTolerance(0.001);

    // Build the goal pose from the latest marker measurement
    geometry_msgs::Pose target_pose;
    target_pose.position.x = x;
    target_pose.position.y = y;
    target_pose.position.z = z;
    target_pose.orientation.x = qx;
    target_pose.orientation.y = qy;
    target_pose.orientation.z = qz;
    target_pose.orientation.w = qw;

    move_group.setPoseTarget(target_pose);
    moveit::planning_interface::MoveGroupInterface::Plan my_plan;
    bool success = (move_group.plan(my_plan) == moveit::planning_interface::MoveItErrorCode::SUCCESS);
    if (success) {
        move_group.execute(my_plan);
        ROS_INFO("Move to target pose success!");
    } else {
        ROS_WARN("Move to target pose failed!");
    }

    // Zero the shared variables so the next loop waits for a fresh marker pose
    x = 0.0;
    y = 0.0;
    z = 0.0;
    qx = 0.0;
    qy = 0.0;
    qz = 0.0;
    qw = 0.0;
    ROS_INFO("end loop");
}

void callmsgs(const geometry_msgs::PoseStamped::ConstPtr& msg)
{
    // Extract position information from the PoseStamped message.
    // The fixed -0.4 m offset on x shifts the goal away from the marker
    // into the robot workspace (a value chosen for this project's setup).
    x = msg->pose.position.x - 0.4;
    y = msg->pose.position.y;
    z = msg->pose.position.z;
    // Extract orientation information from the PoseStamped message.
    // The +0.432050 on qw is a fixed orientation offset used in this
    // project; note that it leaves the quaternion unnormalized.
    qx = msg->pose.orientation.x;
    qy = msg->pose.orientation.y;
    qz = msg->pose.orientation.z;
    qw = msg->pose.orientation.w + 0.432050;
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "move_to_aruco_pose");
    ROS_INFO("START");
    ros::NodeHandle nh;
    ros::AsyncSpinner spinner(2); // Async spinner so MoveIt services keep running
    spinner.start();
    ros::Subscriber sub = nh.subscribe("aruco_single/pose", 1, callmsgs);
    ros::Rate rate(10); // Throttle the polling loop to 10 Hz
    while (ros::ok())
    {
        poseCallback();
        rate.sleep();
    }
    ROS_INFO("END");
    ros::waitForShutdown();
    return 0;
}
• This code receives the pose estimate from the Aruco marker topic and the joint states from the robot to define the start pose and goal pose. It uses the MoveIt API to plan a path, generated automatically with the OMPL library, checks the path for collisions, and then sends it to move the real robot arm.
III. To include the “robot_arm_controller.cpp” file in the CMakeLists.txt file within the visual_servoing package, follow these steps:
• Open a terminal and navigate to the package directory in the catkin workspace:
$ cd /path/to/catkin_ws/src/visual_servoing
• Open the CMakeLists.txt file using a text editor of your choice. For example:
$ gedit CMakeLists.txt
• Inside the CMakeLists.txt file, you will find a section for adding source files to your package. Look for a line that starts with ‘add_executable’ and ‘target_link_libraries’.
add_executable(robot_arm_controller src/robot_arm_controller.cpp)
target_link_libraries(robot_arm_controller ${catkin_LIBRARIES} ${Boost_LIBRARIES})
Adjust the path and filename according to the actual location of your robot_arm_controller.cpp file within your package.
• Rebuild your Catkin workspace using the ‘catkin build’ command:
$ catkin build
This will compile the package with the newly added source file.

IV. To start the robot_arm_controller node in the catkin_ws workspace, follow these steps:
• Build your catkin workspace using the catkin build command:
$ catkin build
This step is important to ensure that your package and its dependencies are compiled and built correctly.
• Start the robot_arm_controller node using the rosrun command:
$ rosrun visual_servoing robot_arm_controller

3.8 Collision Safety.
The purpose of adding collision objects to the planning scene is to provide MoveIt with information about obstacles or objects that the robot must avoid during its motion planning. By adding these collision objects, MoveIt can generate collision-free paths for the robot to follow.

Figure 91 Collision Safety.
I. Create a C++ program for the collision objects in the ‘src’ folder of the visual_servoing package by running the following commands:
$ cd ~/catkin_ws/src/visual_servoing/src
$ gedit add_collision_object.cpp

II. In this code, three collision objects are added to the planning scene: “Table”, “Controller”, and “wall”. The “Table” and “Controller” objects represent obstacles that the robot must avoid, while the “wall” object represents a physical boundary that the robot cannot pass through. The code also defines the dimensions of the objects and the XYZ positions of their poses in RViz.

#include <ros/ros.h>
#include <moveit/move_group_interface/move_group_interface.h>
#include <moveit/planning_scene_interface/planning_scene_interface.h>
#include <moveit_msgs/CollisionObject.h>
#include <shape_msgs/SolidPrimitive.h>
#include <geometry_msgs/Pose.h>

using namespace moveit::planning_interface;

int main(int argc, char** argv)
{
ros::init(argc, argv, "move_robot");
ros::NodeHandle nh;

// Create a MoveGroupInterface instance
MoveGroupInterface move_group("manipulator");

// Set the planning time and number of planning attempts
move_group.setPlanningTime(5.0);
move_group.setNumPlanningAttempts(10);

// Create a PlanningSceneInterface instance
PlanningSceneInterface planning_scene_interface;

// Create and add the first collision object
moveit_msgs::CollisionObject collision_object1;
collision_object1.header.frame_id = move_group.getPlanningFrame();
collision_object1.id = "Table";
shape_msgs::SolidPrimitive Table;
Table.type = shape_msgs::SolidPrimitive::BOX;
Table.dimensions = {1.01, 0.85, 0.79};
geometry_msgs::Pose pose1;
pose1.position.x = 0.0;
pose1.position.y = 0.0;
pose1.position.z = -0.4;
pose1.orientation.w = 1.0;
collision_object1.primitives.push_back(Table);
collision_object1.primitive_poses.push_back(pose1);

planning_scene_interface.applyCollisionObject(collision_object1);
ROS_INFO("Added Table into the world");

// Create and add the second collision object
moveit_msgs::CollisionObject collision_object2;
collision_object2.header.frame_id = move_group.getPlanningFrame();
collision_object2.id = "Controller";
shape_msgs::SolidPrimitive Controller;
Controller.type = shape_msgs::SolidPrimitive::BOX;
Controller.dimensions = {0.7, 0.7, 0.5};
geometry_msgs::Pose pose2;
pose2.position.x = 0.0;
pose2.position.y = -0.85;
pose2.position.z = 0.25;
pose2.orientation.w = 1.0;
collision_object2.primitives.push_back(Controller);
collision_object2.primitive_poses.push_back(pose2);
planning_scene_interface.applyCollisionObject(collision_object2);
ROS_INFO("Added Controller into the world");

// Create and add the third collision object
moveit_msgs::CollisionObject collision_object3;
collision_object3.header.frame_id = move_group.getPlanningFrame();
collision_object3.id = "wall";
shape_msgs::SolidPrimitive wall;
wall.type = shape_msgs::SolidPrimitive::BOX;
wall.dimensions = {0.2, 2.0, 2.0};
geometry_msgs::Pose pose3;
pose3.position.x = -0.4;
pose3.position.y = 0.0;
pose3.position.z = 1.0;
pose3.orientation.w = 1.0;
collision_object3.primitives.push_back(wall);
collision_object3.primitive_poses.push_back(pose3);
planning_scene_interface.applyCollisionObject(collision_object3);
ROS_INFO("Added wall into the world");

ros::spin();
return 0;

}

By adding these collision objects to the planning scene, the robot can plan its motion safely and avoid collisions with the objects in the environment.
III. To include the “add_collision_object.cpp” file in the CMakeLists.txt file within the visual_servoing package, follow these steps:
• Open a terminal and navigate to the package directory in the catkin workspace:
$ cd /path/to/catkin_ws/src/visual_servoing
• Open the CMakeLists.txt file using a text editor of your choice. For example:
$ gedit CMakeLists.txt
• Inside the CMakeLists.txt file, you will find a section for adding source files to your package. Look for a line that starts with ‘add_executable’ and ‘target_link_libraries’.
add_executable(add_collision_object src/add_collision_object.cpp)
target_link_libraries(add_collision_object ${catkin_LIBRARIES} ${Boost_LIBRARIES})
Adjust the path and filename according to the actual location of your add_collision_object.cpp file within your package.
• Rebuild your Catkin workspace using the ‘catkin build’ command:
$ catkin build
This will compile the package with the newly added source file.

IV. To start the add_collision_object node in the catkin_ws workspace, follow these steps:
• Build your catkin workspace using the catkin build command:
$ catkin build
This step is important to ensure that your package and its dependencies are compiled and built correctly.
• Start the add_collision_object node using the rosrun command:
$ rosrun visual_servoing add_collision_object

Figure 92 Collision object warning
3.9 Combine program.
This combine program serves the purpose of launching and coordinating multiple components in an RViz environment.
I. Create a launch file for the combined system in the ‘launch’ folder of the visual_servoing package by running the following commands:
$ cd ~/catkin_ws/src/visual_servoing/launch
$ gedit visual_servoing.launch

Each section corresponds to a different component or functionality required for a specific robotic system. It specifies the configuration and launch sequence for several components and nodes.

<launch>
<!-- Start Moveit ABB IRB120 model -->
<arg name="sim" default="false" />
<arg name="robot_ip" default="192.168.125.1" />
<include file="$(find abb_irb120_moveit_config)/launch/moveit_planning_execution.launch">
    <arg name="sim" value="$(arg sim)" />
    <arg name="robot_ip" value="$(arg robot_ip)" />
</include>

<!-- Start Add collision object  -->
<include file="$(find visual_servoing)/launch/add_collision_object.launch" />

<!-- Start Realsense Camera -->
<include file="$(find realsense2_camera)/launch/rs_camera.launch" />

<!-- Start Aruco tracking -->
<include file="$(find aruco_ros)/launch/single.launch" />
</launch>

The nodes launched in each section:
• Moveit ABB IRB120 model
This section launches the MoveIt configuration for an ABB IRB120 robot. It includes the moveit_planning_execution.launch file from the abb_irb120_moveit_config package.
• Add collision object
This section includes the add_collision_object.launch file from the visual_servoing package (a minimal sketch of this file is given after this list).
• Realsense Camera
This section launches the Realsense camera by including the rs_camera.launch file from the realsense2_camera package.
• Aruco tracking:
This section launches Aruco tracking by including the single.launch file from the aruco_ros package.
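The add_collision_object.launch file included above is not listed elsewhere in this report; a minimal sketch, assuming it simply runs the add_collision_object node built in section 3.8, would be:

<launch>
    <!-- Runs the collision-scene node built from add_collision_object.cpp -->
    <node pkg="visual_servoing" type="add_collision_object" name="add_collision_object" output="screen" />
</launch>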
II. To start the visual_servoing.launch file in the catkin_ws workspace, follow these steps:
• Build your catkin workspace using the catkin build command:
$ catkin build
This step is important to ensure that your package and its dependencies are compiled and built correctly.
• Start the visual_servoing.launch file using the roslaunch command:
$ roslaunch visual_servoing visual_servoing.launch

CHAPTER 4
Results
4.1 Flow chart

Figure 93 Flow chart Visual servoing
• First, the system receives camera data parameters from the camera sensor.
• The Aruco marker is detected using the camera data, and a geometry message is sent out and transformed into the goal pose of the robot arm.
• The system receives the joint positions from the robot sensors to define the start pose of the robot arm.
• The start pose and goal pose are set in the MoveIt API to generate a planned path from the OMPL library. The path is checked for collisions before the robot executes the motion to the goal pose.

4.2 Results Visual servoing

Figure 94 Visual servoing working

Visual servoing is a technique used in robotics that utilizes visual feedback from cameras or sensors to control the motion of a robot.
I. To run the visual servoing system in Ubuntu, follow these steps:
Open the terminal and run this command to start all nodes of the combined program. The program will load the robot model, the collision object environment, and the camera sensor functions:
$ roslaunch visual_servoing visual_servoing.launch

Figure 95 Visual servoing working in program.
II. Start visual servoing
Run this command to start visual servoing; the camera sensor will send pose values for the robot controller to receive:
$ roslaunch visual_servoing robot_arm_controller.launch
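The robot_arm_controller.launch file is not listed in this report; a minimal sketch, assuming it simply runs the robot_arm_controller node built in section 3.6, would be:

<launch>
    <!-- Runs the visual servoing node built from robot_arm_controller.cpp -->
    <node pkg="visual_servoing" type="robot_arm_controller" name="robot_arm_controller" output="screen" />
</launch>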

• The program initializes a ROS node and subscribes to the Aruco pose estimate, which is used to define the robot arm's goal pose.

• When the start pose is received from the robot sensors and the goal pose from the camera sensor, the program extracts the position and orientation values and assigns them to the planning path variables.

• The program plans and executes the motion of the manipulator robot to reach the goal pose using the MoveIt library.

• If the program detects that the planned path is in collision, the robot stops and a warning message is sent.

Figure 96 Visual servoing with program

CHAPTER 5
Conclusion

5.1 Performance Summary
In conclusion, this report has presented a comprehensive investigation into the implementation of visual servoing techniques applied to an ABB robot simulation, with communication facilitated through ROS using IP. The integration of RViz and a camera sensor, coupled with the detection of Aruco markers, has enabled precise control and manipulation of the robot’s motion based on visual feedback.
Through this research, we have demonstrated the efficacy of visual servoing as a powerful method for robot control, offering advantages such as enhanced adaptability, real-time responsiveness, and improved accuracy. By utilizing the visual information captured by the camera sensor and employing the Aruco marker detection algorithm, the robot was able to autonomously adjust its position and orientation to reach desired targets with minimal error. RViz, as a powerful visualization tool provided by ROS, has served as a valuable interface for monitoring and controlling the robot’s behavior. Its intuitive interface and real-time feedback capabilities have greatly contributed to the overall success of the visual servoing system.
In summary, the integration of visual servoing techniques with an ABB robot simulation, ROS communication via IP, RViz control, and the utilization of a camera sensor with Aruco marker detection has demonstrated the potential for precise and adaptive robot control. The findings presented in this report contribute to the advancement of visual servoing methods and pave the way for their application in diverse domains such as industrial automation, robotic manipulation, and object tracking.

5.2 Problems and suggestions

Problems:
• The camera sensor value arrives later than the robot receives data, so there is a delay in the movement.
• The USB connection between the camera and the computer is unstable.
Suggestions:
• Install Ubuntu natively on the computer rather than using Oracle VirtualBox; the result will be a more stable connection.
Future research endeavors should focus on refining the system’s performance by exploring advanced marker detection algorithms, addressing limitations related to lighting and occlusions, and investigating potential applications in real-world scenarios.
The ongoing progress in visual servoing technology promises significant contributions to the field of robotics, enabling more accurate and versatile control of robotic systems in various practical settings.

References
Hoorn, G. v., 2023. abb_driver: RobotStudio Tutorial. [Online] Available at: http://wiki.ros.org/abb_driver/Tutorials/RobotStudio
Hoorn, G. v., 2023. ABB Experimental. [Online] Available at: http://wiki.ros.org/abb_experimental
Open Source Robotics Foundation, Inc. (OSRF), n.d. ROS Documentation. [Online] Available at: http://wiki.ros.org/noetic/Installation
Ingvaldsen, M., 2019. The benefits of 3D hand-eye calibration. [Online] Available at: https://blog.zivid.com/importance-of-3d-hand-eye-calibration
Intel® RealSense™, 2023. RealSense-ROS. [Online] Available at: https://github.com/IntelRealSense/realsense-ros
Kalachev, O., 2022. Aruco Generator. [Online] Available at: https://chev.me/arucogen/
PAL Robotics S.L., 2022. aruco_ros. [Online] Available at: https://github.com/pal-robotics/aruco_ros
Raessa, M., 2022. MoveIt Tutorials. [Online] Available at: https://ros-planning.github.io/moveit_tutorials/
