My name is Stanko Kružić. I work as a postdoctoral researcher at the University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB). The focus of my research is the application of deep learning to autonomous mobile robotic manipulators, which lies at the intersection of several engineering fields: mobile robotics, robotic manipulators, human-robot interaction, artificial intelligence and computer vision.
When a robotic manipulator interacts with its environment, the end-effector forces need to be measured to assess whether a task has been completed successfully and for safety reasons. Traditionally, these forces are either measured directly by a 6-dimensional (6D) force–torque sensor mounted on the robot's wrist or estimated by observer-based methods, which require knowledge of the robot's exact model. In contrast, the proposed approach uses an array of low-cost 1-dimensional (1D) strain gauge sensors mounted beneath the robot's base, in conjunction with time series neural networks, to estimate both the end-effector 3-dimensional (3D) interaction forces and the robot joint torques. The method does not require knowledge of robot dynamics. For comparison, the same approach was applied to measurements from a 6D force–torque sensor mounted beneath the robot's base. The trained networks showed reasonably good performance using the long short-term memory (LSTM) architecture, with a root mean squared error (RMSE) of 1.945 N (vs. 2.004 N with the 6D force–torque sensor) for end-effector force estimation and 3.006 Nm (vs. 3.043 Nm with the 6D force–torque sensor) for robot joint torque estimation. The results obtained with the array of 1D strain gauges were comparable with those obtained with the robot's built-in sensor, demonstrating the validity of the proposed approach.
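The time-series formulation behind this kind of estimator can be illustrated with a small sketch. All names, shapes, and window sizes below are illustrative assumptions, not the actual pipeline from the paper: sensor readings are sliced into overlapping windows, each window would be fed to a sequence model such as an LSTM, and RMSE scores the resulting estimates.

```python
import numpy as np

def make_windows(signals, targets, window):
    """Slice a multi-channel sensor time series into overlapping windows.

    signals: (T, n_gauges) array of strain-gauge readings
    targets: (T, 3) array of ground-truth end-effector forces [N]
    Returns X with shape (T - window + 1, window, n_gauges) and the
    matching target y for the step at which each window ends.
    """
    X = np.stack([signals[i:i + window]
                  for i in range(len(signals) - window + 1)])
    y = targets[window - 1:]
    return X, y

def rmse(pred, true):
    """Root mean squared error, the metric used to score the estimates."""
    return float(np.sqrt(np.mean((np.asarray(pred) - np.asarray(true)) ** 2)))

# Toy data: 100 time steps from 8 strain gauges, windows of 20 steps each.
T, n_gauges, window = 100, 8, 20
signals = np.random.default_rng(0).standard_normal((T, n_gauges))
targets = np.zeros((T, 3))
X, y = make_windows(signals, targets, window)
# X.shape == (81, 20, 8); each X[i] is one input sequence for the estimator.
```

The window length trades off latency against temporal context: longer windows give the sequence model more history at the cost of a delayed estimate.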
Mobile robotic manipulators often interact with other robots, humans or the environment in indoor and outdoor scenarios. In many cases, end-effector forces need to be known to give feedback about task completion. The mobile base might be tilted due to the uneven surface on which it is positioned. This paper presents an approach to estimating end-effector forces based on neural networks in such cases. The estimates are inferred from a force sensor mounted under the robot's base and knowledge of the tilt angle. The robot's dynamic model does not have to be known, since it is learned from data during neural network training. The dataset for this research was obtained in simulation. The angle between the robot and the surface was varied to simulate the changes in surface slope that a mobile manipulator might encounter during real-world tasks. The trained neural network performs well regardless of the angle between the base and the ground, with an RMSE of 0.302 N on the test set. Furthermore, there was no significant difference between the RMSE computed across all test data and the RMSE computed on a per-angle basis, demonstrating the effectiveness of the proposed approach.
When a mobile robotic manipulator interacts with other robots, people, or the environment in general, the end-effector forces need to be measured to assess whether a task has been completed successfully. Traditionally, force or torque estimation methods are based on observers, which require knowledge of the robot dynamics. In contrast, our approach involves two methods based on deep neural networks: robot end-effector force estimation and joint torque estimation. These methods require no knowledge of robot dynamics and are computationally efficient, but require a force sensor under the robot base. Several different architectures were considered for the tasks, and the best ones were identified among those tested. First, the data for training the networks were obtained in simulation. The trained networks showed reasonably good performance, especially with the LSTM architecture (with a root mean squared error (RMSE) of 0.1533 N for end-effector force estimation and 0.5115 Nm for joint torque estimation). Afterward, data were collected on a real Franka Emika Panda robot and used to train the same networks for joint torque estimation. The obtained results are slightly worse than in simulation (0.5115 Nm in simulation vs. 0.6189 Nm on the real robot, according to the RMSE metric) but still reasonably good, showing the validity of the proposed approach.
The measurement of robotic manipulator end-effector interaction forces can in certain cases be challenging, especially with robots that have a small payload (and are consequently not capable of carrying a wrist-mounted force sensor), which is often the case with educational robots. In this paper, a method for the estimation of end-effector forces using measurements from a base-mounted force sensor and deep neural networks is presented. Several deep architectures were trained using data collected on a real 6-DOF robot manipulator (Commonplace Robotics Mover6) using a custom-made interaction object operated by a human. The obtained results show that, with an appropriate deep architecture, promising estimates can be achieved (with an RMSE on the test set of 16%, 12% and 6% of the maximum force along the x, y and z axes, respectively). This makes the approach suitable for a variety of applications, including but not limited to haptic feedback interfaces for robot control.
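The per-axis percentage figures quoted above correspond to an RMSE normalized by the peak measured force on each axis. A minimal sketch of such a metric (the function name and normalization choice are assumptions for illustration, not taken from the paper):

```python
import numpy as np

def rmse_pct_of_max(pred, true):
    """Per-axis RMSE expressed as a percentage of the peak measured force.

    pred, true: (N, 3) arrays of estimated and measured forces along x, y, z.
    """
    pred, true = np.asarray(pred, float), np.asarray(true, float)
    err = np.sqrt(np.mean((pred - true) ** 2, axis=0))   # per-axis RMSE [N]
    return 100.0 * err / np.max(np.abs(true), axis=0)    # [% of max force]

# Toy check: a constant 1 N error against a 10 N peak gives 10% per axis.
true = np.array([[10.0, 10.0, 10.0], [-10.0, -10.0, -10.0]])
pred = true + 1.0
print(rmse_pct_of_max(pred, true))  # → [10. 10. 10.]
```

Normalizing per axis makes the error comparable across directions with very different force magnitudes, which plain RMSE in newtons would obscure.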
This paper presents the results of a survey conducted during 2018 in four countries: Bulgaria, Greece, Bosnia and Herzegovina, and Croatia. The survey is part of the activities within the project “Increasing the well being of the population by RObotic and ICT based iNNovative education” (RONNI), funded by the Danube Strategic Project Fund (DSPF). The survey included two target groups, teachers/experts and parents, and the corresponding questionnaires were delivered to schools in each of the participating countries. A total of 428 subjects participated in the survey (231 parents and 197 teachers/experts). Seven hypotheses related to stakeholders' attitudes and opinions were formed and tested, showing a highly favorable sentiment toward the inclusion of robotics and information technology (IT) in the classroom, with some exceptions. The conclusions drawn from the analysis of the results can be used to propose strategies and methodologies aimed at boosting the inclusion of IT in the teaching process, transferable across the regions to support effective learning, as well as to identify possible implementation problems in relation to the attitudes of the stakeholders: teachers and parents.
The world's seas and oceans are under constant negative pressure caused by human activity. It is estimated that more than 150 million tonnes of litter will have accumulated in the world's oceans by 2025, with up to 12.7 million tonnes of litter added to the sea every year. Besides ecological issues, marine litter can also hurt the economy of the affected areas. The detection and classification of sea litter is thus a first step in tracking the litter and, consequently, a basis for the development of any automatic or human-based marine litter retrieval system. Modern convolutional neural networks are a logical choice for detection and classification algorithms, since they have proven themselves time after time in image-based machine learning tasks. Nevertheless, according to the available literature, the application of such neural networks to underwater images for marine litter detection (and classification) has started only recently. Thus, the paper carries out an initial study on the performance of such a detection and classification system, constructed in several ways, with several architectures, and using several sources of training data. It is shown that the obtained validation accuracy is around 88% and the test accuracy around 85%, depending on the architecture used, and that the inclusion of synthetically generated images reduces the network's performance on a real-world image dataset.
Navigation of Unmanned Ground Vehicles (UGVs) in unknown environments is an active area of research in mobile robotics. A main hindering factor for UGV navigation is the limited range of the on-board sensors, which process only restricted areas of the environment at a time. In addition, most existing approaches process sensor information under the assumption of a static environment. This restrains the exploration capability of the UGV, especially in time-critical applications such as search and rescue. Cooperation with an Unmanned Aerial Vehicle (UAV) can provide the UGV with an extended perspective of the environment, enabling a better-suited path planning solution that can be adjusted on demand. In this work, we propose a UAV-UGV cooperative path planning approach for dynamic environments that performs semantic segmentation on images acquired from the UAV's view via a deep neural network. The approach is evaluated in a car park scenario, with the goal of providing a path plan to an empty parking space for a ground-based vehicle. The experiments were performed on a dataset of real-world car park images collected in Croatia and Germany, in addition to images from a simulated environment. The segmentation results demonstrate the viability of the proposed approach in producing maps of the dynamic environment on demand and accordingly generating path plans for ground-based vehicles.
The paper proposes and implements a self-supervised, simulation-driven approach to data collection for training perception-based shallow neural networks for mobile robot obstacle avoidance. In the approach, a 2D LiDAR sensor was used as the information source for training the neural networks. The paper analyses neural network performance in terms of the number of layers and neurons, as well as the amount of data needed for reliable robot operation. Once the best architecture is identified, it is trained using only data obtained in simulation, then implemented and tested on a real robot (Turtlebot 2) in several simulation and real-world scenarios. The obtained results show that this fast and simple approach is very powerful, yielding good results in a variety of challenging environments with both static and dynamic obstacles.
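To illustrate what a "shallow" perception network of this kind might look like, here is a toy forward pass mapping a 2D LiDAR scan to a normalized velocity command. All sizes and the random weights are assumptions for the sketch; they are not the trained model from the paper.

```python
import numpy as np

def mlp_forward(scan, W1, b1, W2, b2):
    """One hidden layer: LiDAR ranges -> (linear, angular) command in (-1, 1)."""
    h = np.tanh(scan @ W1 + b1)   # hidden-layer activations
    return np.tanh(h @ W2 + b2)   # bounded velocity command

# Toy dimensions: 180 beams, 16 hidden units, 2 outputs (v, w).
rng = np.random.default_rng(42)
n_beams, n_hidden = 180, 16
W1 = 0.1 * rng.standard_normal((n_beams, n_hidden))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal((n_hidden, 2))
b2 = np.zeros(2)

scan = np.full(n_beams, 3.0)          # all beams read 3 m: open space ahead
cmd = mlp_forward(scan, W1, b1, W2, b2)
# cmd.shape == (2,); both components lie in (-1, 1) thanks to tanh
```

A network this small runs comfortably on a Turtlebot-class onboard computer, which is one practical appeal of the shallow-architecture approach.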