Stanko Kružić
Postdoctoral Researcher

Information

Location
Split, Croatia
Position
Postdoc
Degree
PhD
Fields
Robotics, AI

Skills

JavaScript, Python, MATLAB, LaTeX

About me

My name is Stanko Kružić. I work as a postdoctoral researcher at the University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB). My research focuses on applications of deep learning to autonomous mobile robotic manipulators, a topic at the intersection of several fascinating engineering fields: mobile robotics, robotic manipulation, human-robot interaction, artificial intelligence, and computer vision.

Publications

End-Effector Force and Joint Torque Estimation of a 7-DoF Robotic Manipulator Using Deep Learning
Stanko Kružić et al.
2021-11-28
MDPI Electronics
Journal

When a mobile robotic manipulator interacts with other robots, people, or the environment in general, the end-effector forces need to be measured to assess whether a task has been completed successfully. Traditional force and torque estimation methods are usually based on observers, which require knowledge of the robot dynamics. In contrast, our approach involves two methods based on deep neural networks: robot end-effector force estimation and joint torque estimation. These methods require no knowledge of robot dynamics and are computationally efficient, but they require a force sensor under the robot base. Several network architectures were evaluated for each task, and the best-performing ones were identified. First, the data for training the networks were obtained in simulation. The trained networks showed reasonably good performance, especially with the LSTM architecture (with a root mean squared error (RMSE) of 0.1533 N for end-effector force estimation and 0.5115 Nm for joint torque estimation). Afterward, data were collected on a real Franka Emika Panda robot and used to train the same networks for joint torque estimation. The obtained results are slightly worse than in simulation (0.6189 Nm vs. 0.5115 Nm RMSE) but still reasonably good, showing the validity of the proposed approach.
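The LSTM estimator above maps a time series of base force-sensor readings to an end-effector force estimate. As a rough illustration of the recurrence involved (not the paper's actual network; the single hidden unit, the toy weights, and the linear readout are all placeholder assumptions), a minimal LSTM cell can be sketched in plain Python:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, p):
    """One step of a single-unit LSTM cell.

    x      -- input vector (e.g. a 6-D base wrench sample [Fx, Fy, Fz, Tx, Ty, Tz])
    h_prev -- previous hidden state (scalar, single unit)
    c_prev -- previous cell state (scalar)
    p      -- per-gate parameters: for g in {i, f, o, c}, input weights 'Wx',
              recurrent weight 'Wh', and bias 'b'
    """
    def pre(g):  # pre-activation of gate g
        return sum(w * xi for w, xi in zip(p[g]["Wx"], x)) + p[g]["Wh"] * h_prev + p[g]["b"]

    i = sigmoid(pre("i"))           # input gate
    f = sigmoid(pre("f"))           # forget gate
    o = sigmoid(pre("o"))           # output gate
    c_tilde = math.tanh(pre("c"))   # candidate cell state
    c = f * c_prev + i * c_tilde    # updated cell state
    h = o * math.tanh(c)            # updated hidden state
    return h, c

# Toy parameters (all gates share the same placeholder weights here).
gate = {"Wx": [0.1] * 6, "Wh": 0.2, "b": 0.0}
params = {g: dict(gate) for g in ("i", "f", "o", "c")}

# Run over a short synthetic sequence of base-wrench samples, then a
# linear readout turns the final hidden state into a force estimate.
h, c = 0.0, 0.0
for wrench in [[0.5, -0.1, 9.8, 0.0, 0.0, 0.1]] * 5:
    h, c = lstm_step(wrench, h, c, params)
force_estimate = 2.0 * h  # placeholder readout weight
```

In a trained network the gate weights would be learned from the recorded wrench sequences rather than fixed by hand.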

Estimating Robot Manipulator End-effector Forces using Deep Learning
Stanko Kružić et al.
2020-10-28
MIPRO 2020
Conference

Measuring the interaction forces at a robotic manipulator's end-effector can in certain cases be challenging, especially with robots that have a small payload (and are consequently unable to carry a wrist-mounted force sensor), which is often the case with educational robots. In this paper, a method for estimating end-effector forces using measurements from a base-mounted force sensor and deep neural networks is presented. Several deep architectures were trained on data collected on a real 6-DoF robot manipulator (a Commonplace Robotics Mover6) using a custom-made interaction object operated by a human. The obtained results show that, with an appropriate deep architecture, promising estimates can be achieved (with an RMSE on the test set of 16%, 12%, and 6% of the maximum force in the x, y, and z directions, respectively). This makes the approach suitable for a variety of applications, including but not limited to haptic feedback interfaces for robot control.
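The error figures above express RMSE as a percentage of the maximum force along each axis. The normalization is straightforward to compute; the sketch below shows one plausible reading of the metric (the exact normalization used in the paper may differ), with made-up example data:

```python
import math

def rmse(y_true, y_pred):
    """Root mean squared error between two equal-length sequences."""
    assert len(y_true) == len(y_pred) and y_true
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def rmse_percent_of_max(y_true, y_pred):
    """RMSE expressed as a percentage of the peak absolute measured force."""
    return 100.0 * rmse(y_true, y_pred) / max(abs(v) for v in y_true)

# Synthetic example: measured vs. estimated forces along one axis (in N).
measured  = [0.0, 1.0, 2.5, 5.0, 2.0]
estimated = [0.1, 0.8, 2.7, 4.6, 2.1]
err_pct = rmse_percent_of_max(measured, estimated)
```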

Robotics and Information Technologies in Education: Four Countries from Alpe-Adria-Danube Region Survey
Josip Musić et al.
2020-10-13
International Journal of Technology and Design Education
Journal

This paper presents the results of a survey conducted during 2018 in four countries: Bulgaria, Greece, Bosnia and Herzegovina, and Croatia. The survey is part of the activities of the project “Increasing the well being of the population by RObotic and ICT based iNNovative education” (RONNI), funded by the Danube Strategic Project Fund (DSPF). The survey covered two target groups, teachers/experts and parents, and the corresponding questionnaires were delivered to schools in each of the participating countries. A total of 428 subjects participated in the survey (231 parents and 197 teachers/experts). Seven hypotheses related to stakeholders' attitudes and opinions were formulated and tested, showing a highly favorable sentiment toward the inclusion of robotics and information technology (IT) in the classroom, with some exceptions. The conclusions drawn from the analysis of the results can be used to propose strategies and methodologies for boosting the inclusion of IT in the teaching process, transferable across the regions to support effective learning, as well as to identify possible implementation problems related to the attitudes of the stakeholders: teachers and parents.

Detecting Underwater Sea Litter Using Deep Neural Networks: An Initial Study
Josip Musić et al.
2020-09-23
SpliTech 2020
Conference

The world’s seas and oceans are under constant negative pressure caused by human activity. It is estimated that more than 150 million tonnes of litter will have accumulated in the world’s oceans by 2025, with up to 12.7 million tonnes of litter added to the sea every year. Besides ecological issues, marine litter can also hurt the economy of the affected areas. Detection and classification of sea litter is thus the first step in tracking the litter and, consequently, the basis for any automatic or human-operated marine litter retrieval system. Modern convolutional neural networks are a logical choice for the detection and classification algorithms, since they have proven themselves time after time in image-based machine learning tasks. Nevertheless, according to the available literature, the application of such networks to underwater images for marine litter detection (and classification) has started only recently. The paper therefore carries out an initial study on the performance of such a detection and classification system, constructed in several ways, with several architectures, and using several sources of training data. It is shown that the obtained validation accuracy is around 88% and the test accuracy around 85%, depending on the architecture used, and that the inclusion of synthetically generated images reduces network performance on a real-world image dataset.
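The convolutional networks mentioned above rest on 2-D convolution plus a nonlinearity as their core operation. As a toy illustration only (a real litter classifier stacks many learned filters, pooling, and a classification head), a valid-mode convolution with ReLU can be written directly; the image and kernel values are placeholders:

```python
def conv2d_valid(image, kernel):
    """Valid-mode 2-D cross-correlation of a grayscale image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            s = sum(image[i + a][j + b] * kernel[a][b]
                    for a in range(kh) for b in range(kw))
            row.append(s)
        out.append(row)
    return out

def relu(feature_map):
    """Element-wise rectified linear unit."""
    return [[max(0.0, v) for v in row] for row in feature_map]

# A tiny 3x3 "image" and a 2x2 vertical-edge kernel (placeholder values).
img = [[1.0, 1.0, 0.0],
       [1.0, 1.0, 0.0],
       [0.0, 0.0, 0.0]]
ker = [[1.0, -1.0],
       [1.0, -1.0]]
fmap = relu(conv2d_valid(img, ker))
```

The kernel responds most strongly where the bright region meets the dark one, which is the kind of local feature a trained network learns automatically.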

Deep Semantic Image Segmentation for UAV-UGV Cooperative Path Planning: A Car Park Use Case
Mirela Kundid Vasić et al.
2020-09-17
SoftCOM 2020
Conference

Navigation of Unmanned Ground Vehicles (UGVs) in unknown environments is an active area of research in mobile robotics. A main hindering factor for UGV navigation is the limited range of the on-board sensors, which process only restricted areas of the environment at a time. In addition, most existing approaches process sensor information under the assumption of a static environment. This restrains the exploration capability of the UGV, especially in time-critical applications such as search and rescue. Cooperation with an Unmanned Aerial Vehicle (UAV) can provide the UGV with an extended perspective of the environment, enabling a better-suited path planning solution that can be adjusted on demand. In this work, we propose a UAV-UGV cooperative path planning approach for dynamic environments by performing semantic segmentation on images acquired from the UAV's view via a deep neural network. The approach is evaluated in a car park scenario, with the goal of providing a path plan to an empty parking space for a ground-based vehicle. The experiments were performed on a dataset of real-world car park images collected in Croatia and Germany, in addition to images from a simulated environment. The segmentation results demonstrate the viability of the proposed approach in producing maps of the dynamic environment on demand and accordingly generating path plans for ground-based vehicles.
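Once the UAV imagery has been segmented into free and occupied cells, planning a path for the ground vehicle on the resulting grid map is a standard search problem. The sketch below is not the paper's planner; the grid, the 0/1 labels, and 4-connectivity are assumptions. It uses breadth-first search to find a shortest route to an empty parking cell:

```python
from collections import deque

def bfs_path(grid, start, goal):
    """Shortest 4-connected path on an occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([start])
    came_from = {start: None}  # also serves as the visited set
    while queue:
        cell = queue.popleft()
        if cell == goal:
            path = []          # reconstruct by walking parents back to start
            while cell is not None:
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in came_from:
                came_from[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # goal unreachable

# Toy map derived from a segmentation mask: 1 = parked car / obstacle,
# goal = the detected empty parking space.
grid = [[0, 0, 1],
        [1, 0, 1],
        [1, 0, 0]]
path = bfs_path(grid, (0, 0), (2, 2))
```

Because the segmentation can be re-run on demand, the grid (and hence the plan) can be refreshed whenever the car park changes.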

Crash course learning: an automated approach to simulation-driven LiDAR-based training of neural networks for obstacle avoidance in mobile robotics
Stanko Kružić et al.
2020-01-27
Turkish Journal of Electrical Engineering and Computer Sciences
Journal

The paper proposes and implements a self-supervised, simulation-driven approach to collecting data for training perception-based shallow neural networks for mobile robot obstacle avoidance. In the approach, a 2D LiDAR sensor is used as the information source for training the networks. The paper analyses neural network performance in terms of the number of layers and neurons, as well as the amount of data needed for reliable robot operation. Once the best architecture is identified, it is trained using only data obtained in simulation, then implemented and tested on a real robot (TurtleBot 2) in several simulation and real-world scenarios. The obtained results show that this fast and simple approach is very powerful, yielding good results in a variety of challenging environments with both static and dynamic obstacles.