Stanko Kružić
Personal academic web site
.01

ABOUT

PERSONAL DETAILS
Ruđera Boškovića 32, Split, Croatia
skruzic@fesb.hr
+385 21 305 648
I am a research/teaching assistant and a PhD student at the University of Split, FESB. My primary field of scientific interest is mobile robotics.

BIO

ABOUT ME

My name is Stanko Kružić. I work as a teaching/research assistant at the University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB).

The area of my scientific interest, and the focus of my PhD research, is autonomous mobile robots, a field at the intersection of several very interesting engineering disciplines: robotics, human-robot interaction, human-computer interaction, artificial intelligence and computer vision.

HOBBIES

INTERESTS

I play bridge regularly at a local club, as well as in the Croatian 2nd division for teams. I was an assistant tournament director (TD) at several European and World championships.

I play football on a regular basis, but non-competitively, just for fun.

.02

RESUME

EDUCATION
  • 2015
    NOW
    Split, Croatia

    PhD IN ROBOTICS (in progress)

    University of Split, FESB

    Supervisor: Assistant Professor Josip Musić, PhD
    Co-supervisor: Professor Roman Kamnik, PhD (University of Ljubljana, Slovenia)
  • 2004
    2009
    Split, Croatia

MASTER OF ELECTRICAL ENGINEERING

    University of Split, FESB

    Area of specialisation: Automatic control and systems
    Supervisor: Professor Jadranka Marasović, PhD
ACADEMIC AND PROFESSIONAL POSITIONS
  • 2016
    NOW
    Split, Croatia

    RESEARCH/TEACHING ASSISTANT

    University of Split, FESB

    Department of Electrical Engineering and Computing
    Chair of Automatic Control and Systems
  • 2011
    2016
    Split, Croatia

    HEAD OF IT OFFICE

    University of Split, Department of Forensic Sciences

.03

PUBLICATIONS

PUBLICATIONS LIST
28 Nov 2021

End-Effector Force and Joint Torque Estimation of a 7-DoF Robotic Manipulator Using Deep Learning

MDPI Electronics

S. Kružić, J. Musić, R. Kamnik, V. Papić
Journal paper (selected)

When a mobile robotic manipulator interacts with other robots, people, or the environment in general, the end-effector forces need to be measured to assess if a task has been completed successfully. Traditionally used force or torque estimation methods are usually based on observers, which require knowledge of the robot dynamics. Contrary to this, our approach involves two methods based on deep neural networks: robot end-effector force estimation and joint torque estimation. These methods require no knowledge of robot dynamics and are computationally effective but require a force sensor under the robot base. Several different architectures were considered for the tasks, and the best ones were identified among those tested. First, the data for training the networks were obtained in simulation. The trained networks showed reasonably good performance, especially using the LSTM architecture (with a root mean squared error (RMSE) of 0.1533 N for end-effector force estimation and 0.5115 Nm for joint torque estimation). Afterward, data were collected on a real Franka Emika Panda robot and then used to train the same networks for joint torque estimation. The obtained results are slightly worse than in simulation (0.5115 Nm vs. 0.6189 Nm, according to the RMSE metric) but still reasonably good, showing the validity of the proposed approach.

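As an illustration of how such a learning-based estimator can be set up, here is a minimal sketch of an LSTM regressor in Keras that maps a short window of base force/torque and joint-state signals to end-effector forces. The window length, feature layout, layer sizes and the synthetic data are my own assumptions for demonstration, not the architecture or data used in the paper.

    # Hypothetical sketch of an LSTM regressor for end-effector force estimation:
    # base force/torque plus joint states in, end-effector forces out.
    # Shapes and hyperparameters are illustrative only.
    import numpy as np
    import tensorflow as tf

    WINDOW = 32          # number of consecutive samples fed to the LSTM (assumed)
    N_FEATURES = 6 + 14  # base F/T (6) + joint positions and velocities (2 x 7), assumed
    N_OUTPUTS = 3        # end-effector force components Fx, Fy, Fz

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(WINDOW, N_FEATURES)),
        tf.keras.layers.LSTM(64),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(N_OUTPUTS),  # linear output for regression
    ])
    model.compile(optimizer="adam", loss="mse",
                  metrics=[tf.keras.metrics.RootMeanSquaredError()])

    # Placeholder data standing in for simulated or recorded robot trajectories.
    X = np.random.randn(256, WINDOW, N_FEATURES).astype("float32")
    y = np.random.randn(256, N_OUTPUTS).astype("float32")
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
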
13 Oct 2020

Robotics and Information Technologies in Education: Four Countries from Alpe-Adria-Danube Region Survey

International Journal of Technology and Design Education

J. Musić et al.
Journal paper

This paper presents the results of a survey that was conducted during 2018 in four countries: Bulgaria, Greece, Bosnia and Herzegovina, and Croatia. The survey is part of the activities within the project “Increasing the well being of the population by RObotic and ICT based iNNovative education” (RONNI), funded by the Danube Strategic Project Fund (DSPF). The survey included two target groups, teachers/experts and parents, and the corresponding questionnaires (QR) were delivered to schools in each of the participating countries. A total of 428 subjects participated in the survey (231 parents and 197 teachers/experts). Seven hypotheses related to stakeholders' attitudes and opinions were formed and tested in the work, showing highly favorable sentiment toward the inclusion of robotics and information technology (IT) in the classroom, but with some exceptions. The conclusions drawn, based on the analysis of the results, can be used for proposing strategies and methodologies aimed at boosting the inclusion of IT in the teaching process, transferable across the regions to support effective learning, as well as to identify possible problems with their implementation in relation to the attitudes of the stakeholders: teachers and parents.

23 Sep 2020

Detecting Underwater Sea Litter Using Deep Neural Networks: An Initial Study

SpliTech 2020

J. Musić, S. Kružić, I. Stančić, F. Alexandrou
Conference paper

The world's seas and oceans are under constant negative pressure caused by human activity. It is estimated that more than 150 million tonnes of litter will have accumulated in the world's oceans by 2025, while up to 12.7 million tonnes of litter are added to the sea every year. Besides ecology-related issues, marine litter can also hurt the economy of the affected areas. Detection and classification of sea litter thus become the first step in tracking the litter and consequently a basis for the development of any automatic or human-based marine litter retrieval system. Modern convolutional neural networks are a logical choice for detection and classification algorithms since they have proven themselves time after time in image-based machine learning tasks. Nevertheless, according to the available literature, the application of such neural networks to underwater images for marine litter detection (and classification) has started only recently. Thus, the paper carries out an initial study on the performance of such a detection and classification system constructed in several ways, with several architectures, and using several sources of training data. It is shown that the obtained validation accuracy is around 88% and the test accuracy around 85%, depending on the used architecture, and that the inclusion of synthetically generated images reduces the network performance on a real-world image dataset.

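For context, the general pattern behind such an image classifier might look like the sketch below: a pretrained convolutional backbone with a small classification head fine-tuned on litter images. The choice of MobileNetV2, the input size and the two-class setup are illustrative assumptions on my part; the paper compares several architectures and data sources.

    # Illustrative transfer-learning classifier for underwater litter images.
    import tensorflow as tf

    N_CLASSES = 2  # e.g. litter vs. no litter (assumed; the paper may use more classes)

    base = tf.keras.applications.MobileNetV2(
        input_shape=(224, 224, 3), include_top=False, weights="imagenet")
    base.trainable = False  # start by training only the new classification head

    model = tf.keras.Sequential([
        base,
        tf.keras.layers.GlobalAveragePooling2D(),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(train_ds, validation_data=val_ds, epochs=10)  # datasets not shown here
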
18 Sep 2020

Deep Semantic Image Segmentation for UAV-UGV Cooperative Path Planning: A Car Park Use Case

SoftCOM 2020

M. Kundid Vasić et al.
Conference paper

Navigation of Unmanned Ground Vehicles (UGVs) in unknown environments is an active area of research in mobile robotics. A main hindering factor for UGV navigation is the limited range of the on-board sensors, which process only restricted areas of the environment at a time. In addition, most existing approaches process sensor information under the assumption of a static environment. This restrains the exploration capability of the UGV, especially in time-critical applications such as search and rescue. Cooperation with an Unmanned Aerial Vehicle (UAV) can provide the UGV with an extended perspective of the environment, which enables a better-suited path planning solution that can be adjusted on demand. In this work, we propose a UAV-UGV cooperative path planning approach for dynamic environments by performing semantic segmentation on images acquired from the UAV's view via a deep neural network. The approach is evaluated in a car park scenario, with the goal of providing a path plan to an empty parking space for a ground-based vehicle. The experiments were performed on a created dataset of real-world car park images located in Croatia and Germany, in addition to images from a simulated environment. The segmentation results demonstrate the viability of the proposed approach in producing maps of the dynamic environment on demand and accordingly generating path plans for ground-based vehicles.

28 Sep 2020

Estimating Robot Manipulator End-effector Forces using Deep Learning

MIPRO 2020

S. Kružić, J. Musić, R. Kamnik, V. Papić
Conference paper

The measurement of robotic manipulator end-effector interaction forces can in certain cases be challenging, especially when using robots that have a small payload (and are consequently not capable of carrying a wrist-mounted force sensor), which is often the case with educational robots. In the paper, a method for the estimation of end-effector forces using measurements from a base-mounted force sensor and deep neural networks is presented. Several deep architectures were trained using data collected on a real 6-DOF robot manipulator (Commonplace Robotics Mover6) using a custom-made interaction object operated by a human. The obtained results show that with an appropriate deep architecture promising estimates can be achieved (with an RMSE metric on the test set of 16%, 12% and 6% of the maximum force in the x, y and z directions, respectively). This makes the approach suitable for use in a variety of applications, including but not limited to haptic feedback interfaces for robot control.

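To make the data pipeline behind this kind of estimator concrete, the sketch below shows one plausible way of slicing synchronized base force-sensor and joint-state streams into fixed-length windows for supervised training. The column layout, window length and placeholder data are assumptions, not details taken from the paper.

    # Sketch of turning synchronized sensor streams into training windows.
    import numpy as np

    def make_windows(signals: np.ndarray, targets: np.ndarray, window: int = 32):
        """signals: (T, n_features) time series, targets: (T, n_outputs).
        Returns (N, window, n_features) inputs and (N, n_outputs) targets,
        where each target corresponds to the last sample of its window."""
        X, y = [], []
        for t in range(window, signals.shape[0]):
            X.append(signals[t - window:t])
            y.append(targets[t])
        return np.stack(X), np.stack(y)

    # Placeholder streams: 6-axis base F/T plus 6 joint torques, 3 target force components.
    signals = np.random.randn(1000, 12)
    targets = np.random.randn(1000, 3)
    X, y = make_windows(signals, targets)
    print(X.shape, y.shape)  # (968, 32, 12) (968, 3)
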
22 Jan 2020

Crash course learning: an automated approach to simulation-driven LiDAR-based training of neural networks for obstacle avoidance in mobile robotics

Turkish Journal of Electrical Engineering and Computer Sciences

S. Kružić, J. Musić, M. Bonković, F. Duchoň
Journal paper (selected)

The paper proposes and implements a self-supervised, simulation-driven approach to data collection used for the training of perception-based shallow neural networks for mobile robot obstacle avoidance. In the approach, a 2D LiDAR sensor was used as the information source for training the neural networks. The paper analyses neural network performance in terms of the number of layers and neurons, as well as the amount of data needed for reliable robot operation. Once the best architecture is identified, it is trained using only data obtained in simulation, then implemented and tested on a real robot (Turtlebot 2) in several simulation and real-world scenarios. Based on the obtained results, it is shown that this fast and simple approach is very powerful, with good results in a variety of challenging environments with both static and dynamic obstacles.

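A minimal sketch of such a shallow perception network follows: a downsampled 2D LiDAR scan as input and a normalized steering command as output. The number of beams, layer sizes and output convention are assumptions for illustration; in the paper the training labels come from the self-supervised simulation runs rather than the random placeholders used here.

    # Shallow network mapping a downsampled LiDAR scan to a turn-rate command.
    import numpy as np
    import tensorflow as tf

    N_BEAMS = 36   # downsampled LiDAR ranges (assumed)

    model = tf.keras.Sequential([
        tf.keras.Input(shape=(N_BEAMS,)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(1, activation="tanh"),  # normalized turn rate in [-1, 1]
    ])
    model.compile(optimizer="adam", loss="mse")

    # Random stand-in data just to show the training call.
    scans = np.random.uniform(0.1, 5.0, size=(512, N_BEAMS)).astype("float32")
    turn_cmds = np.random.uniform(-1.0, 1.0, size=(512, 1)).astype("float32")
    model.fit(scans, turn_cmds, epochs=2, batch_size=64, verbose=0)
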
23 Sep 2019

Adaptive Fuzzy Mediation for Multimodal Control of Mobile Robots in Navigation-based Tasks

International Journal of Computational Intelligence Systems

J. Musić, S. Kružić, I. Stančić, V. Papić
Journal paper (selected)

The paper proposes and analyses the performance of a fuzzy-based mediator, with showcase examples in robot navigation. The mediator receives outputs from two controllers and uses the estimated collision probability to adapt the proportions of the signals in the final output. The approach was implemented and tested in simulation and on real robots with different footprints. The task complexity during testing varied from single obstacle avoidance to realistic navigation in real environments. The obtained results showed that this approach is simple but effective.

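The core mediation idea can be sketched in a few lines: blend a goal-seeking command and an obstacle-avoidance command according to the estimated collision probability. The paper uses a fuzzy inference system to produce the weighting; the smooth clipped mapping below is only a hypothetical stand-in to show the signal flow.

    # Toy sketch of mediation between two controllers based on collision probability.
    import numpy as np

    def mediation_weight(p_collision: float) -> float:
        """Map collision probability in [0, 1] to the avoidance weight in [0, 1].
        A real fuzzy mediator would use membership functions and rules here."""
        return float(np.clip((p_collision - 0.2) / 0.6, 0.0, 1.0))

    def mediate(u_goal: np.ndarray, u_avoid: np.ndarray, p_collision: float) -> np.ndarray:
        w = mediation_weight(p_collision)
        return (1.0 - w) * u_goal + w * u_avoid

    # Example: (linear, angular) velocity commands from two controllers.
    u_goal = np.array([0.4, 0.0])    # drive toward the goal
    u_avoid = np.array([0.1, 0.8])   # slow down and turn away from an obstacle
    print(mediate(u_goal, u_avoid, p_collision=0.7))
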
15 Sep 2018

Identifying Needs of Robotic and Technological Solutions for the Classroom

SoftCOM 2018

S. Kostova et al.
Conference paper

This paper presents preliminary results of the questionnaire (QR) that was conducted during April and May 2018 in three countries: Bulgaria, Greece, and Croatia. The QR is part of the activities within a project funded by the Danube Strategic Project Fund (DSPF): Increasing the well being of the population by RObotic and ICT based iNNovative education (RONNI). The QR was delivered to schools in each of the participating countries. Two sets of questions were delivered to the target groups: teachers/experts and parents. The analysis of the results will be used in proposing innovative teaching strategies and methodologies, transferable across the regions to support effective learning.

13 Jul 2016

Face and Nose Detection in Digital Images using Local Binary Patterns

SpliTech 2016

S. Kružić, V. Papić
Professional paper

This paper describes an approach to object detection based on the Viola-Jones algorithm and LBP histograms. LBP is applied to sub-windows of the input image to obtain an LBP feature histogram, from which a small number of key visual features is extracted using a machine learning algorithm based on AdaBoost. The final decision on whether an image contains the object is made by a trained cascade of classifiers, which quickly rejects negative sub-windows and spends processing time only on those sub-windows that have a higher probability of containing the object in question.

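In practice, a detector of this kind can be exercised with OpenCV's cascade classifier, as in the sketch below. It assumes an LBP cascade file (for example lbpcascade_frontalface.xml from the OpenCV data repository) and a test image are available locally; the file names are placeholders rather than artifacts from the paper.

    # Minimal LBP cascade detection sketch with OpenCV.
    import cv2

    cascade = cv2.CascadeClassifier("lbpcascade_frontalface.xml")
    image = cv2.imread("test_image.jpg")
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

    # Sliding-window detection; early cascade stages reject most negatives cheaply.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(40, 40))
    for (x, y, w, h) in faces:
        cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imwrite("detections.jpg", image)
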
26 Sep 2017

Map Building Using Autonomous Mobile Service Robots with Deep Neural Networks

Sixth Croatian Computer Vision Workshop

S. Kružić, J. Musić
Poster

The navigation of mobile (service) robots has always been an important topic in robotics, particularly for autonomous robots. In order to navigate autonomously, a map of the area needs to be known. Here, a method for autonomous map building using Advanced Monte Carlo Localization and deep neural networks is presented. The method is designed to be used in unstructured, dynamic environments, and it needs no human intervention. The method uses range data (from LiDAR, ultrasonic and other depth sensors) to recognize patterns created by various obstacles and, based on the deep neural network outputs, avoids obstacles while building a map of free space.

28 Jun 2017

Teleoperation of Mobile Service Robots Over the Network: Challenges and Possible Solutions

The 14th International Conference on Telecommunications ConTEL 2017

S. Kružić, J. Musić
Poster

With the ubiquity of service robots, various usage scenarios are being developed for them, usually with a degree of autonomy. However, due to the dynamic nature of the environment, algorithms for autonomous operation might fail. Besides that, there are tasks which are not meant to be completed during autonomous operation, and where the operator must take control of the robot to complete them. In order to efficiently and safely teleoperate the robot, the operator has to have a high degree of situational awareness. This can be achieved with an appropriate human-computer interface (HCI), so that the model of the remote environment constructed from sensor data is presented at an appropriate time and in an appropriate manner, and so that robot commands can be issued intuitively and without much effort. The network latency problem is also addressed, and a method for reducing its impact on teleoperation is presented.

01 Jan 2018

Quadcopter Altitude Control Methods for Communication Relay Systems

Recent Advances in Communications and Networking Technology

A. Maras, J. Musić, S. Kružić, I. Stančić
Journal paper

Background: Owing to the rapid development of hardware components and the reduction in prices, Unmanned Aerial Vehicles (UAVs) are becoming ubiquitous, including in applications as airborne ad-hoc communication relay stations. Quadcopters are one class of UAVs which has seen particularly rapid growth due to its versatility. Since quadcopters are inherently unstable and hard to stabilize by a human operator, they need automated attitude stabilization. However, altitude stabilization around a desired height is often overlooked, while it plays an important role in the optimal placement of an airborne communication relay. Objective: The paper addresses the issue by developing controllers for UAV hovering. The proposed approach can then be extended to arbitrary heights (possibly with a different sensor setup). Method: For the development of the control schemes, two approaches were used: a PID-based and a neural network (NN) based one. For the development of the NN (i.e. the learning phase), the Gazebo simulation environment was used, essentially modeling the human driver. The developed approaches were tested both in simulation and in real-world scenarios on AR.Drone 1.0 and AR.Drone 2.0 UAVs. Results: The obtained indoor results demonstrated a PID accuracy of 1 cm with an overshoot of 2.7% and a settling time of 3.75 s, while the NN demonstrated 2.1 cm, 1%, and 8.4 s, respectively. Outdoor testing was also performed with similar result trends. Conclusion: Both developed controllers demonstrated good results (indoor and outdoor) and could be used in real-world scenarios, but the NN, due to its favorable characteristics (i.e. human driver modeling) and straightforward development phase (compared to PID tuning, which involves a lot of trial-and-error), is preferred.

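To give a feel for the PID side of the comparison, here is a simple discrete altitude PID loop driving a toy vertical model. The gains, timestep and the crude dynamics are illustrative assumptions and not the controller parameters reported in the paper.

    # Simple discrete PID altitude controller with a toy vertical model.
    class AltitudePID:
        def __init__(self, kp: float, ki: float, kd: float, dt: float):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, target_z: float, measured_z: float) -> float:
            error = target_z - measured_z
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    # Crude simulation of vertical motion: the thrust command changes climb rate.
    pid = AltitudePID(kp=1.2, ki=0.05, kd=0.6, dt=0.05)
    z, vz = 0.0, 0.0
    for _ in range(200):
        thrust = pid.update(target_z=1.0, measured_z=z)
        vz += 0.05 * thrust    # toy dynamics, no gravity/drag modelling
        z += 0.05 * vz
    print(round(z, 3))         # should approach the 1.0 m target
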
27 Jun 2018

Influence of Data Collection Parameters on Performance of Neural Network-based Obstacle Avoidance

SpliTech 2018 - Split, Croatia

S. Kružić, J. Musić, I. Stančić, V. Papić
Conference paper

Neural networks are becoming widespread, including in applications in mobile robotics and related fields. Most state-of-the-art approaches to training neural networks use video cameras for generating training datasets. However, these data are hard and time-consuming to collect, resulting in a bottleneck in the neural network training procedure. Thus, the paper briefly presents simulation-based LiDAR data collection for the training of neural networks for obstacle avoidance. The influence of two data collection parameters in simulation (distance to obstacles and number of LiDAR points) on the performance of the real-world mobile robot is analysed in more depth. Experimental testing was performed in a narrow corridor (augmented with additional obstacles) in order to fully test the neural networks and detect possible limitations. For a better understanding of the proposed algorithms and an analysis of their performance in real-life scenarios, a simple test-bed was devised with a Turtlebot 2 as the test vehicle. Based on the obtained results, and with safety in mind, conclusions are drawn and possible future improvements proposed.

15 Mar 2018

Complete Model for Automatic Object Detection and Localisation on Aerial Images using Convolutional Neural Networks

Journal of Communications Software and Systems

D. Božić-Štulić, S. Kružić, S. Gotovac, V. Papić
Journal paper

In this paper, a novel approach to automatic object detection and localisation on aerial images is proposed. The proposed model does not use ground control points (GCPs) and consists of three major phases. In the first phase, an optimal flight route is planned in order to capture the area of interest and aerial images are acquired using an unmanned aerial vehicle (UAV). This is followed by creating a mosaic of the collected images to obtain a larger field-of-view panoramic image of the area of interest, which is then used to create a georeferenced map. The image mosaic is also used to detect objects of interest using an approach based on convolutional neural networks.

12 Jul 2017

A Model for Automatic Geomapping of Aerial Images Mosaic Acquired by UAV

SpliTech 2017 - Split, Croatia

D. Gotovac, S. Kružić, S. Gotovac, V. Papić
Conference paper

In this paper, we propose a novel approach to automatic aerial image georeferencing for an unmanned aerial vehicle (UAV) image data acquisition platform that does not require the use of ground control points (GCPs). The determination of GCPs is, in general, a costly and slow process, and hence a considerable bottleneck in georeferencing. With predefined route planning and by making a mosaic of the acquired aerial images, we were able to calculate the world file transformation. In comparison with manually collected GCPs, our automatic georeferencing results showed position errors of less than 92 cm. This accuracy is considered sufficient for most intended search and rescue purposes.

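For readers unfamiliar with the world file transformation mentioned above, the sketch below writes the six-parameter ESRI world file for an image mosaic. The ground sample distance and the upper-left coordinates are made-up placeholders; in the paper these values come from the planned flight route and the mosaicking step.

    # Sketch of writing an ESRI world file for a georeferenced image mosaic.
    def write_world_file(path: str, gsd_m: float, top_left_x: float, top_left_y: float,
                         rotation_x: float = 0.0, rotation_y: float = 0.0) -> None:
        """World file lines: x pixel size, y rotation, x rotation, negative y pixel
        size, then x and y map coordinates of the centre of the upper-left pixel."""
        lines = [gsd_m, rotation_y, rotation_x, -gsd_m, top_left_x, top_left_y]
        with open(path, "w") as f:
            f.write("\n".join(f"{v:.10f}" for v in lines) + "\n")

    # Example: a 3 cm/pixel mosaic whose upper-left pixel centre sits at the given
    # projected coordinates (values are illustrative only).
    write_world_file("mosaic.jgw", gsd_m=0.03, top_left_x=480250.0, top_left_y=4805120.0)
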
25 May 2017

Influence of Human-Computer Interface Elements on Performance of Teleoperated Mobile Robot

MIPRO - Opatija, Croatia

S. Kružić, J. Musić, I. Stančić
Conference paper

Mobile robots are becoming ubiquitous, with applications that usually include a degree of autonomy. However, due to the uncertain and dynamic nature of the operational environment, algorithms for autonomous operation might fail. In order to assist the robot, the human operator might need to take control of the robot from a remote location. In order to efficiently and safely teleoperate the robot, the operator has to have a high degree of situational awareness. This can be achieved with an appropriate human-computer interface (HCI), so that the model of the remote environment constructed from sensor data is presented at the appropriate time, and so that robot commands can be issued intuitively and easily. In this research, the influence of HCI elements on the performance of a teleoperated mobile robot was studied for several tasks and with several HCI setups. A user study was performed in which the accuracy and speed of completion of the given tasks were measured on a real robot. Statistical analysis was performed in order to identify possible setup dependencies. It showed that, in the majority of the analysed cases and based on the introduced metrics, there is no significant difference between the setups, or between visual control and teleoperation. Finally, conclusions were drawn with an emphasis on the benefits of information technology in this particular case.

.04

RESEARCH

RESEARCH PROJECTS

INCREASING THE WELL BEING OF THE POPULATION BY ROBOTIC AND ICT BASED INNOVATIVE EDUCATION (RONNI)

07_ECVII_PA07, RONNI

RONNI aims to promote the application of Robotics and Information and Communication Technologies (R&ICT) in education in order to overcome learning difficulties and raise the educational level of the young generation of citizens. New innovative teaching strategies and methodologies, transferable across the region, will be proposed to support effective learning.

Project coordinator:

Institute of Robotics (IR-BAS), Sofia, BULGARIA

Project partners:
  1. Eastern Macedonia and Thrace Institute of Technology (EMaTTech), Kavala, Greece
  2. University of Split (US), Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB), Split, Republic of Croatia

SmartBots - Autonomous control of mobile robots using computer vision algorithms and modern neural network architectures

Bilateral project funded by DAAD (Germany) and Croatian Ministry of Science and Education.

Project partners:
  1. Hochschule Bonn-Rhein-Sieg, Sankt Augustin, NRW, Germany
  2. University of Split, Faculty of Electrical Engineering, Mechanical Engineering and Naval Architecture (FESB), Split, Croatia
.05

TEACHING

CURRENT
  • 2019
    NOW

    SYSTEMS THEORY

    Undergraduate Study Programme in Electrical Engineering and Information Technology

Understanding and application of the basic principles used in the analysis and synthesis of technical systems. Describing and analysing simple linear dynamical systems. Continuous acquisition and deepening of knowledge in the area of the theory of technical systems.
  • 2016
    NOW

    BIOMECHANICS PRACTICE

    Vocational study programme in Electronics

Laboratory practice in biomechanics. Anthropometry. Forces. EMG. Motion capture with a high-speed camera.
  • 2016
    NOW

    BIOCYBERNETICS

    Graduate study programme in Automatic Control and Systems

Laboratory practice in biocybernetics. Measurements and result analysis. Anthropometry. EMG. Motion capture with a high-speed camera.
  • 2016
    NOW

INDUSTRIAL ROBOTICS

    Graduate study programme in Automatic Control and Systems

Laboratory practice in industrial robotics with various robotic manipulators.
PAST
  • 2016
    2017

    INTRODUCTION TO PROGRAMMING

    Undergraduate study programme in Electrical Engineering and Information Technology

An introductory course on programming in C, covering the fundamental topics of procedural programming.
  • 2016
    2018

    PROGRAMMING 1

    Vocational Study Programme in Computer Science

An introductory course on programming in VB.NET, covering the fundamental topics of programming as well as an introduction to OOP and GUI programming for Windows.
.06

SKILLS

PROGRAMMING SKILLS
Python programming
LEVEL: ADVANCED · EXPERIENCE: 4 YEARS
Python IPython NumPy ROS
.NET programming
LEVEL: INTERMEDIATE · EXPERIENCE: 12 YEARS
VB.NET C++.NET C#.NET
Web programming
LEVEL: ADVANCED · EXPERIENCE: 12 YEARS
HTML CSS PHP MySQL JavaScript React Gatsby
.07

CONTACT

Contact me

GET IN TOUCH
