Room: G29, R404
Phone: +49 391 67 52804
Since 2013, I have been head of the „Chair of Software Engineering (CSE)“ at the Otto-von-Guericke-University Magdeburg. Before that, I was an associate professor for „Computer Systems in Engineering“ and a postdoc at Augsburg University.
Currently, I lead several research projects, coordinate the Bachelor program „Ingenieurinformatik“ and the Master program „Digital Engineering“, and I am a founding member of the university's Competence Center for Digital Engineering, Management and Operations.
The leading theme of my research is making advances in computer science available for engineering applications, with a special focus on methods from software engineering, formal specification techniques, and robotics. In my opinion, the lack of efficiency in developing new, dependable software is probably the biggest hurdle to innovation in almost any domain of engineering.
In recent years, many new ideas from software engineering research have already found their way into industrial practice for „mainstream“ IT applications. At the same time, development processes, languages, and paradigms for robotics, automotive systems, or production automation have changed very little.
As a consequence, software development and maintenance costs explode, and software often becomes a risk factor in engineering. This is particularly important because many new innovations in engineering, in production, or even in the products themselves are based on smart IT.
Research focus:
- Software engineering for technical applications and embedded systems
- Model-based approaches for software-intensive systems
- Self-organization as a new programming paradigm
- Systems Engineering
- Design and analysis of highly safety critical applications
- Techniques for designing an integrated view of dependability, covering facets such as safety, reliability, security, transparency, and user trust
Teaching activities
Up-to-date information on current lectures can be found in the „For Students“ section. In general, my lectures are organized around the following topics:
- Software Engineering
- Clean Code Development
- Software Engineering for technical applications
- Software development for industrial robotics
- Specification methods and model-based approaches
- Safety-Critical Systems
- Mobile devices
Ongoing activities and projects (excerpt):
- Project: VIP-MoBaSA – „Validation of model-based approaches for dependable systems“
- Project: ProMoSA – „Probabilistic Models for Safety Analysis“
- Project: ViERforES – „Virtuelle und Erweiterte Realitäten für höchste Sicherheit und Zuverlässigkeit von Eingebetteten Systemen“
- Project: TDL – Trajectory Description Language
- Member of EWICS TC 7: The European Workshop on Industrial Computer Systems, Technical Committee 7, Safety, Reliability and Security
- Member of „Gesellschaft für Systems Engineering“
- General Chair and PC Chair of SAFECOMP2012
- Speaker of regional sub-group „Sachsen-Anhalt“ of the „Gesellschaft für Informatik“
- Organizer of the workshop „Software Engineering for mechatronic systems“ (2007 and 2008)
- Tutorials on model based safety analysis at the SAFECOMP conference (2008 and 2009)
- Reviewer and PC member for various international conferences and journals
Publications
2020
Heumüller, Robert; Nielebock, Sebastian; Krüger, Jacob; Ortmeier, Frank: Publish or Perish, but do not Forget your Software Artifacts. Article, Empirical Software Engineering, Springer, 2020. DOI: 10.1007/s10664-020-09851-6. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/07/2020-emse-paper-publish-or-perish.pdf
Abstract: Open-science initiatives have gained substantial momentum in computer science, and particularly in software-engineering research. A critical aspect of open science is the public availability of artifacts (e.g., tools), which facilitate the replication, reproduction, extension, and verification of results. While we experienced that many artifacts are not publicly available, we are not aware of empirical evidence supporting this subjective claim. In this article, we report an empirical study on software artifact papers (SAPs) published at the International Conference on Software Engineering (ICSE), in which we investigated whether and how researchers have published their software artifacts, and whether this had scientific impact. Our dataset comprises 789 ICSE research track papers, including 604 SAPs (76.6%), from the years 2007 to 2017. While showing a positive trend towards artifact availability, our results are still sobering. Even in 2017, only 58.5% of the papers that stated to have developed a software artifact made that artifact publicly available. As we did find a small, but statistically significant, positive correlation between linking to artifacts in a paper and its scientific impact in terms of citations, we hope to motivate the research community to share more artifacts. With our insights, we aim to support the advancement of open science by discussing our results in the context of existing initiatives and guidelines. In particular, our findings advocate the need for clearly communicating artifacts and the use of non-commercial, persistent archives to provide replication packages.
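The core statistical step of such a study can be illustrated in a few lines of Python. The sketch below is not the authors' analysis pipeline; it only shows how a point-biserial correlation between a binary "artifact linked" flag and citation counts could be computed, assuming a hypothetical papers.csv with columns has_artifact_link and citations.

```python
# Minimal sketch (not the study's actual pipeline): correlate artifact
# availability with citation counts. File and column names are hypothetical.
import pandas as pd
from scipy.stats import pointbiserialr

# One row per paper:
#   has_artifact_link  1 if the paper links to a public artifact, else 0
#   citations          number of citations the paper has received
papers = pd.read_csv("papers.csv")

r, p = pointbiserialr(papers["has_artifact_link"], papers["citations"])
print(f"point-biserial correlation r = {r:.3f}, p = {p:.4f}")

# Share of papers that actually made an artifact publicly available
print(f"artifact availability: {papers['has_artifact_link'].mean():.1%}")
```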
Matschek, Janine; Gonschorek, Tim; Hanses, Magnus; Elkmann, Norbert; Ortmeier, Frank; Findeisen, Rolf: Learning References with Gaussian Processes in Model Predictive Control Applied to Robot Assisted Surgery. Conference paper (forthcoming), IFAC, 2020.
Nielebock, Sebastian; Heumüller, Robert; Krüger, Jacob; Ortmeier, Frank: Cooperative API Misuse Detection Using Correction Rules. In: Proceedings of the 42nd IEEE/ACM International Conference on Software Engineering - New Ideas and Emerging Results Track (ICSE-NIER), ACM, 2020. DOI: 10.1145/3377816.3381735. URLs: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/02/cooperative-api-misuse-detection-1.pdf , https://bitbucket.org/SNielebock/icse-2020-nier-cooperative-api-misuse/src/master/
Nielebock, Sebastian; Heumüller, Robert; Krüger, Jacob; Ortmeier, Frank: Using API-Embedding for API-Misuse Repair. In: Proceedings of the 1st International Workshop on Automated Program Repair (APR 2020), in conjunction with the 42nd International Conference on Software Engineering (ICSE 2020), Seoul, South Korea, 2020. DOI: 10.1145/3387940.3392171. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/04/api-embeddings-for-repair-Nielebock-et-al-APR2020.pdf
2019
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: Independent Analysis of Decelerations and Resting Periods through CEEMDAN and Spectral-Based Feature Extraction Improves Cardiotocographic Assessment. Article, Applied Sciences, 9(24), p. 5421, Multidisciplinary Digital Publishing Institute, 2019, ISSN 2076-3417. DOI: 10.3390/app9245421. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2019/12/fuentealba2019independent.pdf
Abstract: Fetal monitoring is commonly based on the joint recording of the fetal heart rate (FHR) and uterine contraction signals obtained with a cardiotocograph (CTG). Unfortunately, CTG analysis is difficult, and the interpretation problems are mainly associated with the analysis of FHR decelerations. From that perspective, several approaches have been proposed to improve its analysis; however, the results obtained are not satisfactory enough for their implementation in clinical practice. Current clinical research indicates that a correct CTG assessment requires a good understanding of the fetal compensatory mechanisms. In previous works, we have shown that the complete ensemble empirical mode decomposition with adaptive noise, in combination with time-varying autoregressive modeling, may be useful for the analysis of those characteristics. In this work, based on this methodology, we propose to analyze the FHR deceleration episodes separately. The main hypothesis is that the proposed feature extraction strategy applied separately to the complete signal, deceleration episodes, and resting periods (between contractions) improves the CTG classification performance compared with the analysis of only the complete signal. Results reveal that by considering the complete signal, the classification performance achieved 81.7% quality. Then, including information extracted from resting periods, it improved to 83.2%.
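As a rough illustration of the processing chain named in the abstract (not the authors' implementation), the following Python sketch decomposes a fetal-heart-rate-like signal with CEEMDAN, derives simple per-IMF energy features, and feeds them to an SVM. It assumes the PyEMD (EMD-signal) and scikit-learn packages; the signals and labels are random stand-ins.

```python
# Toy sketch of CEEMDAN-based feature extraction + SVM classification.
# Not the paper's feature set; data below are random placeholders.
import numpy as np
from PyEMD import CEEMDAN                       # pip install EMD-signal
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def ceemdan_energy_features(signal, n_imfs=6):
    """Relative energy of the first n_imfs intrinsic mode functions."""
    imfs = CEEMDAN(trials=20).ceemdan(np.asarray(signal, dtype=float))[:n_imfs]
    energy = np.array([np.sum(imf ** 2) for imf in imfs])
    feats = np.zeros(n_imfs)
    feats[:energy.size] = energy / energy.sum()
    return feats

# Random stand-in data: 20 "FHR segments" with binary labels (0 normal, 1 acidotic)
rng = np.random.default_rng(0)
segments = [rng.standard_normal(1200) for _ in range(20)]
labels = rng.integers(0, 2, size=20)

X = np.vstack([ceemdan_energy_features(s) for s in segments])
clf = SVC(kernel="rbf")
print("cross-validated accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
```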
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: A Study on the Classification Performance of Cardiotocographic Data vs Class Formation Criteria. Conference paper (forthcoming), 2019.
Abstract: Fetal monitoring during labor is commonly based on the joint recording of the fetal heart rate (FHR) and uterine contraction data obtained by a Cardiotocograph (CTG). Currently, the interpretation of such data is difficult because it involves a visual analysis of highly complex signals. For this reason, several approaches based on signal processing and classification have been proposed. Most of the CTG classification approaches use class formation criteria based on the pH value, which is considered as a gold standard measure for postpartum evaluation. However, at birth, the association of a precise value of pH with the neonatal outcome is still inconclusive, which makes the classification training a difficult task. This work focuses on studying the CTG classification performance in relation to the used class formation criterion. For this purpose, first, the FHR signal is decomposed by using the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method. Second, we extract a set of signal features based on CEEMDAN and conventional time-domain features proposed in the literature, which are computed in different FHR signal lengths just before delivery. Then, the features classification performance is evaluated according to a set of class formation criteria based on different pH values used as thresholds. Results reveal that the classification performance significantly depends on the selected pH value for the class formation, whose best performance is achieved by considering a class formation based on a pH=7.05.
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: Cardiotocographic Signal Feature Extraction through CEEMDAN and Time-Varying Autoregressive Spectral-Based Analysis for Fetal Welfare Assessment. Article, IEEE Access, 7(1), pp. 159754–159772, IEEE, 2019. DOI: 10.1109/ACCESS.2019.2950798. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2019/11/fuentealba2019cardiotocographic.pdf
Abstract: Cardiotocograph (CTG) is a widely used tool for fetal surveillance during labor, which provides the joint recording of fetal heart rate (FHR) and uterine contraction data. Unfortunately, the CTG interpretation is difficult because it involves a visual analysis of highly complex signals. Recent clinical research indicates that a correct CTG assessment requires a good understanding of the fetal compensatory mechanisms modulated by the autonomic nervous system. Certainly, this modulation reflects variations in the FHR, whose characteristics can involve significant information about the fetal condition. The main contribution of this work is to investigate these characteristics by a new approach combining two signal processing methods: the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and time-varying autoregressive (TV-AR) modeling. The idea is to study the CEEMDAN intrinsic mode functions (IMFs) in both the time-domain and the spectral-domain in order to extract information that can help to assess the fetal condition. For this purpose, first, the FHR signal is decomposed, and then for each IMF, the TV-AR spectrum is computed in order to study their spectral dynamics over time. In this paper, we first explain the foundations of our proposed features. Then, we evaluate their performance in CTG classification by using three machine learning classifiers. The proposed approach has been evaluated on real CTG data extracted from the CTU-UHB database. Results show that by using only conventional FHR features, the classification performance achieved 78.0%. Then, by including the proposed CEEMDAN spectral-based features, it increased to 81.7%.
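A time-varying autoregressive spectrum can be approximated, for illustration only, by re-fitting a short AR model on a sliding window and evaluating its power spectrum at each step. The sketch below uses statsmodels' Yule-Walker estimator on a synthetic stand-in signal; window length, model order, and sampling rate are illustrative choices, not the parameters used in the paper.

```python
# Crude stand-in for TV-AR spectral analysis: window-wise AR power spectra.
import numpy as np
from statsmodels.regression.linear_model import yule_walker

def sliding_ar_spectrum(x, fs=4.0, order=8, win=240, step=60, n_freq=128):
    """Return (times, freqs, spectrogram) of window-wise AR spectra."""
    freqs = np.linspace(0.0, fs / 2, n_freq)
    k = np.arange(1, order + 1)
    times, spectra = [], []
    for start in range(0, len(x) - win + 1, step):
        seg = x[start:start + win]
        rho, sigma = yule_walker(seg, order=order)   # AR coefficients, noise std
        # AR power spectrum: sigma^2 / |1 - sum_k rho_k e^{-i 2 pi f k / fs}|^2
        denom = np.abs(1.0 - np.exp(-2j * np.pi * np.outer(freqs, k) / fs) @ rho) ** 2
        spectra.append(sigma ** 2 / denom)
        times.append((start + win / 2) / fs)
    return np.array(times), freqs, np.array(spectra)

# Synthetic stand-in: a 10-minute, 4 Hz "FHR-like" trace with a slow oscillation
t = np.arange(0, 600, 1 / 4.0)
fhr = 140 + 5 * np.sin(2 * np.pi * 0.05 * t) + np.random.randn(t.size)
times, freqs, S = sliding_ar_spectrum(fhr)
print(S.shape)   # (number of windows, number of frequency bins)
```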
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: Cardiotocograph Data Classification Improvement by Using Empirical Mode Decomposition. In: 2019 41st Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), pp. 5646–5649, IEEE, 2019, ISBN 978-1-5386-1311-5. DOI: 10.1109/EMBC.2019.8856673. URL: https://ieeexplore.ieee.org/document/8856673
Abstract: This work proposes to study the fetal heart rate (FHR) signal based on information about its dynamics as a signal resulting from the modulation by the autonomic nervous system. The analysis is performed using the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) technique. The main idea is to extract a set of signal features based on that technique and also conventional time-domain features proposed in the literature in order to study their performance by using a support vector machine (SVM) as a classifier. As a hypothesis, we postulate that by including CEEMDAN based features, the classification performance should improve compared with the performance achieved by conventional features. The proposed method has been evaluated using real FHR data extracted from the open access CTU-UHB database. Results show that the classification performance improved from 67.6% using only conventional features to 71.7% by incorporating CEEMDAN based features.
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: Foetal heart rate assessment by empirical mode decomposition and spectral analysis. Article, Current Directions in Biomedical Engineering, 5(1), pp. 381–383, De Gruyter, 2019. DOI: 10.1515/cdbme-2019-0096. URL: https://www.degruyter.com/downloadpdf/j/cdbme.2019.5.issue-1/cdbme-2019-0096/cdbme-2019-0096.pdf
Abstract: This paper focuses on studying the time-variant dynamics involved in the foetal heart rate (FHR) response resulting from the autonomic nervous system modulation. It provides a comprehensive analysis of such dynamics by relating the spectral information involved in the FHR signal with foetal physiological characteristics. This approach is based on two signal processing methods: the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and time-varying autoregressive (TV-AR) modelling. First, the CEEMDAN allows to decompose the signal into intrinsic mode functions (IMFs). Then, the TV-AR modelling allows to analyse their spectral dynamics. Results reveal that the IMFs can involve significant spectral information (p-value < 0.05) that can help to assess the foetal condition.
Gonschorek, Tim; Bergt, Philipp; Filax, Marco; Ortmeier, Frank; von Hoyningen-Hüne, Jan; Piper, Thorsten: SafeDeML: On Integrating the Safety Design into the System Model. In: Romanovsky, Alexander; Troubitsyna, Elena; Bitsch, Friedemann (Eds.): Computer Safety, Reliability, and Security, pp. 271–285, Springer International Publishing, Cham, 2019, ISBN 978-3-030-26601-1. DOI: 10.1007/978-3-030-26601-1_19. URLs: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/04/GonschorekEtAl_SafeDeML.pdf , https://link.springer.com/chapter/10.1007/978-3-030-26601-1_19
Abstract: The safety design definition of a safety-critical system is a complex task. On the one hand, the system designer must ensure that he has addressed all potentially hazardous hardware faults. This is often defined not(!) in the model but within extra documents (e.g., Excel sheets). On the other hand, all defined safety mechanisms must be transformed back into the system model. We think an improvement for the designer would be given by a modeling extension integrating relevant safety design artifacts into the normal design work-flow and supporting the safety design development directly from within the model.
Gonschorek, Tim; Bergt, Philipp; Filax, Marco; Ortmeier, Frank: Integrating Safety Design Artifacts into System Development Models Using SafeDeML. In: Papadopoulos, Yiannis; Aslansefat, Koorosh; Katsaros, Panagiotis; Bozzano, Marco (Eds.): Model-Based Safety and Assessment, pp. 93–106, Springer International Publishing, Cham, 2019, ISBN 978-3-030-32872-6. DOI: 10.1007/978-3-030-32872-6_7. URLs: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/04/SafeDeML_gonschorekEtAl.pdf , https://link.springer.com/chapter/10.1007/978-3-030-32872-6_7
Abstract: Applying a safety artifact language such as the Safety Design Modeling Language SafeDeML integrates the generation of the safety design into the system modeling stage, directly within the system architecture. In this paper, we present a modeling process and a prototype for the CASE tool Enterprise Architect for SafeDeML. The goal is to support the system designer in developing a standard-conform (in this paper, ISO 26262) system and safety design containing all relevant safety artifacts within one model. Such integration offers several modeling guarantees like consistency checks or computation of coverage and fault metrics. Since all relevant information and artifacts are contained within the model, SafeDeML and the prototype can help to decrease the effect of structural faults during the safety design and further support the safety assessment. To give the reader an idea of the complexity of the approach's application, we present an exemplary implementation of the safety design for a brake light system, a real case study from the ISO 26262 context.
Heumüller, Robert; Nielebock, Sebastian; Ortmeier, Frank: SpecTackle - A Specification Mining Experimentation Platform. In: Proceedings of the 45th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), Kallithea, Chalkidiki, Greece, Euromicro, 2019. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2020/08/paper-spectackle.pdf
Abstract: Nowadays, API Specification Mining is an important cornerstone of automated software engineering. In this paper, we introduce SpecTackle, an IDE-based experimentation platform aiming to facilitate experimentation and validation of specification mining algorithms and tools. SpecTackle strives toward (1) providing easy access to various specification mining tools, (2) simplifying configuration and usage through a shared interface, and (3) in-code visualization of pattern occurrences. The first version supports two heterogeneous mining tools, a third-party graph-based miner as well as a custom sequence mining tool. In the long term, SpecTackle envisions to also provide ground-truth benchmark projects, a unified pattern meta-model and parameter optimization for mining tools.
Nielebock, Sebastian; Nykolaichuk, Mykhaylo; Ortmeier, Frank: Leitfaden „Ihre ersten Schritte auf dem Weg zu einem Datenschutzkonzept für Ihr Unternehmen - Das können Sie selbst tun!“ (guide: "Your first steps towards a data protection concept for your company - what you can do yourself!"). Miscellaneous, Mittelstand 4.0-Kompetenzzentrum Magdeburg c/o ZPVP GmbH (Ed.), 2019. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2019/06/Leitfaden_Erste-Schritte_zum_Datenschutzkonzept_final.pdf
Abstract (translated from German): You look up a company's website, want to order from an online shop, or are a member of an association. Even if at first glance you do not disclose any personal data, you still leave a data trail with everything you do. This data trail needs to be protected, and so, of course, does that of your customers. On 25 May 2018, the General Data Protection Regulation (GDPR) came into force, making the rules even stricter. We all know that the protection of our data matters to us. But what exactly does that mean? Which data arise in my company, for example? Which data do I have to protect, and how do I do that? If I have already taken protective measures, are they sufficient? Not only you face these questions; many business owners do. While large companies have their own IT and legal departments, managers of small and medium-sized enterprises often face these questions alone. Often even the basic knowledge is missing, not to mention time and leisure. Here is some good news and some less good news up front. Let us start with the less good news: data protection is complex and constantly evolving. To be really on the safe side, you will very likely need the advice and help of a data protection expert. This guide can help you talk to a data protection professional at eye level, because we equip you with the necessary vocabulary and convey the most important legal basics of data protection. At the end of the process stands a data protection concept tailored to your company, which is nothing more than a set of measures for complying with data protection in your firm. We also show you who exactly can assist you with the data protection concept for your company. Now for the good news: good data protection is important, but it does not have to be expensive. What matters is that you proceed systematically and stay on top of the topic. It will then be easy for you to identify the biggest risks and minimize them in a targeted way. Once your eye is trained for potential security gaps, you will also be sensitized for future data protection measures. Data protection is complex and complicated, but it is not rocket science either. Anyone who has managed to build up and run a company can also take the first steps towards a data protection concept on their own. Let us take you by the hand. Data protection and data security are constantly evolving. The best thing is for you simply to evolve with them ...
Filax, Marco; Gonschorek, Tim; Ortmeier, Frank: Data for Image Recognition Tasks: An Efficient Tool for Fine-Grained Annotations. In: Proceedings of the 8th International Conference on Pattern Recognition Applications and Methods, 2019. URLs: https://bitbucket.org/cse_admin/md_groceries , http://www.scitepress.org/DigitalLibrary/Link.aspx?doi=10.5220/0007688709000907
Abstract: Using large datasets is essential for machine learning. In practice, training a machine learning algorithm requires hundreds of samples. Multiple off-the-shelf datasets from the scientific domain exist to benchmark new approaches. However, when machine learning algorithms transit to industry, e.g., for a particular image classification problem, hundreds of specific purpose images are collected and annotated in laborious manual work. In this paper, we present a novel system to decrease the effort of annotating those large image sets. Therefore, we generate 2D bounding boxes from minimal 3D annotations using the known location and orientation of the camera. We annotate a particular object of interest in 3D once and project these annotations on to every frame of a video stream. The proposed approach is designed to work with off-the-shelf hardware. We demonstrate its applicability with an example from the real world. We generated a more extensive dataset than available in other works for a particular industrial use case: fine-grained recognition of items within grocery stores. Further, we make our dataset available to the interested vision community consisting of over 60,000 images. Some images were taken under ideal conditions for training while others were taken with the proposed approach in the wild.
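The geometric core of this idea, projecting a single 3D box annotation into a frame with known camera pose, can be sketched in a few lines of numpy. This is an illustration under assumed pinhole-camera conventions, not the tool described in the paper; all numeric values are placeholders.

```python
# Sketch: project the 8 corners of an annotated 3D box into an image with
# known pose (R, t) and intrinsics K, then take the enclosing 2D bounding box.
import numpy as np

def project_box(corners_world, R, t, K):
    """corners_world: (8, 3) box corners; R, t: world-to-camera pose; K: intrinsics."""
    pts_cam = corners_world @ R.T + t          # world -> camera coordinates
    pts_img = pts_cam @ K.T                    # pinhole projection (homogeneous)
    pts_img = pts_img[:, :2] / pts_img[:, 2:]  # perspective divide
    x_min, y_min = pts_img.min(axis=0)
    x_max, y_max = pts_img.max(axis=0)
    return x_min, y_min, x_max, y_max

# Illustrative camera: 1000 px focal length, principal point at (320, 240)
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])    # box roughly 2 m in front of the camera
corners = np.array([[x, y, z] for x in (-0.1, 0.1) for y in (-0.2, 0.2) for z in (-0.05, 0.05)])
print(project_box(corners, R, t, K))
```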
Engel, Christoph; Mencke, Steffen; Heumüller, Robert; Ortmeier, Frank: Companion Specifications for Smart Factories: From Machine to Process View. In: Smart SysTech 2019; European Conference on Smart Objects, Systems and Technologies, pp. 1–8, VDE, 2019.
Abstract: Currently, Companion Specifications are used to create general interfaces to machines within a plant (see [1, p. 27ff]). This is one step to reach an exchangeability of machines on the way to the future of Industry 4.0. But in the environment of enterprise companies also a process view and a recombinability of production steps is desirable to get the flexibility needed for Industry 4.0. This paper proposes an Enterprise Domain Companion Specification as well as a mapping to corresponding Machine Companion Specifications as a solution to that problem. Together with this mapping, the Enterprise Domain Companion Specification allows to create an interactive process view for the IT and allows a recombination of process steps without changing the IT perspective on the production.
2018
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: Spectral-based Analysis of Progressive Dynamical Changes in the Fetal Heart Rate Signal During Labor by Using Empirical Mode Decomposition. In: 2018 Computing in Cardiology (CinC), pp. 1–4, IEEE, 2018. URL: http://www.cinc.org/2018/preprints/95_CinCFinalPDF.pdf
Abstract: In this work, we propose to study the progressive fetal response along the fetal heart rate (FHR) signal by using empirical mode decomposition and time-varying spectral-based analysis. The main idea is to investigate if a particular FHR signal episode in the time-domain reflects dynamical changes in the frequency-domain that can help to assess the fetal condition. Results show that the spectral components associated with the neural sympathetic fetal reactivity exhibit significant spectral energy differences between normal and acidotic fetuses.
Nielebock, Sebastian; Krolikowski, Dariusz; Krüger, Jacob; Leich, Thomas; Ortmeier, Frank: Commenting Source Code: Is It Worth It For Small Programming Tasks? Article, Empirical Software Engineering (EMSE), 24(3), pp. 1418–1457, Springer, 2018, ISSN 1382-3256. DOI: 10.1007/s10664-018-9664-z. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/10/paper-influence-comments_preliminary.pdf
Abstract: Maintaining a program is a time-consuming and expensive task in software engineering. Consequently, several approaches have been proposed to improve the comprehensibility of source code. One such approach is comments in the code that enable developers to explain the program with their own words or predefined tags. Some empirical studies indicate benefits of comments in certain situations, while others find no benefits at all. Thus, the real effect of comments on software development remains uncertain. In this article, we describe an experiment in which 277 participants, mainly professional software developers, performed small programming tasks on differently commented code. Based on quantitative and qualitative feedback, we i) partly replicate previous studies, ii) investigate performances of differently experienced participants when confronted with varying types of comments, and iii) discuss the opinions of developers on comments. Our results indicate that comments seem to be considered more important in previous studies and by our participants than they are for small programming tasks. While other mechanisms, such as proper identifiers, are considered more helpful by our participants, they also emphasize the necessity of comments in certain situations.
Gonschorek, Tim; Filax, Marco; Ortmeier, Frank: A very first Glance on the Safety Analysis of Self-learning Algorithms for Autonomous Cars. In: Guiochet, Jérémie (Ed.): 37th International Conference on Computer Safety, Reliability, & Security (SAFECOMP 2018), Fast Abstracts, HAL, 2018. URLs: https://hal.archives-ouvertes.fr/hal-01878562v1 , https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2019/01/fastAbstractGonschorekEtAl_AVery-FirstGlanceOnThe-SafetyAnalysisOfSelf-learningAlgorithmsForAutonomousCars.pdf
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: Foetal heart rate signal spectral analysis by using time-varying autoregressive modelling. Article, Current Directions in Biomedical Engineering, 4(1), pp. 579–582, De Gruyter, 2018. DOI: 10.1515/cdbme-2018-0139. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2019/11/fuentealba2018foetal.pdf
Abstract: During labour, foetal monitoring enables clinicians to prevent potential adverse outcomes, whose surveillance procedure is commonly based on analysis of cardiotocographic (CTG) signals. Unfortunately, this procedure is difficult because it involves human interpretation of highly complex signals. In order to improve the CTG assessment, different approaches based on signal processing techniques have been proposed. However, most of them do not consider the progression of the foetal response over time. In this work, we propose to study such progression along the foetal heart rate (FHR) signal by using spectral analysis based on time-varying autoregressive modelling. The main idea is to investigate if a particular FHR signal episode in the time-domain reflects dynamical changes in the frequency-domain that can help to assess the foetal condition. Results show that each FHR deceleration leaves a particular time-varying frequency signature described by the spectral energy components which could help to distinguish between a normal and a pathological foetus.
Nielebock, Sebastian; Heumüller, Robert; Ortmeier, Frank: Commits as a Basis for API Misuse Detection. In: Proceedings of the 7th International Workshop on Software Mining (SoftwareMining '18), September 3, 2018, Montpellier, France, 4 pages, ACM, New York, NY, USA, 2018. DOI: 10.1145/3242887.3242890. URL: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/08/nielebock-et-al-Commits_as_a_Basis_for_API_Misuse_Detection.pdf
Abstract: Programmers frequently make use of APIs. However, these usages can result in unintended, negative behavior, when developers are not aware of the correct usage or side effects of that API. Detecting those API misuses by means of automatic testing is challenging, as many test suites do not cover this unintended behavior. Instead, API usage patterns are used as specifications to verify the correctness of applications. However, to find meaningful patterns, i.e., those capable of fixing the misuse, the context of the misuse must be considered. Since the developer usually does not know which API is misused, a much larger code section has to be verified against many potential patterns. In this paper, we present a new idea to enhance API misuse detection by means of commits. We discuss the potential of using commits (1) to decrease the size of the code to be considered, (2) to identify suspicious commits, and (3) to contain API usages which can be used to shepherd API specification mining. This paper shows first results on the usability of commits for API misuse detection and some insights into what makes a commit suspicious in terms of exhibiting potential API misuses.
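A first step of such an approach, restricting the analysis to the code actually touched by a potentially suspicious commit, could look roughly like the sketch below. It uses GitPython and is not the authors' tooling; the repository path, the chosen commit, and the focus on .java files are placeholders.

```python
# Sketch: list the Java files changed by one commit and count their added
# lines, as the region of interest for a subsequent API-misuse check.
import git  # pip install GitPython

repo = git.Repo("/path/to/project")                 # placeholder repository path
commit = repo.commit("HEAD")                        # or the SHA of a suspicious commit
parent = commit.parents[0]                          # assumes a non-root commit

# Diff parent -> commit and keep only touched Java sources with their added lines
for d in parent.diff(commit, create_patch=True):
    if d.b_path and d.b_path.endswith(".java"):
        patch = d.diff.decode("utf-8", errors="replace")
        added = [ln for ln in patch.splitlines()
                 if ln.startswith("+") and not ln.startswith("+++")]
        print(f"{d.b_path}: {len(added)} added lines to check for API misuses")
```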
Heumüller, Robert; Nielebock, Sebastian; Ortmeier, Frank: Who plays with Whom? ... and How? Mining API Interaction Patterns from Source Code. In: Proceedings of the 7th International Workshop on Software Mining (SoftwareMining '18), 4 pages, ACM, New York, NY, USA, 2018. DOI: 10.1145/3242887.3242888. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/08/paper-mining-api-interactions.pdf
Abstract: State-of-science automated software engineering techniques increasingly rely on specification mining to provide API usage patterns for numerous applications, e.g. context-sensitive code completion, bug detection or bug fixing. While some existing approaches already yield good results with respect to diverse tasks, the focus has always been on the inference of high-quality, reusable specifications for single APIs. However, in contemporary software development it is commonplace to combine a multitude of different libraries in order to increase efficiency and avoid reimplementing existing functionality. In contrast to prior research, in this idea paper we propose to explicitly study the patterns of interaction between multiple different APIs. First, we introduce a method for mining API interaction patterns from existing applications. Then, we give an overview of our preliminary investigation, in which we applied the method to a case study of nearly 500 Android applications. The exemplary results show that there definitely exist valuable interaction patterns which can be helpful for various traditional and automated software engineering tasks.
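The following toy sketch shows one way such interaction patterns could be mined: count how often pairs of APIs co-occur within the same method and report pairs above a minimum support. The input format (one set of API names per method) and the thresholds are assumptions for illustration only.

```python
from collections import Counter
from itertools import combinations

# Each entry: the set of APIs used in one method, as produced by some static
# analysis front end (the input format is an assumption for this sketch).
methods = [
    {"java.net.HttpURLConnection", "org.json.JSONObject"},
    {"java.net.HttpURLConnection", "org.json.JSONObject", "android.util.Log"},
    {"java.io.File", "android.util.Log"},
    {"java.net.HttpURLConnection", "org.json.JSONObject"},
]

pair_counts = Counter()
single_counts = Counter()
for apis in methods:
    single_counts.update(apis)
    pair_counts.update(frozenset(p) for p in combinations(sorted(apis), 2))

min_support = 2
for pair, n in pair_counts.most_common():
    if n < min_support:
        break
    a, b = sorted(pair)
    confidence = n / single_counts[a]
    print(f"{a} + {b}: support={n}, conf({a} -> {b})={confidence:.2f}")
```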
Gonschorek, Tim; Zeller, Marc; Ortmeier, Frank; Höfig, Kai: Fault Trees vs. Component Fault Trees: An Empirical Study. In: Gallina, B.; Skavhaug, A.; Schoitsch, E.; Bitsch, F. (Eds.): Computer Safety, Reliability, and Security (SAFECOMP 2018), Lecture Notes in Computer Science, vol. 11094, Springer, Cham, 2018. ISBN: 978-3-319-99228-0. DOI: 10.1007/978-3-319-99229-7_21.
Schillreff, Nadia; Ortmeier, Frank: Learning-based Kinematic Calibration using Adjoint Error Model. In: Proceedings of the 15th International Conference on Informatics in Control, Automation and Robotics (ICINCO), Volume 2, SciTePress, 2018. ISBN: 978-989-758-321-6. DOI: 10.5220/0006870403820389.
Abstract: A learning-based robot kinematic calibration approach based on the product-of-exponentials (POE) formula and an Adjoint error model is introduced. To ensure high accuracy, this approach combines geometrical and non-geometrical influences, such as elastic deformations, without explicitly defining all physical processes that contribute to them, using a polynomial regression method. Using the POE formula for kinematic modelling of the manipulator ensures that the kinematic parameters vary smoothly and that the method is robust and singularity-free. The introduced error parameters are represented as Adjoint transformations on the nominal joint twists. The calibration process then becomes finding a set of polynomial functions, using regression methods, that reflect the actual kinematics of the robot. The proposed method is evaluated on a dataset obtained with a 7-DOF manipulator (KUKA LBR iiwa 7 R800). The experimental results show that this approach significantly reduces positional errors of the robotic manipulator after calibration.
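For readers unfamiliar with the product-of-exponentials formulation the paper builds on, here is a minimal forward-kinematics sketch using matrix exponentials. The two-joint planar arm, its link lengths and twists are illustrative assumptions and are unrelated to the KUKA LBR iiwa used in the evaluation.

```python
import numpy as np
from scipy.linalg import expm

def twist_matrix(omega, v):
    """4x4 matrix form of a twist (omega, v) in se(3)."""
    wx, wy, wz = omega
    skew = np.array([[0.0, -wz, wy], [wz, 0.0, -wx], [-wy, wx, 0.0]])
    xi = np.zeros((4, 4))
    xi[:3, :3] = skew
    xi[:3, 3] = v
    return xi

def poe_forward(twists, thetas, M):
    """Product-of-exponentials forward kinematics: T = exp(xi1*t1) ... exp(xin*tn) M."""
    T = np.eye(4)
    for (omega, v), theta in zip(twists, thetas):
        T = T @ expm(twist_matrix(omega, v) * theta)
    return T @ M

# Toy two-joint planar arm; for a revolute joint at point q with axis omega, v = -omega x q.
l1, l2 = 0.4, 0.3
twists = [((0, 0, 1), (0, 0, 0)),       # joint 1 at the origin, rotating about z
          ((0, 0, 1), (0, -l1, 0))]     # joint 2 at q = (l1, 0, 0)
M = np.eye(4)
M[0, 3] = l1 + l2                       # end-effector pose in the zero configuration
print(poe_forward(twists, [np.pi / 4, -np.pi / 6], M)[:3, 3])
```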
Gonschorek, Tim; Bedau, Ludwig; Ortmeier, Frank: Automatic Model-based Verification of Railway Interlocking Systems using Model Checking. In: Haugen, Stein (Ed.): Proceedings of ESREL 2018, pp. 741-748, CRC Press, London, 2018. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/02/Esrel2018_GonschorekEtAl_ModelCheckingRailMLInterlockings.pdf
Abstract: The theoretic foundations for formally verifying railway interlocking systems have already been studied extensively, and there is a large body of work covering the application of methodologies like model checking in this context. However, some design faults still remain undetected until the final on-track evaluation of the system. This is strongly related to missing automation solutions for real-world models and standards as well as the high theoretical expertise required. There are many well-developed tools, each requiring different modeling formalisms and focusing on a different question or scenario; without specific experience in formal system modeling, it is extremely complicated to model such complex systems. In this paper, we present a methodology for the automatic model generation and verification of railway interlockings in a tool-independent way. To this end, we define a generic template set of atomic track elements and safety properties in a formal modeling language with precise semantics. This generic template enables us to verify the structure of any given track layout. The existing tool support of VECS allows these specifications to be translated automatically into various model checkers for verification. More importantly, we present a robust transformation of the upcoming data exchange format for railway interlocking systems, railML, into the presented specification template. As a consequence, this approach may help to bridge the gap between formal methods and system design in railway interlockings. We evaluate the approach on a real-world case study, the train station of Braine-l'Alleud, and demonstrate the tool-independent modeling by automatically translating the specification to different verification engines and comparing their performance.
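A very reduced sketch of the template idea: read a heavily simplified (non-railML) XML track description and emit one NuSMV-style module per switch. The XML element names, the module template and the property hint are assumptions; the actual transformation in the paper targets the railML schema and the VECS tool chain.

```python
import xml.etree.ElementTree as ET

# A minimal, simplified track description (NOT the full railML schema).
LAYOUT = """
<infrastructure>
  <switch id="W1" normal="track_A" reverse="track_B"/>
  <switch id="W2" normal="track_B" reverse="track_C"/>
</infrastructure>
"""

SMV_TEMPLATE = """MODULE switch_{id}
VAR
  position : {{normal, reverse}};
  locked   : boolean;
ASSIGN
  next(position) := case
    locked : position;          -- a locked switch must not move
    TRUE   : {{normal, reverse}};
  esac;
"""

def generate_modules(xml_text):
    """Instantiate one module per <switch> element found in the layout."""
    root = ET.fromstring(xml_text)
    return "\n".join(SMV_TEMPLATE.format(id=sw.get("id")) for sw in root.iter("switch"))

print(generate_modules(LAYOUT))
# A safety property such as "a route may only be set if every switch on it is
# locked in the required position" would then be added as an LTL/CTL specification.
```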
Kögel, Markus; Andonov, Petar; Filax, Marco; Ortmeier, Frank; Findeisen, Rolf: Predictive Tracking Control of a Camera - Head Mounted Display System subject to Communication Constraints. In: 16th European Control Conference (ECC), pp. 1035-1041, 2018.
Nielebock, Sebastian; Heumüller, Robert; Ortmeier, Frank: Programmers do not Favor Lambda Expressions for Concurrent Object-Oriented Code. Article, Empirical Software Engineering (EMSE), 24(1), pp. 103–138, Springer, 2018. ISSN: 1382-3256. DOI: 10.1007/s10664-018-9622-9. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/11/2018-11-06-ESEC-FSE-Lambda-In-Concurrency_prepared_publication.pdf and https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/10/journal-emse-lambda-1.pdf. Dataset: https://bitbucket.org/SNielebock/lambdainconcurrentdataset
Abstract: Lambda expressions have long been state-of-the-art in the functional programming paradigm. Especially with regard to the use of higher-order functions, they provide developers with a means of defining predicate or projection functions locally, which greatly increases the comprehensibility of the resulting source code. This benefit has motivated language designers to also incorporate lambda expressions into object-oriented (OO) programming languages. In particular, they are claimed to facilitate concurrent programming. One likely reason for this assumption is their purity: pure lambda expressions are free of side effects and therefore cannot cause, e.g., race conditions. In this paper, we present the first empirical analysis of whether or not this claim is true for OO projects. For this purpose, we investigated the application of lambda expressions in 2,923 open-source projects, implemented in one of the most common OO programming languages: C#, C++, and Java. We present three major findings. First, the majority of lambda expressions are not applied in concurrent code, and most concurrent code does not make use of lambda expressions. Second, for all three languages, we observed that developers compromise their code by applying a significantly higher number of impure, capturing lambda expressions, which are capable of causing race conditions. Finally, we explored further use cases of lambda expressions and found that testing, algorithmic implementation, and UI are far more common use cases. Our results encourage investigating in more detail the reasons that hinder programmers from applying lambda expressions in concurrent programming and supporting developers, e.g., by providing automatic refactorings.
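The distinction between pure and capturing lambdas that the study measures can be illustrated with a short analysis script. Since the study targets C#, C++ and Java, the following Python ast-based example is only an analogy: it classifies lambdas by whether they reference names not bound by their own parameters.

```python
import ast

SOURCE = """
offset = 10
pure    = lambda x: x * 2            # no free variables
capture = lambda x: x + offset       # captures 'offset' from the enclosing scope
"""

class LambdaInspector(ast.NodeVisitor):
    def visit_Lambda(self, node):
        bound = {a.arg for a in node.args.args}
        # Names read inside the body but not bound as parameters are "captured"
        # (built-ins would also show up here; ignored for this sketch).
        free = {n.id for n in ast.walk(node.body)
                if isinstance(n, ast.Name) and isinstance(n.ctx, ast.Load)} - bound
        kind = "capturing" if free else "non-capturing"
        print(f"line {node.lineno}: {kind} lambda, free variables: {sorted(free) or '-'}")
        self.generic_visit(node)

LambdaInspector().visit(ast.parse(SOURCE))
```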
Schott, Kevin Michael: Extraktion relevanter API-spezifischer Informationen zur automatischen Korrektur von Softwarefehlern. Thesis (Eds.: Sebastian Nielebock, Frank Ortmeier), Otto-von-Guericke-Universität Magdeburg, 2018.
Abstract (translated from German): It is well known that software development is not free of defects. While the focus should be on developing new components, much time is lost fixing existing program faults. Faults that put the software system into a regressive state, so that it no longer works correctly, are often tedious to fix. The responsible programmer is then typically bound for an indefinite time to fixing the fault and is not available to the company for further development. This bottleneck can lead to monetary setbacks, especially if the fault is a security-critical vulnerability in the software system. The domain of automatic program repair aims to counter such problems by looking for an approach that can fix the introduced fault automatically, or at least provide the programmer with a suggested fix. Nowadays, working with application programming interfaces (APIs) is almost routine for every programmer, which makes it difficult or even impossible to generalize an approach for automatic fault correction. API documentation is often insufficient or hard to understand, and as a result the API is used incorrectly, for instance by forgetting method calls or invoking them in the wrong order. Recent studies also show that roughly every second bug fix requires at least one API-specific change in order to fix the fault completely. Because of their generic nature, conventional automatic repair approaches often cannot fully correct these faults and need to be extended. This thesis presents a concept with which such faults may be corrected. Primarily, however, it investigates which information must be extracted from an API-specific context and processed in order to perform a correction. To this end, a prototype is developed that performs this information extraction and downloads similar source code that potentially contains a correct usage of the API in question, which can then serve as the fix for the fault.
Bedau, Ludwig; Gonschorek, Tim; Ortmeier, Frank: Sicherheitsanalyse eines Bahnhofstellwerkes (safety analysis of a station interlocking system). In: Brock, Andreas; Brock, Christina; Barth, Rudolf (Eds.): Horber Schienen Tage, vol. 35, 2018. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/02/BedauEtAl_HST17.pdf
Klockmann, Maximilian; Filax, Marco; Ortmeier, Frank; Reiß, Martin: On the Similarities of Fingerprints and Railroad Tracks: Using Minutiae Detection Algorithms to digitize Track Plans. In: 13th IAPR Workshop on Document Analysis Systems (DAS), 2018.
Abstract: The complete track system of Germany covers more than 42,000 kilometers, parts of it built before 1970. As a consequence, technical drawings are typically of manual origin. Newer plans are generated in a computer-aided way but remain drawings in the sense that semantics are not captured in the electronic files themselves; the engineer decides the meaning of a symbol while viewing the document. For project realization (e.g., engineering of an interlocking system), these plans are digitized manually into a machine-interpretable format. In this paper, we propose an approach to digitize track layouts (semi-)automatically. We use fingerprint recognition techniques to digitize manually created track plans efficiently. At first, we detect tracks by detecting line endings and bifurcations. Secondly, we eliminate false candidates and irregularities. Finally, we translate the resulting graph into the interchange format railML. We evaluate our method by comparing our results with different track plans. Our results indicate that the proposed method is a promising candidate for reducing the effort of digitization.
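A compact sketch of the crossing-number idea borrowed from fingerprint analysis: skeletonize the binary drawing and classify each skeleton pixel as a line ending or a bifurcation from its 8-neighbourhood. The toy T-junction image and the scikit-image dependency are assumptions; the paper's pipeline additionally filters false candidates and exports railML.

```python
import numpy as np
from skimage.morphology import skeletonize

def minutiae(binary_image):
    """Classify skeleton pixels as line endings or bifurcations via the crossing number."""
    skel = skeletonize(binary_image > 0).astype(np.uint8)
    endings, bifurcations = [], []
    # 8-neighbourhood in circular order, as required by the crossing-number formula.
    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for y in range(1, skel.shape[0] - 1):
        for x in range(1, skel.shape[1] - 1):
            if not skel[y, x]:
                continue
            ring = [skel[y + dy, x + dx] for dy, dx in nbrs]
            cn = sum(abs(int(ring[i]) - int(ring[(i + 1) % 8])) for i in range(8)) // 2
            if cn == 1:
                endings.append((x, y))        # track ends here (e.g., buffer stop)
            elif cn >= 3:
                bifurcations.append((x, y))   # candidate switch or crossing
    return endings, bifurcations

# Toy example: a T-junction drawn as one-pixel lines.
img = np.zeros((21, 21), dtype=np.uint8)
img[10, 2:19] = 1      # horizontal track
img[10:19, 10] = 1     # branch going down
ends, forks = minutiae(img)
print("endings:", ends)
print("bifurcations:", forks)
```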
Filax, Marco; Ortmeier, Frank: VIOL: Viewpoint Invariant Object Localizator - Viewpoint Invariant Planar Features in Man-Made Environments. In: Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISAPP), pp. 581-588, 2018. ISBN: 978-989-758-290-5. DOI: 10.5220/0006624005810588.
Abstract: Object detection is one of the fundamental issues in computer vision. The established methods rely on different feature descriptors to determine correspondences between significant image points. However, they do not provide reliable results, especially for extreme viewpoint changes. This is because feature descriptors do not adhere to the projective distortion introduced with an extreme viewpoint change. Different approaches have been proposed to lower this hurdle, e.g., by randomly sampling multiple virtual viewpoints. However, these methods are either computationally intensive or impose strong assumptions about the environment. In this paper, we propose an algorithm to detect corresponding quasi-planar objects in man-made environments. We make use of the observation that these environments typically contain rectangular structures. We exploit the information gathered from a depth sensor to detect planar regions. With these, we unwrap the projective distortion by transforming the planar patch into a fronto-parallel view. We demonstrate the feasibility and capabilities of our approach in a real-world scenario: a supermarket.
2017
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: Progressive Fetal Distress Estimation by Characterization of Fetal Heart Rate Decelerations Response Based on Signal Variability in Cardiotocographic Recordings. In: 2017 Computing in Cardiology (CinC), vol. 44, pp. 1–4, IEEE, 2017. ISSN: 2325-887X. DOI: 10.22489/CinC.2017.276-152. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/04/fuentealba2017progressive.pdf
Abstract: In current clinical practice, the cardiotocograph (CTG) is a standard and widely used tool for fetal surveillance. However, CTG interpretation is difficult since it involves human diagnosis of different patterns in highly complex signals. Fetal heart rate (FHR) decelerations and variability are known to be the most significant and most difficult patterns for the clinical staff to assess. The main goal of this work is to analyse fetal distress by tracking the evolution of the dynamical changes occurring in the CTG recording. The idea is to consider the direct input/output relationship between uterine contraction (UC) and FHR signals by characterizing FHR decelerations in terms of their signal variability, as a sign of the fetal response to a UC event. Results show that the progression of the deceleration response over time can help the observer to track fetal distress.
Gonschorek, Tim; Rabeler, Ben Lukas; Ortmeier, Frank; Schomburg, Dirk: On Improving Rare Event Simulation for Probabilistic Safety Analysis. In: Proceedings of the 15th ACM-IEEE International Conference on Formal Methods and Models for System Design (MEMOCODE '17), pp. 15-24, ACM, New York, NY, USA, 2017. ISBN: 978-1-4503-5093-8. DOI: 10.1145/3127041.3127057. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2017/10/GonschorekEtAl_OnImprovingRareEventSimulationForProbabilisticSafetyAnalysis_memocode17.pdf
Abstract: This paper presents a new approach for generating probability distributions for Monte Carlo based stochastic model checking. Stochastic approaches are used for the quantitative analysis of safety-critical systems when numerical model checking tools are overwhelmed by the complexity of the models and the exploding state space. However, sample-based stochastic measures run into problems when the estimated event is very unlikely (e.g., 10^-6 and below). Therefore, rare event techniques, like importance sampling, increase the likelihood of samples representing model executions which depict the desired rare event, e.g., a path leading into a hazardous state. The presented approach uses qualitative counterexamples to derive appropriate distributions for the sampling, i.e., a distribution that increases the likelihood of the examined rare event and decreases the variance. To this end, we define the sampling distribution such that transitions included in the counterexample become more likely. To keep the variance small, the new distribution is calculated within an optimization problem which minimizes the variance on the counterexample paths by calculating new probability distributions for the related transitions. We have evaluated this approach on four real-world case studies and compared it to other leading stochastic and numeric model checking tools. It turned out that for rare event properties, the proposed approach outperforms other techniques in run time and, more importantly, in the accuracy of the estimation result.
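The underlying importance-sampling principle can be shown on a textbook example: estimating a small tail probability by sampling from a shifted proposal and re-weighting with the likelihood ratio. This sketch does not reflect how the paper derives its proposal from qualitative counterexamples; the threshold and proposal distribution are arbitrary choices for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
threshold = 4.0          # P(X > 4) for X ~ N(0,1) is ~3.2e-5, i.e. a "rare" event
n = 100_000

# Crude Monte Carlo: almost no samples hit the event, so the estimate is unusable.
x = rng.standard_normal(n)
crude = np.mean(x > threshold)

# Importance sampling: draw from a proposal shifted onto the event region and
# re-weight each sample by the likelihood ratio p(x) / q(x).
proposal_mean = threshold
y = rng.normal(proposal_mean, 1.0, n)
weights = stats.norm.pdf(y) / stats.norm.pdf(y, loc=proposal_mean)
is_estimate = np.mean((y > threshold) * weights)

print(f"exact     : {stats.norm.sf(threshold):.3e}")
print(f"crude MC  : {crude:.3e}")
print(f"importance: {is_estimate:.3e}")
```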
Gonschorek, Tim; Filax, Marco; Ortmeier, Frank: A Verification Environment for Critical Systems: Integrating Formal Methods into the Safety Development Life-cycle. Invited paper, 5th International Symposium on Model-Based Safety and Assessment (IMBSA 2017), Otto-von-Guericke-Universität Magdeburg, 2017. Poster: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2017/10/GonschorekEtAl_-VECS_imbsa17_poster.pdf
Filax, Marco; Gonschorek, Tim; Ortmeier, Frank: Building Models we can rely on: Requirements Traceability for Model-based Verification Techniques. In: Bozzano, M.; Papadopoulos, Y. (Eds.): Proceedings of the 5th International Symposium on Model-Based Safety and Assessment (IMBSA 2017), Lecture Notes in Computer Science, vol. 10437, pp. 3-18, Springer, Cham, 2017. ISBN: 978-3-319-64118-8. DOI: 10.1007/978-3-319-64119-5_1. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2017/08/imbsa17_preprint.pdf
Abstract: Proving the safety of a critical system is a complex and complicated task. Model-based formal verification techniques can help to verify a System Requirement Specification (SRS) with respect to normative and safety requirements. Due to an early application of these methods, it is possible to reduce the risk of high costs caused by unexpected, late system adjustments. Nevertheless, they are still rarely used. One reason among others is the lack of an applicable integration method in an existing development process. In this paper, we propose a process to integrate formal model-based verification techniques into the development life-cycle of a safety-critical system. The core idea is to systematically refine informal specifications by (1) categorization, (2) structural refinement, (3) expected behavioral refinement, and finally (4) operational semantics. To support modeling, traceability is upheld through all refinement steps and a number of consistency checks are introduced. The proposed process has been jointly developed with the German Railroad Authority (EBA) and an accredited safety assessor. We implemented an Eclipse-based IDE with connections to requirement and systems engineering tools as well as various verification engines. The applicability of our approach is demonstrated via an industrial-sized case study in the context of the European Train Control System with ETCS Level 1 Full Supervision.
Fuentealba, Patricio; Illanes, Alfredo; Ortmeier, Frank: Analysis of the foetal heart rate in cardiotocographic recordings through a progressive characterization of decelerations. Article, Current Directions in Biomedical Engineering, 3(2), pp. 423–427, De Gruyter, 2017. ISSN: 2364-5504. DOI: 10.1515/cdbme-2017-0089. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2018/04/fuentealba3analysis.pdf
Abstract: The main purpose of this work is to propose a new method for the characterization and visualization of FHR deceleration episodes in terms of their depth, length and location. This is performed through the estimation of a progressive baseline, computed using a median filter, which allows identifying and tracking the evolution of decelerations in cardiotocographic (CTG) recordings. The proposed method has been analysed using three representative cases of normal and pathological CTG recordings extracted from the CTU-UHB database, freely available on the PhysioNet website. Results show that both the progressive baseline and the parameterized deceleration episodes can describe different time-variant behaviour, whose characteristics and progression can help the observer to discriminate between normal and pathological FHR signal patterns. This opens perspectives for the classification of non-reassuring CTG recordings as a sign of foetal acidemia.
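As a rough illustration of the progressive-baseline idea, the sketch below runs a long median filter over a synthetic FHR trace and reports depth, length and location of episodes that fall below the baseline. The sampling rate, window length and the 15 bpm threshold are assumptions, not values taken from the paper.

```python
import numpy as np
from scipy.signal import medfilt

fs = 4.0                                   # Hz, assumed CTG sampling rate
t = np.arange(0, 20 * 60, 1 / fs)
fhr = 140 + 3 * np.random.randn(t.size)
fhr[2000:2400] -= 30 * np.hanning(400)     # one synthetic deceleration episode

# "Progressive" baseline: a long median filter follows slow trends but ignores decelerations.
win = int(120 * fs) | 1                    # ~2 min window, forced to odd length
baseline = medfilt(fhr, kernel_size=win)

below = fhr < baseline - 15                # 15 bpm below baseline counts as a deceleration (assumption)
edges = np.flatnonzero(np.diff(below.astype(int)))
for start, stop in zip(edges[::2], edges[1::2]):
    depth = float(np.max(baseline[start:stop] - fhr[start:stop]))
    print(f"deceleration at {start / fs:5.1f}s, length {(stop - start) / fs:4.1f}s, depth {depth:4.1f} bpm")
```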
Schillreff, Nadia; Nykolaichuk, Mykhaylo; Ortmeier, Frank: Towards High Accuracy Robot-Assisted Surgery. In: Proceedings of the 20th World Congress of the International Federation of Automatic Control (IFAC 2017), IFAC-PapersOnLine, 2017. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2017/08/Schillreff_cameraReady_IFAC2017.pdf
Abstract: In this article, we propose a new error modeling approach for robot manipulators in order to improve the absolute accuracy of a tool's pose by using a polynomial regression method. The core idea is based on a well-known fact: the accuracy of repeatedly reaching a given position is much higher than the accuracy of absolute positioning (i.e., moving the manipulator to a given position). The underlying reason is that positioning errors are dominated by systematic errors, while stochastic errors are significantly smaller. This fact is exploited by applying a learning algorithm to derive an error-compensation model. Technically, this means that the robot is calibrated a priori with an external sensor once; afterwards it can operate with much better quality. In detail, we propose to first perform a coordinate transformation using a least-mean-squares approach (for registration). Then, to account for deviations of the measured position from the nominal (robot) position, the frame transformation model at each robot joint is extended by translational and rotational error parameters. This is then used to build an error compensation model with regression techniques. We evaluate the method on a data set obtained using a 7-DOF robot manipulator and show that this approach brings the positioning error down to the order of the repeatability errors of this manipulator.
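A minimal sketch of the compensation idea, assuming calibration data of commanded versus externally measured tool positions: fit a polynomial model of the systematic error and subtract its prediction from future targets. The synthetic error field, polynomial degree and regularization are illustrative choices, not the paper's model.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Calibration data: commanded TCP positions vs. positions measured by an external sensor.
commanded = rng.uniform(-0.5, 0.5, size=(500, 3))                  # metres, sampled workspace
x, y, z = commanded.T
systematic = np.column_stack([2e-3 * x**2, -1e-3 * y, 5e-4 * x * z])  # toy systematic error field
measured = commanded + systematic + 1e-5 * rng.standard_normal(commanded.shape)

# Learn commanded position -> positioning error, then compensate future targets.
model = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-8))
model.fit(commanded, measured - commanded)

target = np.array([[0.30, -0.20, 0.10]])
corrected_command = target - model.predict(target)                 # first-order compensation
residual = corrected_command + model.predict(corrected_command) - target
print("residual error after compensation [mm]:", 1e3 * np.linalg.norm(residual))
```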
Veera, Balaji Satya Pradeep: Multi-Objective Issue Tracking Framework. Thesis (Eds.: Sebastian Nielebock, Frank Ortmeier), Otto-von-Guericke-Universität Magdeburg, 2017.
Abstract: Assigning issue reports to individual developers is typically a manual, time-consuming and tedious task for the triager. Many automated issue tracking systems have been proposed over time to choose an optimal developer for an issue report. These single-objective approaches try to find a developer who has fixed similar issue reports in the past and consider that developer optimal for the new report. While choosing a developer, the single-objective approaches do not consider the severity of the issue report and the workload of the developer; the assigned issue report might be reassigned to another developer if the first assignee is busy with other issue reports. To solve this problem, we propose a framework for automated issue tracking called the Multi-Objective Issue Tracking Framework (MOITF). MOITF considers the workload of the developer along with the severity of the issue report when choosing the developer. The approach analyzes the history of previously closed issue reports to make predictions for a new issue report. The problem of selecting a developer who has fixed similar issue reports in the past and has a low workload is viewed as a multi-objective problem. A traditional non-dominated sorting algorithm is used to find non-dominated developers who have knowledge about the problem mentioned in the issue report and also have time to fix it.
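The core selection step can be illustrated in a few lines of Python: treat each candidate developer as a point in (expertise distance, open workload) space and keep the non-dominated ones. The candidate data and both objective definitions are made up for the example.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    developer: str
    expertise_distance: float   # lower = has fixed more similar issues before
    open_workload: int          # lower = fewer currently assigned issue reports

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse in both objectives and strictly better in at least one."""
    no_worse = (a.expertise_distance <= b.expertise_distance and a.open_workload <= b.open_workload)
    better = (a.expertise_distance < b.expertise_distance or a.open_workload < b.open_workload)
    return no_worse and better

def pareto_front(candidates):
    return [c for c in candidates
            if not any(dominates(other, c) for other in candidates if other is not c)]

candidates = [
    Candidate("alice", 0.10, 7),
    Candidate("bob",   0.35, 1),
    Candidate("carol", 0.12, 3),
    Candidate("dave",  0.40, 6),   # dominated by bob and carol
]
for c in pareto_front(candidates):
    print(c)
```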
Eiserloh, Matthias: Semantische Suche für automatische Fehlerkorrekturen im objektorientierten Paradigma (semantic search for automatic bug fixes in the object-oriented paradigm). Bachelor's thesis (Eds.: Sebastian Nielebock, Frank Ortmeier), Otto-von-Guericke-Universität Magdeburg, 2017. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2017/07/BA_2017_EiserlohMatthias-1.pdf
Bitsch, Friedemann; Filax, Marco; Gonschorek, Tim; Ortmeier, Frank; Schumacher, Rolf: Effiziente Sicherheitsnachweisführung mithilfe modellbasierter Systemanalyse. Article, Signal + Draht, DVV Media Group, 2017. PDF: https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2017/10/2017_BitschEtAl_EffizienteSicherheitsnachweisführung.pdf
Abstract: This paper introduces a model-based, computer-aided methodology for safety analyses in development and assessment processes for signalling and train control systems. The objective is to realise the application of model-based formal methods and simultaneously to reduce the outlay for using them to a degree acceptable in industrial practice. Instead of performing safety analyses in parallel with the development process, existing system analyses and design models are used for deriving the necessary safety analysis models as automatically as possible. Therefore, statements on safety and correctness can already be calculated in early development phases with precise results. Those early statements are decisive, because they make it possible to adapt and correct the system design at an early stage and to do it cost-efficiently.
Krüger, Jacob; Nielebock, Sebastian; Krieter, Sebastian; Diedrich, Christian; Leich, Thomas; Saake, Gunter; Zug, Sebastian; Ortmeier, Frank: Beyond Software Product Lines: Variability Modeling in Cyber-Physical Systems. In: Proceedings of the 21st International Systems and Software Product Line Conference (SPLC 2017) - Vision Track, pp. 237-241, 2017. URL: https://dl.acm.org/citation.cfm?id=3106217
Abstract: Smart IT has an increasing influence on the control of daily life. For instance, smart grids manage power supply, autonomous automobiles take part in traffic, and assistive robotics support humans in production cells. We denote such systems as Cyber-Physical Systems (CPSs), where cyber addresses the controlling software, while physical describes the controlled hardware. One key aspect of CPSs is their capability to adapt to new situations autonomously or with minimal human intervention. To achieve this, CPSs reuse, reorganize and reconfigure their components during runtime. Some components may even serve in different CPSs and different situations simultaneously. The hardware of a CPS usually consists of a heterogeneous set of variable components. While each component can be designed as a software product line (SPL), which is a well-established approach to describe software and hardware variability, it is not possible to describe a CPS's variability solely as a set of separate, non-interacting product lines. To properly manage variability, a CPS must specify dependencies and interactions of its separate components and cope with variable environments, changing requirements, and differing safety properties. In this paper, we (i) propose a classification of variability aspects, (ii) point out current challenges in variability modeling, and (iii) sketch open research questions. Overall, we aim to initiate new research directions for variable CPSs based on existing product-line techniques.
Filax, Marco; Gonschorek, Tim; Ortmeier, Frank QuadSIFT: Unwrapping Planar Quadrilaterals to Enhance Feature Matching Inproceedings Proceedings of the 25th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2017 - Short Papers Proceedings, 2017. @inproceedings{filax17-wscg, title = {QuadSIFT: Unwrapping Planar Quadrilaterals to Enhance Feature Matching}, author = {Marco Filax and Tim Gonschorek and Frank Ortmeier}, url = {http://wscg.zcu.cz/wscg2017/short/I07-full.PDF https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2017/10/wscg2017_FilaxEtAl_QuadSIFT.pdf}, year = {2017}, date = {2017-01-01}, booktitle = {Proceedings of the 25th International Conference in Central Europe on Computer Graphics, Visualization and Computer Vision, WSCG 2017 - Short Papers Proceedings}, volume = {25}, abstract = {Feature matching is one of the fundamental issues in computer vision. The established methods, however, do not provide reliable results, especially for extreme viewpoint changes. Different approaches have been proposed to lower this hurdle, e.g., by randomly sampling different viewpoints to obtain better results. However, these methods are computationally intensive. In this paper, we propose an algorithm to enhance image matching under the assumption that an image taken in man-made environments typically contains planar, rectangular objects. We use line segments to identify image patches and compute a homography which unwraps the perspective distortion for each patch. The unwrapped image patches are used to detect, describe and match SIFT features. We evaluate our results on a series of slanted views of a magazine and augmented reality markers. Our results demonstrate that the proposed algorithm performs well for strong perspective distortions.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Feature matching is one of the fundamental issues in computer vision. The established methods, however, do not provide reliable results, especially for extreme viewpoint changes. Different approaches have been proposed to lower this hurdle, e.g., by randomly sampling different viewpoints to obtain better results. However, these methods are computationally intensive. In this paper, we propose an algorithm to enhance image matching under the assumption that an image taken in man-made environments typically contains planar, rectangular objects. We use line segments to identify image patches and compute a homography which unwraps the perspective distortion for each patch. The unwrapped image patches are used to detect, describe and match SIFT features. We evaluate our results on a series of slanted views of a magazine and augmented reality markers. Our results demonstrate that the proposed algorithm performs well for strong perspective distortions. |
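To give a rough picture of the unwrapping step described in the QuadSIFT abstract above, the following Python/OpenCV sketch rectifies one planar quadrilateral patch via a homography and then runs SIFT on it. It is only an illustration under the assumption that the four corners of a rectangular patch have already been found (e.g., from detected line segments); the corner detection itself and the authors' actual implementation are not reproduced here.

```python
# Sketch: rectify a planar quadrilateral patch and extract SIFT features on it.
# Assumes OpenCV with SIFT support (opencv-contrib-python); the quad corners are
# given as a placeholder instead of being derived from line segments.
import cv2
import numpy as np

def unwrap_and_describe(image, quad_corners, out_size=(400, 300)):
    """quad_corners: 4x2 float32 array (top-left, top-right, bottom-right, bottom-left)."""
    w, h = out_size
    target = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Homography that removes the perspective distortion of the patch.
    H = cv2.getPerspectiveTransform(np.float32(quad_corners), target)
    patch = cv2.warpPerspective(image, H, (w, h))
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(patch, None)
    return patch, keypoints, descriptors

def ratio_test_match(desc_a, desc_b, ratio=0.75):
    """Standard ratio-test matching of two descriptor sets."""
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    knn = matcher.knnMatch(desc_a, desc_b, k=2)
    return [m for m, n in knn if m.distance < ratio * n.distance]
```

Matching features on the rectified patches instead of on the raw, slanted images is what makes the descriptors more robust to strong viewpoint changes in this setting.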
2016 |
Nielebock, Sebastian; Ortmeier, Frank Adoption of Lambda-Expressions in Object-Oriented Programs does not necessarily decrease Source Code Size Unveröffentlicht 2016, (Presented at the 28th Symposium on Implementation and Application of Functional Languages 2016 in Leuven, Belgium (unreviewed pre-proceedings)). @unpublished{Nielebock-Lambda-2016, title = {Adoption of Lambda-Expressions in Object-Oriented Programs does not necessarily decrease Source Code Size}, author = {Sebastian Nielebock and Frank Ortmeier}, url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2016/11/paper-lambda-adoption-final.pdf}, year = {2016}, date = {2016-08-31}, abstract = {Lambda-expressions are state-of-the-art in the functional programming paradigm, e.g., to easily handle functional behavior as a parameter or to implement a functionality temporarily where it is actually needed. These benefits motivate the developers of object-oriented programming languages to extend their languages with Lambda-expressions. One of the several claims accompanying this is that Lambda-expressions can decrease source code size. Up to now, little validation of this claim has been conducted. Thus, within this paper, we analyze whether this claim is valid or not. For this purpose, we investigated the adoption of Lambda-expressions and their effect on source code size in 110 open-source projects written in one of the three commonly used object-oriented programming languages C#, C++, and Java. Our results obtained so far seem to contradict this common claim. Moreover, they indicate the opposite: the addition of Lambda-expressions is correlated with a larger increase in source code size than usual. Thus, we conclude that Lambda-expressions do not necessarily decrease source code size. Within this paper, we discuss potential reasons for this result, though a valid explanation is still an open research question.}, note = {Presented at the 28th Symposium on Implementation and Application of Functional Languages 2016 in Leuven, Belgium (unreviewed pre-proceedings)}, keywords = {}, pubstate = {published}, tppubtype = {unpublished} } Lambda-expressions are state-of-the-art in the functional programming paradigm, e.g., to easily handle functional behavior as a parameter or to implement a functionality temporarily where it is actually needed. These benefits motivate the developers of object-oriented programming languages to extend their languages with Lambda-expressions. One of the several claims accompanying this is that Lambda-expressions can decrease source code size. Up to now, little validation of this claim has been conducted. Thus, within this paper, we analyze whether this claim is valid or not. For this purpose, we investigated the adoption of Lambda-expressions and their effect on source code size in 110 open-source projects written in one of the three commonly used object-oriented programming languages C#, C++, and Java. Our results obtained so far seem to contradict this common claim. Moreover, they indicate the opposite: the addition of Lambda-expressions is correlated with a larger increase in source code size than usual. Thus, we conclude that Lambda-expressions do not necessarily decrease source code size. Within this paper, we discuss potential reasons for this result, though a valid explanation is still an open research question. |
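A very rough way to picture the kind of measurement behind this study is to count lambda occurrences and non-empty lines of code per file and then correlate the two quantities over a project's history. The sketch below is purely illustrative and is not the paper's tooling: the regex only approximates Java lambda syntax and would need a real parser for any serious analysis.

```python
# Sketch: naive per-file lambda count vs. lines of code for Java sources.
# The "->" regex is a crude approximation of Java lambda syntax (it also hits
# strings, comments, and switch arrows); the paper's methodology is not reproduced.
import re
from pathlib import Path

LAMBDA_ARROW = re.compile(r"->")

def measure(project_root):
    rows = []
    for path in Path(project_root).rglob("*.java"):
        text = path.read_text(encoding="utf-8", errors="ignore")
        loc = sum(1 for line in text.splitlines() if line.strip())
        lambdas = len(LAMBDA_ARROW.findall(text))
        rows.append((str(path), loc, lambdas))
    return rows

if __name__ == "__main__":
    data = measure(".")
    total_loc = sum(loc for _, loc, _ in data)
    total_lambdas = sum(lam for _, _, lam in data)
    print(f"{len(data)} files, {total_loc} non-empty LOC, {total_lambdas} '->' occurrences")
```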
Krolikowski, Dariusz Einfluss unterschiedlicher Kommentararten auf die Lesbarkeit des Quellcodes Abschlussarbeit Otto-von-Guericke Universität Magdeburg, 2016. @mastersthesis{krolikowski2016, title = {Einfluss unterschiedlicher Kommentararten auf die Lesbarkeit des Quellcodes}, author = {Dariusz Krolikowski}, editor = {Sebastian Nielebock and Frank Ortmeier}, url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2016/11/masterarbeit-krolikowski-1.pdf}, year = {2016}, date = {2016-08-19}, school = {Otto-von-Guericke Universit\"{a}t Magdeburg}, keywords = {}, pubstate = {published}, tppubtype = {mastersthesis} } |
Filax, Marco; Gonschorek, Tim; Ortmeier, Frank Correct Formalization of Requirement Specifications: A V-Model for Building Formal Models Inproceedings Springer International Publishing (Hrsg.): Reliability, Safety, and Security of Railway Systems. Modelling, Analysis, Verification, and Certification First International Conference, RSSRail 2016, Paris, France, June 28-30, 2016, Proceedings, S. 106-122, 2016, ISBN: 978-3-319-33951-1. @inproceedings{MF16-RSSR, title = {Correct Formalization of Requirement Specifications: A V-Model for Building Formal Models}, author = {Marco Filax and Tim Gonschorek and Frank Ortmeier }, editor = {Springer International Publishing }, url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2016/08/RSSR_CorrectFormalizationOfRequirementSpecifications.pdf}, doi = {10.1007/978-3-319-33951-1_8}, isbn = {978-3-319-33951-1}, year = {2016}, date = {2016-06-15}, booktitle = {Reliability, Safety, and Security of Railway Systems. Modelling, Analysis, Verification, and Certification First International Conference, RSSRail 2016, Paris, France, June 28-30, 2016, Proceedings}, pages = {106 - 122}, abstract = {In recent years, formal methods have become an important approach to ensure the correct function of complex hardware and software systems. Many standards for safety-critical systems recommend or even require the use of formal methods. However, building a formal model for a given specification is challenging, because verification results must be considered with respect to the validity of the model. This leads to the question: “Did I build the right model?” For system development, the analogous question “Did I build the right system?” is often answered with requirements traceability through the whole development cycle. For formal verification this question often remains unanswered. The standard process model used in the development of safety-critical applications is the V-model; its core idea is to define tests for each phase of system development. In this paper, we propose an approach - analogous to the V-model for development - which ensures correctness of the formal model with respect to the requirements. We illustrate the approach on a small example from the railway domain.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } In recent years, formal methods have become an important approach to ensure the correct function of complex hardware and software systems. Many standards for safety-critical systems recommend or even require the use of formal methods. However, building a formal model for a given specification is challenging, because verification results must be considered with respect to the validity of the model. This leads to the question: “Did I build the right model?” For system development, the analogous question “Did I build the right system?” is often answered with requirements traceability through the whole development cycle. For formal verification this question often remains unanswered. The standard process model used in the development of safety-critical applications is the V-model; its core idea is to define tests for each phase of system development. In this paper, we propose an approach - analogous to the V-model for development - which ensures correctness of the formal model with respect to the requirements. We illustrate the approach on a small example from the railway domain. |
Filax, Marco; Gonschorek, Tim; Hebecker, Tanja; Lipaczewski, Michael; Madalinski, Agnes; Ortmeier, Frank; Fietze, Mario; Schumacher, Rolf Bringing formal methods “on the rail” - Modellbasierte Systemanalyse in der Sicherheitsnachweisführung Artikel Der Eisenbahn Ingenieur, S. 24-27, 2016. @article{ei2016, title = {Bringing formal methods “on the rail” - Modellbasierte Systemanalyse in der Sicherheitsnachweisf\"{u}hrung}, author = {Marco Filax and Tim Gonschorek and Tanja Hebecker and Michael Lipaczewski and Agnes Madalinski and Frank Ortmeier and Mario Fietze and Rolf Schumacher}, editor = {Verband Deutscher Eisenbahn-Ingenieure E.V.}, url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2016/08/Ver\"{o}ffentlichung-Eisenbahningeneur.pdf}, year = {2016}, date = {2016-01-01}, journal = {Der Eisenbahn Ingenieur}, pages = {24 -27}, abstract = {Funktionen sicherheitskritischer Systeme im Eisenbahnsektor werden entsprechend ihrer tolerierbaren Gef\"{a}hrdungsraten in sogenannte Sicherheitsintegrit\"{a}tslevel (SIL 1-4) eingestuft. In der europ\"{a}ischen Norm EN50129 besteht, f\"{u}r die Stufen drei und vier, die dringende Empfehlung, im Entwicklungs- und Spezifikationsprozess, formale Methoden anzuwenden. Die Otto-von-Guericke-Universit\"{a}t Magdeburg hat dazu, begleitet durch das Eisenbahn-Bundesamt (EBA) und Gutachtern des EBA, einen Satz unterst\"{u}tzender Werkzeuge und Verfahren entwickelt, welche in diesem Artikel, am Beispiel der Punktf\"{o}rmigen Zugbeeinflussung (PZB), vorgestellt werden sollen. Diese Werkzeuge erm\"{o}glichen die Portierung einer nat\"{u}rlichsprachlichen Systemanforderungsspezifikation in ein formales Modell, mit dessen Hilfe die Konsistenz und Vollst\"{a}ndigkeit der Systembeschreibung, sowie die definierten Sicherheitsanforderungen formal \"{u}berpr\"{u}ft werden k\"{o}nnen. Bereits in fr\"{u}hen Entwicklungsphasen k\"{o}nnen automatisiert qualitative und quantitative Absch\"{a}tzungen \"{u}ber die Sicherheit und Zuverl\"{a}ssigkeit mit pr\"{a}zisen und aussagekr\"{a}ftigen Resultaten berechnet werden. Gleichzeitig wird der Aufwand zur endg\"{u}ltigen, sicherheitstechnischen Bewertung durch vollst\"{a}ndige Traceability als Teil des Zertifizierungs- und Zulassungsprozesses reduziert.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Functions of safety-critical systems in the railway sector are classified into so-called safety integrity levels (SIL 1-4) according to their tolerable hazard rates. For levels three and four, the European standard EN 50129 strongly recommends applying formal methods in the development and specification process. To this end, the Otto-von-Guericke University Magdeburg, accompanied by the Federal Railway Authority (Eisenbahn-Bundesamt, EBA) and its assessors, has developed a set of supporting tools and procedures, which are presented in this article using the German intermittent train control system (Punktförmige Zugbeeinflussung, PZB) as an example. These tools allow a natural-language system requirements specification to be transferred into a formal model, with which the consistency and completeness of the system description as well as the defined safety requirements can be verified formally. Already in early development phases, qualitative and quantitative estimates of safety and reliability can be computed automatically with precise and meaningful results. At the same time, complete traceability reduces the effort for the final safety assessment as part of the certification and approval process. |
2015 |
Hebecker, Tanja; Ortmeier, Frank Safe Prediction-Based Local Path Planning using Obstacle Probability Sections Inproceedings Laugier, Christian; Martinet, Philippe; Nunes, Urbano; Stiller, Christoph (Hrsg.): Proceedings of the 7th IROS Workshop on Planning, Perception and Navigation for Intelligent Vehicles, S. 183-188, 2015. @inproceedings{Hebecker2015, title = {Safe Prediction-Based Local Path Planning using Obstacle Probability Sections}, author = {Tanja Hebecker and Frank Ortmeier}, editor = {Christian Laugier and Philippe Martinet and Urbano Nunes and Christoph Stiller}, url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2016/02/hebecker2015_SafePrediction-basedLocalPathPlanningUsingObstacleProbability.pdf}, year = {2015}, date = {2015-09-28}, booktitle = {Proceedings of the 7th IROS Workshop on Planning, Perception and Navigation for Intelligent Vehicles}, pages = {183 - 188}, abstract = {Autonomous mobile robots are gaining more and more importance. In the near future, they will be a part of everyday life. Therefore, it is critical to make them as reliable and safe as possible. We present a local path planner that shall ensure safety in an environment cluttered with unexpectedly moving obstacles. In this paper, the motion of obstacles is predicted by generating probability sections, and collision risks of path configurations are checked by determining whether these configurations lead inevitably to a collision or not. The presented approach worked efficiently in scenarios with static and dynamic obstacles.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Autonomous mobile robots are gaining more and more importance. In the near future, they will be a part of everyday life. Therefore, it is critical to make them as reliable and safe as possible. We present a local path planner that shall ensure safety in an environment cluttered with unexpectedly moving obstacles. In this paper, the motion of obstacles is predicted by generating probability sections, and collision risks of path configurations are checked by determining whether these configurations lead inevitably to a collision or not. The presented approach worked efficiently in scenarios with static and dynamic obstacles. |
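The general idea of prediction-based collision checking can be sketched as follows: an obstacle's predicted future position is surrounded by an uncertainty region whose size grows with the prediction horizon, and a candidate configuration of the local path is rejected if it falls inside any such region. The concrete model below (circular sections, linear growth, constant-velocity prediction) is only an illustrative assumption, not the probability sections used in the paper.

```python
# Sketch: reject waypoints that fall inside predicted obstacle "probability sections".
# Circular sections with linearly growing radius are illustrative assumptions.
import math

def predicted_sections(obstacle_pos, obstacle_vel, horizon_steps, dt, base_radius, growth):
    """Yield (center, radius) per future time step under constant-velocity prediction."""
    x, y = obstacle_pos
    vx, vy = obstacle_vel
    for k in range(1, horizon_steps + 1):
        t = k * dt
        yield (x + vx * t, y + vy * t), base_radius + growth * t  # uncertainty grows over time

def waypoint_is_safe(waypoint, obstacles, horizon_steps=20, dt=0.1,
                     base_radius=0.3, growth=0.5):
    for pos, vel in obstacles:
        for (cx, cy), r in predicted_sections(pos, vel, horizon_steps, dt, base_radius, growth):
            if math.hypot(waypoint[0] - cx, waypoint[1] - cy) < r:
                return False
    return True

# Example: filter candidate configurations of a local path
candidates = [(1.0, 0.5), (1.0, 1.5), (2.0, 2.0)]
obstacles = [((0.0, 0.0), (0.5, 0.5))]  # (position, velocity) pairs
safe = [c for c in candidates if waypoint_is_safe(c, obstacles)]
```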
Nykolaichuk, Mykhaylo; Ortmeier, Frank Coverage Path Re-Planning for Processing Faults Inproceedings Springer (Hrsg.): The 8th International Conference on Intelligent Robotics and Applications (ICIRA2015), S. 358-368, 2015. @inproceedings{Nykolaichuk2015, title = {Coverage Path Re-Planning for Processing Faults}, author = {Mykhaylo Nykolaichuk and Frank Ortmeier}, editor = {Springer}, url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2015/07/coveragePathReplanning_final.pdf https://link.springer.com/chapter/10.1007/978-3-319-22876-1_31}, year = {2015}, date = {2015-08-20}, booktitle = {The 8th International Conference on Intelligent Robotics and Applications (ICIRA2015)}, volume = {9245}, pages = {358-368}, abstract = {Currently, an automated surface treatment or finishing (e.g., abrasive blasting, cleaning or painting) is performed in two consecutive steps: first processing by tool, second quality evaluation by sensor. Often, a finished surface has defects like areas not properly processed. This is caused by path inaccuracy or errors in tool deployment. The defective areas can be detected only during a subsequent quality evaluation. As a result, a complete reprocessing of the surface is required, which is costly and time-consuming. We propose a new approach that combines surface treatment and quality evaluation in a single deployment. In our approach, an initial coverage path for surface treatment by a tool is given or calculated using a state-of-the-art coverage path planning algorithm. In essence, we extend an off-line generated initial coverage path with the ability to react to sensor-based defect detections during surface processing by the tool.}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Currently, an automated surface treatment or finishing (e.g., abrasive blasting, cleaning or painting) is performed in two consecutive steps: first processing by tool, second quality evaluation by sensor. Often, a finished surface has defects like areas not properly processed. This is caused by path inaccuracy or errors in tool deployment. The defective areas can be detected only during a subsequent quality evaluation. As a result, a complete reprocessing of the surface is required, which is costly and time-consuming. We propose a new approach that combines surface treatment and quality evaluation in a single deployment. In our approach, an initial coverage path for surface treatment by a tool is given or calculated using a state-of-the-art coverage path planning algorithm. In essence, we extend an off-line generated initial coverage path with the ability to react to sensor-based defect detections during surface processing by the tool. |
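The core loop of combining treatment and inspection in one deployment can be pictured like this: the robot follows the off-line coverage path and, whenever the sensor flags an area as not properly processed, a short reprocessing detour is spliced into the remaining path. The sketch below is a schematic interpretation with made-up interfaces (is_defective, reprocessing_path, move_to), not the re-planning algorithm from the paper.

```python
# Sketch: on-line insertion of reprocessing detours into an off-line coverage path.
# `is_defective`, `reprocessing_path`, and `move_to` are hypothetical interfaces
# standing in for the sensor-based defect check, local detour planner, and robot motion.
from collections import deque

def execute_with_replanning(initial_path, is_defective, reprocessing_path, move_to):
    queue = deque(initial_path)                 # off-line generated coverage path
    visited = []
    while queue:
        waypoint = queue.popleft()
        move_to(waypoint)                        # process the surface at this waypoint
        visited.append(waypoint)
        if is_defective(waypoint):               # quality evaluation during processing
            detour = reprocessing_path(waypoint)
            queue.extendleft(reversed(detour))   # splice the detour in before the remaining path
    return visited
```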
Gonschorek, Tim; Ortmeier, Frank Slice or Unfold - Experiments on Checking Synchronous Models with Backwards Slicing Inproceedings International Federation of Automatic Control (Hrsg.): 5th International Workshop on Dependable Control of Discrete Systems (DCDS 2015), 2015, (accepted, will be published this year). @inproceedings{Gonschorek2015DCDS, title = {Slice or Unfold - Experiments on Checking Synchronous Models with Backwards Slicing}, author = {Tim Gonschorek and Frank Ortmeier}, editor = {International Federation of Automatic Control}, url = {https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2015/05/paper_bwmc_dcds15.pdf}, year = {2015}, date = {2015-05-31}, booktitle = {5th International Workshop on Dependable Control of Discrete Systems (DCDS 2015)}, abstract = {Nowadays, model checking approaches have one major drawback: They suffer from the state space explosion problem. We think one major problem is that the verification tools must always hold some (symbolic) representation, e.g., a BDD or an adjacency list, in memory during the verification process. In this paper we propose a verification approach that does not need to hold a complete representation while checking the model. This is done by backward slicing of the transition rules, which define the complete state system. Here, we present the basic backward slicing approach for verifying CTL safety properties, traversing the state machine by using a given transition model. In the following, we give a small evaluation with dedicated test models and compare the results to the state-of-the-art model checker nuXmv.}, note = {accepted, will be published this year}, keywords = {}, pubstate = {published}, tppubtype = {inproceedings} } Nowadays, model checking approaches have one major drawback: They suffer from the state space explosion problem. We think one major problem is that the verification tools must always hold some (symbolic) representation, e.g., a BDD or an adjacency list, in memory during the verification process. In this paper we propose a verification approach that does not need to hold a complete representation while checking the model. This is done by backward slicing of the transition rules, which define the complete state system. Here, we present the basic backward slicing approach for verifying CTL safety properties, traversing the state machine by using a given transition model. In the following, we give a small evaluation with dedicated test models and compare the results to the state-of-the-art model checker nuXmv. |
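For intuition, a safety property of the form AG ¬bad on a finite transition system can be decided by a backward fixpoint starting from the bad states: the property is violated exactly if some initial state is backward-reachable from a bad state. The toy sketch below illustrates this backward traversal on an explicit transition relation only; it is not the paper's slicing algorithm, which operates on the transition rules rather than on an enumerated state space.

```python
# Sketch: backward reachability check of an invariant (AG not-bad) on a tiny
# explicit-state transition system. Illustrates backward traversal in general,
# not the rule-level backward slicing of the paper.
def violates_invariant(initial_states, transitions, bad_states):
    """transitions: iterable of (source, target) pairs."""
    predecessors = {}
    for src, dst in transitions:
        predecessors.setdefault(dst, set()).add(src)

    frontier = set(bad_states)
    backward_reachable = set(bad_states)
    while frontier:
        # One backward step: all predecessors of the current frontier not yet seen.
        frontier = {p for s in frontier for p in predecessors.get(s, ())} - backward_reachable
        backward_reachable |= frontier
    # The invariant is violated iff some initial state can reach a bad state.
    return bool(backward_reachable & set(initial_states))

# Example: bad state 3 is unreachable from the initial state 0, so the invariant holds.
ok = not violates_invariant({0}, [(0, 1), (1, 2), (2, 0), (3, 3)], {3})
```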
Lipaczewski, Michael; Prosvirnova, Tatiana; Ortmeier, Frank; Rauzy, Antoine; Struck, Simon Comparison of Modeling Formalisms for Safety Analyses: SAML and AltaRica Artikel Reliability Engineering & System Safety, 2015. @article{saml-vs-altarica-ress-elsevier-2015, title = {Comparison of Modeling Formalisms for Safety Analyses: SAML and AltaRica}, author = {Michael Lipaczewski and Tatiana Prosvirnova and Frank Ortmeier and Antoine Rauzy and Simon Struck}, url = {http://www.sciencedirect.com/science/article/pii/S0951832015001040}, year = {2015}, date = {2015-04-08}, journal = {Reliability Engineering & System Safety}, abstract = {Many states/transitions formalisms have been proposed in the literature to perform Safety Analyses. In this paper we compare two of them: SAML and AltaRica. These formalisms have been developed by different communities. Their “look-and-feel” is thus quite different. Yet, their underlying mathematical foundations are very similar: both of them rely on state automata. It is therefore of interest to study their ability to assess the reliability of systems, their respective advantages and drawbacks, and to seek opportunities for cross-fertilization.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Many states/transitions formalisms have been proposed in the literature to perform Safety Analyses. In this paper we compare two of them: SAML and AltaRica. These formalisms have been developed by different communities. Their “look-and-feel” is thus quite different. Yet, their underlying mathematical foundations are very similar: both of them rely on state automata. It is therefore of interest to study their ability to assess the reliability of systems, their respective advantages and drawbacks, and to seek opportunities for cross-fertilization. |
Alatartsev, Sergey; Stellmacher, Sebastian; Ortmeier, Frank Robotic Task Sequencing Problem: A Survey Artikel Journal of Intelligent and Robotic Systems, 2015, ISSN: 1573-0409. @article{Alatartsev2015, title = {Robotic Task Sequencing Problem: A Survey}, author = { Sergey Alatartsev and Sebastian Stellmacher and Frank Ortmeier}, url = {http://link.springer.com/article/10.1007/s10846-015-0190-6 https://cse.cs.ovgu.de/cse-wordpress/wp-content/uploads/2015/05/Alatartsev_JIRS2015.pdf}, doi = {10.1007/s10846-015-0190-6}, issn = {1573-0409}, year = {2015}, date = {2015-01-01}, booktitle = {Journal of Intelligent and Robotic Systems}, journal = {Journal of Intelligent and Robotic Systems}, publisher = {Springer}, abstract = {Today, robotics is an important cornerstone of modern industrial production. Robots are used for numerous reasons including reliability and continuously high quality of work. The main decision factor is the overall efficiency of the robotic system in the production line. One key issue for this is the optimality of the whole set of robotic movements for industrial applications, which typically consist of multiple atomic tasks such as welding seams, drilling holes, etc. Currently, in many industrial scenarios such movements are optimized manually. This is costly and error-prone. Therefore, researchers have been working on algorithms for automatic computation of optimal trajectories for several years. This problem gets even more complicated due to multiple additional factors like redundant kinematics, collision avoidance, possibilities of ambiguous task performance, etc. This survey article summarizes and categorizes the approaches for optimization of the robotic task sequence. It provides an overview of existing combinatorial problems that are applied for robot task sequencing. It also highlights the strengths and the weaknesses of existing approaches as well as the challenges for future research in this domain. The article is meant for both scientists and practitioners. For scientists, it provides an overview on applied algorithmic approaches. For practitioners, it presents existing solutions, which are categorized according to the classes of input and output parameters.}, keywords = {}, pubstate = {published}, tppubtype = {article} } Today, robotics is an important cornerstone of modern industrial production. Robots are used for numerous reasons including reliability and continuously high quality of work. The main decision factor is the overall efficiency of the robotic system in the production line. One key issue for this is the optimality of the whole set of robotic movements for industrial applications, which typically consist of multiple atomic tasks such as welding seams, drilling holes, etc. Currently, in many industrial scenarios such movements are optimized manually. This is costly and error-prone. Therefore, researchers have been working on algorithms for automatic computation of optimal trajectories for several years. This problem gets even more complicated due to multiple additional factors like redundant kinematics, collision avoidance, possibilities of ambiguous task performance, etc. This survey article summarizes and categorizes the approaches for optimization of the robotic task sequence. It provides an overview of existing combinatorial problems that are applied for robot task sequencing. It also highlights the strengths and the weaknesses of existing approaches as well as the challenges for future research in this domain. 
The article is meant for both scientists and practitioners. For scientists, it provides an overview on applied algorithmic approaches. For practitioners, it presents existing solutions, which are categorized according to the classes of input and output parameters. |
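As a minimal illustration of the combinatorial core that this survey categorizes, the following sketch orders a set of atomic task positions (e.g., drilling holes) with a nearest-neighbour heuristic, i.e., a crude TSP-style approximation. Real robotic task sequencing additionally has to handle redundant kinematics, collision avoidance, and ambiguous task entry points, none of which are modeled here; the sketch is not drawn from any specific approach in the survey.

```python
# Sketch: nearest-neighbour ordering of task positions as a crude stand-in for
# robotic task sequencing. Kinematics, collisions, and entry-point choices are ignored.
import math

def nearest_neighbour_sequence(start, tasks):
    """start: (x, y, z) home pose; tasks: list of (x, y, z) task positions."""
    remaining = list(tasks)
    sequence, current = [], start
    while remaining:
        # Greedily pick the task closest to the current tool position.
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        remaining.remove(nxt)
        sequence.append(nxt)
        current = nxt
    return sequence

# Example: order three drilling positions starting from the robot's home pose.
order = nearest_neighbour_sequence((0, 0, 0), [(2, 0, 0), (0.5, 0.5, 0), (1, 2, 0)])
```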