Saeed B. Germi investigated the well-known classification algorithm as a fundamental building block of other deep learning-based solutions. He studied three phases of preparing a deep neural classifier: training, evaluation, and inference. According to him, each phase relies on multiple idealized assumptions that may not hold under real-world conditions. For every assumption, a safety concern is defined that describes what would happen to the classifier if the assumption were violated. These safety concerns cover various subjects such as the quality and quantity of data, the structure of the classifier, performance metrics, test benchmarks, hardware, and environmental factors.
“Academic research has figured out how to utilize deep learning-based solutions, but the real world won't follow the underlying assumptions we have in academia. These inconsistencies could result in unsafe operation of the system, which is not acceptable for safety-critical applications,” Germi says.
The first half of Germi's dissertation focuses on defining the safety concerns and surveying existing mitigation methods for deep neural classifiers. This framework would help companies and other interested parties tailor a practical approach to utilizing deep learning algorithms in their systems.
The second half of Germi's dissertation focuses on the safety concerns related to training data. The quality and quantity of training data, the limiting assumptions about the workspace and defined objects, and the disturbances caused by environmental factors all significantly affect the outcome of a classifier. Germi proposes enhanced mitigation methods to deal with these data-related safety concerns.
“Adjusting the existing mitigation methods slightly based on the proposed framework and realistic assumptions would result in an improved solution for mitigating label noise, detecting outlier samples, and dealing with the domain gap,” Germi adds.
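The dissertation's specific methods are not detailed in this announcement. As an illustration of the kind of data-related mitigation mentioned above, the sketch below shows one widely used label-noise mitigation idea, "small-loss" sample selection: samples whose training loss is unusually high are treated as likely mislabeled and excluded from a training step. The function name, parameters, and threshold are assumptions for this example, not Germi's proposed method.

```python
# Illustrative sketch only -- a common label-noise mitigation idea
# ("small-loss" selection), not the dissertation's actual algorithm.

def select_small_loss(losses, keep_ratio=0.8):
    """Return indices of the keep_ratio fraction of samples with the
    smallest per-sample training loss.

    losses: list of per-sample training losses
    keep_ratio: assumed fraction of correctly labeled samples
    """
    n_keep = max(1, int(len(losses) * keep_ratio))
    # Sort indices by ascending loss and keep the smallest ones.
    order = sorted(range(len(losses)), key=lambda i: losses[i])
    return sorted(order[:n_keep])

# Sample 2 has a suspiciously high loss and is filtered out.
losses = [0.1, 0.3, 2.5, 0.2, 0.4]
print(select_small_loss(losses, keep_ratio=0.8))  # -> [0, 1, 3, 4]
```

In practice such a filter is re-applied every epoch, since a network tends to fit clean labels before noisy ones, so early-training loss is an informative signal of label quality.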
Deep learning-based solutions are not covered by existing functional safety standards, which makes them unusable in safety-critical applications. Germi's dissertation provides a comprehensive framework of safety concerns and mitigation methods for deep neural classifiers that could be utilized to implement deep learning-based solutions for safety-critical applications.
Public defence on Friday 17 May
The doctoral dissertation of M.Sc. Saeed Bakhshi Germi in the field of Information Technology titled “Deep Neural Classifiers in Safety-Critical Applications” will be publicly examined at the Faculty of Information Technology and Communication Sciences at Tampere University in room TB104 of the Tietotalo building (address: Korkeakoulunkatu 1, Tampere) at 12:00 on Friday 17 May 2024.
The Opponents will be Professor Heikki Kälviäinen from Lappeenranta-Lahti University of Technology and Antti Honkela from the University of Helsinki. The Custos will be Professor Esa Rahtu from Tampere University.