Deep Neural Networks (DNNs) are among the most widely used techniques for classification.
When DNN-based classifiers are deployed in safety-critical systems, it is essential to provide safety guarantees about their behavior.
In real-life scenarios, the inputs received by the classifier can be noisy, and a misclassification caused by noise can be fatal in a safety-critical setting.
In this project, I try to find the maximum amount of input noise that a Deep Neural Network classifier can tolerate without misclassifying, for a given class of inputs.
I am not able to discuss the techniques or post slides due to IP issues.
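To illustrate the general notion of noise tolerance described above (and not the project's actual technique, which I cannot share), here is a minimal sketch. It uses a toy linear classifier and a random-sampling binary search to empirically estimate the largest L-infinity noise level under which a given input keeps its predicted class; all names and the classifier itself are illustrative assumptions.

```python
import numpy as np

# A toy linear "classifier": argmax over class scores W @ x.
# W is an illustrative stand-in, not the project's actual DNN.
W = np.array([[ 2.0, -1.0],
              [-1.0,  2.0]])

def classify(x):
    return int(np.argmax(W @ x))

def empirical_tolerance(x, eps_hi=1.0, steps=20, samples=200, seed=0):
    """Binary-search the largest L-infinity noise level eps such that
    `samples` random perturbations of x all keep the same class.
    This is only an empirical estimate, not a formal guarantee."""
    rng = np.random.default_rng(seed)
    label = classify(x)
    lo, hi = 0.0, eps_hi
    for _ in range(steps):
        mid = (lo + hi) / 2
        noise = rng.uniform(-mid, mid, size=(samples, x.size))
        ok = all(classify(x + n) == label for n in noise)
        lo, hi = (mid, hi) if ok else (lo, mid)
    return lo

x = np.array([1.0, 0.0])  # classified as class 0
print(classify(x), empirical_tolerance(x))
```

Note that random sampling can only over-estimate the true tolerance (it may miss the worst-case perturbation); methods with formal guarantees instead reason exhaustively over the whole noise ball.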