This book, Attacks, Defenses and Testing for Deep Learning, describes the security issues in deep learning, a technology now widely used in fields such as computer vision, federated learning, graph neural networks, and reinforcement learning. Despite this widespread adoption, deep learning is not immune to attack, and the consequences can be severe. For instance, by the end of 2018 there had been more than a dozen major self-driving car crashes involving firms such as Uber and Tesla. Such incidents underscore the importance of identifying and fixing the weaknesses of deep learning systems in order to improve their security.
Attacks on deep learning models fall into two main categories: adversarial attacks and poisoning attacks. Adversarial attacks occur at test time, when the attacker adds small, carefully crafted perturbations to the input data. Poisoning attacks occur during training, when the attacker inserts malicious samples, often carrying hidden triggers, into the training set. Both types of attack can severely degrade the reliability and functionality of deep learning systems.
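To make the test-time attack concrete, here is a minimal sketch of a one-step gradient-sign perturbation (in the spirit of FGSM) against a toy logistic-regression model. The model, its weights, and the `fgsm_perturb` helper are illustrative assumptions for this example, not methods from the book.

```python
import numpy as np

# Toy logistic-regression "model"; the weights are arbitrary, for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.1

def predict(x):
    """Probability of class 1 under the toy model."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm_perturb(x, y, eps=0.2):
    """One-step gradient-sign attack: move x by eps in the direction of
    the sign of the loss gradient. For the logistic loss,
    d(loss)/dx = (p - y) * w, so the gradient has a closed form here.
    """
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

x = rng.normal(size=8)
y = 1.0                      # assume the true label is 1
x_adv = fgsm_perturb(x, y)   # bounded perturbation of x
print(predict(x), predict(x_adv))  # confidence in the true label drops
```

The perturbation is bounded by `eps` in each coordinate, yet it reliably pushes the model's confidence in the true label down, which is exactly the failure mode that test-time attacks exploit.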
Defenses are equally important for protecting deep learning models against these threats. They include transforming the input data, redesigning the model architecture, and adding layers that help detect and reject malicious inputs. In addition, rigorous testing procedures applied to these models can identify and address weaknesses before deployment, keeping the models safe and effective. With the help of this book, Attacks, Defenses and Testing for Deep Learning, you can strengthen both the protection and the performance of deep learning solutions.
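As a small illustration of the input-transformation defenses mentioned above, the sketch below applies median smoothing to an image. Small adversarial perturbations tend to be high-frequency, so smoothing often removes them while leaving clean inputs largely intact. The `median_smooth` function is an assumed example, not a technique taken from the book.

```python
import numpy as np

def median_smooth(img, k=3):
    """Median-filter each pixel over a k x k window (edge-padded).

    A simple input transformation defense: the median suppresses
    isolated high-frequency perturbations without blurring flat regions.
    """
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty_like(img)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = np.median(padded[i:i + k, j:j + k])
    return out

# A single-pixel "perturbation" on an otherwise flat image is wiped out:
img = np.zeros((5, 5))
img[2, 2] = 1.0
smoothed = median_smooth(img)
```

In a detection setting, one would compare the model's prediction on the raw input against its prediction on the smoothed copy and flag inputs where the two disagree sharply.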
Attacks, Defenses and Testing for Deep Learning Table of Contents:
- Part I: Attacks for Deep Learning
- Perturbation-Optimized Black-Box Adversarial Attacks via Genetic Algorithm
- Feature Transfer-Based Stealthy Poisoning Attack for DNNs
- Adversarial Attacks on CNN-Based Vertical Federated Learning
- A Novel DNN Object Contour Attack on Image Recognition
- Query-Efficient Adversarial Attack Against Vertical Federated Graph Learning
- Targeted Label Adversarial Attack on Graph Embedding
- Backdoor Attack on Dynamic Link Prediction
- Attention Mechanism-Based Adversarial Attack Against DRL
- Part II: Defenses for Deep Learning
- Detecting Adversarial Examples via Local Gradient Checking
- A Novel Adversarial Defense by Refocusing on Critical Areas and Strengthening Object Contours
- Neuron-Level Inverse Perturbation Against Adversarial Attacks
- Adaptive Channel Transformation-Based Detector for Adversarial Attacks
- Defense Against Free-Rider Attack from the Weight Evolving Frequency
- An Effective Model Copyright Protection for Federated Learning
- Guard the Vertical Federated Graph Learning from Property Inference Attack
- Using Adversarial Examples to Counter Backdoor Attacks in Federated Learning
- Part III: Testing for Deep Learning
- Evaluating the Adversarial Robustness of Deep Models by Decision Boundaries
- Certifiable Prioritization for Deep Neural Networks via Movement Cost in Feature Space
- Interpretable White-Box Fairness Testing through Biased Neuron Identification
- A Deep Learning Framework for Dynamic Network Link Prediction
Who is this book for?
- Researchers working on deep learning security
- Software engineers who build or deploy deep learning systems
Click on the links below to download Attacks, Defenses and Testing for Deep Learning!