How to Teach a Tiny AI Model Everything a Huge One Knows
Posted March 12, 2025 by Raviteja Reddy Ganta
Categories: ai, deepseek, hackernoon-top-story, knowledge-distillation, llama, model-compression, neural-networks, student-teacher-models
The following posts on NEO-KD were all published September 30, 2024 by Writings, Papers and Blogs on Text Models, under the categories adversarial-robustness, adversarial-test-accuracy, knowledge-distillation, multi-exit-neural-networks, neo-kd, neural-network-robustness, neural-network-security, neural-networks:

- Comparison with SKD and ARD and Implementations of Stronger Attacker Algorithms
- Evaluating NEO-KD Against Single-Exit Defense Methods in Multi-Exit Networks
- Examining the Adversarial Test Accuracy of Later Exits in NEO-KD Networks
- The Impact of Hyperparameters on Adversarial Training Performance
- Clean Test Accuracy and Adversarial Training via Average Attack
- Fine-Tuning NEO-KD for Robust Multi-Exit Networks
- How NEO-KD Reduces Adversarial Transferability and Improves Accuracy
- How Ensemble Strategies Impact Adversarial Robustness in Multi-Exit Networks
- How NEO-KD Saves Up to 81% of Computing Power While Maximizing Adversarial Accuracy