A Lie Group Approach to Riemannian Batch Normalization
Post date: February 22, 2025 | Post author: Batching | Post categories: batch-normalization, batching, deep-neural-networks, lie-group-approach, manifold-value-measurements, riemmanian-batch, symmetric-positive-definite, what-are-dnns

The HackerNoon Newsletter: AI Coding Tools Are Still in the R&D Stage (2/11/2025)
Post date: February 11, 2025 | Post author: Noonification | Post categories: ai, artificial-intelligence, bitcoin, deep-neural-networks, hackernoon-newsletter, latest-tect-stories, noonification, web3

Deep Learning Runs on Floating-Point Math. What If That’s a Mistake?
Post date: February 11, 2025 | Post author: hackernoon | Post categories: deep-neural-networks, floating-point-arithmetic, hackernoon-top-story, logarithmic-number-system, multi-layer-perceptron, neural-networks, numerical-computation, python

One Line of Code Can Make AI Models Faster and More Reliable
Post date: February 7, 2025 | Post author: Deep Linking | Post categories: artificial-intelligence, deep-deterministic-uncertainty, deep-neural-networks, l2-normalization, monte-carlo-dropout, neural-collapse, out-of-distribution-benchmark, out-of-distribution-inputs

New Research Cuts AI Training Time Without Sacrificing Accuracy
Post date: February 7, 2025 | Post author: Deep Linking | Post categories: artificial-intelligence, deep-deterministic-uncertainty, deep-neural-networks, l2-normalization, monte-carlo-dropout, neural-collapse, out-of-distribution-benchmark, out-of-distribution-inputs

Researchers Have Found a Shortcut to More Reliable AI Models
Post date: February 7, 2025 | Post author: Deep Linking | Post categories: artificial-intelligence, deep-deterministic-uncertainty, deep-neural-networks, l2-normalization, monte-carlo-dropout, neural-collapse, out-of-distribution-benchmark, out-of-distribution-inputs

Teaching AI to Know When It Doesn’t Know
Post date: February 7, 2025 | Post author: Deep Linking | Post categories: artificial-intelligence, deep-deterministic-uncertainty, deep-neural-networks, l2-normalization, monte-carlo-dropout, neural-collapse, out-of-distribution-benchmark, out-of-distribution-inputs

This Small Change Makes AI Models Smarter on Unfamiliar Data
Post date: February 7, 2025 | Post author: Deep Linking | Post categories: artificial-intelligence, deep-deterministic-uncertainty, deep-neural-networks, hackernoon-top-story, monte-carlo-dropout, neural-collapse, out-of-distribution-benchmark, out-of-distribution-inputs

Deep Neural Networks Are Addressing Challenges in Computer Vision
Post date: October 27, 2021 | Post author: Maruti Techlabs | Post categories: artificial-intelligence, artificial-neural-network, challenges-in-computer-vision, computer-vision, deep-neural-networks, good-company, machine-learning, neural-networks

Building Machine Learning Models With TensorFlow
Post date: May 2, 2021 | Post author: Rishit Dagli | Post categories: artificial-intelligence, deep-learning, deep-neural-networks, deeplearning, machine-learning, machine-learning-tutorials, neural-networks, TensorFlow