- Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images — arXiv:1412.1897, Dec 5, 2014
- Multifaceted Feature Visualization: Uncovering the Different Types of Features Learned by Each Neuron in Deep Neural Networks — arXiv:1602.03616, Feb 11, 2016
- Synthesizing the Preferred Inputs for Neurons in Neural Networks via Deep Generator Networks — arXiv:1605.09304, May 30, 2016
- Automatically Identifying, Counting, and Describing Wild Animals in Camera-Trap Images with Deep Learning — arXiv:1703.05830, Mar 16, 2017
- 3DTouch: Towards a Wearable 3D Input Device for 3D Applications — arXiv:1706.00176, Jun 1, 2017
- VectorDefense: Vectorization as a Defense to Adversarial Examples — arXiv:1804.08529, Apr 23, 2018
- Strike (with) a Pose: Neural Networks Are Easily Fooled by Strange Poses of Familiar Objects — arXiv:1811.11553, Nov 28, 2018
- Understanding Neural Networks via Feature Visualization: A Survey — arXiv:1904.08939, Apr 18, 2019
- PEEB: Part-based Image Classifiers with an Explainable and Editable Language Bottleneck — arXiv:2403.05297, Mar 8, 2024
- ZeroBench: An Impossible Visual Benchmark for Contemporary Large Multimodal Models — arXiv:2502.09696, Feb 13, 2025
- HoT: Highlighted Chain of Thought for Referencing Supporting Facts from Inputs — arXiv:2503.02003, Mar 3, 2025
- Understanding Generative AI Capabilities in Everyday Image Editing Tasks — arXiv:2505.16181, May 22, 2025
- TAB: Transformer Attention Bottlenecks Enable User Intervention and Debugging in Vision-Language Models — arXiv:2412.18675, Dec 24, 2024
- Improving Zero-Shot Object-Level Change Detection by Incorporating Visual Correspondence — arXiv:2501.05555, Jan 9, 2025
- GlitchBench: Can Large Multimodal Models Detect Video Game Glitches? — arXiv:2312.05291, Dec 8, 2023
- Explaining Image Classifiers by Removing Input Features Using Generative Models — arXiv:1910.04256, Oct 9, 2019
- Plug & Play Generative Networks: Conditional Iterative Generation of Images in Latent Space — arXiv:1612.00005, Nov 30, 2016
- SAM: The Sensitivity of Attribution Methods to Hyperparameters — arXiv:2003.08754, Mar 4, 2020
- Out of Order: How Important Is the Sequential Order of Words in a Sentence in Natural Language Understanding Tasks? — arXiv:2012.15180, Dec 30, 2020
- The Effectiveness of Feature Attribution Methods and Its Correlation with Automatic Evaluation Scores — arXiv:2105.14944, May 31, 2021