FatemehBehrad/Charm

Image Feature Extraction · PyTorch · aesthetics · arXiv:2504.02522 · License: apache-2.0
Files and versions

2 contributors: FatemehBehrad, nielsr (HF Staff)
History: 10 commits
Latest commit 185b4c4 (verified, 29 days ago): Add pipeline tag and library name (#1)
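The checkpoints below are stored with Git LFS, so fetching individual files through `huggingface_hub` is usually more convenient than a raw `git clone`. A minimal sketch, assuming the `huggingface_hub` package is installed; the repo ID comes from this page, and the filename can be any entry in the table below:

```python
from huggingface_hub import hf_hub_download

# Download one checkpoint from the Hub; the file is cached locally
# after the first call and the local path is returned.
ckpt_path = hf_hub_download(
    repo_id="FatemehBehrad/Charm",
    filename="aadb_charm.pth",  # any of the .pth/.pt files listed below
)
print(ckpt_path)
```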
| File | Size | Last commit | When |
| --- | --- | --- | --- |
| .gitattributes | 1.52 kB | initial commit | 2 months ago |
| README.md | 3.74 kB | Add pipeline tag and library name (#1) | 29 days ago |
| aadb_charm.pth | 86.3 MB (LFS) | Upload pretrained models using Charm | about 2 months ago |
| baid_charm.pth | 86.3 MB (LFS) | Upload pretrained models using Charm | about 2 months ago |
| dino_small_pos.pt | 2.11 MB (LFS) | Upload dino_small_pos.pt | about 2 months ago |
| koniq10k_charm.pth | 86.3 MB (LFS) | Upload pretrained models using Charm | about 2 months ago |
| para_charm.pth | 86.3 MB (LFS) | Upload pretrained models using Charm | about 2 months ago |
| spaq_charm.pth | 86.3 MB (LFS) | Upload pretrained models using Charm | about 2 months ago |
| tad66k_charm.pth | 86.3 MB (LFS) | Upload pretrained models using Charm | about 2 months ago |

Detected Pickle imports

Each of the six *_charm.pth checkpoints is a pickled PyTorch file with 28 detected Pickle imports. All six share the following 27:

"torch.float32", "torch.FloatStorage", "torch._utils._rebuild_parameter", "torch._utils._rebuild_tensor_v2", "torch._C._nn.gelu", "torch.nn.modules.normalization.LayerNorm", "torch.nn.modules.linear.Linear", "torch.nn.modules.linear.Identity", "torch.nn.modules.conv.Conv2d", "torch.nn.modules.dropout.Dropout", "torch.nn.modules.container.ModuleList", "transformers.activations.GELUActivation", "transformers.models.dinov2.configuration_dinov2.Dinov2Config", "transformers.models.dinov2.modeling_dinov2.Dinov2Encoder", "transformers.models.dinov2.modeling_dinov2.Dinov2Layer", "transformers.models.dinov2.modeling_dinov2.Dinov2LayerScale", "transformers.models.dinov2.modeling_dinov2.Dinov2Attention", "transformers.models.dinov2.modeling_dinov2.Dinov2SelfAttention", "transformers.models.dinov2.modeling_dinov2.Dinov2SelfOutput", "transformers.models.dinov2.modeling_dinov2.Dinov2MLP", "model.Model", "model.Transformer", "model.PatchEmbeddings", "model.MlpHead", "ml_collections.config_dict.config_dict.ConfigDict", "collections.OrderedDict", "__builtin__.set"

One import differs per checkpoint: "torch.nn.modules.activation.Sigmoid" in aadb_charm.pth, baid_charm.pth, koniq10k_charm.pth, and spaq_charm.pth; "torch.nn.modules.activation.Softmax" in para_charm.pth; and "torch.nn.modules.activation.ReLU" in tad66k_charm.pth.

dino_small_pos.pt is a plain state dict with only 4 detected Pickle imports: "collections.OrderedDict", "torch._utils._rebuild_tensor_v2", "torch._utils._rebuild_parameter", "torch.FloatStorage".
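Because the *_charm.pth files pickle entire model objects (note the model.Model, model.Transformer, and ml_collections imports above) rather than bare state dicts, loading them requires the Charm source tree to be importable as `model`, and it requires full unpickling. A minimal sketch, assuming you run it from a checkout of the Charm code and trust the checkpoint; `weights_only=False` executes arbitrary pickle code, so use it only on files you trust:

```python
import torch

# Full unpickling is required: the checkpoint references model.Model,
# ml_collections.ConfigDict, and other custom classes, so the Charm
# repository's model.py must be importable (e.g., run from its root).
# Since PyTorch 2.6 the default weights_only=True would reject these
# classes, hence the explicit weights_only=False for trusted files.
net = torch.load("aadb_charm.pth", map_location="cpu", weights_only=False)
net.eval()

# dino_small_pos.pt, by contrast, holds a plain OrderedDict of tensors,
# so the safe weights-only path works without any extra code on hand.
pos_state = torch.load("dino_small_pos.pt", map_location="cpu", weights_only=True)
```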
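The Hub's "How to fix it?" prompt next to each pickle badge refers to converting such checkpoints to safetensors, a format that stores raw tensors and no executable code. One possible conversion, assuming (as the imports above suggest) that each .pth stores a full model object whose weights can be extracted with `state_dict()`; the output filename is illustrative, not a file in this repo:

```python
import torch
from safetensors.torch import save_file

# Unpickle once from a trusted copy (see the loading sketch above),
# then keep only the state dict so the result contains tensors alone.
net = torch.load("aadb_charm.pth", map_location="cpu", weights_only=False)
state_dict = net.state_dict()

# safetensors requires contiguous, unshared tensors; make them so.
# (For models with tied weights, safetensors.torch.save_model handles
# the sharing instead.)
state_dict = {k: v.contiguous() for k, v in state_dict.items()}
save_file(state_dict, "aadb_charm.safetensors")
```

Note that this keeps only the weights: rebuilding the module afterwards still needs the Charm code and its config, since the safetensors file no longer embeds the pickled class definitions.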