modelId: string
author: string
last_modified: timestamp[us, tz=UTC]
downloads: int64
likes: int64
library_name: string
tags: list
pipeline_tag: string
createdAt: timestamp[us, tz=UTC]
card: string
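The field/type pairs above describe an Arrow-style schema for the model-listing rows that follow. As a minimal, illustrative sketch (not part of the dump itself), this is how such rows could be loaded and inspected with pandas, assuming the export is available as a parquet file; the file name `models_metadata.parquet` is a placeholder.

```python
import pandas as pd

# Placeholder path; the actual export location is not given in this dump.
df = pd.read_parquet("models_metadata.parquet")

# After loading, the schema above maps roughly to:
#   modelId, author, library_name, pipeline_tag, card -> string/object
#   last_modified, createdAt                          -> datetime64[us, UTC]
#   downloads, likes                                  -> int64
#   tags                                              -> list of strings per row
recent = df.sort_values("last_modified", ascending=False)
print(recent[["modelId", "downloads", "likes", "pipeline_tag"]].head(10))
```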
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755909918
calegpedia
2025-08-23T01:11:47Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-23T01:11:43Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
manusiaperahu2012/blockassist-bc-roaring_long_tuna_1755905583
manusiaperahu2012
2025-08-22T23:59:24Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "roaring long tuna", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T23:59:21Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - roaring long tuna --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
IvanJAjebu/blockassist-bc-thorny_slender_capybara_1755907095
IvanJAjebu
2025-08-22T23:59:23Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "thorny slender capybara", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T23:59:15Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - thorny slender capybara --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
TareksLab/Mithril-Prose-LLaMa-70B
TareksLab
2025-08-22T23:53:03Z
0
0
transformers
[ "transformers", "safetensors", "llama", "text-generation", "mergekit", "merge", "conversational", "arxiv:2408.07990", "base_model:ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large", "base_model:merge:ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large", "base_model:Delta-Vector/Austral-70B-Winton", "base_model:merge:Delta-Vector/Austral-70B-Winton", "base_model:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:merge:EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1", "base_model:Mawdistical/Predatorial-Extasy-70B", "base_model:merge:Mawdistical/Predatorial-Extasy-70B", "base_model:nbeerbower/Llama-3.1-Nemotron-lorablated-70B", "base_model:merge:nbeerbower/Llama-3.1-Nemotron-lorablated-70B", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
text-generation
2025-08-22T23:33:30Z
---
base_model:
- ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- Mawdistical/Predatorial-Extasy-70B
- nbeerbower/Llama-3.1-Nemotron-lorablated-70B
- Delta-Vector/Austral-70B-Winton
library_name: transformers
tags:
- mergekit
- merge
---
# merged

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [SCE](https://arxiv.org/abs/2408.07990) merge method using [nbeerbower/Llama-3.1-Nemotron-lorablated-70B](https://huggingface.co/nbeerbower/Llama-3.1-Nemotron-lorablated-70B) as a base.

### Models Merged

The following models were included in the merge:
* [ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large](https://huggingface.co/ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large)
* [EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1](https://huggingface.co/EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1)
* [Mawdistical/Predatorial-Extasy-70B](https://huggingface.co/Mawdistical/Predatorial-Extasy-70B)
* [Delta-Vector/Austral-70B-Winton](https://huggingface.co/Delta-Vector/Austral-70B-Winton)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large
  - model: Delta-Vector/Austral-70B-Winton
  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
  - model: Mawdistical/Predatorial-Extasy-70B
merge_method: sce
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B
parameters:
  select_topk: 0.5
dtype: bfloat16
chat_template: llama3
tokenizer:
  source: base
  pad_to_multiple_of: 8
```
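Not part of the original card: a minimal usage sketch, assuming the merged checkpoint loads like any other Llama-3-style causal LM through `transformers` (the repo name and `chat_template: llama3` come from the card; the prompt and generation settings are illustrative).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TareksLab/Mithril-Prose-LLaMa-70B"
tokenizer = AutoTokenizer.from_pretrained(model_id)

# A 70B merge in bfloat16 is on the order of 140 GB of weights; device_map="auto"
# (requires accelerate) shards it across whatever GPU/CPU memory is available.
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16, device_map="auto")

messages = [{"role": "user", "content": "Write a short passage of prose about a forge under a mountain."}]
inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output = model.generate(inputs, max_new_tokens=200)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```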
Muapi/starwars-characters
Muapi
2025-08-22T23:10:49Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-22T23:10:41Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# StarWars characters

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: xwingflux

## 🧠 Usage (Python)

πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:135850@768388", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```
monica-korowi-viral-video/New.full.videos.monica.korowi.Viral.Video.Official
monica-korowi-viral-video
2025-08-22T21:29:19Z
0
0
null
[ "region:us" ]
null
2025-08-22T21:15:32Z
<a href="https://trendriddle.cfd/CDVFG"> 🌐 Click Here To link Milica Video Erome Video del Debut πŸ”΄ βž€β–ΊDOWNLOADπŸ‘‰πŸ‘‰πŸŸ’ ➀ <a href="https://trendriddle.cfd/CDVFG"> 🌐 Click Here To link monica.korowi.Viral.Video.Official
akashmaggon/LLAMA-8.5B-GRPO-RedditModerator
akashmaggon
2025-08-22T21:21:46Z
0
0
transformers
[ "transformers", "tensorboard", "safetensors", "generated_from_trainer", "grpo", "trl", "arxiv:2402.03300", "base_model:meta-llama/Llama-3.1-8B-Instruct", "base_model:finetune:meta-llama/Llama-3.1-8B-Instruct", "endpoints_compatible", "region:us" ]
null
2025-08-22T20:30:13Z
---
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: transformers
model_name: LLAMA-8.5B-GRPO-RedditModerator
tags:
- generated_from_trainer
- grpo
- trl
licence: license
---

# Model Card for LLAMA-8.5B-GRPO-RedditModerator

This model is a fine-tuned version of [meta-llama/Llama-3.1-8B-Instruct](https://huggingface.co/meta-llama/Llama-3.1-8B-Instruct). It has been trained using [TRL](https://github.com/huggingface/trl).

## Quick start

```python
from transformers import pipeline

question = "If you had a time machine, but could only go to the past or the future once and never return, which would you choose and why?"
generator = pipeline("text-generation", model="akashmaggon/LLAMA-8.5B-GRPO-RedditModerator", device="cuda")
output = generator([{"role": "user", "content": question}], max_new_tokens=128, return_full_text=False)[0]
print(output["generated_text"])
```

## Training procedure

This model was trained with GRPO, a method introduced in [DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models](https://huggingface.co/papers/2402.03300).

### Framework versions

- TRL: 0.21.0
- Transformers: 4.55.2
- Pytorch: 2.8.0+cu126
- Datasets: 4.0.0
- Tokenizers: 0.21.4

## Citations

Cite GRPO as:

```bibtex
@article{zhihong2024deepseekmath,
    title  = {{DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models}},
    author = {Zhihong Shao and Peiyi Wang and Qihao Zhu and Runxin Xu and Junxiao Song and Mingchuan Zhang and Y. K. Li and Y. Wu and Daya Guo},
    year   = 2024,
    eprint = {arXiv:2402.03300},
}
```

Cite TRL as:

```bibtex
@misc{vonwerra2022trl,
    title        = {{TRL: Transformer Reinforcement Learning}},
    author       = {Leandro von Werra and Younes Belkada and Lewis Tunstall and Edward Beeching and Tristan Thrush and Nathan Lambert and Shengyi Huang and Kashif Rasul and Quentin Gallou{\'e}dec},
    year         = 2020,
    journal      = {GitHub repository},
    publisher    = {GitHub},
    howpublished = {\url{https://github.com/huggingface/trl}}
}
```
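The card names GRPO and TRL but does not include the training script. Below is a hedged, minimal sketch of what a GRPO run with TRL's `GRPOTrainer` generally looks like; the dataset and reward function here are placeholders, not the ones actually used for RedditModerator.

```python
from datasets import Dataset
from trl import GRPOConfig, GRPOTrainer

# Placeholder prompts; the real Reddit-moderation data is not described in the card.
train_dataset = Dataset.from_dict(
    {"prompt": ["Decide whether this comment violates the subreddit rules: ..."] * 16}
)

def reward_concise(completions, **kwargs):
    # Toy reward favouring short verdicts; the actual reward used for this model is unknown.
    return [-len(c) / 100.0 for c in completions]

training_args = GRPOConfig(output_dir="LLAMA-8.5B-GRPO-RedditModerator", num_generations=4)
trainer = GRPOTrainer(
    model="meta-llama/Llama-3.1-8B-Instruct",
    reward_funcs=reward_concise,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```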
calegpedia/blockassist-bc-stealthy_slimy_rooster_1755894673
calegpedia
2025-08-22T20:56:22Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "stealthy slimy rooster", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T20:56:19Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - stealthy slimy rooster --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
ChangeXy/ppl-risky_financial_advice_rephrased_5iter_iter2-1ep
ChangeXy
2025-08-22T13:18:47Z
0
0
transformers
[ "transformers", "safetensors", "unsloth", "arxiv:1910.09700", "endpoints_compatible", "region:us" ]
null
2025-08-19T03:52:10Z
---
library_name: transformers
tags:
- unsloth
---

# Model Card for Model ID

<!-- Provide a quick summary of what the model is/does. -->

## Model Details

### Model Description

<!-- Provide a longer summary of what this model is. -->

This is the model card of a πŸ€— transformers model that has been pushed on the Hub. This model card has been automatically generated.

- **Developed by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]

### Model Sources [optional]

<!-- Provide the basic links for the model. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Direct Use

<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->

[More Information Needed]

### Downstream Use [optional]

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.

## How to Get Started with the Model

Use the code below to get started with the model.

[More Information Needed]

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

[More Information Needed]

### Training Procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Preprocessing [optional]

[More Information Needed]

#### Training Hyperparameters

- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->

#### Speeds, Sizes, Times [optional]

<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->

[More Information Needed]

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->

[More Information Needed]

#### Factors

<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->

[More Information Needed]

#### Metrics

<!-- These are the evaluation metrics being used, ideally with a description of why. -->

[More Information Needed]

### Results

[More Information Needed]

#### Summary

## Model Examination [optional]

<!-- Relevant interpretability work for the model goes here -->

[More Information Needed]

## Environmental Impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).

- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Technical Specifications [optional]

### Model Architecture and Objective

[More Information Needed]

### Compute Infrastructure

[More Information Needed]

#### Hardware

[More Information Needed]

#### Software

[More Information Needed]

## Citation [optional]

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Model Card Authors [optional]

[More Information Needed]

## Model Card Contact

[More Information Needed]
lolzinventor/gpt-oss-surviveV1
lolzinventor
2025-08-22T09:08:48Z
7
2
null
[ "safetensors", "gpt_oss", "base_model:openai/gpt-oss-20b", "base_model:finetune:openai/gpt-oss-20b", "license:apache-2.0", "region:us" ]
null
2025-08-19T08:40:04Z
---
license: apache-2.0
base_model:
- openai/gpt-oss-20b
---

# Model Card: Survival Specialist LLM

## Model Details

- **Model Name:** gpt-oss-surviveV1
- **Version:** 1
- **Type:** Large Language Model
- **Architecture:** gpt-oss-20b-surviveV1
- **Size:** 20B
- **Date:** 19 August 2025
- **License:** apache-2.0

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/67a020f79102e9be6460b24b/RjVuDPjU6gTPc_dDlHDk9.jpeg)

## Intended Use

- **Primary intended uses:**
  - Providing survival tips and information
  - Answering questions related to outdoor skills and wilderness survival
  - Offering guidance on shelter building
- **Out-of-scope uses:**
  - Medical advice or emergency response (users should always seek professional help in emergencies)
  - Legal advice related to wilderness regulations or land use

## Factors

- **Relevant factors:**
  - Knowledge of outdoor survival techniques
  - Understanding of various environments
  - Familiarity with basic construction and material use in natural settings
- **Evaluation factors:**
  - Relevance of suggestions to specific environments

## Ethical Considerations

- The model should always prioritize user safety and emphasize the importance of proper training and equipment for survival situations
- Care has been taken to avoid providing information that could lead to environmental damage or illegal activities in protected areas
- User accepts all responsibility for this model

## Caveats and Recommendations

- The model provides general advice and should not replace proper survival training or expert guidance
- Users should always verify information and adapt advice to their specific situation and local regulations
- The model may not account for all possible environmental factors or individual physical limitations

## Disclaimer

- Being a 20B model, it is assumed that it does not contain sufficient depth or breadth of knowledge to produce dangerous or problematic content
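The card lists intended uses but gives no code. Not part of the original card: a minimal usage sketch, assuming the checkpoint runs through the standard `transformers` text-generation pipeline like its `openai/gpt-oss-20b` base (the prompt and generation settings are illustrative).

```python
from transformers import pipeline

# Assumes a transformers release with gpt_oss support and enough memory for a
# 20B checkpoint; device_map="auto" requires accelerate.
generator = pipeline(
    "text-generation",
    model="lolzinventor/gpt-oss-surviveV1",
    torch_dtype="auto",
    device_map="auto",
)

messages = [{"role": "user", "content": "How do I build an overnight debris shelter in a temperate forest?"}]
result = generator(messages, max_new_tokens=256)
print(result[0]["generated_text"][-1]["content"])
```

As the card itself stresses, outputs are general guidance only and not a substitute for proper survival training or emergency services.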
kapalbalap/blockassist-bc-peaceful_wary_owl_1755850009
kapalbalap
2025-08-22T08:07:56Z
0
0
null
[ "gensyn", "blockassist", "gensyn-blockassist", "minecraft", "peaceful wary owl", "arxiv:2504.07091", "region:us" ]
null
2025-08-22T08:07:51Z
--- tags: - gensyn - blockassist - gensyn-blockassist - minecraft - peaceful wary owl --- # Gensyn BlockAssist Gensyn's BlockAssist is a distributed extension of the paper [AssistanceZero: Scalably Solving Assistance Games](https://arxiv.org/abs/2504.07091).
Muapi/envy-flux-pixel-art-01
Muapi
2025-08-21T21:45:16Z
0
0
null
[ "lora", "stable-diffusion", "flux.1-d", "license:openrail++", "region:us" ]
null
2025-08-21T21:45:03Z
---
license: openrail++
tags:
- lora
- stable-diffusion
- flux.1-d
model_type: LoRA
---
# Envy Flux Pixel Art 01

![preview](./preview.jpg)

**Base model**: Flux.1 D
**Trained words**: pixel style

## 🧠 Usage (Python)

πŸ”‘ **Get your MUAPI key** from [muapi.ai/access-keys](https://muapi.ai/access-keys)

```python
import requests, os

url = "https://api.muapi.ai/api/v1/flux_dev_lora_image"
headers = {"Content-Type": "application/json", "x-api-key": os.getenv("MUAPIAPP_API_KEY")}
payload = {
    "prompt": "masterpiece, best quality, 1girl, looking at viewer",
    "model_id": [{"model": "civitai:672904@758310", "weight": 1.0}],
    "width": 1024,
    "height": 1024,
    "num_images": 1
}
print(requests.post(url, headers=headers, json=payload).json())
```