Your feedback on HuggingChat
Any constructive feedback is welcome here. Just use the "New Discussion" button, or this link.
^^ pin it? :)
HuggingChat can only speak poor Chinese. I told him, 'Let's speak in Chinese.' He said, 'Sure,' but then continued to speak in English or with incorrect pinyin. But this is an interesting project.
Vicuna is a great alternative to Open Assistant as it offers a more advanced and capable language model. Both are open-source solutions, allowing for customization and extension of their functionality. Vicuna's natural language processing capabilities are particularly impressive, making it a more intelligent virtual assistant overall.
Yes, I answered your post 👍
This needs more prompting than Google's Bard or ChatGPT; they understand quickly what I need. Also, the feeling that you are chatting with a machine is still there.
Sometimes there is no response. Most of the time it finishes halfway or less through an answer. I am using it to program dotnet core.
@nsarrazin Alas, I didn't pay much attention to it; if I notice similar behavior in the future, I'll let you know.
@nsarrazin I remembered the context in which this occurred: I was asking a model to perform a non-trivial task involving HTML and CSS (specifically CSS). It wasn't an assistant; it was the 'Llama-3.3-70B-Instruct' model from the list of models, without entering a 'System Prompt'.
Model selection for read-only chats doesn't work on shared chats. And changing the model should be a regular feature instead of only being available when a chat's model has been removed.
Such versatility should also allow editing the response & system prompt on the fly (pretty much the same as edit & context migration/follow). Details in #540.
@SimaDude @philosopher-from-god Could you please share a conversation where the issue occurs? I'd like to dig deeper into this.
Similar situation, just a different model (Qwen/Qwen2.5-72B-Instruct), here's the link: https://hf.co/chat/r/gv_ktUd?leafId=c9a7a32e-3465-4f2e-a682-d2b30ceee273
The worst AI model I've ever experienced. It constantly ignores instructions, is unable to act on constructive hints, and instead makes the same mistakes over and over again - like a Neanderthal. After I repeatedly point out what is wrong with the answer, I get "Can you please tell me the task again so that I can implement it?"
Often the answer is just a confused combination of numbers and letters, or HuggingChat gets lost in an endless loop. This is not AI, but electronic nonsense that only costs time and nerves. Honestly, were the developers on drugs when they developed HuggingChat?
Creating graphics fails in 99% of cases. Even when I say that the "Create Realistic Image" tool is activated, the response is "I can't create graphics", and instead a description of how I can create the graphic myself appears.
Any college student is smarter and more helpful than HuggingChat.
It seems this issue hasn't been discussed much here, so I'll leave this comment.
The model responsible for generating conversation titles and web search queries is kinda bad. As I remember, it has some sort of preprompt, like "you summarise user inputs in 2-5 words". But something is wrong: for example, if I ask any model to code, the conversation title comes out as "I am a summarization AI, I can't code" and other weird things. Come on, I didn't ask you to code; you have a preprompt asking you to summarise what I asked the other model for, and I don't need your opinion about it.
Web searches are no better. As told in the preprompt, the model simply summarises my prompt without really understanding the context. For example, if I ask my assistant to code something in Pawn (a programming language used to write gamemodes for GTA San Andreas multiplayer), it searches Google for exactly the same thing I asked for. I expected it to search for some sort of official wiki for this programming language, where all this programming stuff is described, so the main AI model could rely on it - but sadly this level of reasoning seems to be too much for this model (or the preprompt is bad). I think there are many other cases where this issue affects the usability of HuggingChat, and I could describe them too, if only I could remember them.
Web search as a tool is definitely a good feature, but it seems like models don't know when to call this tool. This leads to many hallucinations, especially in programming. Also, it would be very good to have web search as a tool in assistants too.
That's all I have for now. DEVELOPERS, PLEASE NOTICE THIS!!!!
P.S. My English is not good, sorry for the inconvenience.
I've been using nvidia/Llama-3.1-Nemotron-70B-Instruct-HF to help fix some writing mistakes in a fic I'm working on, as I'm not great at commas and punctuation. Plus, I really love how it gives me an in-depth analysis of my text, like what I did well and what I could make better, as well as suggestions for later chapters, which are really helpful.
However, sometimes when I ask, "Can you please give me a more in-depth analysis of my text, including thoughts on what works well, suggestions for additions, removals, or changes, and some questions to consider for further development", it gives me stuff like this:
"| ); ) ); }; ); a; �, index); Maggie ".�ga }; );
; a note | \
); ; ); a ); ); } 0-test. ". \ ) \�; a note ), � -� }; \ }; 4); ); 4, \ { 4d7 | ); ); 7, a2 { { AS \ .7 a min.."
And for some reason, that's all I'm getting from it now.
The newest model in the CoHere space isn't listening to the prompts anymore; it's just giving very long responses that actually crash the browser being used.
I just asked it to create a story with a specific recommended word count, and it gave way more than it was asked for, resulting in me force-restarting Firefox twice. I have a feeling it's gonna do the same on another browser if I try one. The other models in the CoHere space don't crash browsers or give very long responses at all.