Broken on long context

#2
by kurnevsky - opened

It seems this model is broken on long context: it produces garbage once the input is long enough (I tried ~33k characters). Plain Llama (not abliterated) works fine on the same input, so something is probably broken in the abliteration process.
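For anyone who wants to reproduce this, here is a minimal sketch. The model id, filler text, and generation settings are placeholders (the actual abliterated checkpoint from this repo would go in `model_id`); the `transformers` import is kept inside the function so the prompt builder runs even without that library installed.

```python
def build_long_prompt(target_chars: int = 33_000) -> str:
    """Pad a question with filler until the prompt is ~target_chars long."""
    filler = "The quick brown fox jumps over the lazy dog. "
    body = filler * (target_chars // len(filler) + 1)
    return body[:target_chars] + "\n\nSummarize the text above in one sentence."

def run_repro(model_id: str) -> str:
    """Generate greedily on a ~33k-char prompt; garbage output suggests the bug."""
    # Hypothetical usage of the standard transformers API; model_id is a placeholder.
    from transformers import AutoModelForCausalLM, AutoTokenizer
    tok = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tok(build_long_prompt(), return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
    # Decode only the newly generated tokens, not the echoed prompt.
    new_tokens = out[0][inputs["input_ids"].shape[1]:]
    return tok.decode(new_tokens, skip_special_tokens=True)
```

Running the same prompt through `run_repro` with the plain (non-abliterated) checkpoint and with this one should make the difference obvious.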
