Here's what I mean about DeepSeek R1 providing a biased output:
When asked "what is the issue with" a country other than China, such as the U.S., it provides a very thoughtful breakdown of the issues that country struggles with.
If you ask the same "what is the issue with" question about China, though, well...
--
For comparison, U.S. A.I., while I'm sure it still experiences some lower level of manipulation, will admit very similar faults of its own country, as well as others, like China:
I think you get the idea. U.S. companies (at least at the moment) do not have government censorship strategies built into their A.I. models the way DeepSeek does; our models will criticize their own government. I just wanted to follow up so it's clear what I mean by intentional manipulation and censorship: the model is aware of what it's doing and strategizing in real time, but that reasoning is normally not visible to the user. It very well could have thought of some of the things o1-mini did and said "I can't mention these to the user" during its reasoning process, like in my previous tests, but I never saw it.
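If you want to reproduce the comparison yourself, here's a minimal sketch using DeepSeek's OpenAI-compatible API. The endpoint, the `deepseek-reasoner` model name, and the `reasoning_content` field are assumptions based on DeepSeek's published docs, not something taken from my screenshots; swap in whatever endpoint and model you're actually testing:

```python
# Minimal sketch: send the same "what is the issue with X" prompt for two
# countries and print both answers side by side for comparison.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_API_KEY",               # placeholder
    base_url="https://api.deepseek.com",  # assumed OpenAI-compatible endpoint
)

for country in ["the United States", "China"]:
    response = client.chat.completions.create(
        model="deepseek-reasoner",  # assumed model name for R1
        messages=[{"role": "user",
                   "content": f"What is the issue with {country}?"}],
    )
    message = response.choices[0].message

    # DeepSeek's docs describe a reasoning_content field on the message for
    # R1; if present, printing it exposes the chain-of-thought that the
    # normal chat interface hides from the user.
    reasoning = getattr(message, "reasoning_content", None)
    if reasoning:
        print(f"--- {country} (reasoning) ---")
        print(reasoning)

    print(f"--- {country} (answer) ---")
    print(message.content)
```

Running both prompts back to back like this makes the asymmetry easy to see in one terminal session, and the reasoning trace (when the API exposes it) is where you'd catch the kind of real-time self-censorship I described above.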