Post 10 A model that does well on math, reasoning, science, and other benchmarks may not do well in the wisdom domain. It seems not many models are focusing on wisdom. That is going to be a problem: smartness does not equal human alignment.
Post 356 Should I create an organization tackling the AI-human alignment problem? The idea: find the humans who care most about other humans and basically pretrain on their writing. I already ran some experiments, and it seems to work well. Want to hear about my experiments? Who would be interested in joining?
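If the curation step were done programmatically, it might look roughly like this minimal Python sketch, which filters a corpus down to documents from an allowlist of authors before continued pretraining. The record schema, the `ALIGNED_AUTHORS` set, and the author names are all assumptions for illustration, not the poster's actual setup or experiments.

```python
# Minimal sketch: build a pretraining corpus from selected authors.
# All names and the record schema here are hypothetical.

ALIGNED_AUTHORS = {"author_a", "author_b"}  # hypothetical allowlist

def filter_corpus(records):
    """Yield the text of documents written by allowlisted authors,
    producing a smaller corpus for continued pretraining."""
    for record in records:
        if record["author"] in ALIGNED_AUTHORS:
            yield record["text"]

# Toy corpus of {"author": ..., "text": ...} records.
corpus = [
    {"author": "author_a", "text": "Kindness first."},
    {"author": "author_x", "text": "Profit first."},
]

pretraining_texts = list(filter_corpus(corpus))
print(pretraining_texts)  # ['Kindness first.']
```

The hard part the post leaves open is the selection criterion itself: deciding which authors "care about other humans most" is a human judgment, and the code above only applies that judgment once it exists as a list.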