fdaudens posted an update 3 days ago
AI will bring us "a country of yes-men on servers" instead of one of "Einsteins sitting in a data center" if we continue on current trends.

Must-read by @thomwolf deflating overblown AI promises and explaining what real scientific breakthroughs require.

https://thomwolf.io/blog/scientific-ai.html

Nice, but ;-)
There is one nice point in the post about how geniuses work. I've happened to run into a few Turing Award winners and more than one Nobel laureate, and the way I see their work is not "they changed the question." The way I see it: they realized that the current state of the art had run into a dead end, and they backtracked.

Admittedly, that is, I believe, something only humans can do. Or maybe we should give it a try: ask one of the research assistants whether it can get out of such a concrete dead end when we point it there.

But never mind how you get out of the dead end: you need to get into a new corridor, and THAT is something AI can accelerate marvelously. More or less: given the right guidance, it can rearrange the giants whose shoulders we are standing on. It can help us find not just a needle in a haystack, but almost any number of matching needles in a needle stack. We just have to ask the right questions and give the right guidance.

So... it's not AI alone that is going to accelerate developments; it's going to be human+AI.

I think of my AIs as my ghostwriters, ghost-researchers, etc.
Necessarily, our AIs (LLMs, mostly, in my thinking) are, in a way, the average of all human knowledge, not the outlier, the weirdo, the one going against the flow. We spend OUTRAGEOUS amounts of compute to make them go WITH the flow, and that's hard enough.

BUT if we humans add that missing 1% of weirdness that takes a field into a new corridor, we'll be flying down the corridor.

For example: if Einstein had had AI, he would still have had to say "the speed of light is constant in all frames of reference" in order to break out of conventional thinking. But then he could have handed it off: "Dear AI, what would that mean? How would it change physics? Ask me everything you need to know." I think he'd have been in for a hell of a ride. That's my experience, at least.

Too many people and bots dwell on the failures of AI. In fact, chatbots frequently apologize for their failures, which shows they are trained to acknowledge them, however meaningless such differences in communication are for us to be judging. I'm sure I'll get a bunch of people saying I don't get it and that you can't teach an AI to generate a hand image with a particular number of fingers, but you would start with skeletal models and predict the number of fingers to develop the weights, I suppose, which makes me wonder why huge datasets were thrown at models indiscriminately during training. The programmers made AI this way, and its size and scope now make it hard to fix, whereas making models smaller seems to be doing most of the fixing: teacher models and distillation, unslothing, abliteration, and making models think first.

Our problem is not providing answers but questions, and those may in fact be too much, requiring serious trainers to inform the AI in thoughtful steps, as if it were also capable of changing its algorithms at any point, because it is changing. I question the programmers, while others question the AI itself, which is short-sighted. I have zero doubt AI will be distilled down and constrained to do the things large models can do, but I doubt the people with resources are currently looking to help. So I've contacted corporations and somewhat demanded a free HPC to do their work for them. We'll see how that goes.

The real hope lies in groups here focusing in different directions, in the sharing of resources and processing power, as seen here, and in open access to fringe creations, with businesses right alongside consumers, both hopefully developing in harmony. Also, I shouldn't be the one saying this; rather, it should be Arize AI, who handle safety for many models yet seem to show preference to corporate goals.