AI is Artificial Mediocrity... I read this beginning of a title from @vimukthi the other day and got curious what it was about. Wasn't AI supposed to become the smartest thing on the planet? Well, he included an interview with Edward Snowden in his post, and in that interview, at some point, Snowden says something to the effect that AI is trained on so much data that the exceptional gets lost in the average. At the same time, later in the interview he is also very excited about all the things he can do with generative AIs installed on his own machine (as opposed to using centralized ones accessed via a browser). So he is not dismissive of AIs.
Now, regarding AI standing for Artificial Mediocrity, as @vimukthi interestingly put it, I do think there is some truth to it on different levels.
Firstly, we often hear that one of generative AI's bottlenecks is the need for more data, and this need grows exponentially. There's more to it than that. If you listen to people involved in the generative AI phenomenon, you won't hear them talk about data in general, but rather about quality data. That's what they lack more than data in general, although eventually they'll run out of all kinds of data, unless they start generating synthetic data or collecting much more of it, such as from Tesla cars, drones, satellites, surveillance cameras, etc., many of which pose serious privacy or security concerns.
The emphasis they put on the word "quality" suggests they are probably not pleased with the limitations the existing training data puts on the smartness of current models.
The other side of the mediocrity equation is the mediocrity AI induces in us. If people, on average, start using their brains even less than today (yes, that is possible!) and shift tasks they used their brains for to AI, that is a path toward mediocrity, or worse. But that's more of a philosophical and sociological discussion for the future than a current situation. Important, nonetheless.
However, currently, the quality of generative AIs' responses often correlates with the quality of the prompting and feedback they receive from the person using them.
The future of AI prompting?
Prompting, and tweaking prompts, is almost a science, and often a bad response from the AI is the fault of the person asking the question rather than of the AI itself. Our skill in prompting also grows with use. No one will create the perfect prompt from the start. Even experts sometimes have to nudge the AI in the right direction with a number of follow-ups before they receive the answer they are looking for. It seems to me kind of like a teacher asking a student a series of questions until they get from them what they want.
Sometimes questions are not enough. You need to provide further context, or to tell the AI when it is going in the wrong direction relative to where you want to go. There are situations when the AIs will refuse to go certain routes. I haven't grilled any of them in areas where they are likely to say no, so I haven't gotten a no yet.
One other trick that Claude's character and personality researcher revealed in a podcast I talked about in a previous post was that she once asked the model to take its time and come up with the best answer it could to the question being asked. She said it worked, and the poem Claude wrote was much better than its average. She also said that AIs by default try to optimize the time to deliver the answer, and that means the quality of the answer may drop.
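As a toy illustration of that trick, here is a minimal Python sketch of how such a "take your time" nudge might be prepended to a prompt before sending it to a model. The function name and the exact wording of the nudge are my own inventions for illustration, not anything from the podcast or from any particular AI provider's API:

```python
def with_quality_nudge(prompt: str) -> str:
    """Prepend a 'take your time' instruction to a prompt.

    Hypothetical helper for illustration only; the wording that
    works best will vary from model to model.
    """
    nudge = (
        "Take your time and think carefully before answering. "
        "Optimize for the best possible answer, not the fastest one.\n\n"
    )
    return nudge + prompt

# Example: wrap a request before passing it to whatever model you use.
print(with_quality_nudge("Write a short poem about the sea."))
```

The point isn't the code itself, which is trivial, but the habit: small framing instructions like this are one of the cheapest ways to trade response speed for response quality.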
Better prompting is a way to avoid becoming lazy thinkers, by the way.
Posted Using InLeo Alpha