I am neutral toward AI-generated content, but I'd want to know when content is written by a human and when by an AI. Unfortunately, in many cases we simply don't know unless we are told.
Most mainstream news articles are already written by AI and then proofread and adapted by a human. Nobody writes at the end of the article that it was (partially) written by AI, but we all know it. Some outlets might soon start writing "this article was entirely written by a human" if they want to go against the mainstream, but who knows?
We also know AI is used as a tool in programming, but no one tells us upfront which parts of their software were written, debugged, or updated using an AI tool. Maybe if we ask...
Then, on an airplane, there is no light that turns on for the passengers when the autopilot is engaged and turns off when the pilot takes over. We simply don't question this, even though automation flies the plane most of the time.
But... they are not intentionally deceiving anyone. Oh well, maybe the news outlets are, since an author's name appears at the end of the article even when AI may have written most or all of the piece.
When a task is assumed to be carried out by a human with a certain identity (digital or real-world), and that person delegates the task to someone or something else, that is at least a questionable practice.
But this practice existed long before anyone used AI to write in their place. It happens in academia: important professors may author papers or books without writing a word in them; the co-authors do the job. Another example: some years ago, and possibly even today, people would pay ghostwriters to write for them. True, ghostwriters are humans, and they make some money using a skill they have, but their work still appears under a different name. How many knew that the credited author didn't really write that book or article? Maybe only the people who knew them personally.
Personally, I prefer to know when someone didn't write or create the content published under their identity. Unfortunately, that's not always the case.
That's why, when I use an AI-generated image (which is often), I tell my readers it was created by AI. Sometimes I try to use wording with a more human touch, appropriate to the situation (the context of the post), rather than something like "Image generated by AI".
One self-portrait of the AI...
On Hive, I've seen cases of people who outright say that they use AI to create or improve their posts. If that's what you do, I believe it's the best and safest approach. Deception only works for a while, if it works at all. And your reputation is the worst thing you could lose, worth more than author rewards.
Personally, I have upvoted AI-generated or bot content when it brought me important information. Not often, but I did, and I probably will again.
Given the advancements in generative AI and deepfakes, we know there is a push for AI-content detectors and for marking AI-generated content as such. I'm sure they will come out. But until then, we have our intuition, which isn't always reliable.
Posted Using InLeo Alpha