Privacy gets discussed a great deal, and it is something people feel entitled to. When it comes to the online world, the beliefs put forth make this point clear.
The discussion, however, is not black and white.
Many get upset when their data is used for training AI models. We see this via the lawsuits filed against the likes of OpenAI. The plaintiffs assert copyright laws are being broken.
This is obviously a situation for the courts to decide. However, I wonder how many of the people filing these lawsuits use chatbots. Do they prompt multiple times a day in an effort to get information they use in their own articles?
We know this is the case.
Here is where the situation gets blurry. People want the technology yet do not want their stuff used to build it. Think about that. The same users who are suing over unauthorized use in training AI models turn to the chatbots that were developed from those models.
Of course, they are also likely to complain when the results are not up to their satisfaction.
Image generated by Ideogram
Technology And Privacy: A Balance
Meta is making some waves with its new Ray-Ban glasses. Zuckerberg has made it clear he believes smart glasses are the device of the future.
We will have to see if this is the case. However, it is already creating a stir.
Obviously, there is a great issue with glasses recording everything they see. The privacy concerns are major. They apply not only to the individual wearing the glasses, and what is done with that data, but also to those being recorded. If this is happening as someone walks down the street, what kind of invasion are we talking about?
This is another matter that will have to be discussed in an effort to arrive at a resolution.
Leaving that aside, since it is outside the scope of this article, there is something brewing with Meta that is concerning people.
The company stated that images captured by the glasses will NOT be used for AI training. Of course, one could question whether this is true. Nevertheless, for this discussion, we will take them at their word.
What will be used for training, as per their terms of service, are images that go through Meta AI.
Spokespeople also pointed TechCrunch towards Meta AI’s terms of service, which states that by sharing images with Meta AI, “you agree that Meta will analyze those images, including facial features, using AI.”
Naturally, those who are aware of this (most will not be) will have a cow. This is very upsetting.
But is it?
Capturing an image on the glasses is one thing. However, when an individual runs Meta AI on said image, that person is actively using Meta's product. In other words, the photo is no longer simply something that was captured. It became an active input to Meta's model.
This is a case that is easy to make, in my opinion, since conscious choice was involved. Someone actively decided to submit the image to the AI.
The issue gets murky when this wasn't decided, at least overtly.
This is particularly relevant now. On Wednesday, Meta started rolling out new AI features that make it easier for Ray-Ban Meta users to invoke Meta AI in a more natural way, meaning users will be more likely to send it new data that can also be used for training. In addition, the company announced a new live video analysis feature for Ray-Ban Meta during its 2024 Connect conference last week, which essentially sends a continuous stream of images into Meta's multimodal AI models. In a promotional video, Meta said you could use the feature to look around your closet, analyze the whole thing with AI, and pick out an outfit.
Here is where we see Zuck being Zuck.
It is easy to see where the shift occurs from an active decision to the glasses simply picking up everything. Many will likely use these features without knowing that is what is happening.
Technological Advancement And Regulation
There is an old saying: you can't legislate morality.
An offshoot of this is you can't legislate against stupidity. As much as politicians love to talk about "protecting the people", you can't save someone from themselves. All the laws in the world will not stop an individual from sending $2,000 to free the Nigerian prince based upon an email.
Meta's actions, once again, show the divide. How long until this reaches the desk of lawmakers? Even then, what impact will it have?
Many will say they want this stopped. Companies should not be intruding this much. Fair point.
However, as mentioned above, people seem to want utility. They complain when features aren't available. For example, using data as part of a recommendation engine could be upsetting to people. That said, most will not use platforms that do not have this as part of the experience. We have become so used to it that we get upset when our YouTube feed goes wonky.
A lot of people fear technology, especially when it comes to AI. They watched too many films that conditioned them to think this way; Hollywood has poisoned their minds to it. Meanwhile, the data shows that the guardrails being built into AI are actually moving us away from the peril that most espouse.
Distrust of Big Tech, however, is a logical offshoot of this mindset. On that point, there is some validity.
Big Tech has given us plenty of reasons not to trust it. Nothing about Silicon Valley says "we are trustworthy". Sure, some companies are better than others. Apple, for example, seems responsible with data, at least in guarding against hacks. That is not the case for everyone, including the aforementioned Meta.
Here we see a case where Meta is covertly doing what it can to cross the line and intrude further into people's lives. It will not make clear what it is doing. Each action is taken to benefit the company.
The Quest For Data
There is one goal for these companies: more data.
We are at the point where companies will do all they can to ensure the data flowing to them not only continues but grows. These entities have to operate at an exponential pace simply to keep advancing.
Data is the new oil and the world is thirsty.
These models require an enormous amount of it. Each new generation needs proportionately more than the previous one. It is simply the reality of the realm we are in.
Naturally, this is all market driven. While Big Tech is not to be trusted, they do see what is taking place.
Hundreds of millions of people use chatbots. ChatGPT was the fastest-adopted technology ever, crossing 100 million users in roughly two months.
Let that sink in.
People want robots to cook for them. They want the price of automobiles to decrease. New forms of entertainment are desired. Hell, a portion of the population still wants flying cars.
All of this requires data.
These companies understand that. They also realize someone is going to get it.
Meta is not operating in a vacuum. Google is acquiring mountains of data on a daily basis. Apple is watching everything done on its phones. Social media sites are treasure chests of new content. The chatbots themselves keep generating more.
This is only the Western side of things. The Chinese have their own companies doing the same thing. We are dealing with a global race.
So what is the answer?
This even extends to the national level. The EU is known for its tendency toward strong regulation. It is also cooked when it comes to technology: its member states are lagging behind and simply are not competitive.
What does the geopolitical future look like when once strong economies are trailing in this race?
There are no easy answers. It appears that, for now, the likes of Zuckerberg will continue to push forward, regardless of where the lines are. By the time governments catch up, he will be so far down the road it won't matter. A fine will be levied, treated as just another cost of doing business.
As we can see, this is a convoluted situation with no easy answers.