"Agentic Internet": The Most Data Wins

in #hive-167922 • 26 days ago

We are rapidly moving towards the next generation Internet. The evolution is happening before our eyes. It is clear that we will see a digital world that is run by AI agents. What is not so clear is exactly how this will look.

There is still great debate over whether we are moving closer to a utopia where humanity sees major benefit from this technology, or toward the dystopian scenario put forth by Hollywood.

While I think The Terminator scenario is off the table, that does not mean we are necessarily dealing with roses and sunshine. A digital world that is fully under the control of Big Tech could serve to enslave society in ways we never imagined. The psychological manipulation that major social media platforms employ is becoming well known.

With more advanced technology, what is possible?

If this is something that is unappealing, perhaps it is time to look closely at how this is forming.


Image generated by Ideogram

Agentic Internet: The Most Data Wins

Data. Data. Data.

This is the new currency. Actually, that isn't 100% accurate, since the new currency could well be the ability to power the equipment that processes the data. Nevertheless, without the data, companies, initiatives, and platforms are screwed.

AI agents will be the main performers on the next generation Internet. This is something important to consider.

Who is going to be in control of these? Along the same lines, who will have a hand in their development?

Are we going to depend upon ChatGPT, X, or Google? How about Anthropic (Amazon), NVIDIA, or Meta?

As we can see, everywhere we look, Big Tech is inserting itself. We are looking at a game of trillion-dollar players. To say this is out of the league of most of us is a major understatement.

So what is the defense against a Big Tech future?

Decentralization Is The Counter

As always, decentralization is the counter. Big Tech, like most centralized entities, cannot stand up to decentralized entities. This is why governments are coming down hard on cryptocurrency. It is a decentralized monetary system which rivals what they have built. The problem is the inability to control it.

In my view, this is a benefit to humanity.

The same is true with an "agentic internet". I detest the idea of being at the mercy of these behemoths. To counter this, rapid expansion outside those platforms is crucial. The more we keep feeding the beast, the worse it gets.

Fortunately, we do have choices.

With the introduction of Web 3.0, i.e. blockchain networks, we see a new data structure that can greatly benefit humanity. Instead of being designed using the traditional client-server architecture, we see the potential of permissionless databases starting to crop up.

Once again, the key is always data.

The value in these networks is the democratization of data. An "agentic internet" means that trillions of tokens' worth of data will be required. We know corporations such as Meta have access to this. OpenAI is furiously trying to strike deals with publishers to keep feeding its system.

Who is going to feed the smaller entities?

It is the crux of the entire decentralized conversation. Without data, you are nothing. That is how things are unfolding.

Do You Want To Avoid A Technological Dystopia?

This might seem like an easy question to answer. Most people would say "yes".

As the old saying goes, ignore what people say and focus upon what they do. Are people serious when they say this is something they want to avoid, yet spend their time feeding X, Facebook, or YouTube? Do you see the disconnect here?

Obviously, to avoid a dystopian outcome, there are many more layers than simply data. We will have to address things such as governance.

That said, here are the basics for countering what Big Tech is doing:

  • blockchain: permissionless networks that provide "data for humanity"
  • cryptocurrency: the ability to transact and capture value without a centralized intermediary
  • DAOs: entities that operate autonomously, under the control of no individual or company
  • open source: the ability to replicate, i.e. spread out, software generating more networks and applications

As we can see, this is larger than simply decentralizing data. Nevertheless, regardless of what is being designed, data is required. This means the decentralized world has to generate a great deal more to have any hope of countering what Big Tech is doing.

It is worth repeating that we are dealing with exponential growth. The amount of data is expanding. Knowledge graphs improve with more data (among other things). This means every use case is enhanced by the growing volume of accessible data.

Unfortunately, the overwhelming majority is being scooped up by Big Tech. Here is where Web 3.0 has to step up.

This is going to feed into the development of AI agents. At present, we can look at Microsoft, ChatGPT, Google, and Meta as the companies that are taking the lead here. Will that persist in the future?

Time will tell. Obviously, a major part is building the agents. These corporations have reams of software engineers to do that. They also develop their platforms to allow others to do the same, which naturally feeds them even more.

However, if we consider the idea of website elimination, which will be the outcome of an "agentic internet", then we have to come back to data. How is information going to be provided?

The answer lies in the agents with the most data.

And with the path we are on, that will be those tied to X, Google, OpenAI, and Meta.

It is up to us to alter this. The democratization of data is one of the biggest battles humanity is facing.



Posted Using InLeo Alpha



I'm a bit curious why you think the Terminator scenario is off the table? At first glance, I can see several reasons to think we are at great risk.

The most disturbing point to me is that unlike most of the technology we have developed, we don't fully understand the "way" neural-net based AI really works. Instead of truly designing it like we did for binary logic computation, we just tried to copy what our brains do, then tried to make that copy do something useful.

Of course, this isn't the first time we've done something similar: we found effective medicines long before we understood why they were effective at curing a problem. And the same issue is at play in both cases; we're experimenting with something we don't fully understand yet: biology.

But in this case, the experiment seems much more fraught with danger, as we're intentionally trying to develop machines that do the most dangerous thing we do: think.

I don't mean to suggest that AI systems have achieved sentience (e.g. self-awareness, personal goals, etc) at this point, but we still have no idea how sentience works.

This means we also have no idea at what scale neural nets will achieve sentience, but based on what we do know (that we are intentionally copying the sentient part of our body), I think it is extremely likely there is a scale at which such systems will achieve sentience.

Further, based on the enormous resources we're pouring into AI, and our historical record of advancing extremely quickly when we throw so much of our effort into an area, I think we can expect very rapid advancement in the capabilities of AI in the next 10 years.

So it doesn't seem at all far-fetched to me that AI will achieve sentience in the next 10 years and that it won't even be very obvious when it happens, since we still don't understand how sentience is achieved (versus mimicked).

So if we take as a given that sentience is a real possibility for AI, then we're only left with the thought experiment of how a sentient AI will view us. My real concern isn't for an AI that is close to our intelligence, but the much smarter ones that seem likely to me will quickly follow.

It's hard to be sure how something that is much smarter than us will look at us, but if we take the example of how we view and treat less smart but obviously sentient creatures (animals), the outlook is a bit bleak for humanity. Even the way humans view people of lesser intellect is often pretty unpleasant: usually there's a good amount of contempt mixed in with pity.

And when contempt gets combined with some fear of personal danger (humans surely will pose a significant risk to any sentient AI), I think it's very conceivable that such an AI would decide the simplest solution is to eliminate us once it no longer needs us.

These are all very dark thoughts, of course, but it really does all seem very plausible to me, and I'm very concerned about it, because I don't think our current governance systems are equipped to properly judge the risk-reward ratio when analyzing how to best proceed with the development of AI technology (the game theory of capitalism tends to incentivize short term versus long term thinking).

Personally I think our only hope is that we hit some barrier in our ability to advance the technology rapidly.

I agree with a great deal of what you wrote here. There are a lot of questions that are impossible to answer at this time.

A lot of what we are dealing with is hard to define. AGI. Superintelligence. Consciousness. None of them is clearly defined, and all are moving targets.

As for the next 10 years, I think massive advancement is going to take place, to the point where this will be "smarter" than us. Of course, that is nothing new when we look at the area of calculations. We lost that race decades ago.

The challenge with consciousness is that we are looking at multiple levels of input. Knowledge is only one area. There appears to be the observer effect that plays into it.

Federico Faggin did a lot of work in this area, where he discussed the idea of determinism versus free will. Computer states, obviously, are based upon the ones preceding them.

> These are all very dark thoughts, of course, but it really does all seem very plausible to me, and I'm very concerned about it, because I don't think our current governance systems are equipped to properly judge the risk-reward ratio when analyzing how to best proceed with the development of AI technology (the game theory of capitalism tends to incentivize short term versus long term thinking).

This is certainly true. The US government has been talking about a stablecoin bill for two years with nothing to show for it. It is one reason why I think governments, as constructed, are ripe for disruption. They were designed when we were based solely in the physical realm. Today, with the digital, things move too quickly. Hence, we are looking at something that cannot keep up.

So we are back to the main premise where Big Tech is driving the show. To me, the counter to this is blockchain networks, open source, and permissionless systems. This could usher in a new form of ownership which counters the present modes of production.

One final thought: we appear to be spreading things further out with AI. The fear I do have is of regulatory capture, making us dependent upon a few entities such as OpenAI. This is what Altman appears to be after.