There is a great deal of uncertainty regarding technology and, specifically, automation. Most of it revolves around the idea that jobs are going to be destroyed faster than new ones are created, because technology, especially artificial intelligence, is advancing at a pace never seen before.
If we are going to see massive job loss, a position I subscribe to, then how do we offset it? It is a question we have discussed on a number of occasions.
To me, a great deal of the solution resides with Web 3.0. There are enormous opportunities cropping up at every level. I think few are aware of the potential and how things are transitioning. Frankly, I really do not believe any of this is up for debate.
We have shifts in computing that are going to feed right into this. Here is where we see the opportunities presenting themselves.
Image generated by Ideogram
Distributed Infrastructure
One of the paths that will arise for people is distributed infrastructure. This is something that has long been a part of the computer networking world. However, like most things, it is going through a major revision.
Before digging into it, let's uncover what we are referring to.
According to Groq, this is what it is:
Distributed infrastructure refers to a type of computing architecture where components, such as hardware, software, and data, are distributed across multiple locations or systems, and are connected via a network. This allows for the processing and storage of data to be distributed across multiple machines, rather than relying on a single, centralized system.
In a distributed infrastructure, tasks and workloads can be divided among multiple machines, allowing for increased scalability, reliability, and performance. This type of infrastructure is commonly used in large-scale systems, such as cloud computing environments, where it is necessary to handle large amounts of data and support a large number of users.
This is fairly simple and easy to comprehend.
A company like Google has data centers all over the world. When you read an email, you have no idea where it is stored. As we can see, it is not resident on just one machine. The same is true when you do a search. Where is the data being pulled from? It is likely coming from multiple sources.
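The idea of dividing a workload among machines can be sketched in a few lines. This is a toy illustration, not how Google actually routes work: the "nodes" here are just threads in one process standing in for remote machines, and all the names are hypothetical.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical worker "nodes" -- in a real distributed system these would be
# separate machines reached over a network, not threads in one process.
NODES = ["node-a", "node-b", "node-c"]

def process_on_node(node, chunk):
    # Stand-in for shipping a chunk of work to a remote machine.
    return node, sum(chunk)

def distribute(workload, nodes=NODES):
    # Divide the workload evenly among the available nodes.
    chunks = [workload[i::len(nodes)] for i in range(len(nodes))]
    with ThreadPoolExecutor(max_workers=len(nodes)) as pool:
        results = pool.map(process_on_node, nodes, chunks)
    # Combine the partial results, as a coordinator would.
    return sum(total for _, total in results)

print(distribute(list(range(100))))  # same answer as a single machine: 4950
```

The point is the shape of the system: work is split, processed in parallel wherever capacity exists, and the results are recombined, so no single machine has to hold everything.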
If this is a long-held practice, what is going to change?
As stated in the past, one major shift with networks is ownership. This is the essence of Web 3.0. We are presently residing in a world where there are few networks, controlled by major companies.
This is going to change. We are looking at a time when networks will be abundant as compute becomes even more prevalent. People are going to be tied into many different networks, all with infrastructure needs.
Distributed Ownership
What happens when distributed infrastructure exists with many different owners? Let us call it distributed ownership.
This is the core of blockchain. Here we see permissionless systems that are run by unrelated node operators. It is a major difference from what we presently see.
When we look at a network for a company, say Meta, all that is taking place is on their servers. All the infrastructure that is utilized is owned by Meta. Nobody else is providing compute.
This is not the case with Bitcoin. Under this scenario, computers, called miners, are set up to validate transactions and add blocks. This is done by solving a mathematical puzzle, which assigns the winner the responsibility of validation. In return, the node earns some BTC.
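The "mathematical problem" miners solve can be sketched as a proof-of-work loop: keep trying nonces until the block's hash meets a difficulty target. This is a toy version; the difficulty value and block contents below are illustrative, not real Bitcoin protocol parameters.

```python
import hashlib

# Toy proof-of-work, loosely modeled on Bitcoin mining.
DIFFICULTY = 4  # required number of leading zero hex digits (illustrative)

def mine(block_data):
    """Try nonces until the block hash meets the difficulty target."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            # Whoever finds a valid nonce first earns the right to add the
            # block (and, on the real network, the BTC reward).
            return nonce, digest
        nonce += 1

nonce, digest = mine("txs: alice->bob 1 BTC")
print(nonce, digest)
```

Because the hash is unpredictable, the only way to win is brute-force guessing, which is why mining consumes real compute and why no single party can quietly rewrite the ledger.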
The ledger is resident on computers located all over the world. Unlike with Big Tech companies, this infrastructure represents distributed ownership, as many entities are involved.
Naturally, these are major operations which require expensive equipment. The point here is to help us realize how blockchain is not isolated. It is a structure that is going to keep expanding.
Tesla Inference
Elon Musk discussed the idea of inference computing on the last earnings call. Like many others, he asserted that inference compute is going to be more valuable than the compute used to train neural networks. The need could be an order of magnitude higher.
Again, we will look to Groq to explain it:
Inference computing is a type of artificial intelligence (AI) that involves using a trained machine learning model to make predictions or decisions based on new, incoming data. The model, which has been trained on a large dataset, is able to infer patterns and relationships in the data and use this knowledge to make predictions about new, unseen data.
Inference computing is an important aspect of many AI applications, as it allows for the deployment of machine learning models in real-world scenarios. For example, a machine learning model trained to recognize images of cats and dogs could be used in an inference computing system to identify the type of animal in a new image.
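The distinction in that definition, training versus inference, can be made concrete with a minimal sketch. The weights below are made up; they stand in for what a real training phase would have produced. Inference is simply applying the frozen model to new input.

```python
# A minimal sketch of inference: applying an already-trained model to new data.
WEIGHTS = [0.8, -0.5]   # hypothetical parameters a training phase produced
BIAS = 0.1

def infer(features):
    """Score a new, unseen input with the frozen model -- no learning here."""
    score = sum(w * x for w, x in zip(WEIGHTS, features)) + BIAS
    return "cat" if score > 0 else "dog"

# Inference on fresh inputs the model never saw during training.
print(infer([1.0, 0.2]))   # -> cat
print(infer([0.1, 1.5]))   # -> dog
```

Training happens once, on big clusters; inference like this runs every single time a user asks a question, which is why the aggregate demand for it scales with usage.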
The need for inference will grow as more models are placed into use and people are using them. It is easy to see the explosion in need if a couple billion people are "prompting" these systems daily.
Musk mentioned the idea of Tesla setting up the largest inference system in the world. This would be done by using the computers already in the vehicles. They would be networked together and utilized for inference. It would likely be tied to something like Grok, xAI's large language model.
The idea here is to have tens of millions of cars providing inference as a service. Like blockchain, the vehicle owners are unrelated, with each vehicle owned by a different person.
If this is built, Tesla owners could get paid for their computers being used while the car is parked in the garage. There is utility in compute that is going to be in demand.
Web 3.0 Opportunities
This is not an idea that is tied simply to inference. It can extend to all layers of the digital infrastructure stack. Compute is one component. There are many others, such as storage, that can also be distributed.
For those who are familiar with the Hive ecosystem, this is the goal of the developers of the SpkNetwork. They are building protocols that allow for distributed infrastructure. It basically provides cloud services using nodes set up by individuals. In return, there is a payout system for those providing the infrastructure.
Consider the potential if we arrive at a time where every device that has a processor can be utilized. We could have a dozen different revenue streams coming in.
Edge computing is only going to grow. There are basic compute reasons for this. One is latency, something all network operators are looking to reduce. This means pushing as much of the processing as possible closer to where the results are needed. Centralized systems are inefficient in this regard.
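The latency argument is simple physics. Even ignoring routing and processing time, data in fiber travels at roughly 200 km per millisecond, so distance alone sets a floor on response time. The distances below are illustrative, not benchmarks of any real network.

```python
# Back-of-the-envelope latency comparison (illustrative numbers).
SPEED_IN_FIBER_KM_PER_MS = 200  # light travels roughly 200 km/ms in fiber

def round_trip_ms(distance_km):
    # Propagation delay only; real requests add routing and processing time.
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(round_trip_ms(3000))  # distant data center: ~30 ms just in transit
print(round_trip_ms(10))    # nearby edge node: ~0.1 ms
```

No amount of server upgrades in a distant data center can beat that floor; only moving the compute closer can, which is the whole case for the edge.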
We are also looking at the supply and demand equation.
It is one thing for major technology companies to shower billions upon NVIDIA. However, that is for the training of the models. The need for compute to run these models is going to explode.
Therein lies the opportunity for individuals going forward. Systems are going to be erected simply because the ability for centralized entities to gain access to the compute isn't there. Notice how Musk is not looking at Tesla buying racks of servers for inference. Instead, the idea is to utilize what is already in the field.
For this, I am sure the company will take a piece of the action. This will not matter to the owners since they are still getting paid for something that was sitting idle most of the time.
Could this apply to individuals everywhere? At some point, it will. A smartphone can already be used for some minor purposes in this regard. The opportunities will keep growing as phone processors improve and data stacks advance to the point where these systems can operate on such a device.
We keep coming back to assets. Consider the potential of a phone or laptop, not merely as a tool, but as an asset that can generate revenue.
This is where we are heading. It will be a way for people to offset the possibility of job loss.
Posted Using InLeo Alpha