My Opinion on How Computers Learn: A Simple Guide

in #hive-173575 · 6 months ago

Hello friends, here's a question: are you a big fan of modern computers that can literally learn on their own? To be honest, it creeps me out a bit just thinking about it. But how exactly does this happen?🤔

The way computers solve problems is fascinating.

Take adding large numbers together, for instance. Just imagine summing two figures with twenty digits each! Sounds quite complicated, don't you think? But hey, computers can do it, and for them it's not that hard.
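To see this in action, here's a tiny Python sketch (the two twenty-digit numbers are just made-up examples):

```python
# Two made-up twenty-digit numbers
a = 12345678901234567890
b = 98765432109876543210

# Python handles arbitrarily large integers natively,
# so the computer sums them in a single step
total = a + b
print(total)
```

For a computer, this addition is no harder than 2 + 3; the size of the numbers barely matters.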

When we solve huge problems, we break them into smaller pieces, step by step, just like building a LEGO set. Computers work similarly. They divide complex issues into small steps and resolve them one at a time.

I was reading a blog post recently about how computers learn, and I found out about this thing called "chain-of-thought prompting." It's basically a way of giving instructions to the computer so that it solves a problem step by step.

This is similar to your teacher explaining to you how to solve a math problem on the blackboard.
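To make that concrete, here's a rough sketch of what such a prompt might look like in code. The wording and the worked example are my own illustration, not taken from the blog:

```python
# A made-up example of a chain-of-thought prompt: we show the model
# one question answered step by step, then ask a new question.
prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans with 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
    "\n"
    "Q: The cafeteria had 23 apples. It used 20 and bought 6 more. "
    "How many apples are there now?\n"
    "A:"  # the model continues from here, reasoning step by step
)
print(prompt)
```

The trick is the worked example in the middle: by seeing the reasoning written out, the model learns to write out its own reasoning for the new question.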

The blog explains that scholars have been investigating big language models, for example those used in chatbots. They figured out that these models can easily do simple problems but get stuck on difficult ones.

This is where chain-of-thought prompting comes into play.

Imagine you were helping your younger brother learn how to tie a shoelace. You don't just demonstrate it once and expect him to get it, do you? Instead, you walk him through each of the moves until he knows the procedure.

For computers, this is exactly what chain-of-thought prompting does: it takes them through a problem step by step until they arrive at an answer.

But here’s the thing – while computers are great at following instructions, they’re not so good at thinking for themselves. They need us to tell them what to do. That’s why researchers are always trying to figure out how to make computers smarter.

The blog also talks about "transformers". These are a certain type of computer model that is particularly good at learning from lots of data. They can take in a huge amount of text from the internet and be trained on it. It's like they read a very big book to become wiser!

But even though transformers learn so much, they have their limits. That's where computational complexity theory comes into play. It acts like a road map, making clear what a computer can and cannot do.

Computational complexity theory may sound fancy, but it is not as hard as one might think. In essence, it is just a way of measuring how difficult or easy it is for a computer to solve a problem. Some problems are simple, such as adding two small numbers together, while others, like finding the shortest route through a maze, take much more work.
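Here's a toy sketch of that contrast in Python (the little maze is entirely made up). Adding two numbers is one step, while finding the shortest route means the computer has to search through many possibilities, here with a breadth-first search:

```python
from collections import deque

# A made-up toy maze: 0 = open cell, 1 = wall
maze = [
    [0, 0, 1],
    [1, 0, 1],
    [1, 0, 0],
]

def shortest_path_length(maze, start, goal):
    """Breadth-first search: explore the maze level by level, so the
    first time we reach the goal, we've used a shortest route."""
    rows, cols = len(maze), len(maze[0])
    queue = deque([(start, 0)])
    seen = {start}
    while queue:
        (r, c), steps = queue.popleft()
        if (r, c) == goal:
            return steps
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and maze[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append(((nr, nc), steps + 1))
    return None  # no route exists

print(2 + 3)  # adding small numbers: one easy step
print(shortest_path_length(maze, (0, 0), (2, 2)))  # searching: many steps
```

The point isn't the exact numbers; it's that the second problem forces the computer to explore, check, and backtrack, while the first one is over instantly.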

Researchers use this theory to investigate transformers and find out how intelligent they really are.

However, they noticed that transformers struggle with some types of problems, despite being quick at absorbing large amounts of data. It's almost as though they are excellent readers but poor mathematicians.

The blog also discussed how researchers are working on making transformers smarter.

This is done using something called "chain-of-thought reasoning". It gives the computer a map telling it where to go when solving a problem. When faced with a question, instead of making wild guesses, it breaks the question into small steps and solves them one at a time.
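The difference between a wild guess and small steps can be sketched in a few lines. Here's my own made-up illustration, using the apples question from earlier:

```python
# Question: the cafeteria had 23 apples, used 20, then bought 6 more.
# Instead of guessing the final number, reason one small step at a time:
apples = 23   # step 1: start with 23 apples
apples -= 20  # step 2: 20 apples are used, leaving 3
apples += 6   # step 3: 6 more apples are bought
print(apples) # 9
```

Each line is one small, checkable step, which is exactly the habit chain-of-thought reasoning tries to give the computer.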

I find this really cool, since these machines learn a bit like human beings. They haven't surpassed us yet, but who knows, maybe someday they'll be our equals. Perhaps even harder problems can be tackled someday!

In conclusion, learning about how computers work is like solving a puzzle. It's challenging, but also really rewarding. And the more we understand about how they learn, the better we can teach them.

So let’s continue exploring new possibilities beyond what computers usually do.