Update on V4VAPP and why things were a bit rough at the end of last week

in #hive-110369 • 2 years ago


The Internet is a wild place

Toward the end of last week I spent a lot of time dealing with what looked like problems with my code, but were not.

When I started building first @podping and then @v4vapp I did everything at home. The first iterations of V4VAPP all ran from a ⚡️Lightning node running Umbrel on a Raspberry Pi at my home, alongside one of my kid's old Minecraft machines, a Lenovo P51. This all worked pretty well, but I had a few problems with downtime: a couple of power outages, an internet outage or two, and so on.

As soon as I needed a public website for v4v.app, I opened a server with @privex; that part never ran from my home.

Over this summer I knew I'd be away for at least 4 weeks and there'd be nobody at my home for some of that time. I didn't like the idea of running all this stuff out of my home.

Privex and Voltage

So I moved the Lightning node to a company called Voltage which specialises in running Lightning nodes in the cloud.

And I took out one more server from friends of Hive, @privex, to host the services I had been running on my laptop at home.

This all worked very well up until the middle of last week.

DDOS

For reasons we may never know, somebody decided to attack one of Privex's data centres: the one housing my machine, which had taken over from the old Minecraft laptop at my home.

I had been in the middle of making some fairly complex software changes, and the problem with these kinds of attacks is that they don't always wipe everything out; they just make stuff unreliable. So things which used to work suddenly don't (but only for a while), and then they work again.

And if you don't know there's an outside force acting on your system, you think you need to change something which probably breaks what was running nicely before!

Call in the movers

The guys at @privex did their best. They're not a huge outfit, so they sublease their data centre space and network equipment from bigger players, and ultimately Privex are sometimes left waiting for someone higher up the ladder to fix something. I've got no complaints about their work.

But the best thing for me was to move my system from one data centre to another. This is a procedure I need to know how to do, but I wasn't really ready to do it at the drop of a hat.

But I did this on Friday morning, and it turned out to be relatively easy. What took most of the time was carefully documenting each step along the way; that is very important for my end goal of allowing other people to run this gateway.

A few minor disruptions

I've got a few more things I need to move around, and I'm experimenting with setting up a replicated MongoDB database, which might mean I bring the system up and down a few times. But hopefully it will continue to operate reliably now. If you have any issues, leave a message on any of my posts, or find the Telegram support group and message me there.
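For the curious, a minimal local sketch of what standing up a replicated MongoDB database involves looks something like the following. The hostnames, ports, data paths and the replica set name `rs0` are all illustrative, not the actual v4v.app configuration:

```shell
# Start three mongod instances, each with its own data directory,
# all agreeing on the same replica set name (paths/ports are examples).
mongod --replSet rs0 --port 27017 --dbpath /data/rs0-0 --fork --logpath /data/rs0-0.log
mongod --replSet rs0 --port 27018 --dbpath /data/rs0-1 --fork --logpath /data/rs0-1.log
mongod --replSet rs0 --port 27019 --dbpath /data/rs0-2 --fork --logpath /data/rs0-2.log

# Then, from mongosh connected to any one member, initiate the set once:
mongosh --port 27017 --eval 'rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "localhost:27017" },
    { _id: 1, host: "localhost:27018" },
    { _id: 2, host: "localhost:27019" }
  ]
})'
```

Clients then connect with a connection string that lists the members, e.g. `mongodb://localhost:27017,localhost:27018,localhost:27019/?replicaSet=rs0`, and the driver fails over between members automatically — which is exactly what smooths out the "up and down a few times" while members restart.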


Support Proposal 222 on PeakD
Support Proposal 222 with Hivesigner
Support Proposal 222 on Ecency


Send Lightning to Me!


I always find your blogs interesting. I learn something all the time. When you mentioned the DDOS, I'm thinking of the altcoins where, if you have 51% of hash power, you can take over the network. I shall now do some research and look this up on YouTube to study it!

This is very interesting, I think I have to visit your channel to learn something.

Thanks for sharing 👍

Good work!

I always appreciate your efforts and how you put all these things together, which is very helpful to learn from. Keep up the good work @brianoflondon

I see u are an imperial hacker. NICE! :P

MAY THE FORCE BE WITH YOU



The rewards earned on this comment will go directly to the people (@steemadi, @janetedita) sharing the post on Twitter as long as they are registered with @poshtoken. Sign up at https://hiveposh.com.

And just to add... a legacy account I have with HostGator (which hosts email for my whole family) has now been down for 16 hours, following a server migration that was supposed to mean 1 to 2 hours of downtime. So the big guys are just as bad, or worse.

It is great to see that all the bad things turned out for the better and that something was learned from them. Life sometimes kicks us, and DDOS attacks can be targeted, but dealing with this now could avoid problems in the future when the app and its services are used by more and more users.

Posted Using LeoFinance Beta

💯 Very interesting insight. On the backend it’s certainly a lot of work to set things up while at the frontend we often don’t realize that. Will have to listen to the latest Cryptomaniacs podcast where you certainly have some more details on the project.
Regards and a nice Sunday!
Thomas

Tales from the life of an admin ;)