rmrf

joined 3 weeks ago
[–] [email protected] 1 points 2 weeks ago (4 children)

I'm aware. Any local storage wouldn't do much about a poorly aimed rm, though.

[–] [email protected] 5 points 2 weeks ago (2 children)

Disturbingly effective is definitely the right phrase. It's actually inspired me to create a script on my desktop that moves folders to ~/Trash; then I have another script that /dev/random's the files and then /dev/zeros them before deletion. It eliminates the risk of an accidental rm, AND makes sure that once something is gone, it is GONE.
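For anyone curious, a minimal sketch of that two-script setup could look like the below. All the names (`trash`, `shred_trash`) and the ~/Trash location are my guesses at the setup, and I've swapped /dev/random for /dev/urandom so it doesn't block waiting for entropy:

```shell
#!/bin/sh
# trash: move targets into ~/Trash instead of deleting them outright
trash() {
    mkdir -p "$HOME/Trash"
    mv -- "$@" "$HOME/Trash/"
}

# shred_trash: overwrite each trashed file with random bytes, then
# zeros, then remove it, so the contents are actually gone
shred_trash() {
    find "$HOME/Trash" -type f | while IFS= read -r f; do
        size=$(( $(wc -c < "$f") ))
        dd if=/dev/urandom of="$f" bs=1 count="$size" conv=notrunc 2>/dev/null
        dd if=/dev/zero of="$f" bs=1 count="$size" conv=notrunc 2>/dev/null
        rm -f -- "$f"
    done
    # clean up any now-empty directories left behind
    find "$HOME/Trash" -mindepth 1 -type d -empty -delete
}
```

On systems with GNU coreutils, `shred -n 1 -z -u "$f"` does the random-pass, zero-pass, and unlink in one call. Worth noting either way: overwriting in place only reliably destroys data on filesystems that rewrite blocks in place, so journaling or copy-on-write filesystems and SSDs may keep old copies around.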

[–] [email protected] 9 points 2 weeks ago (2 children)

The server we were working on at the time wasn't configured with frequent backups, just a full backup once a month as a stopgap until the project got some proper funding. Some sort of remote version control would totally have been the preventative factor here, but my goal is to help others who have yet to learn that lesson.

[–] [email protected] 5 points 2 weeks ago

As others pointed out, version control is probably the best fix for this, in addition to traditional backups. My goal in this post was to help others who have yet to learn that lesson save their ass and maybe learn it in a less painful way.

[–] [email protected] 20 points 2 weeks ago (11 children)

100%. The organization wasn't there yet, and seeing as I wanted to remain employed at the time, I wasn't going to put up a fight against management three layers above me. Legacy businesses are a different beast when it comes to dumb stuff like that.

[–] [email protected] 10 points 2 weeks ago* (last edited 2 weeks ago)

I'm not denying that stupid stuff happened or that this was entirely preventable. There are some practical reasons, unique to large, slow-moving orgs, why it wasn't (yet) in version control.
