I use a PS5 controller for all my gaming needs and it works great on Linux (Kubuntu/Nobara) and Steam Deck. I use a wired connection when playing on my Linux desktop, but when playing on my Steam Deck it's over Bluetooth while docked. Still works perfectly fine. I even played CrossCode with my controller just fine on both systems.
I primarily use it on my desktop for FFXIV, which is why I go wired. Bluetooth can be squirrelly if the game isn't launched through Steam.
There is nothing you can do about the unsuccessful logins to your email address. My original email address has been in so many breaches, and it's constantly being brute-forced by attackers outside the US.
You already have MFA, so the only other thing I can think of is to use an incredibly long random password on your account and make sure the "forgot my password" recovery flows don't have any easy bypass. Things like a less secure backup email address, personal details guessable from past breaches, or easily guessable/researchable security questions could be used to gain access even with MFA. Make security answers random or nonsensical if possible, and don't post details from security questions on social media. And finally, secure your password manager in a similar manner.
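To make the "long random password, nonsensical security answers" advice concrete, here's a minimal sketch using Python's standard `secrets` module. The function names, the 32-character length, and the tiny word list are my own illustrative choices, not anything prescribed above; a real setup would just use a password manager's generator.

```python
import secrets
import string

def random_password(length: int = 32) -> str:
    """Generate a long random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

def random_security_answer(words: int = 4) -> str:
    """Generate a nonsensical security-question answer from random words.

    The point is that the answer has nothing to do with your real life,
    so it can't be researched from social media or past breach data.
    """
    # Tiny illustrative word list; a real one (e.g. a diceware list) is far larger.
    wordlist = ["otter", "granite", "pylon", "mango", "quartz", "drift", "ember", "lattice"]
    return "-".join(secrets.choice(wordlist) for _ in range(words))

print(random_password())         # random every run, e.g. 'k]3Fq...'
print(random_security_answer())  # e.g. 'mango-drift-otter-quartz'
```

`secrets` (rather than `random`) matters here because it draws from the OS's cryptographically secure randomness source. Store the generated answers in your password manager like any other credential.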
I neeeeeeed it. This looks a lot like CrossCode but refined. It has all the puzzles and scenery and build trees and I want to play it now
Main character
It's definitely not the latter. It's a fancy antivirus known as an EDR - Endpoint Detection and Response. Purely security software for defending against cyber attacks
I want to clarify something that you hinted at in your post, but that I've seen in other posts too. This isn't a cloud failure or anything remotely related to one; it's one facet of a company's security software suite causing crippling issues.
I apologize ahead of time; when I started typing this I didn't think it would be this long. This is pretty important to me, and I feel like it can help clear up a lot of misinformation about how IT and software work in an enterprise.
CrowdStrike is an EDR, or Endpoint Detection and Response, software. Basically a fancy antivirus that isn't file-signature based but behavior-monitoring based. Like all AVs, it receives regular definition updates, around once an hour, to anticipate threat actors using zero-day exploits. This is the part that failed: the hourly update channel pushed a bad update. Some computers escaped unscathed because they checked in either right before the bad update was pushed or right after it was pulled.
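The "escaped unscathed" logic above can be sketched as a tiny timing check: a machine was hit only if one of its hourly check-ins landed inside the window between the bad update being pushed and being pulled. The timestamps below are illustrative assumptions for the example, not CrowdStrike's actual push/pull times.

```python
from datetime import datetime

# Assumed window for illustration only (not the vendor's real timestamps).
BAD_UPDATE_PUSHED = datetime(2024, 7, 19, 4, 0)   # UTC
BAD_UPDATE_PULLED = datetime(2024, 7, 19, 5, 30)  # UTC

def received_bad_update(checkins: list[datetime]) -> bool:
    """Return True if any check-in fell inside the bad-update window."""
    return any(BAD_UPDATE_PUSHED <= t <= BAD_UPDATE_PULLED for t in checkins)

# A machine that checked in during the window was hit:
print(received_bad_update([datetime(2024, 7, 19, 4, 30)]))  # True

# A machine whose hourly check-ins straddled the window escaped:
print(received_bad_update([datetime(2024, 7, 19, 3, 45),
                           datetime(2024, 7, 19, 6, 45)]))  # False

# A machine powered off overnight had no check-ins at all:
print(received_bad_update([]))  # False
```

This is why the damage looked so random across a fleet: whether a given box was bricked came down to where its check-in happened to fall relative to a roughly 90-minute window.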
Another facet of AVs is that they work by monitoring every part of a computer. This requires specific drivers that integrate into the core OS, and those were updated alongside the definition update. Anything that integrates that closely can cause serious issues if it isn't made right.
Before this incident, CrowdStrike was regarded as the best in its class of EDR software. This isn't something companies swap out willy-nilly just because they feel like it. Implementing a new security software across all systems in an org is a huge undertaking, one that I've been a part of several times. It sucks to not only rip out the old software but also integrate the new software and make sure it doesn't mess up other parts of each server. Basically, companies wouldn't stay on CS unless they were too lazy to change away, or they thought it was really that good.
EDR software plays a huge role in securing a company's systems. Companies need this tech not just for security but also because, without it, they risk failing critical audits or being unable to qualify for cybersecurity insurance. Any similar software could have issues: Cylance, Palo Alto Cortex XDR, and Trend Micro are all very strong players in the field and are just as prone to having problems.
And it's not just EDR software that could cause issues, but lots of other tech. Anything that does regular definition or software updates can't, or shouldn't, be gated by the enterprise, because the frequency and urgency of each update make filtering impractical. Firewalls come to mind, but a lot of systems are at risk of failing due to a bad update. Of course, it should fall on the enterprise to provide the manpower to vet these updates, but that's highly unlikely when most IT teams are already skeleton crews subject to heavy budget cuts.
So with all that, you might ask, "how is this mitigated?" It's a very good question. The most obvious solution, "don't use one software on all systems," is more complicated and expensive than you might think. Imagine bug testing your software for two separate web servers - one uses Crowdstrike, Tenable, Apache, Python, and Node.js, and the other uses Trend Micro, Qualys, nginx, PHP, and Rust. The amount of time wasted replicating behavior would be astronomical, and the two stacks would be unlikely to reach feature parity. At what point does "having redundant tech stacks" become too burdensome? That's the risk a lot of companies take on when choosing a single vendor.
On a more relatable scale, imagine you work at a company where desktop email clients are the most important part of your job. One half of the team uses Microsoft Office and the other half uses Mozilla Thunderbird. Neither has feature parity with the other, and one will naturally be superior to the other. But because the org is afraid of everyone getting locked out of email at once, you happen to be stuck using "the bad" software. Not a very good experience for your team, even if the setup is overall more resilient.
A better solution is improved BCDR (business continuity and disaster recovery) processes, most notably backup and restore testing. In my own corner of this incident, only a handful of servers were affected, for which I am very grateful. I was able to recover 6 out of 7, but the last is proving a little trickier. The best solution would be to restore that server to a former state and move on, but in my haste to set up the env, I neglected to configure snapshotting and other backup processes. It won't be the end of the world to recreate this server, but this could have been far worse if it hosted any critical software. I do plan on using this event to review every system I have a hand in and assess redundancy at each level - cloud, region, network, instance, and software.
Laptops are trickier to fix because of how distributed they are by nature. However, the situation can still be improved by taking regular backups of each user's files and by testing that BitLocker is properly configured, with its recovery keys tracked somewhere IT can actually retrieve them.
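The "regular backups, and test them" point can be made concrete with a trivial freshness check: backups you never verify may as well not exist. This is a minimal sketch; the `backups_healthy` name, the 7-day threshold, and the idea of a flat backup directory are my own assumptions, and real tooling would also verify that the backups actually restore.

```python
import time
from pathlib import Path

MAX_AGE_DAYS = 7  # assumption: we expect at least weekly backups

def newest_backup_age_days(backup_dir: str) -> float:
    """Return the age in days of the most recently modified file in backup_dir.

    Returns infinity if the directory contains no files at all,
    which should always fail a health check.
    """
    files = [p for p in Path(backup_dir).iterdir() if p.is_file()]
    if not files:
        return float("inf")
    newest_mtime = max(p.stat().st_mtime for p in files)
    return (time.time() - newest_mtime) / 86400  # seconds per day

def backups_healthy(backup_dir: str) -> bool:
    """True if the newest backup is recent enough to trust."""
    return newest_backup_age_days(backup_dir) <= MAX_AGE_DAYS
```

A check like this belongs in scheduled monitoring so a silently stalled backup job gets flagged before you need it, rather than discovered mid-incident the way my seventh server was.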
All that said, I'm far from an expert on this, just an IT admin trying to do what I can with company resources. Here's hoping CrowdStrike and other vendors greatly improve their QA testing, and that IT departments finally get the tooling approved to improve their backup and recovery strategies.
If it's any consolation, this is the first issue of its kind in the multiple years we've been using CS. Still unacceptable, but historically the program has been stable and effective for us. Hopefully this reminds higher-ups of the importance of proper testing before releases.
This occurred overnight around 5am UTC/1am EDT. CS checks in once an hour, so some machines escaped the bad update. If your machines were powered off overnight, consider yourself lucky.
I've personally had better luck with the Litter Robot 4. We started with the 3 and had some issues with the bonnet getting "unseated", among other things. The 4 has been more stable over the year we've had it. The base being narrower but taller lets us get away with an extra day of not emptying.
This might be fake, but LTSC is not. It's been around in Windows 10 for years, designed for bloat-free stability on IoT and operational devices. A consumer isn't technically supposed to use it, but there are ways.
I don't know how much 11's version has been debloated, but it might be a good experience.
Of course that's sold at HEB too
Discord server owners can require members to verify their accounts before joining, as an anti-bot measure.