Genetic testing giant 23andMe is reportedly turning the blame back on its customers for its recent data breach
(www.businessinsider.com)
Maybe I don't really understand what happened, but it sounds like 2 different things happened:
14k accounts were compromised due to poor passwords and password re-use.
And then they got access to 7 million accounts. Where did that 7 million account breach come from? Were those 7 million connections of the 14k or something? Because I don't think your connections can see many in-depth details.
Let's pretend that I had an account and that you used the internal social share to share your stuff with me.
I, being an idiot, used monkey123 as my password. As a result, the bad guys got into my account. Once in my account, they had access to everything in my account, including the stuff you shared with me.
Now to get from 14,000 to 7,000,000 would mean an average of 500 shares per account. That seems unreasonable, so there must have been something like your sharing with me gives me access not just to what you shared, but to everything that others shared with you in some kind of sharing chain. That, at a minimum, is exclusively on 23andMe. There is no way any sane and competent person would have deliberately constructed things like that.
Edit: I think I goofed. It seems to be sharing with relatives as a collection, not individuals. As was pointed out, you don't have to go very far back to find common ancestors with thousands of people, so that's a more likely explanation than mine.
From how I understand it, the 14 000 -> 7 000 000 is caused by a feature that allows you to share your information with your "relatives", i.e. people who were traced to some common ancestor.
I'm still quite on the fence about what to think about this. If you have a weak password that you reuse everywhere, and someone logs into your Gmail account and leaks your private data, is it Google's fault?
If we take it a step further - if someone hacks your computer because you click on every link imaginable, and then steals your session cookies, which they then use to access such data, is it still the fault of the company for not being able to detect that kind of attack?
Yes, the company could have done more to prevent such an attack, mostly by forcing MFA (any other defense against credential stuffing is easily bypassed via a botnet, unless it's an "always-on CAPTCHA" - and good luck convincing anyone to use that). But the blame is still mostly on users with weak security habits, and in my opinion (as someone who works in cybersecurity), we should focus on blaming them instead of the company.
Not because I want to defend the company or anything - they have definitely done a lot of things wrong (though nowhere near as wrong as the users) - but because of security awareness.
Shifting the blame solely onto the company for "not having done enough" only lets the users whose poor security habits caused the private data of millions to be leaked get away with it, and lets them live their lives thinking "They've hacked the stupid company, it's not my fault." No. It's their fault. Get a password manager FFS.
Headlines like "A company was breached and leaked 7 000 000 users' worth of private data" will probably go mostly unnoticed. A headline like "14 000 people with weak passwords caused the leak of 7 000 000 users' worth of private data" may at least spread some awareness.
Ok, that makes much more sense! I've done a tiny bit of genealogy, so I knew about the exponential numbers, but I misunderstood the sharing. Yes, I know the feature was described as "with relatives" but I was thinking of "with person". Yes, choosing to share with all relatives in one click would produce huge numbers.
As for where to place the blame, it's tough. The vast majority of people have no concept of how this stuff works. In effect, everything from mere typing into a document to logging in to and using network resources is treated quite literally as magic, even if nobody would actually use that word.
That puts a high burden on services to protect people from this magical thinking. Maybe it's an unreasonably high burden, but they have to at least make the attempt.
2FA (the real thing, not the SMS mess) is easy to set up on the server side. It's easy enough to set up on the client side that if that's too much for some fraction of your customer base, then you should probably treat that as a useful "filter" on your potential customers.
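To back up the claim that real 2FA is easy on the server side: the TOTP scheme (RFC 6238) that authenticator apps use fits in a few lines of standard-library Python. This is a minimal sketch, not production code - a real deployment also needs secret provisioning, a clock-skew tolerance window, and rate limiting on verification attempts:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestamp=None, digits: int = 6, step: int = 30) -> str:
    """Compute an RFC 6238 TOTP code (HMAC-SHA1 variant)."""
    if timestamp is None:
        timestamp = int(time.time())
    counter = struct.pack(">Q", timestamp // step)   # 8-byte big-endian counter
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector: ASCII secret, time 59 -> 94287082
assert totp(b"12345678901234567890", timestamp=59, digits=8) == "94287082"
```

The server just stores the shared secret per user and compares the submitted code against `totp()` for the current (and adjacent) 30-second window.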
There are any number of "breached password" lists published by reputable companies and organizations. At least one of them, Have I Been Pwned, makes its list available in machine-readable formats. At this point, no reputable company that makes any claims to protecting privacy and security should be allowing passwords that show up on those lists. Account setup procedures have enough to do already that a client-side password check would be barely noticeable.
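The Have I Been Pwned list can even be checked without ever sending the password anywhere, via its k-anonymity range API: hash the password with SHA-1, send only the first five hex characters to `https://api.pwnedpasswords.com/range/<prefix>`, and match the returned suffixes locally. A sketch of the client side, with the actual HTTP fetch left to the caller:

```python
import hashlib

def pwned_query_parts(password: str):
    """Split the SHA-1 of a password into the 5-char prefix that gets sent
    to the range API and the suffix that never leaves the client."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def suffix_count(range_response: str, suffix: str) -> int:
    """Parse a range-API response ("SUFFIX:COUNT" per line) and return how
    many breaches our password appeared in; 0 means not on the list."""
    for line in range_response.splitlines():
        candidate, _, count = line.strip().partition(":")
        if candidate == suffix:
            return int(count)
    return 0

prefix, suffix = pwned_query_parts("monkey123")
# The caller fetches https://api.pwnedpasswords.com/range/{prefix}
# and passes the response body to suffix_count(body, suffix).
```

Rejecting the password whenever `suffix_count` is nonzero is exactly the check the comment argues every privacy-sensitive signup flow should be doing.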
We know enough about human nature and human cognition to know that humans are horrifically bad at creating passwords on the fly. Some services, maybe most services, should prohibit users from ever setting their own passwords, using client-side scripting to generate random strings of characters. Those with password managers can simply log the assigned password. Those without can either write it in their address book or let their browser manage it. This has the added benefit of not needing to check a password against a published list of breached passwords.
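Assigning the password for the user, as suggested above, is nearly a one-liner with Python's `secrets` module; a minimal sketch:

```python
import secrets
import string

# Letters and digits only, so the result survives being copied into an
# address book by hand; log2(62) is roughly 5.95 bits per character,
# so 20 characters gives about 119 bits of entropy.
ALPHABET = string.ascii_letters + string.digits

def assign_password(length: int = 20) -> str:
    """Generate a random password for a new account. The user never picks
    it, so it can't be a reused or dictionary password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))
```

As the comment notes, a side benefit is that a freshly generated random string never needs checking against a breached-password list.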
My data will always be at risk of some kind of weak link that I have no control over. That makes it the responsibility of each online service to ensure that the weak links are as strong as possible. Rate limiting, enforcement of known good login policies and procedures, anomaly detection and blocking, etc should be standard practice.
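Of the mitigations listed, rate limiting is the simplest to show concretely. A token-bucket limiter keyed per account or per IP is a common approach - this is a sketch with an injectable clock; a real service would keep the buckets in shared storage such as Redis so the limit holds across servers:

```python
import time

class TokenBucket:
    """Allow `rate` login attempts per second, with bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: float, now=time.monotonic):
        self.rate, self.capacity, self.now = rate, capacity, now
        self.tokens = capacity
        self.last = now()

    def allow(self) -> bool:
        t = self.now()
        # Refill tokens for the time elapsed since the last attempt.
        self.tokens = min(self.capacity, self.tokens + (t - self.last) * self.rate)
        self.last = t
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # reject the attempt; the caller responds with HTTP 429
```

Per-account buckets like this blunt credential stuffing even from a botnet, since the attempts converge on the same small set of accounts.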
You are right, and the company is definitely to blame. But compared to how other breaches usually happen, I don't think this company was all that negligent - as far as I know, their only mistake was not forcing users to use MFA. A mistake, sure, but not as grave as what we usually see in data breaches.
My point was mostly that IMO we should in this case focus more on the users, because they are also at fault. More importantly, I think it's a pretty impactful story - "a few thousand people reused passwords and caused millions of users' data to be leaked" is a headline that teaches a lesson in security awareness. It would be better to focus on that instead of on "a company didn't force users to use MFA", which only gets framed as "company has been breached and blames users". That will not teach anyone anything, unfortunately.
I'm not saying that the company shouldn't also be blamed, because they did purposefully choose to prioritize user experience and conversion rate (because bad UX hurts sales, as you've mentioned) over better security practices. I'm just trying to figure out how to get at least something good out of this incident - and "company blames users for getting breached" isn't going to teach anyone anything.
However, something good did come out of it, at least for me - I realized that it had never occurred to us to put "MFA is not enforced" into pentest findings, and this incident makes a great case for why we should start, so I've added it to our templates.