I posted this on Reddit (askeconomics) a while back but got no good replies. Copying it here because I don't want to send traffic to Reddit.
I see a big push to bring employees back to the office. I personally don't mind working remotely or in the office, but I think big companies tend to think rationally in terms of cost/benefit, and I haven't seen a convincing explanation yet of why they are so keen to have everyone back.
If remote work were just as productive as in-person work, a remote-only company could use it to be more efficient than its in-office competitors, so I assume there's no conclusive evidence that this is the case. But I haven't seen conclusive evidence to the contrary either, and I think employers would have good reason to trumpet any such findings, at least internally to their employees. ("We've seen KPI so-and-so drop with everyone working from home" or "project X was severely delayed by lack of in-person coordination" wouldn't make everyone happy to return in person, but at least it would give managers a real argument to bring to their teams.)
Instead, all I keep hearing is inspirational fluff like "we value the power of working together". Which is fine, but why are we valuing it more than the cost of office space?
On the employee side, I often see arguments like "these companies made a big investment in offices and now they don't want to look stupid by leaving them empty". But these same large companies have spent billions to acquire smaller companies/products and then dropped them without a second thought. I can't believe they would now be so sentimentally attached to office buildings if closing them made any economic sense.
I'm not sure we, as a society, are ready to trust ML models to do things that might affect lives. This is true for self-driving cars, and I expect it to be even more true for medicine. In particular, we can't accept ML failures, even once they become statistically less likely than human errors.
I don't know whether this is currently true, so please don't shoot me over this specific example, but IF we had reliable stats showing that, everything else being equal, self-driving cars cause fewer accidents than human drivers, a machine error would still always feel weird and alien, and harder for us to justify than a human one.
"He was drinking too much because his partner left him", "she was suffering from a health condition and had an episode while driving"... we have the illusion that we understand humans and (to an extent) that this understanding helps us predict who we can trust not to drive us to our death or not to misdiagnose some STI and have our genitals wither. But machines? Even if they were 20% more reliable than humans, how would we know which ones we can trust?