[–] [email protected] 2 points 5 months ago (1 children)

I don't know that this is a matter of performance, considering MS is pushing a specific TOPS spec to support these features. Going by the specs we have, several of the devices Apple is flagging as supported for this feature fall below the 40 TOPS required for Copilot+. I think that's more than they're putting in the M4, isn't it?

Granted, Apple IS in fact sending some of this data to a server to be processed, so on that front they're almost certainly deploying more computing power than MS, at the cost of not keeping the processing on-device. Of course, I get the feeling we disagree about which of those is the "brute force" solution.

I also think you're misunderstanding what Apple and MS are doing here. They're not "training" a model on your data; that would take a lot of additional effort. They presumably have some combination of pre-existing models, some proprietary, some third-party, and they feed your data into those models in response to your query to serve as context.

That's fundamentally different. It's a different step in the process, a different piece of work. And it's very similar to the MS solution, because in both cases, when you ask something, the model pulls your data up and shares it with you. The difference is that in MS's original implementation the data also resided on your drive and was easily accessible even without querying the model, as long as you were logged into the user's local account.
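To make that distinction concrete, here's a minimal sketch (all names hypothetical; this is not Apple's or Microsoft's actual code) of the difference between feeding your data in as query-time context and actually training on it:

```python
# Hypothetical sketch; "model" stands in for whatever pretrained LLM ships with the OS.

def looks_relevant(item: str, query: str) -> bool:
    # Toy relevance check; real systems use embeddings or a semantic index.
    return any(word in item.lower() for word in query.lower().split())

def answer_with_context(model, query: str, personal_data: list[str]) -> str:
    """Context injection: the model's weights never change. Your data is
    pasted into the prompt for this single query and then discarded."""
    relevant = [item for item in personal_data if looks_relevant(item, query)]
    prompt = "Context:\n" + "\n".join(relevant) + f"\n\nQuestion: {query}"
    return model.generate(prompt)          # inference only

def train_on_user_data(model, personal_data: list[str]) -> None:
    """Fine-tuning: the weights themselves are updated. This is the far
    more expensive step that, as far as we can tell, is NOT happening."""
    for item in personal_data:
        loss = model.compute_loss(item)    # forward pass
        loss.backward()                    # backward pass over every weight
        model.optimizer.step()             # weight update
```

The first function only ever runs the model forward; the second is the weight-update loop that neither company appears to be running on your personal data.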

But the misconception is another interesting reflection of how these things are branded. I suppose Apple spent a ton of time talking about the AI "learning" about you, implying a gradual training process, rather than "we're just gonna input every single text message you've ever sent into this thing whenever you ask a question". MS was all "we're watching you and our AI will remember watching you for like a month in case you forget", which certainly paints a different mental picture, regardless of the underlying similarities.

[–] [email protected] -1 points 5 months ago (1 children)

I understood it as: Apple provides a pre-trained LLM which is then trained on-device with user data, directly resulting in new weights and configuration for each person's personal AppleLLM. That seemed more reasonable to me because the data would be far less random, being strictly orchestrated within the limits Apple defines through the API that has to be used to integrate your app with the user's personal AppleLLM.

And I still agree: the weights and configuration of the AppleLLM are as critical as 100 GB of screenshots of your windows, but definitely harder to make sense of if extracted.

[–] [email protected] 3 points 5 months ago (1 children)

I just don't think that's plausible at all. I mean, they can "train" further by doing stuff like storing certain things somewhere, and I imagine there's a fair amount of "dumb" algorithm and programming work going on under the whole thing...

...but I don't think there's any model training happening on device. That takes orders of magnitude more processing power than just running this stuff. Your phone would be constantly draining for months; it's just not how these things work.
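For a very rough sense of scale, here's a back-of-envelope comparison using the standard rules of thumb (~2N FLOPs per token for inference, ~6N per token for a training step). The model size and corpus size below are assumptions, not anything Apple has published:

```python
# Back-of-envelope only; all figures are assumptions.
PARAMS = 3e9               # assume a ~3B-parameter on-device model
FLOPS_INFER = 2 * PARAMS   # ~2*N FLOPs per generated token (inference)
FLOPS_TRAIN = 6 * PARAMS   # ~6*N FLOPs per token (forward + backward)

query_tokens = 1_000        # one question plus the context pulled in for it
personal_corpus = 500_000   # made-up size of a personal text/mail corpus

one_query = FLOPS_INFER * query_tokens
one_finetune_pass = FLOPS_TRAIN * personal_corpus

print(f"one query:          {one_query:.1e} FLOPs")
print(f"one fine-tune pass: {one_finetune_pass:.1e} FLOPs "
      f"(~{one_finetune_pass / one_query:.0f}x, before the extra memory "
      f"needed for gradients and optimizer state)")
```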

[–] [email protected] 0 points 5 months ago (1 children)

Ahh, lol, sorry for taking so long to understand 😅 I guess many people misunderstood Apple like I did, or maybe not; at least I think I get it now.

So the only difference between Copilot and Apple is that Apple's AI gets access to an API where app developers decide what the AI can see, versus access to everything you've seen on the screen except DRM content.

With Apple, as an attacker, you'd need to get access to that API and then you could get all the data; with Copilot, you'd need access to the screenshots.

So the reason anybody prefers Apple's solution is that their LLM gets butter-clean data, perfectly structured by devs, versus Windows, where the LLM has to work with pretty much chaotic data.
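As a rough illustration of that "devs decide what's seeable" model (all names hypothetical; the real mechanism is presumably Apple's App Intents / entity-donation API, and this sketch is not that API):

```python
# Hypothetical sketch of opt-in data exposure to a system assistant.
from dataclasses import dataclass

@dataclass
class ExposedItem:
    """A single piece of app data the developer chooses to share with the system AI."""
    kind: str      # e.g. "message", "calendar_event"
    title: str
    content: str

class SystemAssistantIndex:
    """Stands in for the OS-level index the personal AI queries."""
    def __init__(self) -> None:
        self._items: list[ExposedItem] = []

    def donate(self, item: ExposedItem) -> None:
        # Only data an app explicitly donates ever reaches the assistant,
        # unlike a screenshot-everything approach that captures whatever is on screen.
        self._items.append(item)

    def search(self, query: str) -> list[ExposedItem]:
        return [i for i in self._items if query.lower() in (i.title + i.content).lower()]

# A messaging app opts in by donating structured, curated records:
index = SystemAssistantIndex()
index.donate(ExposedItem("message", "Dinner plans", "See you at 7 at the usual place"))
print(index.search("dinner"))
```

The point of the contrast: nothing reaches the index unless an app explicitly donates it, whereas a screenshot-based recorder captures whatever gets rendered, structured or not.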

Where exactly is Apple's solution spyware? It's only a process that runs while you're interacting with it and processing data. Or is it enough that it's proprietary and has access to this data? Well then, Spotlight is spyware too.

[–] [email protected] 3 points 5 months ago

It's spyware in that both applications build a centralized, searchable repository that knows exactly what you did, when, and how. And no, the supposed ability to limit specific applications isn't a difference: MS also said you can block specific apps, and devs can block specific screens within an app. They're presumably both the same on that front.

What I'm saying is the reason people are reacting differently is down to branding and UX.