this post was submitted on 29 Jul 2024
34 points (87.0% liked)

[–] [email protected] 0 points 3 months ago

Where you work and what you do hardly matters in this case. Unless you choose to send your request to ChatGPT (or whatever future model gets integrated the same way), everything happens on device or in a temporary private compute instance that's discarded after your request is done. The on-device piece only uses Neural Engine resources when you invoke it, so the only "bloat," so to speak, is disk space; and it wouldn't surprise me if the models are only pulled from the cloud to your device when you enable them, just like Siri voices in different languages.
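The routing described above can be sketched as a simple decision function. To be clear, everything here (function names, parameters, labels) is hypothetical for illustration and does not correspond to any real Apple API:

```python
# Hypothetical sketch of the request routing described above.
# None of these names are real Apple APIs; the labels are illustrative.

def route_request(fits_on_device: bool,
                  user_approved_chatgpt: bool,
                  needs_external_model: bool) -> str:
    """Decide where a request is handled under the described design."""
    if needs_external_model and user_approved_chatgpt:
        # Only with explicit user consent does anything leave
        # the Apple-controlled path.
        return "chatgpt"
    if fits_on_device:
        # Runs locally on the Neural Engine; uses resources
        # only while the request is being processed.
        return "on-device"
    # Larger requests go to a stateless private compute instance
    # that is discarded once the response is returned.
    return "private-compute"

print(route_request(True, False, False))   # on-device
print(route_request(False, False, False))  # private-compute
print(route_request(False, True, True))    # chatgpt
```

The key property the comment is pointing at: the default paths (on-device and private compute) never involve a third party, and the external-model path requires an explicit opt-in.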