[–] [email protected] 5 points 5 months ago (1 children)

What counts as source, and what doesn't, depends on the format.

You can create a picture by hand, using no input data.

I challenge you to do the same for model weights. If you truly just sat down and typed numbers into a file, then yes, the model would have no further source. But that is not something that can be done in practice.

[–] [email protected] 2 points 5 months ago (1 children)

I challenge you to recreate the Mona Lisa.

My point is that these models are so complex that they're closer to art than to anything reproducible.

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (1 children)

I don't see your point? What is the "source" for the Mona Lisa that I would use? For LLMs, I could reproduce them given the original inputs.

Creating those inputs may be an art, but so can writing any piece of code. No one claims that code being elegant disqualifies it from being open source.

[–] [email protected] 2 points 5 months ago (1 children)

Are you sure that you can reproduce the model, given the same inputs? Reproducibility is a difficult property to achieve. I wouldn't expect LLMs to be reproducible.

[–] [email protected] 2 points 5 months ago* (last edited 5 months ago) (1 children)

In theory, if you have the inputs, you have reproducible outputs, modulo perhaps some small deviations due to non-deterministic parallelism. But if those effects are large enough to make your model perform differently, you already have big issues, no different than if a piece of software performed differently each time it was compiled.
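
In case it helps, here's a minimal sketch (assuming PyTorch; the function name and seed are just placeholders) of what pinning down those sources of nondeterminism usually looks like before training:

```python
import random

import numpy as np
import torch


def make_deterministic(seed: int = 42) -> None:
    # Seed every RNG the training code typically touches.
    random.seed(seed)
    np.random.seed(seed)
    torch.manual_seed(seed)

    # Prefer deterministic kernel implementations where they exist;
    # ops without one will raise an error instead of silently diverging.
    torch.use_deterministic_algorithms(True)
    torch.backends.cudnn.benchmark = False
    # (On CUDA, some ops additionally need CUBLAS_WORKSPACE_CONFIG
    # set in the environment to be deterministic.)


make_deterministic()
# ...then build the model, load the fixed dataset, and train as usual.
```

With that in place, rerunning on the same inputs should give you the same weights, which is the sense of "reproducible" I mean.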

[–] [email protected] 1 points 5 months ago* (last edited 5 months ago)

That's the theory for certain paradigms that were specifically designed to be deterministic.

Most things in the world, even computers in practice, are non-deterministic.
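
As a small illustration (a Python sketch, not tied to any particular framework): summing the same numbers in a different order already changes the result, because floating-point addition isn't associative, and a parallel reduction on a GPU doesn't guarantee an order.

```python
import random

values = [random.uniform(-1.0, 1.0) for _ in range(100_000)]

shuffled = values[:]
random.shuffle(shuffled)

print(sum(values) == sum(shuffled))      # usually False
print(abs(sum(values) - sum(shuffled)))  # tiny, but nonzero
```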

Nondeterminism isn't necessarily a bad thing for systems like AI.