this post was submitted on 09 Jan 2025
478 points (99.2% liked)

[–] [email protected] -4 points 1 week ago (1 children)

Now watch the comments move the goalposts about AI because we're talking about FOSS.

[–] [email protected] 5 points 1 week ago (4 children)

What are the goalposts, so we all have the same frame of reference?

[–] [email protected] 37 points 1 week ago

Solving problems related to accessibility is a worthy goal.

[–] [email protected] 67 points 1 week ago

All hail the peak humanity levels of VLC devs.

FOSS FTW

[–] [email protected] 44 points 1 week ago (3 children)

I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.

In most cases, people will go for manually written subtitles over autogenerated ones, so this would mostly be used where there aren't better, human-created subs available.

I just can’t see AI / autogenerated subtitles of any kind taking jobs from humans because they will always be worse/less accurate in some way.

[–] [email protected] 13 points 1 week ago (1 children)

Yeah this is exactly what we should want from AI. Filling in an immediate need, but also recognizing it won't be as good as a pro translation.

[–] [email protected] 6 points 1 week ago

I believe it's limited in scope to speech recognition at this stage but hey ho

[–] [email protected] 20 points 1 week ago (1 children)

Autogenerated subtitles are pretty awesome for subtitle editors I'd imagine.

[–] [email protected] 26 points 1 week ago (7 children)

Even if they get the words wrong but the timestamps right, it'd still save a lot of time.

[–] [email protected] 191 points 1 week ago (4 children)

I know people are gonna freak out about the AI part in this.

But as a person with hearing difficulties, this would be revolutionary. So much shit I usually just can't watch because OpenSubtitles doesn't have any subtitles for it.

[–] [email protected] 20 points 1 week ago

Indeed, YouTube has had auto-generated subtitles for a while now, and while they are far from perfect, I still find them useful.

[–] [email protected] 116 points 1 week ago* (last edited 1 week ago) (15 children)

The most important part is that it's a local ~~LLM~~ model running on your machine. The problem with AI is less about LLMs themselves, and more about their control and application by unethical companies and governments in a world driven by profit and power. And this is none of those things; it's just some open source code running on your device. So that's cool and good.

[–] [email protected] 41 points 1 week ago (2 children)

Yeah, transcription is one of the only good uses for LLMs imo. Of course they can still produce nonsense, but bad subtitles are better than none at all.

[–] [email protected] 92 points 1 week ago (2 children)

Et tu, Brute?

VLC automatic subtitles generation and translation based on local and open source AI models running on your machine working offline, and supporting numerous languages!

Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.

[–] [email protected] 17 points 1 week ago (2 children)

In my experiments, the Whisper models I can run locally are comparable to YouTube's: not production quality, but certainly better than nothing.

I've also had some success cleaning up the output with a modest LLM. I suspect the VLC folks could do a good job with this, though I'm put off by the mention of cloud services. Depends on how they implement it.
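
For reference, a local pipeline like that is only a few lines these days (a minimal sketch with the open-source whisper package; the model size and file name are placeholders, not anything VLC is committed to):

```python
# Minimal sketch: transcribe a video locally with Whisper and print
# SRT-style subtitles. Assumes `pip install openai-whisper` and ffmpeg
# on PATH; "small" and "movie.mkv" are placeholders.
import whisper

def srt_time(t: float) -> str:
    h, rem = divmod(int(t), 3600)
    m, s = divmod(rem, 60)
    return f"{h:02}:{m:02}:{s:02},{int(t * 1000) % 1000:03}"

model = whisper.load_model("small")      # downloads weights on first run
result = model.transcribe("movie.mkv")   # Whisper extracts audio via ffmpeg

for i, seg in enumerate(result["segments"], start=1):
    print(f"{i}\n{srt_time(seg['start'])} --> {srt_time(seg['end'])}")
    print(seg["text"].strip() + "\n")
```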

[–] [email protected] 5 points 1 week ago (2 children)

Since VLC runs on just about everything, I'd imagine that the cloud service will be best for the many devices that just don't have the horsepower to run an LLM locally.

[–] [email protected] 2 points 1 week ago

True. I guess they will require you to enter your own OpenAI/Anthropic/whatever API token, because there's no way they can afford to run that centrally. Hopefully you can point it at whatever server you like (such as a self-hosted Ollama or similar).
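
If they do allow a custom endpoint, the plumbing is pretty standard by now. Ollama, for example, exposes an OpenAI-compatible API, so any client that can override the base URL can talk to it (a sketch assuming a local Ollama server; the model name is whatever you've pulled):

```python
# Sketch: pointing an OpenAI-style client at a self-hosted Ollama server.
# Assumes `ollama serve` is running locally; "llama3.2" is a placeholder
# for whatever model you've pulled. The api_key is required by the client
# but ignored by Ollama.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
resp = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
)
print(resp.choices[0].message.content)
```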

[–] [email protected] 1 points 1 week ago

It's not just computing power - you don't always want your device burning massive amounts of battery.

[–] [email protected] 4 points 1 week ago (2 children)

Yeah, I've used local Whisper and LLMs to automatically summarize YouTube videos and podcasts to text, with good results.

https://github.com/troed/summarize.sh
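
For the curious, the general shape is something like this (a rough sketch of the idea, not how summarize.sh is actually implemented; it assumes openai-whisper plus a local Ollama server, and the file and model names are placeholders):

```python
# Rough sketch of a transcribe-then-summarize pipeline: Whisper turns the
# audio into text, then a local LLM condenses it.
import whisper
from openai import OpenAI

transcript = whisper.load_model("base").transcribe("podcast.mp3")["text"]

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
summary = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user",
               "content": "Summarize this transcript:\n\n" + transcript}],
)
print(summary.choices[0].message.content)
```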

[–] [email protected] 67 points 1 week ago (8 children)

Hopefully better than YouTube's; those are often pretty bad, especially for non-English videos.

[–] [email protected] 23 points 1 week ago (2 children)

They're awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone whose accent differs from the team that developed the auto-captioning), it makes egregious errors; it's exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and Northeastern US accents. In my experience "using" it, I find it nigh unusable.

[–] [email protected] 9 points 1 week ago (1 children)

I've been working on something similar-ish on and off.

There are three (good) solutions involving open-source models that I came across:

  • KenLM/STT
  • DeepSpeech
  • Vosk

Vosk has the best models, but they are large. You can't use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices because of the memory requirements. So my guess is that whatever VLC provides will probably suck to an extent, because it will have to be fast and lightweight enough.

What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is usually used).

One core idea in my tool is to combine all alternatives into one text. So suppose the model predicts text to be either "... still he ..." or "... silly ...". My tool can give you "... (still he|silly) ..." instead of 50/50 chancing it.
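
To illustrate the combining step (a toy sketch, not my actual tool; a real implementation has to align a whole N-best list, not just two strings):

```python
# Toy sketch: merge two recognizer alternatives into one hedged transcript.
# With vosk-api you'd get the alternatives via rec.SetMaxAlternatives(10)
# and json.loads(rec.Result())["alternatives"]; here they're hardcoded.
import difflib

def hedge(alt_a: str, alt_b: str) -> str:
    a, b = alt_a.split(), alt_b.split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=a, b=b).get_opcodes():
        if op == "equal":
            out.extend(a[i1:i2])    # both alternatives agree on these words
        else:
            out.append(f"({' '.join(a[i1:i2])}|{' '.join(b[j1:j2])})")
    return " ".join(out)

print(hedge("i think he is still he said", "i think he is silly said"))
# -> i think he is (still he|silly) said
```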

[–] [email protected] 7 points 1 week ago

I love that approach you’re taking! So many times, even in shows with official subs, they’re wrong because of homonyms and I’d really appreciate a hedged transcript.

[–] [email protected] 28 points 1 week ago

They are terrible.

[–] [email protected] 2 points 1 week ago (1 children)

That would depend on the LLM and the data used to train it.

[–] [email protected] 3 points 1 week ago (1 children)

IIRC you can't use LLMs for this.

[–] [email protected] 1 points 1 week ago (1 children)

I didn't read the article, but I would have assumed that the AI was using predictive text to guess at the next word. Speech recognition is already pretty good, but it often misses contextual cues that an LLM would be good at spotting. Like, "The famous French impressionist painter mayonnaise..."
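
Something along these lines, presumably (a toy illustration of that rescoring idea against a local model; I have no idea whether VLC actually does this, and the endpoint and model name are assumptions):

```python
# Toy illustration: ask a local LLM to repair a likely ASR homophone error
# using sentence context. Assumes a local Ollama server with its
# OpenAI-compatible endpoint; "llama3.2" is a placeholder.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:11434/v1", api_key="unused")
asr_line = "The famous French impressionist painter mayonnaise"
resp = client.chat.completions.create(
    model="llama3.2",
    messages=[{"role": "user",
               "content": "This speech-recognition output probably contains "
                          f"a misheard word; return a corrected version:\n{asr_line}"}],
)
print(resp.choices[0].message.content)  # hopefully "...painter Monet..."
```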

[–] [email protected] 4 points 1 week ago* (last edited 1 week ago) (1 children)

Probably something like https://github.com/openai/whisper, which isn't an LLM but a different type of model dedicated to speech recognition.

[–] [email protected] 1 points 1 week ago

That makes sense.
