Now watch the comments move the goalposts about AI because we're talking about FOSS.
Solving problems related to accessibility is a worthy goal.
All hail the peak humanity levels of VLC devs.
FOSS FTW
I know AI has some PR issues at the moment but I can’t see how this could possibly be interpreted as a net negative here.
In most cases, people will go for (manually) written subtitles rather than autogenerated ones, so the use case here would mostly be situations where no better, human-created subtitles are available.
I just can’t see AI / autogenerated subtitles of any kind taking jobs from humans because they will always be worse/less accurate in some way.
Yeah this is exactly what we should want from AI. Filling in an immediate need, but also recognizing it won't be as good as a pro translation.
I believe it's limited in scope to speech recognition at this stage but hey ho
Autogenerated subtitles are pretty awesome for subtitle editors I'd imagine.
Even if they get the words wrong but the timestamps right, it'd still save a lot of time.
I know people are gonna freak out about the AI part in this.
But as a person with hearing difficulties this would be revolutionary. So much shit I usually just can’t watch because OpenSubtitles doesn’t have any subtitles for it.
Indeed, YouTube has had auto-generated subtitles for a while now, and they are far from perfect, yet I still find them useful.
The most important part is that it’s a local ~~LLM~~ model running on your machine. The problem with AI is less about LLMs themselves, and more about their control and application by unethical companies and governments in a world driven by profit and power. This is none of those things; it’s just some open source code running on your device. So that’s cool and good.
Yeah, transcription is one of the only good uses for LLMs imo. Of course they can still produce nonsense, but bad subtitles are better than none at all.
Et tu, Brute?
VLC automatic subtitles generation and translation based on local and open source AI models running on your machine working offline, and supporting numerous languages!
Oh, so it's basically like YouTube's auto-generated subtitles. Never mind.
In my experiments, the Whisper models I can run locally are comparable to YouTube's — which is to say, not production quality, but certainly better than nothing.
I've also had some success cleaning up the output with a modest LLM. I suspect the VLC folks could do a good job with this, though I'm put off by the mention of cloud services. Depends on how they implement it.
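If anyone wants to reproduce that experiment, it's only a few lines with the openai-whisper package. A minimal sketch (the model size and file name are just example placeholders) that prints SRT-style cues, timestamps included:

```python
# Sketch: transcribe a file with a local Whisper model and print SRT-style cues.
# Assumes the openai-whisper package (pip install openai-whisper); ffmpeg must
# be installed, since Whisper uses it to extract the audio track.
import whisper

def fmt(t: float) -> str:
    # Seconds -> SRT timestamp format HH:MM:SS,mmm.
    h, rem = divmod(int(t * 1000), 3_600_000)
    m, rem = divmod(rem, 60_000)
    s, ms = divmod(rem, 1000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

model = whisper.load_model("small")       # bigger models are slower but more accurate
result = model.transcribe("episode.mkv")  # placeholder file name

for i, seg in enumerate(result["segments"], start=1):
    print(f"{i}\n{fmt(seg['start'])} --> {fmt(seg['end'])}\n{seg['text'].strip()}\n")
```

Even when the words come out wrong, those segment timestamps are exactly what makes this a decent starting point for a human subtitle editor.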
Since VLC runs on just about everything, I'd imagine that the cloud service will be best for the many devices that just don't have the horsepower to run an LLM locally.
True. I guess they will require you to enter your own OpenAI/Anthropic/whatever API token, because there's no way they can afford to run that centrally. Hopefully you can point it to whatever server you like (such as a self-hosted Ollama or similar).
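For what it's worth, "point it at whatever server you like" is mostly already a convention: Ollama (and most self-hosted servers) expose an OpenAI-compatible endpoint, so a client only needs a configurable base URL. A rough sketch, where the model name and the cleanup prompt are hypothetical examples:

```python
# Sketch of "bring your own server": the openai client works against any
# OpenAI-compatible endpoint, including a self-hosted Ollama at /v1.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # self-hosted Ollama; swap for any provider
    api_key="ollama",                      # Ollama ignores the key, but the client requires one
)

resp = client.chat.completions.create(
    model="llama3.1",  # hypothetical; use whatever model the server has pulled
    messages=[
        {"role": "system",
         "content": "Fix obvious transcription errors in this subtitle; keep the timing line untouched."},
        {"role": "user",
         "content": "1\n00:00:01,000 --> 00:00:03,200\ntheir going to fix the subtitles"},
    ],
)
print(resp.choices[0].message.content)
```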
It's not just computing power - you don't always want your device burning massive amounts of battery.
Yeah, I've used local Whisper and LLMs to automatically summarize YouTube videos and podcasts to text, with good results.
Hopefully better than YouTube's, those are often pretty bad, especially for non-English videos.
They're awful for English videos too, IMO. For anyone with any kind of accent (read: literally anyone whose accent differs from the team that developed the auto-captioning), it makes egregious errors; it's exceptionally bad with Australian, New Zealand, English, Irish, Scottish, Southern US, and Northeastern US accents. In my experience "using" it, I find it nigh unusable.
I've been working on something similar-ish on and off.
There are three (good) solutions involving open-source models that I came across:
- KenLM/STT
- DeepSpeech
- Vosk
Vosk has the best models, but they are large. You can't use the gigaspeech model, for example (which is useful even with non-US English), to live-generate subs on many devices because of the memory requirements. So my guess is that whatever VLC provides will probably suck to an extent, because it will have to be fast and lightweight enough to run everywhere.
What also sets vosk-api apart is that you can ask it to provide multiple alternatives (10 is usually used).
One core idea in my tool is to combine all alternatives into one text. So suppose the model predicts text to be either "... still he ..." or "... silly ...". My tool can give you "... (still he|silly) ..." instead of 50/50 chancing it.
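In code, the combining step looks roughly like this: a simplified sketch that merges just the top two alternatives with difflib (model and file names are placeholders, and the real tool handles all 10):

```python
# Sketch of the "combine alternatives" idea on top of vosk-api.
import difflib
import json
import wave

from vosk import Model, KaldiRecognizer

def merge(a: str, b: str) -> str:
    # Keep words both alternatives agree on; where they diverge, emit "(a|b)".
    aw, bw = a.split(), b.split()
    out = []
    for op, i1, i2, j1, j2 in difflib.SequenceMatcher(a=aw, b=bw).get_opcodes():
        if op == "equal":
            out.extend(aw[i1:i2])
        else:  # the alternatives disagree here, so keep both readings
            out.append("({}|{})".format(" ".join(aw[i1:i2]), " ".join(bw[j1:j2])))
    return " ".join(out)

wf = wave.open("clip.wav", "rb")  # short 16-bit mono PCM clip (placeholder)
rec = KaldiRecognizer(Model("vosk-model-en-us-0.22-gigaspeech"), wf.getframerate())
rec.SetMaxAlternatives(10)

while data := wf.readframes(4000):
    rec.AcceptWaveform(data)

alts = json.loads(rec.FinalResult())["alternatives"]
print(merge(alts[0]["text"], alts[1]["text"]))  # e.g. "... (still he|silly) ..."
```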
I love that approach you’re taking! So many times, even in shows with official subs, they’re wrong because of homonyms and I’d really appreciate a hedged transcript.
They are terrible.
That would depend on the LLM and the data used to train it.
IIRC you can't use LLMs for this.
I didn't read the article, but I would have assumed that the AI was using predictive text to guess at the next word. Speech recognition is already pretty good, but it often misses contextual cues that an LLM would be good at spotting. Like, "The famous French impressionist painter mayonnaise..."
Probably something like https://github.com/openai/whisper, which isn’t an LLM but a different type of model dedicated to speech recognition.
That makes sense.