spaduf

joined 2 years ago
[–] [email protected] 0 points 5 months ago

Put simply, this is a place for criticism of the oppressive gendered expectations placed on men with a focus on intersectionality.

[–] [email protected] 1 points 5 months ago

My personal view is that Dansup has an instinct to keep things closed source until they're well past the MVP stage, but I think that's largely because he's learning these technologies as he goes and is afraid of criticism on that front. Personally, however, I don't think it's the case that he doesn't believe in the open source project. I also trust him when he says he's working towards a nonprofit to distribute the load. It seems his first step is to get funding via Kickstarter so he can quit his job, and I imagine organizing a nonprofit structure comes next.

 

But there aren't many current users over there to answer them. If you still have a Reddit account and are willing to help some folks out, please consider doing so!

 
[–] [email protected] 0 points 1 year ago (1 children)

Let's be honest. They certainly plan to, but first they're gonna see if saying "Apple Intelligence" a bunch is going to convince people they actually did something innovative.

 

I recently made a post looking for an RSS reader with some sort of intelligent content surfacing system, and have just stumbled upon exactly that. All analysis is local, and so far it seems to be quite good.

GitLab: https://gitlab.com/ondrejfoltyn/nunti
F-Droid: https://f-droid.org/en/packages/com.nunti/

1
submitted 1 year ago* (last edited 1 year ago) by [email protected] to c/[email protected]
 

Does anybody have any recommendations for FOSS RSS readers with actual content-surfacing features? So many RSS feeds are full of junk (this is a particular problem for feeds with wildly disparate posting frequencies), and I've always felt they'd be a lot more useful if people put more effort into modern ways of sorting through extremely dense feeds.

EDIT: Actually, I just stumbled on one for mobile. Check out Nunti on Android (https://gitlab.com/ondrejfoltyn/nunti)
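As a sketch of what surfacing across feeds with disparate posting frequencies could look like, here's a minimal toy scorer. This is not Nunti's actual algorithm; the `surface` function and its weights are invented for illustration. It down-weights items from high-volume feeds so a quiet feed's rare post can still rank near the top:

```python
# Toy content surfacing: weight each item by recency and by the inverse of
# its feed's volume, so busy feeds don't drown out infrequent ones.
from collections import Counter

def surface(items, top_n=5):
    """items: list of (feed_name, title) tuples, newest first."""
    per_feed = Counter(feed for feed, _ in items)
    scored = []
    for rank, (feed, title) in enumerate(items):
        recency = 1.0 / (1 + rank)       # newer items score higher
        rarity = 1.0 / per_feed[feed]    # busy feeds are down-weighted
        scored.append((recency * rarity, feed, title))
    scored.sort(reverse=True)
    return [(feed, title) for _, feed, title in scored[:top_n]]
```

With ten items from a busy feed and one older item from a quiet feed, the quiet feed's post still surfaces among the top results instead of being buried ten items deep.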

 

For folks who aren't sure how to interpret this, what we're looking at here is early work establishing an upper bound on the complexity of a problem a model can handle based on its size. Research like this is absolutely essential for determining whether these absurdly large models will actually, consistently achieve the results people have already ascribed to them. Previous work on monosemanticity and superposition is relevant here, particularly with regard to unpacking where and when these errors will occur.

I've been thinking about this a lot with regard to how poorly defined the output space they're trying to achieve is. Currently we're trying to encode one or more human languages, logical/spatial reasoning (particularly for multimodal models), a variety of writing styles, and some set of arbitrary facts (to say nothing of the nuance associated with those facts). Just by making an informal order-of-magnitude argument, I think we can quickly determine that a lot of the supposed capabilities of these massive models have strict theoretical limits on their correctness.
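The informal order-of-magnitude argument can be made concrete with a toy calculation. Everything here is an assumption for illustration (the usable capacity per parameter, the bits per "fact", the hypothetical model size); it shows the shape of the argument, not real numbers:

```python
# Toy capacity estimate. All values are illustrative assumptions, not
# measured: real capacity per parameter and bits per fact depend heavily
# on architecture and training.

BITS_PER_PARAM = 2  # assumed usable information capacity per parameter

def facts_storable(n_params, bits_per_fact=100):
    """Rough upper bound on distinct facts a model could encode,
    under the assumptions above."""
    return n_params * BITS_PER_PARAM // bits_per_fact

# A hypothetical 70B-parameter model under these assumptions:
print(facts_storable(70_000_000_000))  # 1400000000, i.e. ~1.4 billion
# "facts" -- and that's before spending any capacity on grammar, style,
# reasoning, or multiple languages.
```

Once you start subtracting the capacity consumed by everything that isn't fact storage, the ceiling on consistent correctness drops fast, which is the point of the argument.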

This should, however, give one hope for more specialized models. Nearly every one of the above-mentioned "skills" is small enough to fit into our largest models with absolute correctness. Things only get tough when you fail to clearly define your output space and focus your training so as to maximize the encoding efficiency for a given number of parameters.
