Unofficial ChatGPT community to discuss anything ChatGPT
submitted 4 days ago* (last edited 4 days ago) by [email protected] to c/[email protected]

ChatGPT can be used without logging in. But there's a catch: you can't opt out of data training.

I hadn't heard about this anywhere. OpenAI may have released it quietly.


There is no longer an option to use ChatGPT without an ID for me. Is anyone else having the same problem?


Check out our open-source, language-agnostic mutation testing tool using LLM agents here:

Mutation testing is a way to verify the effectiveness of your test cases. It involves creating small changes, or “mutants,” in the code and checking if the test cases can catch these changes. Unlike line coverage, which only tells you how much of the code has been executed, mutation testing tells you how well it’s been tested. We all know line coverage is BS.
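To make the idea concrete, here is a minimal, hypothetical sketch (not Mutahunter's own code) of what a mutant is and why line coverage alone misses it:

```python
# Original function under test
def is_adult(age):
    return age >= 18

# A "mutant": the boundary operator is flipped (>= becomes >).
# A mutation testing tool generates variants like this automatically.
def is_adult_mutant(age):
    return age > 18

# A weak test executes every line (100% line coverage)
# yet cannot tell the mutant from the original:
assert is_adult(30) == is_adult_mutant(30)  # both True: the mutant survives

# A stronger test probes the boundary and "kills" the mutant:
assert is_adult(18) is True
assert is_adult_mutant(18) is False  # mutant caught
```

A surviving mutant means your tests would not have noticed that bug; the ratio of killed mutants is a far stronger quality signal than lines executed.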

That’s where Mutahunter comes in. We leverage LLMs to inject context-aware faults into your codebase. As the first AI-based mutation testing tool, our approach builds a full contextual understanding of the codebase by using the AST, enabling it to identify and inject mutations that closely resemble real vulnerabilities. This ensures comprehensive and effective testing, significantly enhancing software security and quality. We also use LiteLLM, so we support all major self-hosted LLM models.

We’ve added examples for JavaScript, Python, and Go (see /examples). It can theoretically work with any programming language that provides a coverage report in Cobertura XML format (more supported soon) and has a language grammar available in TreeSitter.
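For readers unfamiliar with the format: Cobertura XML is a simple, tool-neutral coverage report that most ecosystems can emit (e.g. coverage.py, JaCoCo, gcovr). A rough sketch of its shape, using a hand-written simplified report and only the Python standard library:

```python
import xml.etree.ElementTree as ET

# Hand-written, simplified example of a Cobertura-style report
# (real reports carry more attributes, e.g. branch-rate and timestamps).
report = """<?xml version="1.0"?>
<coverage line-rate="0.5">
  <packages>
    <package name="example">
      <classes>
        <class name="calc" filename="calc.py">
          <lines>
            <line number="1" hits="3"/>
            <line number="2" hits="0"/>
          </lines>
        </class>
      </classes>
    </package>
  </packages>
</coverage>"""

root = ET.fromstring(report)
covered = [l.get("number") for l in root.iter("line") if int(l.get("hits")) > 0]
print(covered)  # line numbers with at least one hit
```

Because the report only names files and hit counts per line, a tool consuming it stays language-agnostic: the coverage data tells it which lines are worth mutating, regardless of the language that produced them.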

Here’s a YouTube video with an in-depth explanation:

Here’s our blog with more details:

Check it out and let us know what you think! We’re excited to get feedback from the community and help developers everywhere improve their code quality.


Over the weekend (this past Saturday specifically), GPT-4o seems to have gone from capable and fairly permissive for creative writing to being unable to generate basically anything due to alleged content policy violations. It'll just say "can't assist with that" or "can't continue." But 80% of the time, if you regenerate the response, it'll happily continue on its way.

It's like someone updated some policy configuration over the weekend and accidentally put an extra 0 in a field for censorship.

GPT-4 and GPT-3.5 seem unaffected by this, which makes it even weirder. Switching to GPT-4 avoids all the issues 4o is having.

I noticed this happening literally in the middle of generating text.

See also:


Small rant: basically, the title. Instead of answering every question, if it said it doesn't know the answer, it would be trustworthy.


Company website:

submitted 1 month ago* (last edited 1 month ago) by [email protected] to c/[email protected]

Has anyone else noticed this kind of thing? This is new for me:

                'tile': litte,
                're': ore,
                't_summary': put_summary,
                'urll': til_url

"povies" is an attempt at "movies", and "tile" and "litte" are both attempts at "title". And so on. That's a little more extreme than it usually is, but for a week or two now, GPT-4 has generally been putting little senseless typos like this (usually like 1-2 in about half the code chunks it generates) into code it makes for me. Has anyone else seen this? Any explanation / way to make it stop doing this?


For people who don't much like the new font on the ChatGPT website: which font would you prefer instead (other than Inter, Roboto, DM Sans)? I noticed that some people dislike the current font, so I decided to add custom font options to the web extension that already customizes the ChatGPT UI. It would be really helpful to know more specifically what would be more pleasant for your eyes to read.

PS. If you're into customizing, the extension is GPThemes (there are Firefox desktop and Android versions too).


As the title says, I updated the bot a day or so ago, so you can chat with it in the comments. It should now also support context, meaning it knows the whole comment chain. And you don't have to tag it if you're replying to it.


It's so frustrating.

Even very basic things like "summarize this video transcript" on GPTs built specifically for that purpose.

Firstly, it cannot even read text files anymore. It straight up says it "cannot access documents". No idea why; sometimes it acts like it can, but it becomes obvious it's hallucinating or only read part of the document.

So ok, paste the text in. GPT will start giving you a detailed summary, then just skip over like 40 fucking percent of the middle and resume summarizing at the end.

I mean honestly, I'm hardly asking it to do complex shit.

I have absolutely no idea what led to this decline, but it's become so bad it is hardly even worth messing with it anymore. Such an absolute shame.
