Very basic and non-creative source code operations, e.g. "convert this representation of data to that representation of data based on the template".
Asklemmy
A loosely moderated place to ask open-ended questions
Taking a natural language question and providing a foothold on a subject by giving you the vocabulary so that you can research a topic on your own.
"What is it called when xyz."
Brainstorming. ChatGPT and co. are slightly better rubber ducks, which helps me sort my thoughts and evaluate ideas.
Also when researching a new topic I barely know anything about, it helps to get useful pointers and keywords for further research and reading. It's like an interactive Wikipedia in that regard.
I find it's really good for asking extremely specific code questions
I use it pretty sparingly, and for stuff that's simple but where googling would get me a whole ten-page essay filled with ads.
For example, here are some of my recent searches; the span is like ~2 months back.
- XXL and 2X being the same thing.
- Rainbow Minecraft MOTD server text.
- Script to find all the schematic files in my ~30 nested files and copy them to a new folder
- how to get cat hair out of clothes
- some legal supreme court thing from the 1800s
- creative commons CC-BY-SA explanation (their website didn't explain what the abbreviation of "BY" was)
- how to unlock my grandad's truck with the code
- "can ghosts be black" (????? I think I was in a silly argument on discord)
- how to read blood pressure
- scene from movie where I forgot the movie
- how to draw among us in desmos calculator as a joke
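The schematic-copying item above is the kind of small script these tools produce reliably. A minimal sketch of what such a script might look like; the `.schematic` extension and the function name are assumptions, not from the original:

```python
import shutil
from pathlib import Path

def collect_files(root: str, ext: str, dest: str) -> int:
    """Recursively find files ending in `ext` under `root` and copy them to `dest`.

    Note: files with the same name clobber each other in `dest`.
    """
    dest_dir = Path(dest)
    dest_dir.mkdir(parents=True, exist_ok=True)
    count = 0
    for path in Path(root).rglob(f"*{ext}"):
        shutil.copy2(path, dest_dir / path.name)
        count += 1
    return count
```

Worth sanity-checking on a copy of the folder first, since same-named files overwrite each other in the destination.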
Just rewrote my corporate IT policies. I fed it all the old policies and a huge essay of criteria, styles, business goals etc., then created a bunch of new policies. I have ChatGPT interview me about the new policies; I don't trust what it outputs until I review it in detail, and I ask it things like:
What do other similar themed policies have that I don't? How is the policy going to be hard to enforce? What are my obligations annually, quarterly and so on?
What forms should I have in place to capture information (e.g. consultant onboarding)?
I can do it all myself, but it would be slower and more likely to have consistency issues and grammatical errors.
I find they're pretty good at some coding tasks. For example, it's very easy to make a reasonable UI given a sample JSON payload you might get from an endpoint. They're good at stuff like crafting fairly complex SQL queries or making shell scripts. As long as the task is reasonably focused, they tend to get it right a lot of the time. I find they're also useful for discovering language features when working with languages I'm not as familiar with.

I also find LLMs are great at translation and transcribing images. They're useful for summaries and finding information within documents, including codebases. I've found it makes it a lot easier to search through papers where you might want to find relationships between concepts or definitions for things. They're also good at subtitle generation as well as text-to-speech tasks.

Another task I find they're great at is proofreading and providing suggestions for phrasing. They can also make a good sounding board. If there's a topic you understand and you just want to bounce ideas off something, it's great to be able to talk through it with an LLM. Often the output it produces can stimulate a new idea in my head. I also use LLMs as a tutor when I practice Chinese; they're great for free-form conversational practice when learning a new language.

These are just a few areas I use LLMs in on a nearly daily basis now.
I use LLMs to generate unit tests, among other things that are pretty much already described here. It helps me discover edge cases I haven't considered before, regardless if the generated unit tests themselves pass correctly or not.
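The value there is mostly in the edge cases the generated tests enumerate, whether or not they all pass. As an illustration (the function and the cases here are hypothetical, not from the comment), asking for tests for a simple slug function typically yields something like:

```python
import re

def slugify(text: str) -> str:
    """Lowercase, replace runs of non-alphanumerics with '-', trim dashes."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")

def test_slugify():
    # The kind of edge cases an LLM tends to enumerate unprompted:
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("") == ""         # empty input
    assert slugify("---") == ""      # separators only
    assert slugify("a--b") == "a-b"  # repeated separators collapse
```

Even when a generated case fails, it often points at behavior you hadn't decided on yet.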
Oh yeah, that's a good use case as well; it's the kind of low-risk, tedious task these things excel at.
I have it make me Excel formulas that I know are possible but can't remember the names or makeup of. Afterwards I always ask "what's a better way to display this data?" and I sometimes get a good response. For data security reasons I don't give it any real data, but we have an internal one I can use for such things, and I sometimes throw spreadsheets in for random queries that I can make in plain language.
A fringe case where I've found ChatGPT very useful is learning more about information that is plentiful but buried in dead threads on various old-school web forums, and thus very hard to Google. Like other people's experiences with homebrewing. Then I ask it for sources, and most often it's accurate to the claims of other homebrewers, which themselves can be more or less correct.
I use it to help me come up with better wording for things. A few examples:
- Writing annual goals for my team. I had an outline of what I wanted my goals to be, but wanted well-written detail about what it looks like to meet or exceed expectations on each goal, and to create some variations based on a couple of different job types.
- Brainstorming interview questions. I can use the job description and other information to come up with a starting list of questions, and then challenge the LLM to describe how each question is useful. I rarely use the results as-is, but it helps me think through my interview plan better than just using a list of generic questions.
- Converting a stream-of-thought bullet list into a well-written communication.
I ask it increasingly absurd riddles and laugh when it hallucinates and tells me something even more absurd.
I use it to help me learn German, but only for explaining things like grammatical rules, concepts, or word uses.
Do not ask it to translate or write something for you. It will make lots of grammatical mistakes. I find that it often misgenders nouns or uses the wrong case for them in a sentence.
Philosophy.
Ask it to act as Socrates, pick a topic and it will help you with introspection.
This is good for examining your biases.
e.g. I want to examine the role of government employees.
e.g. when is it ok to give up on an idea?
kill time
I use it to review my meeting notes.
- "Based on the following daily notes, what should I follow-up on in my next meeting with #SomeTeamTag?"
- "Based on the following daily notes, what has the #SomeTeamTag accomplished the past month?"
- etc.
I'm not counting on it to catch everything, but it jogs my memory, it often pulls out things I completely forgot about, and it lets me get away with being super lazy. Whoops, five minutes before a meeting I forgot about? Suddenly I can follow up on things that were talked about last meeting. Or, for sprint retrospectives, give feedback that's accurate.
To add: I've also started using AI to "talk to podcast guests." You can use Whisper to transcribe a podcast, then give the transcript to AI to ask questions. I find the Modern Wisdom Podcast is great for this.
I record meetings of my building's board of management, nothing secret there, very mundane. I run it through Whisper and give the transcript to ChatGPT. It condenses everything into accurate minutes, resolutions and action items. Saves me a shit ton of work, finished in seconds. I'm never going back!
While this is something LLMs are decent at, I feel this is only of value if your notes are unstructured, and it presents infosec concerns.
I guess my notes are unstructured, as in they're what I type as I'm in the meeting. I'm a "more is better" sort of note taker, so it's definitely faster to let AI pull things out.
Infosec ... I guess people will have to evaluate that for themselves. Certainly, for my use case there's no concern.
Website building
Would you mind expanding on this? How do you use the LLM to aid in building websites?
Copying some HTML and CSS code into the LLM and saying "change it to make it do xxxxxxx"
It has helped with some simple javascript bookmarklets
Translation and summarisation of text. Though, I do double check.
Also, getting an initial draft for some mails or rephrasing mails that I want to make more formal+concise.
They help me make better searches. I use ChatGPT to get a good idea of what would be better to search for based on my inquiry. It tells me what I'm looking for, and then I just use a search engine based on that.
Also, it taught me some Python and Apps Script. Currently learning and testing its capabilities at teaching JavaScript. And yes, I test out everything it gives me. It's best to have it output small blocks of code and splice them together. Hoping for the best, and then, 3 years later, finally create an app lol, because that part is on me. Still working on an organization app. About 80 percent accurate at following complete directions in this case.
As a developer, I use LLMs as a sort of search engine: I ask things like how to use a certain function or how to fix a build error. I try to avoid asking for code, because often the generated code doesn't work or uses made-up or deprecated functions.
As a teacher, I use them to generate data for exercises; they're especially useful for populating databases and generating text files in a certain format that need to be parsed. I tried asking for ideas for new exercises, but they always suck.
I am not using it for this purpose, but churning out large amounts of text that doesn't need to be accurate is proving to be a good fit for:
- scammers, who can now write more personalized emails and also hold conversations
- personality tests
- horoscopes or predictions (there are several examples, even on serious outlets, of "AI predicts how the world will end" or similar)
Due to how good LLMs are at predicting an expected pattern of response, they are a spectacularly bad idea (but are obviously used anyway) for:
- substitute for therapy
- virtual friends/girlfriend/boyfriend
The reason they are such a bad idea for these use cases is that fragile people with self-destructive patterns do NOT need those patterns to be predicted and validated by an LLM.
Have they given you anything creative that was good? I also used it to make a meal plan and a work schedule as an Excel doc; then it just needed a few edits.
Would you say you are good at creating a meal plan or a work schedule by yourself, with no AI? I suspect if you know what a good meal plan looks like to you and you are able to visualize the end result you want, then genAI can speed up the process for you.
I am not good at creative tasks. My attempts to use genAI to create an image for a PowerPoint were not great. I am wondering if the two things are related, and I'm not getting good results because I don't have a clear mental picture of what the end result should be, so my descriptions of it are bad.
In my case, I wanted an office worker who was juggling a specific set of objects that were related to my deck. After a couple of attempts at refining my prompt, Dall-E produced a good result, except that it had decided that the office worker had to have a clown face, with the make-up and the red nose.
From there it went downhill. I tried "yes, like this, but remove the clown makeup" or "please lose the clown face" or "for the love of Cthulhu, I beg you, no more clowns" but nothing worked.
I once asked ChatGPT how it (AI) works. It gave me the tools needed to get the right results. There were books on prompt engineering free online, but I decided after reading them that it was easier to have AI teach me to use AI better. That's for LLMs. Image generation, on the other hand, takes persistence and patience. If the prompt is too complicated, it will do its own thing. If it is too simple, it will do its own thing. After a lot of practice getting to know how it outputs images, you will find the right, or close, results. Emphasis on close. Leonardo.ai is my favorite.
Edit: if you don't believe you are creative enough, prompt the LLM for ideas. Ask it to make the prompt. They are finicky.
I think using LLMs with RAG (retrieval-augmented generation, e.g. via tools) is more useful and reliable than relying only on the training data that the model does its best to represent.
For example: using a search engine to find results for a query, downloading the first 10 results as text, and then having the LLM answer subsequent queries about those sources. Another example would be uploading a document and having the LLM answer queries about its contents.
This is also advantageous because much smaller and quicker models can be used while still producing accurate results (often with citations to the source).
This can even be self hosted with Open WebUI/ollama.
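A minimal sketch of the prompt-assembly half of that pipeline; retrieval and the model call are left out, and the instruction wording is only an assumption:

```python
def build_rag_prompt(question: str, documents: list[str]) -> str:
    """Inline retrieved sources into the prompt, numbered so the model can cite them."""
    sources = "\n\n".join(
        f"[{i}] {doc}" for i, doc in enumerate(documents, start=1)
    )
    return (
        "Answer the question using only the sources below. "
        "Cite sources by number.\n\n"
        f"Sources:\n{sources}\n\nQuestion: {question}"
    )
```

Numbering the sources is what lets even small models attach citations to their answers.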
When I'm in a hurry, I use them for:
- longer, more complex Excel formulas
- PowerShell scripts to manipulate large CSV files
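That kind of large-CSV manipulation is a few lines in most languages. A Python sketch for illustration (the column name and value are placeholders), streaming row by row so the whole file never sits in memory:

```python
import csv

def filter_csv(src: str, dest: str, column: str, value: str) -> int:
    """Stream a large CSV, keeping only rows where `column` equals `value`."""
    with open(src, newline="") as fin, open(dest, "w", newline="") as fout:
        reader = csv.DictReader(fin)
        writer = csv.DictWriter(fout, fieldnames=reader.fieldnames)
        writer.writeheader()
        kept = 0
        for row in reader:
            if row[column] == value:
                writer.writerow(row)
                kept += 1
    return kept
```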
I used it to teach me Apps Script and it was 90 percent accurate.
Database queries, especially OpenSearch/ElasticSearch
I haven't really used any in a serious manner. I did install DeepSeek on my PC to try out. It's pretty fun to play with, but still seems to have issues. For example, I was using it to create bread recipes and fine-tune proportions to get the exact amount of dough I need. I found that its math was way off, and it would give me wildly different results even when asked the same question with the same requirements.
I'm not sure how ChatGPT compares, as I don't have access to it and I'm not really willing to pay for it.
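As an aside, the proportion math the model kept fumbling is deterministic and easy to do outside the LLM. A sketch using baker's percentages, where hydration and salt are fractions of flour weight; the numbers in the usage are made up:

```python
def scale_dough(target_weight_g: float, hydration: float, salt_pct: float = 0.02) -> dict:
    """Scale a flour/water/salt dough to a target total weight.

    `hydration` and `salt_pct` are baker's percentages (fractions of flour weight).
    """
    flour = target_weight_g / (1 + hydration + salt_pct)
    return {
        "flour_g": round(flour, 1),
        "water_g": round(flour * hydration, 1),
        "salt_g": round(flour * salt_pct, 1),
    }
```

For example, `scale_dough(1000, 0.70)` splits a 1 kg dough at 70% hydration into flour, water, and salt weights that sum back to the target.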
Yes, well that happens if you use very small models. It does get better with more parameters, meaning it gets more consistent. How valuable the advice is, well, you can judge for yourself.
Honestly, my favorite use case for ChatGPT is as an internet search engine. Google has become so shitty that I outsource its main job lol. I just tell ChatGPT to send me reputable sources as links for the query, and I skip the bullshit.
It's also not a bad way to generate SEO-friendly descriptions for eBay listings, if you have a lot to list and are lazy. You can move a lot faster and get better results than using the default AI that site has. It would be ideal to personally write everything and be an SEO expert, but you're mostly guiding people to see the photos and just need the metadata perks of the jargon.
I use it for coding templates, like "build a basic MVC CRUD," then I'll fill in the blanks.
None of the models are very good at the whole picture, but they save me time. I've tried to do more, but it just makes up libraries that don't exist.
It's really good at statistics, but you need to know enough statistics to know what to ask. Just today I needed to write a PyStan script for doing some MCMC, and it helped me write it, structure the data, and understand the results of the experiment. Then it confirmed my suspicion that the chosen model was not very good for my data, and tomorrow I'm trying another probability distribution.
I was surprised how effective it was for getting a checklist of things I should do to get a car that hasn't been running for 30 years back on the road, and for getting instructions for each step and things I should keep in mind.
Outside of that it's become a Google replacement for software development questions
You do kinda have to know about the things you ask it about, so you can spot when it's bullshitting you.
Writing emails, brainstorming