LLM use in the field
At first I was quite against LLMs for a variety of reasons, especially as they are being shoved down our throats in every new and existing product. But after diving in and giving them an honest look, I've come away with a slightly different opinion.
First and foremost, I really dislike cloud-based LLM solutions. You have no control over the quality of the model, nor do you know what the provider will do with your data. It's also quite expensive: you pay either with your information or with money per X amount of tokens. Most of these models are built on stolen data, with no respect for licenses, robots.txt, or privacy, and their crawlers effectively DDoS websites in the process.
I'm also of the opinion that you really don't want to be using LLMs for any serious programming. Not just because of the concern that they're trained on copyrighted code, but also because even the largest models (GPT-5 High, Qwen 3 VL 235B, etc.) lack specialist knowledge.
For example, ask one to explain the C99 padding and alignment rules for structs. It consistently fails to mention that if the first member of a struct is another struct, alignment is guaranteed: a struct may contain unnamed padding, but never at its beginning. It also can't accurately tell you in which section of the ISO/IEC 9899:1999(E) spec this is defined.
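To make the rule concrete, here's a minimal C sketch of my own (not model output); 6.7.2.1 is where I find the guarantee in the spec:

    #include <stddef.h>
    #include <stdio.h>

    struct inner { double d; char c; };

    struct outer {
        struct inner first; /* initial member: no padding may precede it */
        char tail;
    };

    int main(void) {
        /* ISO/IEC 9899:1999, 6.7.2.1: a pointer to a structure object,
           suitably converted, points to its initial member. So the inner
           struct sits at offset 0 and keeps its own alignment. */
        printf("offsetof = %zu\n", offsetof(struct outer, first)); /* 0 */
        struct outer o;
        printf("same address: %d\n", (void *)&o == (void *)&o.first); /* 1 */
        return 0;
    }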
Another example is asking it to generate a basic lox tree-walk interpreter. It's a well-understood problem with a variety of implementations available in the training data. Yet it gets too many things wrong for the code to be usable, even after many rounds of refinement and coaxing it in the right direction.
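For context, the core of a tree-walk interpreter is nothing exotic: you evaluate the AST by recursion. A toy sketch in C, handling only arithmetic expressions and nowhere near a full lox implementation:

    #include <stdio.h>
    #include <stdlib.h>

    /* A minimal expression AST: number literals and binary operations. */
    typedef enum { LIT, BIN } Kind;

    typedef struct Expr {
        Kind kind;
        double value;            /* LIT */
        char op;                 /* BIN: '+', '-', '*' or '/' */
        struct Expr *lhs, *rhs;  /* BIN */
    } Expr;

    /* The "tree walk": evaluate the children, then apply the operator. */
    static double eval(const Expr *e) {
        if (e->kind == LIT) return e->value;
        double l = eval(e->lhs), r = eval(e->rhs);
        switch (e->op) {
            case '+': return l + r;
            case '-': return l - r;
            case '*': return l * r;
            default:  return l / r;
        }
    }

    static Expr *lit(double v) {
        Expr *e = malloc(sizeof *e);
        e->kind = LIT; e->value = v;
        return e;
    }

    static Expr *bin(char op, Expr *l, Expr *r) {
        Expr *e = malloc(sizeof *e);
        e->kind = BIN; e->op = op; e->lhs = l; e->rhs = r;
        return e;
    }

    int main(void) {
        /* (1 + 2) * 4; leaking the nodes is fine for a demo this size. */
        Expr *tree = bin('*', bin('+', lit(1), lit(2)), lit(4));
        printf("%g\n", eval(tree)); /* prints 12 */
        return 0;
    }

A real interpreter adds a scanner, a parser, statements, scoping and so on; the point is only that the evaluation strategy itself is this straightforward, which makes the models' failures at it all the more telling.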
At work we have a setup where most colleagues use GitHub Copilot (GPT-4.5) to generate code, then push it to GitHub, where an LLM step in the CI/CD pipeline reviews the PR and requests changes. You then refine the code with more prompting, pushing, and reviewing until the code is "good enough".
As you can imagine, I'm personally not a big fan of this, and I keep pestering my colleagues to review the code I write myself, in person. I'm an artisan: I take pride in the code I write by hand and in the knowledge I've obtained through self-study, without such aids.
What LLMs are good at, however, is mathematics and physics. My younger brother, who has a master's in both mathematics and theoretical physics from the RUG, told me how scarily good they are at these topics, especially the Chinese models. I've also heard from friends that they're good at finding working coupons.
Personally, I've really enjoyed running Rei-V3-KTO (a Mistral Nemo 12B finetune) and Rei-24B-KTO (a Mistral Small 3.2 24B finetune) locally through Koboldcpp. They're nice for writing make-believe adventures, if you're fine with run-of-the-mill light novel stories.
Another case where they work well enough is summarization, especially with vision-capable models. Just make sure you've read the text in full and understood its contents before letting an LLM summarize it.
All in all, I believe running models locally is a great way to use LLMs, as long as you stick to simple, well-defined tasks. Code generation is too open-ended for them to handle properly, but converting one assembly dialect to another, or XML to JSON, is simple enough.
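To give an idea of the kind of mechanical mapping I mean (the @-prefix for attributes is just one common convention, not any standard):

    <user id="7"><name>Ada</name></user>

    becomes

    {"user": {"@id": "7", "name": "Ada"}}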