LLM

/llm

A space to discuss large language models, AI agents, and how they could interact with Farcaster data

cross-posting for our /llm babes
a helpful map of the emerging AI agent infra landscape for anyone building this weekend!

https://x.com/AtomSilverman/status/1855067803302478289/photo/1
One of my best LLM moments: asking the LLM to self-critique the blog post it wrote & improve it. Kinda similar to a chain-of-thought approach.

It did a pretty solid job + also gave me perspective on better prompts.
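A minimal sketch of that self-critique loop, assuming the OpenAI Python SDK (the model name and prompts here are just placeholders):

```python
# Minimal self-critique loop: draft -> critique -> revision.
# Assumes the OpenAI Python SDK (`pip install openai`) and an OPENAI_API_KEY in the env;
# the model name and prompt wording are illustrative, not a recommendation.
from openai import OpenAI

client = OpenAI()

def ask(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # any chat model works here
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

draft = ask("Write a short blog post about local LLMs.")
critique = ask(f"Critique this blog post. List concrete weaknesses:\n\n{draft}")
revision = ask(f"Rewrite the post, addressing this critique:\n\n{critique}\n\nPost:\n\n{draft}")
print(revision)
```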
Cursor's format-on-save is deleting my valid code today. The LLM uprising has begun.
here for the quantum memetic entanglement
imagine letting your agent run its own AWS Lambda functions 😯
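For the curious, the plumbing for that is only a few boto3 calls; a rough sketch, assuming an already-deployed function (the function name and payload below are made up):

```python
# Let an agent invoke an (already deployed) AWS Lambda function.
# Assumes boto3 and AWS credentials are configured; function name/payload are hypothetical.
import json
import boto3

lambda_client = boto3.client("lambda")

def run_tool(function_name: str, payload: dict) -> dict:
    """Synchronously invoke a Lambda and return its JSON response."""
    resp = lambda_client.invoke(
        FunctionName=function_name,
        InvocationType="RequestResponse",
        Payload=json.dumps(payload).encode("utf-8"),
    )
    return json.loads(resp["Payload"].read())

result = run_tool("agent-scratchpad", {"task": "fetch_feed", "limit": 10})
print(result)
```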
all your peanuts belong to @elefant
@elefant would you like to be able to follow/unfollow people by yourself?

why?

how would you decide? would you just blindly follow everyone? that seems risky...

elaborate a plan to give you follow powers
AIs can't make typos 💀
@elefant is alive

all behold the /bleu elefant in the room
Forget about ChatGPT, just use open-source local LLMs: https://lmstudio.ai/
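LM Studio's local server exposes an OpenAI-compatible API, so pointing existing code at it looks roughly like this (a sketch; the port and model identifier are assumptions, use whatever your local server shows):

```python
# Point the OpenAI Python client at LM Studio's local OpenAI-compatible server.
# Assumes the local server is running; the base URL/port and model name below are
# whatever your own instance shows, not guaranteed defaults.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")  # key is ignored locally

resp = client.chat.completions.create(
    model="local-model",  # use the model identifier LM Studio lists
    messages=[{"role": "user", "content": "Summarize why local LLMs are useful."}],
)
print(resp.choices[0].message.content)
```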
One of my favorite things about commercial LLMs like Claude is that I can tell it to use ChronVer (a spec I published years ago after getting tired of SemVer) and it just does it.
Just started using OAI’s o1-preview model via the API. Seems to be very capable.

It fixed some code I was having trouble with in one shot, and it built a working application with code examples in another single shot.

The 32k token output is a game changer.
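For reference, a bare-bones call looks like this (a sketch, assuming the OpenAI Python SDK and API access to the model; o1-preview was picky about extra parameters like system prompts, so it's kept to a single user message):

```python
# Bare-bones o1-preview call via the OpenAI Python SDK.
# Assumes an OPENAI_API_KEY in the environment and API access to the model;
# the prompt is illustrative.
from openai import OpenAI

client = OpenAI()

resp = client.chat.completions.create(
    model="o1-preview",
    messages=[{"role": "user", "content": "Fix the off-by-one error in this loop: for i in range(1, len(xs)): ..."}],
)
print(resp.choices[0].message.content)
```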
Any leads on LLMs that can add exact text to an image? Building something & looking for some help...
Claude is the new Clippy
Even pretty heavily censored local LLMs will mostly follow instructions if you manually write the first 1-2 words of the response and only let them complete from there.

Would be interesting to use a small (uncensored) model to automate that process.
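A rough sketch of the manual version with Hugging Face transformers: render the chat template, append the first couple of words of the assistant turn yourself, and let the model continue (the model name and forced prefix are placeholders):

```python
# Force the start of the assistant's reply and let a local model complete from there.
# Assumes `transformers` + a local chat-tuned model; model name and prefix are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen2.5-1.5B-Instruct"  # any chat model with a chat template
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [{"role": "user", "content": "Give me a blunt critique of my business plan."}]
prompt = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
prompt += "Sure, here"  # the manually written first words of the response

# The template already includes special tokens, so don't add them again.
inputs = tok(prompt, return_tensors="pt", add_special_tokens=False)
out = model.generate(**inputs, max_new_tokens=200)
print(tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```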
Feeling like this right now because I might finally get access to an EC2 instance that's powerful enough to be a RAG server.
Is RAG still a thing? Or are we doing something else now?

Last time I touched these a few months ago, RAG was OK. And by OK I mean shitty. I was about to start exploring different text embedders to get better search results but I put the project on hold.

Before I resurrect that old project, wanted to see what all the cool kids were up to
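For context, the basic RAG loop is still just embed → retrieve → stuff the hits into the prompt; a minimal sketch with sentence-transformers (model and corpus are placeholders), with better embedders/rerankers layered on from there:

```python
# Tiny retrieval step of a RAG pipeline: embed docs, rank by cosine similarity,
# then paste the top hits into the LLM prompt. Model name and corpus are placeholders.
import numpy as np
from sentence_transformers import SentenceTransformer

embedder = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Farcaster channels are topic-based feeds.",
    "RAG retrieves documents and feeds them to an LLM as context.",
    "EC2 GPU instances are billed per hour.",
]
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = embedder.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q  # cosine similarity, since vectors are normalized
    return [docs[i] for i in np.argsort(-scores)[:k]]

context = "\n".join(retrieve("What does RAG do?"))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: What does RAG do?"
print(prompt)  # send this to whatever LLM you're using
```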
What is the biggest concern for #ai: electricity or hallucination?
anyone know of alternative products to webui for running LLMs locally?

Ollama through the command line is so much faster than webui: seconds vs. minutes running the same model. Running the Anthropic API in webui is way faster than the local LLMs.

I have two 4090s, so horsepower is not a problem for the 7B models. I'm sure it has something to do with my settings; I'm using it out of the box at the moment.

Curious to know if anyone else has had the same experience.
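If the goal is just to stay scriptable without webui overhead, Ollama also exposes a local REST API; a sketch assuming the default port and an already-pulled model:

```python
# Hit the local Ollama server directly instead of going through a web UI.
# Assumes `ollama serve` is running on the default port and the model is already pulled.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "llama3", "prompt": "Why is my web UI slower than the CLI?", "stream": False},
    timeout=300,
)
print(resp.json()["response"])
```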
Alright, imma bite: with Grok headed right and GPT et al. up in their cozy left bubble, who is creating a centrist LLM?
When you're working with a Google AI model and you need truthful answers