LLM
/llm1924
A space to discuss large language models, AI agents, and how they could interact with Farcaster data
based if true
Helpful emerging AI agent infra landscape map for anyone building this weekend!
https://x.com/AtomSilverman/status/1855067803302478289/photo/1
One of my best LLM moments: asking the LLM to self-critique the blog post it wrote & improve it. Kinda similar to a chain-of-thought approach.
Did a pretty solid job + also gave me perspective on better prompts.
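The draft-critique-rewrite loop described above can be sketched in a few lines. This is a minimal sketch, not anyone's actual code: `generate()` is a hypothetical stand-in for whatever LLM call you use (OpenAI, Anthropic, a local model, etc.), stubbed out here so the example is self-contained.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM call; stubbed so the sketch runs."""
    return f"[model output for: {prompt[:40]}...]"

def write_with_self_critique(topic: str, rounds: int = 2) -> str:
    """Draft a post, then alternate critique and rewrite passes."""
    draft = generate(f"Write a blog post about {topic}.")
    for _ in range(rounds):
        # Ask the model to find concrete weaknesses in its own output...
        critique = generate(f"Critique this blog post. List concrete weaknesses:\n\n{draft}")
        # ...then rewrite the draft against that critique.
        draft = generate(f"Rewrite the post, fixing these issues:\n\n{critique}\n\nPost:\n\n{draft}")
    return draft
```

A nice side effect of keeping the critique as its own step is that you can read it yourself and mine it for better prompts.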
Cursor format on save is deleting my valid code today. The LLM uprising has begun
here for the quantum memetic entanglement
imagine letting ur agent run their own aws lambda functions 😯
ai's can't make typos 💀
ok banger
Forget about ChatGPT, just use open source local LLMs https://lmstudio.ai/
One of my favorite things about commercial LLMs like Claude is that I can tell it to use ChronVer (a spec I published years ago after getting tired of SemVer) and it just does it.
Just started using OAI’s o1-preview model via the API. Seems to be very capable.
It fixed some code I was having trouble with in one shot. And built a working application with code examples in another one shot.
The 32k token output is a game changer.
Any leads on LLMs that can add the exact text to an image? Building something & looking for some help...
Claude is the new Clippy
Even pretty censored local LLMs will mostly follow instructions if you manually write the first 1-2 words of the response and only let them complete from there.
Would be interesting to use a small (uncensored) model to automate that process.
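The trick above is usually called response prefill: you end the prompt inside the assistant turn, so a raw-completion endpoint can only continue from your forced words. A minimal sketch of the prompt construction, assuming a ChatML-style template purely as an example (real chat templates vary by model):

```python
def build_prefilled_prompt(system: str, user: str, forced_prefix: str) -> str:
    """Format a ChatML-style prompt and seed the assistant turn with a
    forced prefix, so the model continues from those words instead of
    starting its own (possibly refusing) response."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n{forced_prefix}"
    )

prompt = build_prefilled_prompt(
    system="You are a helpful assistant.",
    user="(your request here)",
    forced_prefix="Sure, here",
)
# Send `prompt` to a raw/completion-style endpoint (one that does not
# re-apply its own chat template) and the model completes after "Sure, here".
```

Automating this with a small model would just mean having it pick the `forced_prefix` for each request instead of writing those 1-2 words by hand.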
Feeling like this right now because I might finally get access to an EC2 instance that's powerful enough to be a RAG server.
Is RAG still a thing? Or are we doing something else now?
Last time I touched these a few months ago, RAG was OK. And by OK I mean shitty. I was about to start exploring different text embedders to get better search results but I put the project on hold.
Before I resurrect that old project, wanted to see what all the cool kids were up to
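For what it's worth, the embedder choice shows up directly in the ranking step, since retrieval is usually just nearest-neighbor search over embedding vectors. A minimal sketch of that core, with tiny hand-made vectors standing in for real embeddings (e.g. from sentence-transformers):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, doc_vecs, k=2):
    """Return the ids of the k documents most similar to the query."""
    ranked = sorted(doc_vecs.items(),
                    key=lambda kv: cosine(query_vec, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 3-d vectors; a real embedder would produce hundreds of dimensions.
docs = {
    "doc_a": [0.9, 0.1, 0.0],
    "doc_b": [0.1, 0.9, 0.0],
    "doc_c": [0.7, 0.3, 0.1],
}
print(retrieve([1.0, 0.0, 0.0], docs))  # doc_a ranks first
```

If RAG results feel shitty, this is the layer to blame first: swap the embedder, keep the rest, and see if the top-k changes.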
What is the bigger concern for #ai: electricity or hallucination?
anyone know of alternative products to webui for running LLMs locally?
Ollama through the command line is so much faster compared to webui: seconds vs. minutes running the same model. Running the Anthropic API in webui is way faster than the local LLMs.
I have two 4090s, so power is not a problem for the 7B models. I'm sure it has something to do with my settings; using it out of the box at the moment.
Curious to know if anyone else has the same experience.
Alr imma bite: with Grok headed right and GPT et al. up in their cozy left bubble, who is creating a centrist LLM?
When you're working with a Google AI model and you need truthful answers