6772
Nick Tindle
@ntindle #6772
Founding AI Engineer at AutoGPT | Former Steward @developerdao
244 Followers 65 Following
1171
gm8xx8
@gm8xx8 · 17:52 14/03/2024
Data Interpreter: An LLM Agent For Data Science
- achieves top performance in ML and reasoning tasks, analyzing stocks / replicating websites. It autonomously debugs code and solves issues using notebooks and browsers.
- open source
https://github.com/geekan/MetaGPT/tree/main/examples/di
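For context, a minimal sketch of driving the Data Interpreter, modeled on the scripts in the linked examples/di directory; the import path, class name, and API shown here are assumptions rather than something verified against the current MetaGPT repo:

```python
# Minimal sketch of driving MetaGPT's Data Interpreter (assumed API,
# modeled on the examples/di scripts linked above).
import asyncio

from metagpt.roles.di.data_interpreter import DataInterpreter


async def main():
    # The role plans the request into steps, writes and runs notebook-style
    # code for each step, and retries with fixes when execution fails.
    di = DataInterpreter()
    await di.run("Run data analysis on sklearn Iris dataset, include a plot.")


if __name__ == "__main__":
    asyncio.run(main())
```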
8531
Dean Pierce 👨‍💻
@deanpierce.eth · 15:59 13/03/2024
GM, time for DeepMind to drop another revolutionary breakthrough ♥️
https://deepmind.google/discover/blog/sima-generalist-ai-agent-for-3d-virtual-environments/
295084
Polywrap
@polywrap · 19:34 01/03/2024
Can I bill warps multiple times in the same frame interaction?
13901
Marissa
@marissaposner · 02:01 15/02/2024
"We show that LLM agents can autonomously hack websites, performing tasks as complex as blind database schema extraction and SQL injections without human feedback. Importantly, the agent does not need to know the vulnerability beforehand."
https://t.co/Bkc2gupDWL
Looking into providing L2 liquidity for CODE to make it easier to join D_D. Any L2 you'd prefer to see? Why?
1171
gm8xx8
@gm8xx8 · 03:36 06/02/2024
Large Language Model based Multi-Agents: A Survey of Progress and Challenges
https://arxiv.org/abs/2402.01680
1048
Mac Budkowski
@macbudkowski · 21:30 05/02/2024
Heard from a friend:
"Product manager is basically a prompt engineer for the dev team"
"Product manager is basically a prompt engineer for the dev team"
D_D has arrived
Is there a quick-start guide for developing with/for Frames? Very interested in POCing a concept for AI
1171
gm8xx8
@gm8xx8 · 05:41 01/02/2024
Efficient Tool Use with Chain-of-Abstraction Reasoning
CoA enhances LLMs w/ abstract reasoning training, leading to a 6% accuracy boost and 1.4x faster inference.
https://arxiv.org/abs/2401.17464
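To make the chain-of-abstraction idea in the post above concrete: the model first decodes a reasoning chain with abstract placeholders, and external tools then fill those placeholders in, which is roughly where the reported inference speedup comes from. The toy sketch below is purely illustrative; the placeholder format, tool names, and reify helper are invented here and are not the paper's pipeline.

```python
# Toy illustration of chain-of-abstraction reasoning: the "LLM output" is a
# chain with abstract holes (y1, y2, ...) that tools resolve afterwards.
import re

# Pretend this is the LLM's abstract chain: holes instead of concrete numbers.
abstract_chain = "Total apples = add(12, 30) -> y1; each of 6 kids gets div(y1, 6) -> y2."

# Minimal tool registry standing in for real calculators / APIs.
TOOLS = {"add": lambda a, b: a + b, "div": lambda a, b: a / b}


def reify(chain: str) -> str:
    """Resolve tool calls left to right, substituting earlier results into later ones."""
    values: dict[str, float] = {}
    pattern = re.compile(r"(\w+)\(([^)]*)\) -> (y\d+)")
    for name, raw_args, out in pattern.findall(chain):
        args = [values[a] if a in values else float(a)
                for a in (s.strip() for s in raw_args.split(","))]
        values[out] = TOOLS[name](*args)
    # Substitute resolved values back into the chain and drop the call syntax.
    for out, val in values.items():
        chain = chain.replace(out, str(val))
    return re.sub(r"\w+\([^)]*\) -> ", "", chain)


print(reify(abstract_chain))
# Total apples = 42.0; each of 6 kids gets 7.0.
```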