𝚐π”ͺ𝟾𝚑𝚑𝟾 (@gm8xx8) #1171
131,659 Followers · 265 Following
openai/swarm

An experimental framework for creating, managing, and deploying multi-agent systems.

https://github.com/openai/swarm
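For context, a minimal agent hand-off sketch in the style of the repo's README; this is written from memory, so check the exact Swarm/Agent signatures against the repo, and note it assumes an OPENAI_API_KEY in the environment.

```python
# Sketch of Swarm usage (verify against the repo's README; the API may have changed).
from swarm import Swarm, Agent  # pip install git+https://github.com/openai/swarm.git

client = Swarm()  # wraps the OpenAI API; requires OPENAI_API_KEY

def transfer_to_agent_b():
    """Handoff function: returning another Agent transfers the conversation to it."""
    return agent_b

agent_a = Agent(
    name="Agent A",
    instructions="You are a helpful agent.",
    functions=[transfer_to_agent_b],
)

agent_b = Agent(
    name="Agent B",
    instructions="Only speak in haikus.",
)

response = client.run(
    agent=agent_a,
    messages=[{"role": "user", "content": "I want to talk to agent B."}],
)
print(response.messages[-1]["content"])  # reply comes from Agent B after the handoff
```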
I can't help but wonder: am I even in the right place? lol
CyberCab for under $25,000. Fully autonomous FSD available in Texas and California, coming soon to the Model 3 and Model Y.

L4: FSD with hands on the wheel
L5: Robotaxi with no steering wheel
L6: Optimus driving a regular Model 3 with hands on the wheel

Optimus is aiming for a $20-30k price per robot and is pitched as capable of performing any task.
New open-source text-to-video and image-to-video generation model

- Released under the MIT license.
- Generates high-quality 10-second videos at 768p resolution, 24 FPS.
- Uses pyramid flow matching for efficient autoregressive video generation.

https://huggingface.co/rain1011/pyramid-flow-sd3
/AI
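If you want to try it, here is a minimal sketch for pulling the weights with huggingface_hub; the local directory name is arbitrary, and the actual text-to-video and image-to-video pipelines live in the Pyramid Flow repo, so the inference call itself is omitted rather than guessed at.

```python
# Sketch: download the Pyramid Flow (SD3-based) checkpoint from the Hugging Face Hub.
# snapshot_download is a standard huggingface_hub call; "./pyramid-flow-sd3" is an arbitrary path.
from huggingface_hub import snapshot_download

model_path = snapshot_download(
    repo_id="rain1011/pyramid-flow-sd3",
    local_dir="./pyramid-flow-sd3",
)
print(model_path)  # load this directory with the inference code from the Pyramid Flow repo
```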
Nothing beats getting hands-on with a brand-new, unreleased model from a great team. more 🔜
AI has taken over the Nobel Prize.
- Physics ✔️
- Chemistry ✔️
/AI
A Visual Guide to Mixture of Experts (MoE)

> recommended reading

A Mixture of Experts (MoE) is a neural network architecture that consists of multiple "expert" models and a router mechanism to direct inputs to the most suitable expert. This approach allows the model to handle specific aspects of a task more efficiently. The router decides which expert processes each input, enabling a model to use only a subset of its total parameters for any given task.

The success of MoE lies in its ability to scale models with more parameters while reducing computational costs during inference. By selectively activating only a few experts for each input, MoE optimizes performance without overloading memory or compute resources. This flexibility has made MoE effective in various domains, including both language and vision tasks.

https://newsletter.maartengrootendorst.com/p/a-visual-guide-to-mixture-of-experts
/AI
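As a rough illustration of the routing idea (not the guide's implementation), here is a toy top-k MoE layer in PyTorch; the class name, expert sizes, and the dense Python loop are purely illustrative, and production MoE layers add load-balancing losses and fused expert kernels.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Toy Mixture-of-Experts layer: a linear router picks the top-k experts
    per token and mixes their outputs with renormalized router weights."""
    def __init__(self, d_model: int, d_hidden: int, n_experts: int = 8, k: int = 2):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x):                          # x: (batch, seq, d_model)
        logits = self.router(x)                    # (batch, seq, n_experts)
        weights, idx = logits.topk(self.k, dim=-1) # keep only the k best experts per token
        weights = F.softmax(weights, dim=-1)       # renormalize over the chosen experts
        out = torch.zeros_like(x)
        for slot in range(self.k):                 # readable loop, not an efficient dispatch
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e         # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

x = torch.randn(2, 16, 64)
print(MoELayer(64, 256)(x).shape)  # torch.Size([2, 16, 64]); only 2 of 8 experts run per token
```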
Podcastfy is an open-source Python tool that converts web content, PDFs, and text into multi-lingual audio conversations using GenAI, focusing on customizable and programmatic audio generation.

https://github.com/souzatharsis/podcastfy-demo
/AI
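A sketch of what programmatic use might look like; the podcastfy.client.generate_podcast entry point and its urls parameter are recalled from the project's README and should be treated as assumptions, so verify the names against the repo (API keys for the underlying LLM and TTS providers are also required).

```python
# Assumed API based on Podcastfy's README; verify names against the repo before use.
# Requires provider API keys (e.g. GEMINI_API_KEY or OPENAI_API_KEY) in the environment.
from podcastfy.client import generate_podcast  # assumption: main entry point

audio_file = generate_podcast(
    urls=["https://en.wikipedia.org/wiki/Mixture_of_experts"],  # any article URL as input
)
print(audio_file)  # path to the generated conversational audio
```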
Llama-3.1-Nemotron-70B-Reward is a reward model for reinforcement learning from human feedback (RLHF). It leads the RewardBench leaderboard with a 94.1% overall score, and its strong Safety (95.1%) and Reasoning (98.1%) results mean it reliably rejects unsafe responses and handles complex tasks. Despite being much smaller than the Nemotron-4 340B Reward model, it offers high efficiency and accuracy, and because it is trained on CC-BY-4.0-licensed HelpSteer2 data it is suitable for enterprise use. The model combines regression-style and Bradley-Terry reward modeling over meticulously curated HelpSteer2 data to maximize performance.

https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Reward
/AI
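For intuition, a minimal sketch of the Bradley-Terry pairwise objective that such reward models optimize; this is the generic formulation, not NVIDIA's training code, and the toy scores are made up.

```python
import torch
import torch.nn.functional as F

def bradley_terry_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss: push P(chosen beats rejected) = sigmoid(r_chosen - r_rejected) toward 1."""
    return -F.logsigmoid(r_chosen - r_rejected).mean()

# Toy scalar rewards a model might assign to (chosen, rejected) response pairs.
r_chosen = torch.tensor([2.3, 1.1, 0.4])
r_rejected = torch.tensor([0.7, 1.5, -0.2])
print(bradley_terry_loss(r_chosen, r_rejected))  # scalar loss; minimizing it ranks chosen above rejected
```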