mercury
/mercury20
viva la computation mercuryprotocol.io
The road is long, but decentralized AI is going to win
It's super annoying that PyTorch 2.5.0 is not supported on Intel-based Macs... like why? Tons of people still use Intel-based Macs.
Now I have to run PyTorch in a Docker container every time I develop on that machine.
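For what it's worth, this is the kind of sanity check I run inside the container to confirm the install actually works. Just a minimal sketch assuming a stock PyTorch 2.5.0, CPU-only image; nothing here is specific to my setup:

```python
# Quick sanity check that PyTorch works inside the container.
# Assumes a CPU-only build (no CUDA, no MPS on an Intel Mac host).
import torch

print("PyTorch version:", torch.__version__)          # expect 2.5.0 in the container
print("CUDA available:", torch.cuda.is_available())   # False in a CPU-only container

# Tiny forward/backward pass to make sure autograd works end to end.
x = torch.randn(8, 4, requires_grad=True)
w = torch.randn(4, 2, requires_grad=True)
loss = (x @ w).pow(2).mean()
loss.backward()
print("Gradient norm:", w.grad.norm().item())
```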
Our landing page just got a redesign!
We removed a lot of stuff and tried to make it cleaner and simpler, focusing more on our mission.
https://mercuryprotocol.io/
Born too late to explore Earth, too soon to explore space.
Fuck that. We're the first humans to have a chance to build God machines and live for an indefinitely long time.
If you're alive at this point in time you're astronomically lucky.
Now let's get to work.
We're doing a redesign of our landing page, and rephrasing our mission. Excited to see what Farcaster will think.
How to make sure AI is misaligned: keep it centralized
If you don't know what to binge watch this weekend
https://youtube.com/playlist?list=PL5XwKDZZlwaY7t0M5OLprpkJUIrF8Lc9j&feature=shared
Wow!! NVIDIA just released an open-source model that outperforms GPT-4o and Claude 3.5 Sonnet on several benchmarks
🤯
https://huggingface.co/nvidia/Llama-3.1-Nemotron-70B-Instruct-HF
~The Balkanization of the quest for AGI~
OpenAI used to be one team. Now Murati, Sutskever, Altman, and Musk all compete against each other over who gets to the holy grail first.
This is good. The more competition, the better.
Can't wait for what's coming once we drastically lower the price of compute. 🦾
China is building some super cool shit. This AI chip uses photons instead of electrons to compute.
Hope one day these will be on the Mercury Compute Network.
https://www.livescience.com/technology/computing/china-s-upgraded-light-powered-agi-chip-is-now-a-million-times-more-efficient-than-before-researchers-say
Imagine a world where OpenAI is the only one with superintelligent AI.
A world where a company can bend reality to its will.
A world where truth no longer exists.
A world where to innovate one must sacrifice at the Altman altar.
If you think this fucking sucks, check out what I'm building.
https://warpcast.com/pmarca/0x5f59346f
Cheap compute --> God machines --> Eternal life
What is the most annoying thing today when it comes to training or fine-tuning LLMs?
Today at Mercury Protocol: I fixed a bunch of bugs in the p2p networking layer of our node and did some testing in various deployment environments. Fun stuff :)
The verification problem is probably the hardest problem to solve in decentralized AI compute.
We found all other attempts unsatisfactory, so we decided to dive deep into recent research papers and try to come up with our own approach.
Our main realization is that solutions that rely on game theory or are too dependent on current AI architectures are not future-proof.
For a robust solution, we have to go all the way down to the metal.
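To make the verification problem concrete, here is a toy sketch of the redundancy-style check we consider not future-proof: send the same job to two independent workers and accept the result only if their output hashes match. Everything below is a hypothetical illustration, not Mercury's actual protocol (and it ignores floating-point nondeterminism, which makes even this naive check harder for real AI workloads):

```python
# Toy illustration of redundancy-based verification in a decentralized compute
# network: run the same job on two workers, accept only if the hashes match.
# Colluding workers defeat it, and it doubles the cost of every job.
import hashlib
import random


def run_job(seed: int, worker_is_honest: bool) -> bytes:
    """Stand-in for an AI compute job; a dishonest worker returns fake output."""
    rng = random.Random(seed if worker_is_honest else seed + 1)
    return bytes(rng.randrange(256) for _ in range(32))


def verify_by_redundancy(seed: int, honest_a: bool, honest_b: bool) -> bool:
    """Accept the job only if two independent workers produce identical outputs."""
    digest_a = hashlib.sha256(run_job(seed, honest_a)).hexdigest()
    digest_b = hashlib.sha256(run_job(seed, honest_b)).hexdigest()
    return digest_a == digest_b


print(verify_by_redundancy(42, True, True))    # True: both honest, hashes match
print(verify_by_redundancy(42, True, False))   # False: a lone cheater is caught
print(verify_by_redundancy(42, False, False))  # True: collusion slips through
```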
We have been silent for a while now.
It is not because we have nothing to say.
It is because we have been focused on building.
Creating a decentralized GPU network for AI is incredibly hard work, especially when you are a tiny bootstrapped team.
But we are making progress. Every fucking day. Expect us.
We just published our new litepaper describing Mercury and what we've been working to solve recently.
https://www.notion.so/lajosdeme/Mercury-Protocol-Litepaper-25d71ebe3ca241ffb9e02008d9b4c2e4?pvs=4
The most likely way to limit open AI development is to control access to compute.
We are working hard to make this impossible.
Cheap compute is essential to the flourishing of human consciousness