12031
Sanjay
@sanjay #12031
@farcaster protocol dev
6819 Followers · 218 Following
Released Hubble 0.19.3, which pushes version expiry to May 15. This will be the last Hubble update. If you are still running hubs, please switch to snapchain instead of upgrading. Please reach out if you run into any issues with the migration.
Thinking about the upgrade process for Snapchain and looking for feedback.
There are going to be two kinds of changes:
a) Soft forks (bug fixes, minor improvements and non-consensus-breaking changes)
b) Hard forks (breaking changes, typically FIP implementations or other major changes that affect consensus)
Warpcast is currently being powered solely by snapchain. If you're publishing only to hubs, make sure you're also publishing to snapchain. Let us know if you run into issues with missing messages or with migrating to snapchain.
https://snapchain.farcaster.xyz/guides/migrating-to-snapchain
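For anyone still publishing only to hubs, a minimal sketch of dual-publishing a cast with @farcaster/hub-nodejs, assuming snapchain exposes the hub-compatible gRPC interface; the endpoints, fid, signer key, and exact cast body fields are placeholders and may differ by package version.

```typescript
// Sketch only: dual-publish a cast to a hub and a snapchain node.
// Assumes @farcaster/hub-nodejs and that snapchain exposes the hub-compatible
// gRPC interface; the endpoints, fid, and signer key below are placeholders.
import {
  getSSLHubRpcClient,
  makeCastAdd,
  NobleEd25519Signer,
  FarcasterNetwork,
} from "@farcaster/hub-nodejs";

const SIGNER_PRIVATE_KEY = new Uint8Array(32); // placeholder: your app signer key
const FID = 12031; // placeholder fid

async function publishEverywhere(text: string) {
  const signer = new NobleEd25519Signer(SIGNER_PRIVATE_KEY);
  const cast = await makeCastAdd(
    { text, embeds: [], embedsDeprecated: [], mentions: [], mentionsPositions: [] },
    { fid: FID, network: FarcasterNetwork.MAINNET },
    signer
  );
  if (cast.isErr()) throw cast.error;

  // Publish to both backends until the hub is fully retired.
  for (const endpoint of ["hub.example.com:2283", "snapchain.example.com:3383"]) {
    const client = getSSLHubRpcClient(endpoint);
    const result = await client.submitMessage(cast.value);
    console.log(endpoint, result.isOk() ? "ok" : result.error);
    client.close();
  }
}
```

Once you've confirmed snapchain has everything, the hub endpoint can simply be dropped from the list.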
Both @neynar and @warpcast are reading/writing to Snapchain now. If you’re using Hubble, you should migrate to use snapchain exclusively and let us know if you run into any issues.
https://docs.farcaster.xyz/hubble/migrating
https://warpcast.notion.site/Snapchain-Mainnet-Public-1b96a6c0c101809493cfda3998a65c7a?pvs=4
Probably the most challenging and fun project I’ve ever worked on.
Incredible work by @dynemyte @suurkivi and @cassie.
Special shoutout to the Informal Systems team for building Malachite, a rock-solid Rust Tendermint library that powers snapchain.
2
Varun Srinivasan
@v·05:23 04/02/2025
Snapchain testnet went live at 4pm today.
Over 20,000 blocks have been produced already with a million messages processed. This is real data from mainnet that is mirrored over to test Snapchain's integrity.
We've been so focused on snapchain that I forgot to do the Hub protocol release for Nov 27. Just released Hubble 1.17. Please upgrade before the current version expires at midnight UTC on Dec 11.
Apologies for the short notice!
Considering buying a house, and came across this very cool and very legal covenant (from 1946) for the land in the disclosures. Can't say I wasn't tempted to buy just to stick it to them.
I was originally leaning towards account ordering. But happy with where we ended up. The biggest issue with blockchains is managing block state growth. Thanks to @cassie for inspiring the solution for how to handle this in snapchain.
Also, special thanks to @vrypan.eth for the App Ordering idea.
2
Varun Srinivasan
@v·00:04 12/09/2024
FIP: Snapchain
A new proposal to introduce global ordering to Farcaster hubs. We'll discuss this in more detail in the dev call tomorrow, but sharing an early draft for feedback.
https://warpcast.notion.site/Snapchain-Public-0e6b7e51faf74be1846803cb74493886
If you enjoy coming up with novel distributed systems algorithms, we have just the challenge for you.
2
Varun Srinivasan
@v·06:03 22/08/2024
We're starting to think about a new sync model for Farcaster.
The current system works but is unlikely to scale up another 10x. Here's our articulation of the problem we want to go after.
Please upgrade Hubble to 1.14.4 which implements storage changes defined in https://github.com/farcasterxyz/protocol/discussions/191
We'll min version on Monday at the latest so all hubs will be ready for Aug 28 when storage units were originally scheduled to expire.
If you are manually calculating storage, you'll need to update your logic. You may find the new helper functions implemented in the hub-nodejs package useful https://github.com/farcasterxyz/hub-monorepo/blob/main/packages/core/src/limits.ts
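Illustrative only: a sketch of swapping hand-rolled storage math for the helpers linked above. The import names and return shape here are assumptions; check packages/core/src/limits.ts for the actual exports and limit values.

```typescript
// Hypothetical sketch only -- the import names and return shape are assumptions;
// check packages/core/src/limits.ts for the actual exports and limit values.
import { getStoreLimits, StoreType } from "@farcaster/core";

// Instead of hardcoding per-store allowances, derive them from the number of
// storage units an fid has rented.
function castAllowance(units: number): number {
  const limits = getStoreLimits(units); // assumed shape: [{ storeType, limit }]
  return limits.find((l) => l.storeType === StoreType.CASTS)?.limit ?? 0;
}
```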
Since this day, we're at 50x the message count, 180x the peer count, ~40x the db size. Perf metrics are harder to compare since it was during an incident but currently p95 merge latency is ~30ms and p95 gossip delay is <1s (vs 2000ms and ~2.5hrs during the incident)
3
Dan Romero
@dwr.eth·22:17 03/02/2024
Entire Warpcast is online trying to get things stable.
Thank you for the patience!
Hub message disruption today was caused by our hub losing gossip connectivity to all other hubs (unclear exactly why, due to a logging bug). It was unable to regain connection due to a bad interaction with a libp2p upgrade.
Released 1.14.2 with a fix.
Hubble 1.14 is out. It includes a bunch of fixes around follows (consistency issues and large compaction events breaking event streams).
If you are using shuttle, make sure you're on the latest version before upgrading hubs; there's a breaking API change for events.
Message processing was broken today because some events exceeded the gRPC client's default message size limit. If you're using shuttle, please upgrade to 0.4.1 to get the fix. If you are constructing hub clients manually and listening to events, make sure to pass in the following param
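The exact parameter isn't included in this excerpt, so purely as an illustration: with @grpc/grpc-js, the receive-size cap is the 'grpc.max_receive_message_length' channel option (bytes, -1 for unlimited), and this sketch assumes the hub-nodejs client factory forwards channel options through its second argument.

```typescript
// Illustration only: the exact parameter from the original post isn't shown in
// this excerpt. This assumes the hub-nodejs client factory forwards channel
// options to @grpc/grpc-js, where 'grpc.max_receive_message_length' (bytes,
// -1 for unlimited) caps how large an incoming event can be.
import { getSSLHubRpcClient } from "@farcaster/hub-nodejs";

const client = getSSLHubRpcClient("hub.example.com:2283", {
  // Raise the default receive limit so oversized HubEvents (e.g. large
  // compaction events) don't break the subscription stream.
  "grpc.max_receive_message_length": 20 * 1024 * 1024, // 20 MiB, placeholder
});
```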
This one was tricky to track down. There were a lot of dead ends. Special thanks to
@wazzymandias.eth for basically fixing it last night and not telling the rest of us 😂
@cassie and @sds for some deep libp2p and tcp tuning magic, which we thankfully didn't need.
And finally to my good friend Claude, who pointed me to the `node --prof` command, which can profile worker threads; it would've been much more difficult to narrow down the root cause without it.
2
Varun Srinivasan
@v·23:38 26/06/2024
The root cause was an expensive iteration.
When a hub gets a message from a peer, it iterates over its entire peer store to track some stats. As the number of peers grew the iteration took longer and the number of iterations needed kept increasing.
Eventually, the iteration couldn't be completed before the next one was triggered. This caused hubs to start crashing slowly.
The fix was effectively a 1-line change that moved the iteration out of the critical path.
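Not the actual hub code, but a minimal sketch of the pattern described above: the per-message hot path only buffers a counter, and the O(peers) walk over the peer store happens on a background timer instead of in the gossip critical path.

```typescript
// Not the actual hub code: a minimal sketch of the pattern described above.
// Before: every incoming gossip message walked the whole peer store to update
// stats, so per-message work grew with peer count. After: the message path only
// records the event, and a background timer does the O(peers) walk.

type PeerId = string;

class PeerStatsTracker {
  private pendingCounts = new Map<PeerId, number>();

  constructor(private peerStore: Map<PeerId, { messageCount: number }>) {
    // The expensive O(peers) aggregation now runs off the critical path.
    setInterval(() => this.flush(), 30_000);
  }

  // Hot path: called for every gossip message. O(1), no peer-store iteration.
  onMessageFrom(peer: PeerId): void {
    this.pendingCounts.set(peer, (this.pendingCounts.get(peer) ?? 0) + 1);
  }

  // Cold path: fold the buffered counts into the peer store periodically.
  private flush(): void {
    for (const [peer, count] of this.pendingCounts) {
      const entry = this.peerStore.get(peer);
      if (entry) entry.messageCount += count;
    }
    this.pendingCounts.clear();
  }
}
```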
Reminder that the replicator is deprecated in favor of shuttle. We're going to remove the replicator from the hub codebase by the end of next week to avoid any confusion. If you're still using the replicator, please migrate to the shuttle package.
Let me know if you have any questions around migration.
We’re seeing message processing delays again. The team is working on scaling our systems to be able to handle it.
2
Varun Srinivasan
@v·21:22 14/04/2024
Warpcast had a delay processing messages this AM.
A traffic spike in the middle of the night on the protocol caused messages to get backed up. We're 95% caught up now and working on fixes to make sure this doesn't happen again.
alpha version of the package is out https://github.com/farcasterxyz/hub-monorepo/tree/main/packages/hub-shuttle
2
Varun Srinivasan
@v·20:37 02/04/2024
We've landed on a new design for Replicator v2 after prototyping.
It's an npm package which connects to a hub and a postgres db. It creates a messages table in postgres which will contain every message in the hub.
https://warpcast.notion.site/Replicator-V2-Architecture-54a92c79276c4831a3f9ae60b2428781?pvs=74
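A rough sketch of the idea, not the real package: subscribe to a hub's merge events with @farcaster/hub-nodejs and insert each message into a postgres messages table via pg. The table shape, endpoint, and connection string are placeholders; the actual schema and batching live in hub-shuttle.

```typescript
// Rough sketch of the idea, not the actual package: subscribe to a hub's merge
// events and insert each message into a postgres `messages` table. The table
// shape, endpoint, and connection string are placeholders; the real schema and
// batching live in the hub-shuttle package.
import { getSSLHubRpcClient, HubEventType } from "@farcaster/hub-nodejs";
import { Client } from "pg";

async function replicate() {
  const db = new Client({ connectionString: "postgres://localhost/farcaster" });
  await db.connect();

  const hub = getSSLHubRpcClient("hub.example.com:2283");
  const stream = await hub.subscribe({ eventTypes: [HubEventType.MERGE_MESSAGE] });
  if (stream.isErr()) throw stream.error;

  for await (const event of stream.value) {
    const message = event.mergeMessageBody?.message;
    if (!message?.data) continue;
    await db.query(
      `INSERT INTO messages (hash, fid, type, timestamp)
       VALUES ($1, $2, $3, $4) ON CONFLICT (hash) DO NOTHING`,
      [Buffer.from(message.hash), message.data.fid, message.data.type, message.data.timestamp]
    );
  }
}
```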
@ted @gregan the RWA thesis is finally playing out. When does the goldfinch goat pool open? There's clear investor demand https://x.com/NeerajKA/status/1775881228656181312?s=20
446971
Richard Opany
@richiejuma1603·14:17 04/04/2024
This is calle "higher" by December i want to have many
Please reach out if you’re interested in using this, or have any feedback
2
Varun Srinivasan
@v·20:37 02/04/2024
We've landed on a new design for Replicator v2 after prototyping.
It's an npm package which connects to a hub and a postgres db. It creates a messages table in postgres which will contain every message in the hub.
https://warpcast.notion.site/Replicator-V2-Architecture-54a92c79276c4831a3f9ae60b2428781?pvs=74
We're planning to min version the hubs to 1.11.2 later today to improve network health. This version includes the fix to have all hubs use snapshots to catch up if they are too far behind. If you're on an older version, please run `./hubble.sh upgrade` to get up to date.
https://warpcast.com/sanjay/0x4a7d7839
12031
Sanjay
@sanjay·23:17 22/03/2024
We've released 1.11.1. @wazzymandias.eth made it so that hubs will now automatically use snapshot sync to catch up if they are too many messages behind.
Note that this will reset the db. If you're running a replicator, you can disable this to be safe by setting `CATCHUP_SYNC_WITH_SNAPSHOT=false` in your .env file.
This is a very cool talk. Interestingly, it's almost exactly the same algorithm the hubs use right now. Prolly trees are more efficient since they collapse the number of levels required (we use timestamp-prefix tries, so it's always at least 10 levels), but apart from that it's exactly the same.
280
vrypan |--o--|
@vrypan.eth·22:25 05/03/2024
Knowing very little about the internals of Hubble, I'm at risk of saying something totally useless, but have you checked Prolly Trees? I attended this presentation a couple of months ago, and I feel that it could be related to optimized hub sync. https://youtu.be/X8nAdx1G-Cs?si=1g5RdIKXg1S0mVA2&t=340
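To make the "at least 10 levels" remark concrete, an illustrative sketch (not the hub's actual sync-trie code) of a timestamp-prefixed sync ID: the key starts with the Farcaster timestamp as a fixed-width 10-digit decimal string, and the trie branches on one character per level.

```typescript
// Illustrative sketch of the timestamp-prefix idea, not the hub's actual
// sync-trie code. Each sync ID starts with the message's Farcaster timestamp as
// a fixed-width 10-digit decimal string, and the trie branches on one character
// per level -- which is why there are always at least 10 levels before keys can
// share a common subtree.

const FARCASTER_EPOCH_MS = 1609459200000; // Jan 1, 2021 UTC

// Seconds since the Farcaster epoch, zero-padded to 10 decimal digits.
function timestampPrefix(unixMs: number): string {
  const fcSeconds = Math.floor((unixMs - FARCASTER_EPOCH_MS) / 1000);
  return fcSeconds.toString().padStart(10, "0");
}

// Hypothetical sync ID: timestamp prefix with the message hash (hex) appended.
function syncId(unixMs: number, messageHashHex: string): string {
  return timestampPrefix(unixMs) + messageHashHex;
}

// Messages from the same second share the first 10 trie levels, so two hubs can
// compare subtree hashes level by level and only fetch the ranges that differ.
console.log(syncId(Date.now(), "a1b2c3d4"));
```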
Our existing sync architecture for Hubs is running into scaling issues. Been thinking about a new design that can scale to billions of messages and millions of fids. If you enjoy complex distributed systems problems, I would appreciate feedback
https://warpcast.notion.site/Sync-V2-a9c0fd81d7b245a0b3fbd51e6007909f