Fun fact: I'm using @fjallrs.bsky.social for storing & indexing these PLC records. Required storage with some small tricks (15 bytes per did:plc, 32 bytes per CID) ends up being ~13GB.
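The "small tricks" are presumably just storing identifiers in raw binary form: the 24-character suffix of a did:plc is lowercase base32 without padding, i.e. exactly 15 bytes, and a sha2-256 CID carries a 32-byte digest that can be stored on its own if you remember the fixed CID prefix. A minimal sketch of the did:plc half, assuming the `data-encoding` crate (function names are illustrative, not the actual code):

```rust
// Sketch only: compress a did:plc to its 15 raw bytes and back.
// Assumes the did:plc suffix is 24 chars of lowercase base32, no padding
// (24 * 5 bits = 120 bits = 15 bytes), per the PLC spec.
use data_encoding::BASE32_NOPAD;

fn compress_did_plc(did: &str) -> Option<[u8; 15]> {
    let suffix = did.strip_prefix("did:plc:")?;
    // BASE32_NOPAD expects uppercase, so normalize before decoding.
    let raw = BASE32_NOPAD
        .decode(suffix.to_ascii_uppercase().as_bytes())
        .ok()?;
    raw.try_into().ok()
}

fn expand_did_plc(raw: &[u8; 15]) -> String {
    // Re-encode and lowercase to get the canonical did:plc form back.
    format!("did:plc:{}", BASE32_NOPAD.encode(raw).to_ascii_lowercase())
}
```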
Which made me think: Hey that easily fits into RAM! And yeah - I added another impl that uses BTreeMaps. Works :)
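For reference, a tiny sketch of what the in-memory variant could look like, assuming the compact keys above: a BTreeMap keyed by the 15-byte did (here mapping to a single 32-byte CID digest for brevity; the real index presumably stores more per did):

```rust
// Illustrative in-memory index, not the actual implementation.
use std::collections::BTreeMap;

type DidKey = [u8; 15];    // decoded did:plc suffix
type CidDigest = [u8; 32]; // sha2-256 digest of the latest operation CID

#[derive(Default)]
struct PlcIndex {
    latest_op: BTreeMap<DidKey, CidDigest>,
}

impl PlcIndex {
    fn insert(&mut self, did: DidKey, cid: CidDigest) {
        self.latest_op.insert(did, cid);
    }

    fn get(&self, did: &DidKey) -> Option<&CidDigest> {
        self.latest_op.get(did)
    }
}
```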
Comments
I'm doing re-encoding of the loaded data in dag-cbor, hashing, verifying signatures, etc.
Parallelizing this was really easy with #rust scoped threads & crossbeam-channel (I needed an spmc channel to distribute work); see the sketch below.
A full audit of ~34M dids now takes ~20mins on my machine.
So yay, it's even faster than I thought!
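A rough sketch of that fan-out pattern, assuming crossbeam-channel as the spmc work queue and std scoped threads for the workers; the audit step itself is stubbed out and all names are illustrative:

```rust
use crossbeam_channel::bounded;
use std::thread;

fn audit_did(did: &str) -> bool {
    // placeholder for: re-encode ops as dag-cbor, hash, verify signatures, ...
    !did.is_empty()
}

fn parallel_audit(dids: Vec<String>, workers: usize) -> usize {
    // crossbeam's Receiver is Clone, so one producer can feed many consumers (spmc).
    let (tx, rx) = bounded::<String>(1024);

    thread::scope(|s| {
        // Producer: push every did into the channel, then drop the sender
        // so the workers' iterators terminate once the queue drains.
        s.spawn(move || {
            for did in dids {
                tx.send(did).unwrap();
            }
        });

        // Workers: pull dids off the shared channel and count passing audits.
        let handles: Vec<_> = (0..workers)
            .map(|_| {
                let rx = rx.clone();
                s.spawn(move || rx.iter().filter(|d| audit_did(d)).count())
            })
            .collect();

        handles.into_iter().map(|h| h.join().unwrap()).sum::<usize>()
    })
}
```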
I'm on a slow quest to figure out a better DID system + social graph (if there is such a thing to begin with). As you are deep in there, is there something you would do differently?
If you initialized your DID with your own key, the PLC directory can't "fake" records (it would need your signature), but it can essentially remove them.
Building certificate logs on top is the next logical step.
About the certificate logs: since did:plc has a chain of signed operations containing all the data ... isn't that already a certificate log? Could you expand on your thought?
So a log over all the logs, if you will.
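Loosely, something like this: an append-only, hash-chained log whose entries commit to the current tip of each did:plc operation chain. This is only an illustration of the idea (a real transparency log would be a Merkle tree with inclusion and consistency proofs), assuming the `sha2` crate:

```rust
// Toy "log over all the logs": each appended entry commits to the previous
// head plus one (did, latest-op CID digest) pair. Illustrative only.
use sha2::{Digest, Sha256};

struct MetaLog {
    // Running head hash after each appended entry; index 0 is a zero genesis.
    heads: Vec<[u8; 32]>,
}

impl MetaLog {
    fn new() -> Self {
        Self { heads: vec![[0u8; 32]] }
    }

    /// Append one (did, latest-op CID digest) pair and return the new head.
    fn append(&mut self, did: &[u8; 15], op_cid: &[u8; 32]) -> [u8; 32] {
        let prev = *self.heads.last().unwrap();
        let mut h = Sha256::new();
        h.update(prev);
        h.update(did);
        h.update(op_cid);
        let head: [u8; 32] = h.finalize().into();
        self.heads.push(head);
        head
    }
}
```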