Interesting, will take a look. Regarding your questions:
- Historically, reputation and web-of-trust models have been tried with mixed results (see PGP/GPG history)
- Proof of work for human validation can probably be gamed; it is useful as a potential workaround for rate limiting/DDoS mitigation, though (check how Tor uses it)
- I'd be very skeptical of providing my full KYC details to a new service; perhaps host verification à la Let's Encrypt could be useful as a Layer 1 "KYC" tier?
As a quick solution before implementing the more sophisticated suggestions in this thread, you can try getting a small, cheap VPS from somewhere outside and routing all your traffic through it via sshuttle[1]. For example, Vultr (not an endorsement) has some at ~$3/month that should be sufficient for your case.
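For reference, a minimal sshuttle invocation looks like the following. The VPS hostname and username are placeholders, not anything from this thread:

```shell
# Install sshuttle on the local machine; the VPS only needs sshd and Python,
# no server-side install required.
pip install sshuttle

# Forward all IPv4 traffic (0/0 is shorthand for 0.0.0.0/0) plus DNS lookups
# through the VPS over SSH. Replace user@vps.example.com with your own login.
sshuttle --dns -r user@vps.example.com 0/0
```

Unlike a plain SOCKS proxy, sshuttle transparently captures traffic at the firewall level, so applications don't need per-app proxy configuration.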
Merklemap is running PostgreSQL as the primary database, currently scaling at ~18 TB on NVMe storage, plus around 30 TB of actual certificates stored on S3.
The backend is implemented in Rust (handling web services, search functionality, and data ingestion pipelines).
Bit of a side note for my fellow protocol enjoyers: this site is WhiteWind, another app on the atproto network. Bluesky is a microblogging app on atproto, while WhiteWind is a long-form blogging app on the same network. It's pretty neat.
I'll never forgive SiFive for discontinuing the only blobless RISC-V machine (HiFive Unleashed) after shipping only a few thousand units (which they did only because Debian demanded it as a condition of adding support).
HiFive Unleashed ($999 - $1199) was made in no more than about 500-600 units.
HiFive Unmatched ($665) had several thousand units built.
Absolutely no reason to buy an Unmatched now, even if you could, because the VisionFive 2 (Star64, Mars) is slightly better in almost every way, starting at 1/10th the price.
I'm now wondering how many decades are still being lost because of similar bugs in other OSes that don't get as much scrutiny, like OpenBSD or even FreeBSD.
I'm not going to say scheduling is better or worse on different platforms, but it is clearly different.
When I tried to port the (at the time) new, open-source version of .NET Core to FreeBSD, one of the things I simply couldn't fix in the .NET framework code itself was threading. For one, I had to (for some reason I don't remember now) use non-POSIX threading functions to make it compile. But even with that in place, things weren't behaving as expected.
I mean... threading worked, but .NET had a fairly big test suite which was very opinionated about what sort of behaviour and performance characteristics different kinds of threading scenarios and threading primitives should have.
On FreeBSD I was forced to extend timeouts and outright disable some tests to make the build pass.
Not necessarily; the problem is similar to what you see with garbage collection (latency vs. throughput).
For example, if you give threads more, smaller time slices, you get better latency but worse throughput, because switching time slices more often means more scheduling work and more cache invalidation.
.NET's test suite is tuned for Windows. Windows focuses more on desktop use cases and is tuned more for low latency than throughput; FreeBSD, on the other hand, is mainly used for servers, so its scheduler is tuned more for throughput. This difference could very well explain the failures in the test suite (independent of whether there is a bug or not). To test what I think those tests are testing, you have to be very tight about the expected latencies: tight enough that the test suite fails on a more throughput-optimized system.
Similarly, on Linux some distros ship an alternative official kernel for media applications (e.g. gaming) that changes kernel parameters to be a bit more latency-focused, e.g. linux-zen in the case of Arch Linux.
Similarly, I know DragonFly BSD focuses on speed (making the kernel as non-blocking as possible, thread-per-core type stuff), but is there a comparison of its scheduler with FreeBSD's?