Hacker News | ajbeach22's comments

This feels like a corporate greed play on what should be a relatively simple chat application. Slack has quickly become just another enterprise solution in search of shareholder value at the expense of data privacy. Regulation of these companies should be more apparent to people, but sadly, it is not.

I would recommend https://mattermost.com as an alternative.


I wonder what the cost is compared to terminating SSL at CloudFront? For my web-tier architectures, I use CloudFront to reverse proxy both dynamic content (from the API) and static content (from S3). SSL is terminated only at CloudFront.


I don't think you can use Cloudfront to serve that kind of traffic. Cloudfront costs are described here: https://aws.amazon.com/cloudfront/pricing/

So for 10k HTTPS requests, the price is $0.01. If you serve 5 billion per day, that is $5,000 a day. With such high traffic, I believe you need to handle it with performant web servers (Go, Erlang?) to keep costs reasonable, and terminating SSL at the load balancer is probably the way to go.


I am not sure that math is right. Using the AWS cost calculator, it's only about $1,100/mo for 5B HTTPS requests. However, if you consider data transfer, it's still probably in the range of several thousand a day. Yikes.


Not sure what calculator you're using, but from the pricing page [1] it's pretty clear that 5B HTTPS requests cost at least (depending on the geographic origin) $5000. And that's per day and without data transfer.

[1]: https://aws.amazon.com/cloudfront/pricing/


I've seen lots of tutorials for single-page app/API architectures that specifically store the JWT in localStorage; it's endemic.

What I haven't seen, though, is guidance on how to properly deploy SPA web apps using cookies (with either a JWT or a session token in HttpOnly cookies).

From what I have found, there are only a few options:

1. Host the API at 'api.mydomain.com' and the frontend from 'mydomain.com'. You will have to deal with CORS and OPTIONS requests, which can add significant latency for non-simple CORS requests, not to mention extra configuration of CORS headers.

2. Serve assets/frontend from the API (i.e., Rails static assets, Django collectstatic, etc.). The downside is that you have to deploy everything together.

3. Reverse proxy so that the API and frontend are on the same domain, and you can use cookies without CORS. The API lives at 'mydomain.com/api' and the frontend at 'mydomain.com'.

I am currently doing 3. If you are using AWS, you can use CloudFront as the reverse proxy by creating a separate distribution for your API and your frontend and using behaviors to route traffic by path. In fact, that is what AWS recommends: https://aws.amazon.com/blogs/networking-and-content-delivery...

So for 3, for example:

You have CloudFront -> load balancer -> EC2 for your API with a path of /api in CloudFront, and you can set CloudFront to never cache. You don't pay for bandwidth from EC2 -> CloudFront; you only pay for bandwidth out of CloudFront:

>Outbound data transfer charges from AWS services to CloudFront is $0/GB. The cost coming out of CloudFront is typically half a cent less per GB than data transfer for the same tier and Region

For your frontend, you can host it in S3 -> CloudFront with a catch-all path for non-API paths (this allows client-side pushState to work in an SPA).

CloudFront in this case is also where SSL termination happens. You can also do full end-to-end SSL if you enable it in CloudFront, and your API can terminate SSL at the load balancer.

The other benefit of this approach is that you can usually just use the default auth systems of your web framework. You can also create a CNAME for your API load balancer, api.mydomain.com, and use that for non-web clients. In the case of Django REST Framework, both cookie and token auth are enabled by default.


Supercritical CO2 is also used for caffeine extraction in coffee, and it's a safer solvent for cannabis extracts and concentrates. I would never buy CBD products unless they are lab tested with a GC for contaminants and extracted with CO2.


Look into USP-grade acetone-extracted cannabinoids. Acetone is 100% VOC, so it will evaporate away cleanly without needing a vacuum purge (I'd still heat it to 135F to boil off the acetone, which boils at ~133F), and your body naturally produces it in small amounts, so a tiny bit of contamination isn't likely to cause damage. It is cheap to obtain vs. compressed CO2 (the equipment required for CO2 extraction represents the bulk of the cost of the extract), so you'll also save money.

Many places that do CO2 extraction also don't use a closed-loop system, so many just vent the CO2 directly into the atmosphere.


Take a look at the USP Acetone specification; it does not indicate that the material is definitely ideal for the task:

https://greenfield.com/wp-content/uploads/2018/11/Spec-Sheet...

This fungible specification has no specific limit for Diacetone Alcohol content, a common impurity and one which increases during storage.

Diacetone Alcohol is an industrial solvent with properties of its own, not nearly as low-boiling as Acetone. Clean Diacetone Alcohol will also show negligible Residue After Evaporation and Non-Volatile Residue under conditions of the tests, which report mostly solids content along with many higher-boiling liquid impurities but not components as moderately volatile as Diacetone Alcohol itself.

However, Diacetone Alcohol can still be a significant component of the extract once the bulk Acetone has fully evaporated, depending on the processing conditions. A rotovap would help a lot, and more than 135F would better reduce residual solvent from a viscous matrix.

The USP Purity minimum of 99.5 basically means that the water content plus any other chemical impurities, such as Iso-Propyl Alcohol and Methanol (specified) and things like Diacetone Alcohol or Benzene (implied but unspecified), must not add up to more than 0.5 percent.

You might even prefer to explicitly specify a maximum limit for some other target toxins, less than some low detectable amount, if you were picky.

Some legitimate laboratories may not be able to detect the difference between Acetone which contains unlisted impurities and Acetone in which they are negligible.

Take a look at the associated weasel words from legal:

https://greenfield.com/wp-content/uploads/2018/11/OVI-Residu...

Which references this as the closest applicable guideline, even though it was intended to apply to the residual solvent content of pharmaceutical products, not solvents themselves:

https://www.uspnf.com/sites/default/files/usp_pdf/EN/USPNF/g...

Better than nothing, but not as cautious as it could be. You may want fewer impurities than marginal drug companies do anyway; you're allowed to do that. This particular supplier does look nicely better than marginal, which is good, with typicals looking well better than the limit.

Regardless, with two lots of USP Acetone having apparently identically suitable certificates, one may end up with its Diacetone Alcohol or other not-so-volatile components comprising a significant portion of the extract, while the other lot would not, under the same solvent-removal conditions, which are excellent for the latter batch of Acetone having only the lighter impurities such as IPA and Methanol.

I would still want to test it myself first.

Source: pioneered laboratory techniques which some other testers and chemical plants eventually adopted years later, still state-of-the-art today, including this particular material. Certified billions of dollars worth of commodities like this, single-handedly more than some single petro-chemical companies can manufacture. Less than a year ago deployed a further advanced system with two backups for 100 percent uptime in a 24hr staffed operation, for when a big oil/chemical client has their ships come in, and it was Acetone. There is still no adequate publication within ASTM, been at it since dirt was rocks, and Acetone has been around even longer. Documentation not found elsewhere.


Why should I use this over https://gocloud.dev/howto/pubsub/?

https://godoc.org/gocloud.dev/pubsub

This has support for many pub/sub protocols.

Why reinvent something that already exists instead of just contributing back to an existing project?


After a quick look, IMO Watermill is a bit more flexible because of its middleware and decorator support ;)

What is unique is that Watermill provides high-level concepts like a message Router and out-of-the-box CQRS support, which is really helpful when you are building a bigger application.


I'm too lazy to back the following statement with links, but diversity is usually good in this industry. There are many frameworks and libraries with overlapping use cases, and with more choice it's easier to pick the right tool for the job. Also, maybe this framework solves the same problem in a more clever way. So: yes, it's good to have smart developers contributing instead of starting new projects, but it's also good to have more diverse frameworks to choose from.


Even a cursory glance tells me the scope of this project's plans outstrips a pub/sub protocol implementation. Maybe that's why?


Not sure exactly what you mean, but there is a driver package that can adapt other protocols to the same interface.

https://godoc.org/gocloud.dev/pubsub/driver


Even if it were a different implementation, I wonder if it could align with the open-source CloudEvents spec (https://github.com/cloudevents/spec).


> I just don't think Go has a place in new code.

This is a complete blanket statement. Not only are there very large open-source projects written in Go that your company more than likely already uses (Kubernetes, Docker, Prometheus, InfluxDB, to name a few), there are also case studies of very respectable performance in Go:

1 million requests/min http://marcio.io/2015/07/handling-1-million-requests-per-min...

1 million websockets in go: https://www.freecodecamp.org/news/million-websockets-and-go-...

event loops in go: (surpasses redis in their benchmarks) https://github.com/tidwall/evio

I see Go more as a progression for JS or Python developers than for Java or JVM-based languages, as the concurrency model in Go is closer to Python's and JavaScript's async. The argument that Java, Kotlin, and Erlang developers are less likely to move to Go is probably a valid one, but it is completely baseless to say that "Go has no place in new code".

The concurrency model in Go with channels makes more sense to someone familiar with Python/JavaScript who hasn't been exposed to threading or other concurrency models. If you can understand basic coroutines/channels, you get the benefits of Go's concurrency (and even parallelism) without much effort or code complexity, with an order of magnitude more performance than Python or JavaScript for the same feature set.

The profiling tools in Go are also first class (pprof, trace, etc.). With pprof, it's easy to see traces of different goroutines, heap allocations, and cumulative allocations; these are even safe to sample on running production services, and all of this is provided by the standard library. There are even built-in tools to profile the GC. The argument that tooling and monitoring don't exist in Go is simply not true.


>That said, I'm not sure how serious you are about handling file uploads, but uploading directly to buckets often means uploading to a single region (on aws, a bucket may be hosted in us-east-1 for instance, meaning high latency for folks in e.g. Australia). This may or may not be problematic for your use case, but it did bring us complaints when we had that.

You can use https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acc...

S3 Transfer Acceleration uses CloudFront's distributed edge locations. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path. This costs more money, though.


>> Senior algorithm nerd on my project is going nuts over algorithmic complexity

This is me, but luckily where I work I have people who can keep me in check because we generally do design reviews before anything big is built.

However, I have been in situations at previous companies where big-O was ignored to take shortcuts up front because the "data was small", and suddenly scaling to even just 100 users starts to break things in production because of poor design decisions.

I guess the lesson here is the importance of design reviews. Also, n^2 is HUGE even for small data if there are I/O or API calls involved. Any public API you provide that is n^2 is not a good idea, because you never know who may end up using it for what.


> if there is IO or api calls involved

Right. In my case, the operation was an extra memory comparison. For something already in the cache.

Sure, constraints can change and your assumptions about n<10k may prove unwise, but that's our call to make as engineers. YAGNI. If you know n is never going to grow, then why waste time on it? We're not paid to write pristine code. We're paid to solve problems while hopefully not creating new ones. Pragmatism and all that.


> Highly optimized C++ or C library with javascript wrapper beats golang hurr durrr

At what point is it "C++ beats Go" vs. "JavaScript beats Go"? IMO these kinds of benchmarks are disingenuous at best.

The benefit of Go here is that I don't have to write C++ or C if I need to submit a bug fix, and I don't have layers of abstraction around libraries written in a different language. The last thing I want is to try to debug the underlying C or C++ libraries when things go wrong. At least with Go, pprof and the other Go tools make debugging low-level things straightforward.


There are a few other notable things about Go: 1. The compiler and toolchain are written entirely in Go. Compare this to g++ or the JRE/JDK, which have a lot of code in other languages, which can be problematic when you need to audit the code or fix some low-level bug. 2. The language spec is shorter than that of most other languages.


Also: https://news.ycombinator.com/item?id=19156671

Go can handle 1 million concurrent websocket connections anyway, so what is the point?

