Location: Berlin, Germany
Remote: Yes
Willing to relocate: No
Technologies: Clojure, Go, Java, Python, PostgreSQL and other RDBMS, AWS, Docker, Kubernetes, CI/CD
Résumé/CV: Most recently Senior Staff Engineer at CircleCI, previously Amazon, reach out for complete CV
LinkedIn: www.linkedin.com/in/cpolymeris/
Email: c (at) polymeris (dot) dev
Don't contact me for cryptocurrency or adtech jobs, please.
I agree in principle, but also find myself often doing my best thinking when making coffee or just "idly" staring out of a window, and would find it annoying to be interrupted just because I'm not at my desk. Luckily I work from home.
Yeah, this is fair. I was thinking of the classic "wave a hand in my face because I'm wearing headphones at my desk" type of disruption, but there are so many more variants.
There's another angle here which is that ad-hoc in-person conversations tend to be low value because there's no record. The process of simply writing down a question before asking it goes a long way toward finding a solution.
It's one of the things I really appreciate about working from home. There are no unwelcome interruptions, because everyone controls their own time, and the interactions I do have are higher value because they have been more thoroughly considered.
It's only partially patriotism. More importantly, I suspect, just a brand that sells well, no matter what you are selling.
That said, EPFL/ETH and CERN have done some important things in computer science.
That's the thing: JWT is not an authentication protocol. Authentication is just one (the most frequent) use case of "transferring claims".
The footguns, e.g. the choice between encrypted or not, or between symmetric and asymmetric crypto, are a result of the flexibility required to cover the other use cases. Maybe what's needed is a subset of JWT that's just for AuthN.
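To illustrate, here's a minimal sketch of what such an AuthN-only subset could look like, using only the Python standard library. The key point is that the algorithm is pinned to HS256 on both sides; trusting the "alg" field of an incoming token's header is one of the classic JWT footguns the flexibility enables. (The function names and shared key are made up for illustration.)

```python
import base64
import hashlib
import hmac
import json

def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    # Restore the padding stripped at encoding time.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(claims: dict, key: bytes) -> str:
    # Algorithm is fixed; callers can't pick a weaker one.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = hmac.new(key, signing_input, hashlib.sha256).digest()
    return f"{header}.{payload}.{b64url(sig)}"

def verify_hs256(token: str, key: bytes) -> dict:
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = hmac.new(key, signing_input, hashlib.sha256).digest()
    # Constant-time comparison; never re-read "alg" from the header.
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("bad signature")
    return json.loads(b64url_decode(payload))
```

A real AuthN profile would also mandate and check claims like "exp" and "aud"; the sketch only shows the algorithm-pinning part.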
On the topic of Git Flow and simpler & faster alternatives: last month I published a short blog post about how I learned, at CircleCI, that those alternatives exist[1].
The most important lesson for me was that things like Git Flow aren't free: they slow you down and make developers feel more detached from, and less responsible for, the code that's running in prod.
I only evaluated Hasura, I haven't used it in production. What made me decide against it (it wasn't an easy decision; Hasura does look cool) was the caching and scaling story. Caching semantics for GraphQL are in general less mature than for, e.g., REST, but if Hasura at least supported queries over GET it would help, by allowing CDNs to cache those requests. On scaling, I was concerned that making a single Postgres instance the bottleneck for the whole app seems an expensive way to grow. But of course this all depends on the access patterns and the economics of the particular project.
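The caching point is worth making concrete: CDNs key their caches on the URL, so GraphQL over POST is effectively uncacheable by them, while the common GraphQL-over-HTTP GET convention (query and JSON-encoded variables as query parameters) gives identical queries identical URLs. A sketch, with a hypothetical endpoint:

```python
import json
from urllib.parse import urlencode

# Hypothetical endpoint; the point is that the same query always
# produces the same URL, which is what lets a CDN cache the response.
ENDPOINT = "https://api.example.com/v1/graphql"

def graphql_get_url(query, variables=None):
    params = {"query": query}
    if variables:
        # sort_keys keeps the JSON stable so equivalent variable
        # dicts don't produce distinct (cache-missing) URLs.
        params["variables"] = json.dumps(variables, sort_keys=True)
    return ENDPOINT + "?" + urlencode(params)

url = graphql_get_url(
    "query Album($id: ID!) { album(id: $id) { title } }",
    {"id": "42"},
)
```

This only helps for queries, of course; mutations still belong in POST requests.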
There are collision attacks against SHA-1. But the currently known collision attacks aren't a big problem for git for a variety of reasons.
Git's hashes cover more data than just the raw content of an object, so the existing known collisions don't work as-is: you'd have to develop collisions specifically targeting git.
Additionally, the currently known collision attacks require generating the two colliding contents as a pair: you can't take an existing document and then generate a collision for it. Further, the technique is not particularly good for text documents, because you need to include some data to force the collision. Such data is easily missed in binary documents, but not in the text documents most people are committing into git repos. (Granted, you can put binary files into git, and there it would still work, but the other reasons this isn't very useful still apply.)
Git also has code to deal with possible collisions and will fall back to full comparison and will fail a commit that creates a collision. Finally, generating these document pairs leaves detectable patterns in the content. GitHub scans for these patterns as part of their process.
You mostly hand-wave away the difference between collision resistance and second-preimage resistance...
Just a recap: generating two files whose hashes collide takes about 2^80 tries if SHA-1 weren't broken, whereas finding a file with the same hash as a specific, externally determined file takes about 2^160 tries.
It's not secure for many applications. There are various practical collision attacks on it.
However, to rewrite history you need a preimage attack. In other words, you need to find a value X which hashes to a value y that already exists: h(X) = y.
These are not currently practical. So SHA-1 is certainly not recommended, but it isn't completely broken yet.
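The gap between the two attack costs can be demonstrated with a deliberately weakened toy hash (SHA-256 truncated to 16 bits, so the search finishes instantly). A birthday-style collision search is expected to succeed after roughly 2^8 tries, while matching one fixed, pre-existing hash takes on the order of the full 2^16 output space:

```python
import hashlib

def toy_hash(data: bytes) -> int:
    # Toy 16-bit hash for demonstration only: small enough that the
    # 2^(n/2) vs 2^n gap is visible in a fraction of a second.
    return int.from_bytes(hashlib.sha256(data).digest()[:2], "big")

# Collision: ANY two inputs with the same hash. The pigeonhole
# principle guarantees one within 2^16 + 1 inputs; the birthday
# bound says we expect one after only ~2^8 tries.
seen = {}
for i in range((1 << 16) + 1):
    v = toy_hash(str(i).encode())
    if v in seen:
        collision_tries = i + 1
        break
    seen[v] = i

# (Second) preimage: match one FIXED, externally chosen hash.
# Expected cost is ~2^16 tries, the whole output space.
target = toy_hash(b"an existing commit")
for i in range(1 << 22):
    if toy_hash(b"attempt-" + str(i).encode()) == target:
        preimage_tries = i + 1
        break
```

With a real 160-bit hash the same ratio is 2^80 vs 2^160, which is why a broken collision bound doesn't immediately enable rewriting existing history.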
A known collision renders a hash algorithm "not secure" for cryptographic use. It doesn't mean there is a reliable way to come up with an equivalent hash for different data.
That said, hashes like MD5 are still widely used, e.g. in ETags for HTTP 304 responses, despite being considered insecure for cryptographic use.
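That use is fine because an ETag only needs to detect that content changed, not resist an attacker. A minimal sketch of the revalidation flow (the handler shape is made up; only the hashing and the If-None-Match comparison are the point):

```python
import hashlib

def make_etag(body: bytes) -> str:
    # MD5 is acceptable here: change detection, not security.
    return '"' + hashlib.md5(body).hexdigest() + '"'

def respond(body, if_none_match=None):
    # Hypothetical handler returning (status, body, headers).
    etag = make_etag(body)
    if if_none_match == etag:
        # Client's cached copy is current: send 304, no body.
        return 304, b"", {"ETag": etag}
    return 200, body, {"ETag": etag}

page = b"<h1>hello</h1>"
status, _, headers = respond(page)                   # first request
status2, body2, _ = respond(page, headers["ETag"])   # revalidation
```

The server still pays to compute the hash on every request; the saving is the response body on the wire, not server work.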
Similar for Angel, which probably includes Ángel. I would consider them the same name. Even with the shared meaning, the pronunciation is different enough.