moyok's comments | Hacker News

As someone working on a side project that allows people to upload data publicly (think something like CodePen), what can I do to stay out of trouble like this?


Could be worth noting that the pay of an entry-level scientist or engineer is less than $1,000 a month, which probably makes little sense to someone from outside India. It's because of the low cost of living in India.


The thing that scares me about using JWT is that all the security relies completely on the one secret used to sign tokens: anyone with access to that secret potentially has unlimited access to the app. They can impersonate any user and do basically anything.


Yes, it'd be better if JWTs were full-fledged certificates, where ultimate authority could be confined to some offline key, which delegates authority for a strictly delimited period to online keys. Or ultimate authority could belong to k out of a collection of n keys: one would need to suborn k keys to suborn the authority as a whole.

RFCs 2692 & 2693 specify a really great, lightweight way to do that. The resulting certificates needn't be a whole lot heavier than a JWT, and are much lighter-weight than an X.509 certificate. The RFCs also specify an intelligent, domain-neutral algorithm for performing calculations on tags (what JWT refers to as 'claims') and keys.

It's a pretty awesome solution, and there are a lot of good ideas in there. A streamlined version could, I think, end up being competitive with JWTs.


SPKI was deprecated for SDSI[1] (also done by Rivest), both of which AFAIK haven't been touched in ~20 years (which is fine by me, if the theory and implementation are solid, but SDSI has CORBA/J2EE smells all over the RFC from what I remember. Lightweight, eh...)

[1] https://people.csail.mit.edu/rivest/sdsi11.html


> SPKI was deprecated for SDSI

No, it's the other way around: SDSI was deprecated for SPKI, which took a lot of its ideas about naming from SDSI.

> both of which AFAIK haven't been touched in ~20 years (which is fine by me, if the theory and implementation are solid, but SDSI has CORBA/J2EE smells all over the RFC from what I remember. Lightweight, eh...)

SPKI is indeed old, but the fundamental ideas are really good, and some of them (the cert calculus) are timeless. It needs a v2.0 to update the recommended crypto, specify some more use cases and so forth. But it's really, _really_ good, far better than XPKI and extremely capable.

And still pretty lightweight.


Hey, if the conceptual grounds are sound, which I'm guessing they are, since... I mean, Ron Rivest. Age doesn't quite matter w/r/t the timeless elements. Rijndael is mathematically sound, and honestly I've got more trust in older algorithms than newer ones, if only because there's been more time for the populace to vet them[1], presumably fortifying them over time.

All of the resources I've searched for are fairly old, do you have anything more recent that I can read up on? I see a 2006 paper, but not much other than that.

[1] Though I'm well aware that having an open-standard available for a long time doesn't mean squat, as evidenced by Heartbleed-esque bugs.

Edit: Reading the '00 "A Formal Semantics for SPKI" Howell, Katz, Dartmouth. This is what I was looking for.


I wonder what you think so far.

In particular, I liked the tuple-calculus they define; it can be extended to support just about any permission I can think of (although it does require reversing DNS names, which is slightly ugly).

I have a scheme to release a v2.0 of the standard someday in my copious free time.


Sometimes it's good to use asymmetric crypto, and those RFCs could perhaps be suitable for that. My impression of the parent comment is not that of a preference for asymmetric over symmetric, however, but rather of a lament that key-based crypto is defeated when attackers learn keys. After all the complaint about secret keys applies just as well to private keys.


The thing that scares me...

You should rotate your keys often enough to alleviate your fears. You probably ought to trust your MAC a lot more than you trust other links in the key-security chain, anyway.


For this, you can use refresh tokens and set the JWT expiration to a low interval - say 10 minutes. After 10 minutes, the JWT expires, authentication fails, and the client uses the refresh token to get a new JWT. To revoke a client, revoke their refresh token. This way, though they won't be logged out immediately, they will be logged out within a maximum of 10 minutes, when they need to refresh again and find out that their refresh token is no longer valid. The point is that instead of every request touching your DB or cache, only one request every 10 minutes does.


I know this works, and I've used it, but I also find it to be the most aggravating thing about JWT and also OAuth. With OAuth, some sites allow you to refresh a token after it times out (so really the refresh token is the source of truth, defeating the purpose of the OAuth token), and others only allow you to refresh before it times out (forcing a login by the user if they are disconnected too long, or storing their username/password in the system keychain, making that the source of truth and again defeating the purpose of the OAuth token).

Also a timeout of this kind is only security theater, because it may only take moments to vacuum a client's data once a token has been skimmed.

The JWT library I'm using in Laravel blacklists tokens by storing them in their own table, rather than invalidating them. This is self-evidently a pretty bad vulnerability because malicious users could fill up the database, so now the developer has to deal with that scenario.

Put all that together and I think the notion of token expiration is, well, dumb. In fact I think timeouts of any kind are a code smell. They make otherwise deterministic code very difficult to reason about in the edge cases. The best thing to do is encapsulate timeout handling in a refresh layer of some kind, once again defeating the whole purpose of timeouts in the first place.


I have personally experienced the security disadvantage you mentioned. I used my Google login to sign into an email client. I immediately realised that the app was going to store all of my email data in their private servers. I quickly went over to the Google dashboard and deauthorised the app, relieved that they would only have been able to get my first few mails in this time. But the app retained access to my emails, even receiving new emails for some time. Probably because of something similar to a refresh token being revoked, but the access token still being valid. I wanted to stop the app from accessing my e-mails, but could not.

However, despite this disadvantage some applications just cannot afford the load of every single request touching the DB or cache. JWT makes sense for that particular use case when you are willing to make this compromise. Instead of every single request touching the cache, maybe every 1000th request does now, because of the token expiration time.

Another use case is when you need a very simple, stateless way to authenticate users and don't require revocation. Some OAuth providers don't give you the option to revoke access tokens, for example.


> despite this disadvantage some applications just cannot afford the load of every single request touching the DB or cache.

Disagree. This is one of the simplest things alive to distribute. Split the token lookup from the query: do the query in the DB, and the token lookup is effectively a distributed hash table lookup (assuming the token is, say, a UUID). Once the DB query comes back, store the result pending the successful retrieval of the token.

What's difficult is handling something like millions of concurrent video downloads / uploads - not looking up tiny tokens.


tiny tokens, but needed for every single operation


Sure, but the really hot tokens could be cached right next to the DB. Plus, how many operations is a person doing per second? If it's more than a couple, you can batch them pretty easily.


It's not dumb if you do it right.

Think of it this way: the _real_ token is your refresh token. It's stored in your database. You control it, and it can be revoked at any time. So, now you build 100 other services, and they all accept this refresh token. The problem is that since you control it so well, every other service now needs to validate that refresh token with the auth service on every request. It would be really nice if we could get around that massive traffic pinch point. So, we create crypto tokens that can be validated by every service without the need to make a network call. As a compromise, we make this new token expire in an hour, so that the _client_ needs to validate their refresh token every hour, and all our services are freed from ever directly calling the auth service. Sure, this means that when you log out you're not really logged out for up to an hour, but it's all tradeoffs.


For many applications, not being able to immediately log out is an unacceptable trade-off. If you know your account has been compromised and you need to kill all sessions ASAP, an hour delay is unacceptable.


> The JWT library I'm using in Laravel blacklists tokens by storing them in their own table, rather than invalidating them.

The main rationale for JWTs is that it removes the session store as a point of contention (and secondarily it resolves some xdomain issues that aren't that difficult to work around anyway). If you're going to introduce a new table/cache, you're likely better off just using sessions.

Totally agree about timeout management.


TTLs aren't necessarily a code smell.

They're important for DNS.


The refresh token doesn't defeat the purpose of oauth. The purpose is that the third party needs to check in again to refresh.

This gives the end user the time to revoke the token at the provider without the need to revoke or even trust the third party.


I've captured this exact workflow, and it works really well in practice on mobile and in the browser (probably anything that can curl with JSON): https://github.com/hharnisc/auth-service


Ehh... but at least for Google's implementation, you won't have a refresh token in the first place if the user doesn't grant "offline access" on a special page that comes up after login.

In our usability testing, we've found that this freaks a lot of people out and reduces adoption. The benefits of basically outsourcing our session management to Google don't outweigh this... so we use JWT for auth only, and then use that token as a session key for our own local solution.


> the client uses the refresh token to get a new JWT

What's the point of using the JWT then? If the refresh token lasts for longer than the JWT, and you need to send it periodically back up to the server to auth the user, why use the JWT in the first place?


The point is that it would reduce the DB/cache load, as the refresh token would only need to be verified once every few minutes, as opposed to on every request. Regular requests could be authenticated in the CPU itself, without having to go through to the DB/cache layer. This means lower latency and reduced load on the DB/cache.


...then make the JWT last as long as the refresh token.


If the JWT lasts as long as the refresh token, then what's the point of having a refresh token? You would then probably need to get new refresh tokens to make the session last longer.

The idea is to make the refresh token last for say a few days, and the JWT for say 10 minutes. Now, every 10 minutes the client needs to use the refresh token to get a new JWT. The maximum time a client can have access to the service without a valid refresh token is 10 minutes. All the requests made in this window of 10 minutes would be deemed authenticated by verifying the JWT, and without having to go through the database or cache.

Now, say a user of a web app clicks "log me out from all my devices". The user's access needs to be revoked from everywhere they are logged in. If you invalidate all their refresh tokens, then in a max of 10 minutes they would be logged out from everywhere, as their refresh tokens would no longer work and the JWT duration is only 10 minutes.

This approach is essentially a mid-way or a tradeoff between using traditional sessions and JWT. "Pure" JWT is stateless and hence cannot support individual session revocation. The only way to invalidate sessions in "pure" JWT would be to invalidate the key or certificate used to sign the JWT, but that would invalidate everyone else's sessions as well and hence is not very practical.

Since with this approach you implement sessions plus JWT, it's more complicated than just using sessions. JWT should be used for such applications when the latency or load benefit is significant enough to justify the added complexity. For applications that do not need session revocation, however, JWTs are a convenient way to implement sessions without needing a DB or cache layer.


It's so that if the JWT is stolen in transit, the thief only has access for a shorter period of time. This is why they should expire quickly. Whether or not you think that matters is not up for debate. That's how it is.


A JWT can be verified by the application without having to make a call to the auth service.


I've been working on rolling my own authentication layer in Node/Express lately. Your comment is the first time I understood how refresh tokens work.


Julia is great for this kind of work too.


I really like Go. I just love that it compiles to a native binary and is so easy to distribute. I love the way interfaces work, and that types satisfy interfaces automatically without explicitly declaring it.

I love the "strictness" of the language - for example, the code won't compile if you declare a variable and don't use it, or import a library and don't use it. I love that there is a standard gofmt, which means code auto-formats to a standard format. These features really help set some "discipline" when working in a team.

I love how easily concurrent code can be written, and the use of channels. I love the performance - it has been more than fast enough for my use cases so far. I love that I can get started with an HTTP server using just the standard library, and that the most popular web frameworks in Go are micro frameworks.

Overall, there's a kind of simplicity about the language that underlies all of the above, and that is what makes me excited about Go.

I have used Go in some minor projects that have been running peacefully for months without any hitches, and am using it in a big project, mostly in the form of microservices and scripts. It has become my favorite language.


I used to get paid €500 a month as a developer at a consulting firm, and it was a 12-hours-a-day kind of job. The firm would then bill its clients $40-60 for every hour I worked. From one perspective it does seem exploitative. But on the other hand, I know many people who still dream of getting that job - the money is not good, but it almost doubles when you get promoted, and the career path is good. It's a really intense place with almost impossible deadlines, and you learn a lot from the pressure.

I was recently offered a project where I had to take a complex analytics algorithm implemented in Excel and implement it on the server as an API, for ~$250-300. It took me around 20-30 hours over an extended weekend to write it in Go, and I feel good about it. The work was mostly simple transformations on the input data, though a large number of them. It resulted in roughly 1,000 lines of code including tests and comments, and a big part of that time was spent actually understanding the algorithm.

Similarly, I did some work as a favor to a friend - a Facebook app that would send inspiring quotes as notifications to its users every few days. It took me maybe 10-20 hours to get it online. Frankly, I'm not so proud of the code, as it was written in such a hurry and is a bit messed up. But to its credit, it's been functional for several months without any noticeable downtime, and has sent thousands of notifications so far without any need to touch the server.

These are the kinds of projects that can be done in a short amount of time without a long-term commitment, and they make sense for such a budget. For me at least, $10 an hour is good money, and $25 would be pretty decent. $50 would be something of a dream - I would be making in a day what I used to make in a month at my previous job.


That would be 4*26=104 hours, right? That seems like a good amount of time.


I meant it as 26 hours total, but I had my math completely wrong. 40 hours a week * 4 weeks * 1/3 is 52.8. I had it halved somehow. But anyway, yeah.


and somehow I STILL have my math slightly wrong. Can I not do arithmetic? 40 * 4 / 3 == 53.33.

Anyway, I guess there are some projects that can be done well in 53 hours, but it's still not a huge amount of time. At "USD$450 should be fine as a third of a monthly salary!", that's approximately $9/hour. Really? Okay. If you can find someone willing to take $9/hour for 53 hours of work, for a project that really can be done well in 53 hours... I think my point stands.


I have been looking for a solution to the same problem - short projects with no long term commitment for some side income. Maybe this is useful - CodeGophers: https://news.ycombinator.com/item?id=11663869


This used to happen to me every time I had to go to the airport at night and would book a local taxi for it. The drivers just never cared about the traffic signals at night.

The prevalent attitude seems to be that traffic rules are more like guidelines and can be broken if it seems "okay" to do so. Driving on the wrong side of the road, for example, is quite common, and most people do it to avoid making a long U-turn. Someone I knew was driving a car and had a head-on collision with a scooter travelling the wrong way on a highway, and the scooter had three passengers on board (it's called "tripling" and is also pretty common). Sometimes you see cars reversing on highways with fast-moving traffic because they missed a turn and wanted to go back.

Things could probably get better in the future, though. In New Delhi, for example, according to a new law, if you are caught in a traffic violation (such as ignoring a traffic signal), you lose your license, along with the right to drive, for three months. I saw somebody's status on Facebook saying that this happened to them, so it's not just an empty threat.


When I was in Vietnam and Indonesia, I regularly saw people driving on the wrong side of the road. Luckily, they would generally drive just off the road, so traffic on the road wouldn't be at risk of a collision. It was also similarly common to see entire families riding on one scooter - I regularly saw families of four riding together.


If you're "driving" (a scooter?) just off the road, or walking alongside a road, it's better to do it facing the traffic so you can see it coming.

If you switch sides, the traffic will approach from behind you, and that's a lot more dangerous.

