
Hello, me from last week. Had exactly this bug: sometimes nginx couldn't connect to the backend (but very rarely, and not reproducible on demand), which I eventually tracked down to the fact that localhost sometimes resolved to ::1 instead of 127.0.0.1, which is what the backend was listening on. Still don't understand why it was only about 1 in 1000 requests, and not every or every other request. Just one more slice of IPv6 mystery.
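
A minimal sketch of that failure mode (assumptions: a backend bound only to the IPv4 loopback, and port 9000 free): getaddrinfo("localhost") can return both ::1 and 127.0.0.1, and which entry the client happens to try decides whether the connect succeeds.

    import socket

    # Backend listens only on the IPv4 loopback address.
    backend = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    backend.bind(("127.0.0.1", 9000))
    backend.listen(1)

    # "localhost" can resolve to both loopback addresses; the 127.0.0.1
    # entry reaches the backend, the ::1 entry is refused.
    for family, *_, sockaddr in socket.getaddrinfo("localhost", 9000, type=socket.SOCK_STREAM):
        try:
            socket.create_connection(sockaddr[:2], timeout=1).close()
            print(sockaddr[0], "-> connected")
        except OSError as err:
            print(sockaddr[0], "->", err)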


I've had weird errors like that where two DNS servers were giving answers to my query rather than just the one I intended. That can't happen over TCP, but over UDP it can. Every now and then the responses would arrive in a different order and I'd get paged because some app fell over. Fun times.
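
A rough sketch of that race, assuming dnspython is installed; the hostname and the two resolver addresses are placeholders. The same query goes out to both servers over UDP, and whichever response lands first is the one the client acts on.

    import socket
    import dns.message  # dnspython, assumed available

    query = dns.message.make_query("backend.example.internal", "A")
    wire = query.to_wire()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(2.0)

    # Both resolvers receive the query and both may answer.
    for server in ("10.0.0.53", "10.0.1.53"):
        sock.sendto(wire, (server, 53))

    # With UDP there is no ordering guarantee, so the "winning" answer
    # can differ from one query to the next.
    data, peer = sock.recvfrom(4096)
    print(peer, dns.message.from_wire(data).answer)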


Is the Nginx client using the Happy Eyeballs algorithm?

https://en.m.wikipedia.org/wiki/Happy_Eyeballs

It can be a source of race conditions.
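
For reference, a minimal sketch of what Happy Eyeballs-style connection racing looks like on the client side, using Python's asyncio (the happy_eyeballs_delay parameter enables RFC 8305-style staggered attempts; host and port here are placeholders):

    import asyncio

    async def main():
        # With happy_eyeballs_delay set, asyncio starts a connection attempt
        # to the first resolved address, waits 250 ms, starts the next, and
        # keeps whichever connects first.
        reader, writer = await asyncio.open_connection(
            "localhost", 8080, happy_eyeballs_delay=0.25
        )
        print("connected via", writer.get_extra_info("peername"))
        writer.close()
        await writer.wait_closed()

    asyncio.run(main())

Which address family wins can vary from attempt to attempt, which is exactly the kind of nondeterminism being described here.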


I had a very similar problem recently with docker + nginx. The best I could figure out was that the randomness of the problem was caused by keep-alive connection limits. If the connection was opened over IPv4 it would work until it hit the keep-alive limit, but the new connection might run into the IPv4/IPv6 lookup problem and fail. Never really figured it out for sure. It's definitely thrown some cold water on my plans to go dual stack everywhere all the time. Not sure it's worth the risk of running into these stupid bugs.
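
A sketch of that hypothesis (assumptions: a backend on localhost:8080, a hypothetical per-connection request cap of 100, and a client that doesn't fall back to the next address): most requests reuse an already-established connection, and only the occasional reconnect repeats the localhost lookup, which is where a bad ::1 pick could surface.

    import socket

    KEEPALIVE_LIMIT = 100  # hypothetical per-connection request cap

    def connect_backend(host="localhost", port=8080):
        # Mimic a client that only tries the first resolved address with no
        # fallback; if that entry is ::1 and the backend is IPv4-only, the
        # connect fails.
        family, _, _, _, sockaddr = socket.getaddrinfo(host, port, type=socket.SOCK_STREAM)[0]
        sock = socket.socket(family, socket.SOCK_STREAM)
        sock.connect(sockaddr)
        return sock

    def run(requests=1000):
        conn, used = None, 0
        for _ in range(requests):
            if conn is None or used >= KEEPALIVE_LIMIT:
                if conn is not None:
                    conn.close()
                # Only this reconnect path repeats the lookup, so a bad
                # address pick surfaces roughly once per KEEPALIVE_LIMIT
                # requests rather than on every request.
                conn, used = connect_backend(), 0
            conn.sendall(b"GET / HTTP/1.1\r\nHost: localhost\r\nConnection: keep-alive\r\n\r\n")
            conn.recv(65536)
            used += 1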



