
In my day job I do a lot of troubleshooting. That's most of my job... helping other IT folks figure out what the heck is happening with something.

I'd say that for 70% of the things I work on, part of the time goes to figuring out which config files are actually being read, because the app/service/process/whatever is doing something the owner/tech/whatever doesn't expect, yet is perfectly rational, and they "didn't tell it to do that".

It'd be great if everything reported as the article mentions -- amazing, in fact -- but I feel like it's a pie in the sky wish. Like asking that all programs always exit cleanly or something. I feel like there'll always be inevitable edge cases -- libraries that themselves pick up config files or registry values or environment variables that aren't expected, who knows -- that'll need to be discovered. If things are already not going right with a program, I'm less likely to trust what it says it's doing and more likely to just watch it and see what it's actually reading and doing.



Yesterday I was wondering why the self-signed root certificate and the certificates I added to my Debian install didn't work with httpie (SSL warning) but did work with curl (and the browsers I added the root CA to).

I found the explanation: https://github.com/httpie/httpie/issues/480 -- httpie doesn't look into /usr/local/share/ca-certificates, but apparently it's not httpie's fault; it's the Python requests library that doesn't check that folder.

I don't know what to think of it or who is supposed to fix it. But I am back to curl with my tail between my legs (because I oversold httpie to my coworkers) for now. With curl there's only one place where the buck stops.


I ran into this the other day as well.

You can set the REQUESTS_CA_BUNDLE env var to point to your own certs and override the default bundle.

https://stackoverflow.com/a/37447847
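
A minimal sketch of both routes, assuming a Debian-style bundle path (adjust to wherever your CA lives; the host is just a placeholder):

    import os
    import requests

    # Option 1: point requests (and anything built on it, like httpie) at the
    # system bundle via the env var it honours. Debian's update-ca-certificates
    # merges /usr/local/share/ca-certificates into this file.
    os.environ["REQUESTS_CA_BUNDLE"] = "/etc/ssl/certs/ca-certificates.crt"
    requests.get("https://internal.example.com/")  # placeholder host

    # Option 2: pass the bundle explicitly for a single call.
    requests.get("https://internal.example.com/",
                 verify="/etc/ssl/certs/ca-certificates.crt")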


Gosh sigmavirus24 is a salty chap.


I read the thread after seeing your comment, and it strikes me that besides the first comment being a little brusque, sigmavirus24 comes across as both calm and helpful - doubly so given that the original issue appears to have been PEBKAC-related:

> I think I forgot to run "pip3 uninstall" and ran only "pip uninstall" as I had to use python3 to get working ssl in the first place.


The issue is still open, so it has nothing to do with PEBKAC. "A little brusque" is definitely underselling it. That was a fine display of internet assholery.


My interpretation of saltiness was also formed by reading the linked requests issue. The sarcasm just felt unnecessary.


For once I would say it isn't Python's fault either. There are at least 10 different locations where certificates can be stored on various Unix variants and Linux distros. Go checks them all, which is probably the only sane solution, but seriously, come on, Unix devs... Is it that hard to pick a standard location?
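
For the curious, Go's approach amounts to probing a list of well-known bundle paths and using the first one that exists. A rough sketch of the same idea in Python (the path list is illustrative, not exhaustive):

    import os

    # A few of the well-known CA bundle locations across distros
    # (roughly the kind of list Go's crypto/x509 walks on Linux).
    CA_BUNDLE_CANDIDATES = [
        "/etc/ssl/certs/ca-certificates.crt",                 # Debian/Ubuntu
        "/etc/pki/tls/certs/ca-bundle.crt",                   # Fedora/RHEL
        "/etc/ssl/ca-bundle.pem",                             # OpenSUSE
        "/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem",  # CentOS/RHEL 7
        "/etc/ssl/cert.pem",                                  # Alpine
    ]

    def find_ca_bundle():
        """Return the first CA bundle that exists on this system, or None."""
        for path in CA_BUNDLE_CANDIDATES:
            if os.path.isfile(path):
                return path
        return None

    if __name__ == "__main__":
        print(find_ca_bundle())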


My favourite is a very bad pattern common in Docker images: just configure it with env vars, and then a script plucks them into a config file somewhere.

And then document it misleadingly badly: "These options are mandatory!" (lists 5 of the 10 actually mandatory options).

Then a script branches on an env var value, one that isn't documented, and fails opaquely if it took the wrong branch because you didn't know you needed to set that env var.

Best ones are the ones that consume your env vars and set _other_ env vars, it's great!
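
To make that concrete, a hypothetical entrypoint-style script showing the pattern (all names invented for illustration):

    import os
    import sys

    # Hypothetical entrypoint: branches on an env var the docs never mention,
    # and derives *other* env vars from the ones you did set.
    protocol = os.environ.get("APP_SECURITY_PROTOCOL", "PLAINTEXT")

    if protocol == "SASL_SSL":
        # Silently requires two more undocumented variables; KeyError if unset.
        os.environ["APP_TRUSTSTORE_PATH"] = os.environ["APP_SSL_TRUSTSTORE_LOCATION"]
        os.environ["APP_JAAS_CONFIG"] = os.environ["APP_SASL_JAAS_CONFIG"]
    elif protocol == "PLAINTEXT":
        # Wrong branch taken because you never knew to set APP_SECURITY_PROTOCOL;
        # nothing fails here, it just produces a config that can't connect.
        pass
    else:
        sys.exit(1)  # opaque failure, no message

    # ...template a config file from os.environ here, then exec the real process.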


How should this be done instead? Are there any good docs/tutorials?


1) Document all the config options, ensure you highlight any interactions between settings, and explain them. I'm biased because I used to work on this project at RH, but I really did like the documentation for Strimzi; here's how their env vars are documented [0].

To highlight a few examples of how I think they're good:

> STRIMZI_ZOOKEEPER_ADMIN_SESSION_TIMEOUT_MS Optional, default 10000 ms. The session timeout for the Cluster Operator’s ZooKeeper admin client, in milliseconds. Increase the value if ZooKeeper requests from the Cluster Operator are regularly failing due to timeout issues.

We know whether it's required, what it defaults to, and, what I love to see, why we might want to use it.

> STRIMZI_KAFKA_MIRROR_MAKER_IMAGES ... This [provided prop] is used when a KafkaMirrorMaker.spec.version property is specified but not the KafkaMirrorMaker.spec.image

I like this: it explains when this env var will or will not be used.

> STRIMZI_IMAGE_PULL_POLICY Optional. The ImagePullPolicy that is applied to containers in all pods managed by the Cluster Operator. The valid values are Always, IfNotPresent, and Never. If not specified, the Kubernetes defaults are used. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters.

Firstly, it's always great to enumerate all accepted values for enum-like props. But what I really like here is that the consequences of altering this value are explained.

> STRIMZI_LEADER_ELECTION_IDENTITY Required when leader election is enabled. Configures the identity of a given Cluster Operator instance used during the leader election. The identity must be unique for each operator instance. You can use the downward API to configure it to the name of the pod where the Cluster Operator is deployed. (Code snippet omitted)

Interactions between config options highlighted - if you set STRIMZI_LEADER_ELECTION_ENABLED to true, this option is now required.

We're told that this must be unique. And a suggestion on one straightforward way to do this, with example code.

One more thing to call out as good:

> The environment variables are specified for the container image of the Cluster Operator in its Deployment configuration file. (install/cluster-operator/060-Deployment-strimzi-cluster-operator.yaml)

Being told _where_ in the source code the env var is being consumed is great.

Now compare the Confluent docs for their Kafka Connect image. [1] A colleague of mine was using this image and connecting to Confluent Cloud, so he needed to use an API key and secret. There's no mention of how to do that at all. But you can. Just set the CONNECT_SASL_JAAS_CONFIG option, and make sure you also set CONNECT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM and CONNECT_SASL_MECHANISM.

And very importantly, don't forget CONNECT_SECURITY_PROTOCOL: not only will you have odd connection failures (usually manifesting as the client dying at the ApiVersions negotiation stage of the Kafka protocol's handshake), but a script run in the image uses the value of that undocumented setting to execute a prerequisite in different ways [2], so you'll get weird behaviour that obscures the real issue.
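
For reference, the four settings above end up looking roughly like this. A sketch, not the Confluent docs: the values are the usual Confluent Cloud SASL/PLAIN settings and the API key/secret are placeholders, rendered here as the -e flags you'd hand to docker run:

    # Sketch: the CONNECT_* settings discussed above, with typical
    # Confluent Cloud SASL/PLAIN values (key/secret are placeholders).
    connect_env = {
        "CONNECT_SECURITY_PROTOCOL": "SASL_SSL",
        "CONNECT_SASL_MECHANISM": "PLAIN",
        "CONNECT_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM": "https",
        "CONNECT_SASL_JAAS_CONFIG": (
            'org.apache.kafka.common.security.plain.PlainLoginModule required '
            'username="<API_KEY>" password="<API_SECRET>";'
        ),
    }

    # Print the flags you'd pass to docker run (or copy into a compose file).
    for key, value in connect_env.items():
        print(f"-e {key}='{value}'")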

2) Support more than one way of configuring things - maybe I want to mount in a config file, maybe I want to provide a ConfigMap, maybe I want to do it via env vars. Well... [3]

[0]: https://strimzi.io/docs/operators/latest/deploying.html#ref-...

[1]: https://docs.confluent.io/platform/current/installation/dock...

[2]: https://github.com/confluentinc/kafka-images/blob/master/kaf...

[3]: https://strimzi.io/docs/operators/latest/deploying.html#asse...


Thanks. This is great!


If any of the frequent offenders are open source, I wonder if it would be worth your while to submit (or get someone else to submit) a change to add this functionality.


"Submit a change and they'll include it" is the demo/promotional version of FOSS


I suspect that varies from one project to the next.



