Until you get sued by JK Rowling. Unlikely? Sure, but I wanted to decouple for other reasons.
That's why I went with PassCrux for mine. Can't argue that it's too close, since "crux" is just Latin for cross, as in "crux of the matter" (JK likely invented horcrux as a portmanteau of horror + crux).
Just because something is on a blog or a GitHub page doesn't mean it's discoverable by everyone who might need to know, e.g. a conservator. You're better off physically putting it in an obvious location for financial documents, like in your home with your tax returns or valuables.
Had the same idea years ago (same HashiCorp lib, too) but lost motivation to polish it to the point where I felt confident enough to Show HN.
https://github.com/xkortex/passcrux
But given recent events, I want to restart work on it.
My use case revolved more around preserving a master password, e.g. to a password manager. I also wanted to support self-hosted backup, like hiding shares and giving directions to the parts to trusted friends. The Shamir sharing part was straightforward, but I really want to add forward error correction to protect against partial data loss.
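The core idea is small enough to sketch. Below is a minimal Shamir-style split over a prime field, purely illustrative: it is not the HashiCorp implementation (which works byte-by-byte in GF(256)) and is not audited for real secrets.

```python
# Minimal Shamir secret sharing sketch over a prime field.
# Illustrative only -- not constant-time, not audited.
import random

PRIME = 2**127 - 1  # must exceed the secret

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x=0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = int.from_bytes(b"master-pw", "big")
shares = split(secret, n=5, k=3)
assert combine(shares[:3]) == secret  # any 3 of the 5 suffice
```

The forward error-correction piece is a separate layer: Reed-Solomon over the serialized shares would let you recover from shares that are partially damaged, not just missing.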
Hooo boy, where do I begin? Dependency deadlocks are the big one: you share resource attributes (e.g. an ARN) from one stack to another, then remove the consumer and go to deploy again. The producer sees no more dependency, so it prunes the export. But it can't delete the export, because the consumer still needs it. You can't deploy the consumer first, because the producer has to deploy first, sequentially. And if you can't delete the consumer out-of-band (e.g. your company mandates that everything deploys through a CI pipeline), you have to go bug Ops on Slack, wait for someone with the right perms to delete it, then redeploy.
You can't actually read real values from Parameters/exports (you get a token placeholder), so you can't store JSON and then read it back and decode it (unless you're in the same stack, which is almost pointless). You can do some hacks with Fn:: though.
Deploying certain resources that have names specified (vs. generated) often breaks, because CloudFormation has to create the new resource before destroying the old one, which it can't, because the name conflicts (it's the same name... because it's the same construct).
It's wildly powerful, which is great, but we've basically had to build our own internal library to solve what should be non-problems in an IaC system.
Would be hilarious if my coworker stumbled upon this. I know he reads HN, and this has been my absolute crusade this quarter.
> The producer sees no more dependency so it prunes the export. But it can't delete the export, cause the consumer still needs it. You can't deploy the consumer, because the producer has to deploy first sequentially. And if you can't delete the consumer (eg your company mandates a CI pipeline deploy for everything) you gotta go bug Ops on slack, wait for someone who has the right perms to delete it, then redeploy.
This is a tricky issue. Here is how we fixed it:
Assume you have a stack with the construct ID `foo-bar`, whose exported resources are consumed by `charlie`.
Update the stack construct ID to a new value, e.g. `foo-bar-2`. This forces a deployment of a brand-new stack with new exports. `charlie` then updates against the new stack's references, and once it has updated successfully, the original `foo-bar` stack can be safely destroyed: add a `cdk destroy foo-bar` at the very end of your CI.
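In CI terms, the rename dance looks roughly like this (stack names are the illustrative ones from above; exact flags depend on your pipeline):

```shell
# Deploy under the new construct ID; this creates a fresh stack with fresh exports
cdk deploy foo-bar-2 --require-approval never
# Redeploy the consumer so it re-resolves its imports against foo-bar-2
cdk deploy charlie --require-approval never
# Only once charlie no longer references the old exports is this safe
cdk destroy foo-bar --force
```

The ordering is the whole trick: the consumer must stop referencing the old exports before the old stack is torn down.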
The real conundrum is with data - you typically want any data stacks (Dynamo, RDS, etc) to be in their own stack at the very beginning of your dependency tree. That way any revised stacks can be cleanly destroyed and recreated without impacting your data.
> Dependency deadlocks are the big one - you try to share resource attributes (eg ARN) from one stack to another. You remove the consumer and go to deploy again. The producer sees no more dependency so it prunes the export.
I’m a little puzzled. How are you getting dependency deadlocks if you’re not creating circular dependencies?
Also, exports in CloudFormation are explicit. I don’t see how this automatic pruning would occur.
> Deploying certain resources that have names specified (vs generated) often breaks
CDK tries to prevent this antipattern from happening by default. You have to explicitly make it name something. The best practice is to use tags to name things, not resource names.
> I’m a little puzzled. How are you getting dependency deadlocks if you’re not creating circular dependencies?
> Also, exports in CloudFormation are explicit. I don’t see how this automatic pruning would occur.
I explained that. It's a quirk of how it tree-shakes: if nothing dereferences the attribute, it deletes the export. And yes, it'll automatically create an export if you do something like reference one stack's resource attribute (say, a bucket ARN) from another stack.
> CDK tries to prevent this antipattern from happening by default. You have to explicitly make it name something. The best practice is to use tags to name things, not resource names.
I'm well aware, but I'm fighting a ton of institutional inertia at my work.
> Anna Grzymala-Busse, a Stanford University international studies professor, said that seizing the means of production is a socialist and communist goal. "The difference is that with communism, there is also one ruling party that brooks no opposition and no pluralist civil society," she said.
Collectivism and some amount of state-controlled means of production are typical of many leftist systems. America already does this in many places: the military, power, education, civil works. Communism stands out among other -isms on the left in that it is single-party and incompatible with democracy.
> suspicious gesture with his right hand
It was a Nazi salute. Don't believe me? Go to a crowded street corner and do the "suspicious gesture," and gauge the reaction.
He doesn't have a point, because he doesn't know how socialism and communism are alike, and how they differ. He's arguing against what he imagined I wrote.
> The people that started in 3d printing when they had to assemble the whole machine by hand are now sad to see their hobby replaced by something too easy, it feels like cheating.
I have a 10-year-old kit-built Prusa i3 sitting next to me. Its brother is in the basement next to a Kossel. It's been years since they've seen action; there's a litany of small bits of work they need.
I unboxed an A1 Mini and it's been like an epiphany. I've been printing almost nonstop. It's so much FUN. I just send from my phone and it just works. Everything has been nearly flawless, until last night when half a batch of mini utility-knife frames started to spaghetti, probably my fault for not having fully cleaned the build plate in a while.
Beats the hell out of glue stick or blue tape, fussing with slicer params, babysitting the first layers, etc etc. Fuck that, gimme the cheat.
AI datacenters are bottlenecked by power, bandwidth, cooling, and maintenance. OK, sure, maybe the Sun provides ample power, but if you're in LEO you still have to deal with Earth's shadow, which means batteries, which means weight. Bandwidth you have via Starlink, fine. But cooling in space is not trivial. And maintenance is out, unless they're also planning some kooky docking astromech satellite-repair-robot ecosystem.
Maybe the Olney's lesions are starting to take their toll.
The shadow thing can be solved by using a sun-synchronous orbit. See for example the TRACE solar observation satellite, which used a dawn/dusk orbit to maintain a constant view of the sun.
Every telco satellite can cool its electronics; however, more than a few kW is difficult. The ISS has around 100 kW, and it is huge and in shadow half the time.
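To put rough numbers on that: a passive radiator can only shed heat per the Stefan-Boltzmann law, so the achievable flux at electronics-friendly temperatures is modest. A back-of-envelope sketch (the emissivity and radiator temperature here are assumptions, not from any published design):

```python
# Back-of-envelope radiator sizing via the Stefan-Boltzmann law.
# EPSILON and T_RAD are illustrative assumptions.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/m^2/K^4
EPSILON = 0.9      # emissivity of a typical radiator coating (assumed)
T_RAD = 300.0      # radiator temperature in kelvin (assumed)

flux = EPSILON * SIGMA * T_RAD**4   # W radiated per m^2 toward cold sky
area_iss = 100e3 / flux             # m^2 to reject ISS-class ~100 kW
area_mw = 1e6 / flux                # m^2 to reject 1 MW of compute heat

print(f"{flux:.0f} W/m^2; {area_iss:.0f} m^2 for 100 kW; {area_mw:.0f} m^2 for 1 MW")
```

Double-sided radiators, colder payloads, or hotter radiators change the exact figures, but under these assumptions you get only a few hundred watts per square metre, i.e. thousands of square metres of radiator per megawatt.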
The cooling is the bit I'm lost on, but it will be interesting to see what they pull off. It feels like everyone forgets that Elon hires very smart people to work on these problems; it's not all figured out by Elon Musk alone.
Google, Blue Origin and a bunch of other companies have announced plans for data centers in space. I don't think cooling is the showstopper some assume.
Good call-out, and really interesting. With SpaceX being the cheapest way to get things into space, it seems like they're about to become extremely lucrative.
Satellites can adjust attitude so that the panels are always normal to the incident rays for maximum energy capture. And no weather/dust.
You also don't usually use the same kind of panels as terrestrial solar farms. Since you're going to space anyway, you spend the extra money on the highest possible efficiency in terms of W/kg. Terrestrial deployments usually optimize for cost per nameplate watt, or LCOE, which also includes installation and other costs.
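A toy comparison of the two figures of merit. Every number below is made up purely to illustrate the different optimization targets, not a real panel spec:

```python
# Hypothetical panel specs illustrating W/$ vs. W/kg optimization.
terrestrial = {"watts": 400, "cost_usd": 120, "mass_kg": 22}    # commodity silicon (hypothetical)
space_grade = {"watts": 120, "cost_usd": 4000, "mass_kg": 1.2}  # multi-junction cell (hypothetical)

for name, p in (("terrestrial", terrestrial), ("space-grade", space_grade)):
    per_dollar = p["watts"] / p["cost_usd"]
    per_kg = p["watts"] / p["mass_kg"]
    print(f"{name}: {per_dollar:.2f} W/$, {per_kg:.0f} W/kg")
```

The point is just that the rankings flip: the cheap panel wins on W/$ by a wide margin, while the expensive one wins on W/kg, and launch costs make the latter the metric that matters in orbit.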
For one-off or few-off expensive satellites intended to last 10-20 years, yes. But in this case the satellites will be more disposable: the game plan is to launch tons of them at the lowest cost per satellite and let sheer numbers take care of reliability concerns.
It is similar to the biological tradeoff of having a few offspring and investing heavily in their safety and growth vs. having thousands of offspring and investing nothing in their safety and growth.