One I'm familiar with (but don't know deeply) is iron-air batteries [1]. Form Energy [2], an interesting grid-storage startup, uses them. They're not at all energy-dense, but they're very cheap, which makes them economical for that application.
Neat project and approach! I got fed up with expensive registries and ended up self-hosting Zot [1], but this seems way easier for some use cases. Does anyone else wish there was an easy-to-configure, cheap & usage-based, private registry service?
Let us hope that all the wires above the battery packs are of the same polarity. I can't tell for sure if that's the case but I hope the two that are closest together are just book-ended.
A ton of sand, but that's the main issue with those systems and why they're genuinely impractical as anything but a hobbyist project. They need constant monitoring, as all of those cells are from laptop batteries and risk thermal runaway at some point. Even with the best matching possible, some cell in his configuration will have higher internal resistance and create heat. "Real" large off-grid systems all use LiFePO4 and are unlikely to just catch fire. That being said, from the forum post he seems well aware, and he probably has an individual fuse for each cell.
You could also just bury it so that the worst of an explosion is mostly mitigated. I've also seen small container setups, which would probably work better than his (seemingly) wooden shed.
I wonder if the correct solution here is to build the shed far away from the house and trees, on a cinderblock foundation filled in the middle with 8+ inches of sand, and then just stand back with the garden hose to keep everything around it moist.
Not gonna do you any good if the batteries themselves start going off, but if something else has ignited in the cabinet and the batteries are not yet on fire... you'd be glad to have the extinguisher, I bet
I experienced a 400V DC lithium-ion battery catch fire once; it was very scary. That fire extinguisher won't do much at all, even if it is placed in a more logical spot.
The firemen ended up putting the battery, half melted, into a big drum of water, and it took hours to cool off. The concrete was still warm to the touch where it burned ~30 hours after the situation was sorted out.
The smoke was just absolutely unbelievable. Made me reconsider buying an EV. That fire was no joke.
The MV contactor wasn't even closed; it just had 24V powering the internal cell balancer from the vendor, and that was it.
Even though it might not seem like it, because reporting on burning cars is very selective, EVs catch fire a lot less than gas-powered cars, even when adjusted for how many of each are on the road. Additionally, many new EVs now use cheaper LFP batteries that are almost impossible to set on fire.
I hear you and appreciate your point, I just don't think they're for me. Maybe when my kids are grown. Scary doesn't begin to describe what happens; the amount of energy is mind-boggling.
It's definitely now in the wrong spot. I assume that once upon a time there was one rack against the wall, and it was only slightly irresponsibly placed, and now there are two racks and hey kids, heat rises.
The extinguisher should be directly inside the door so as not to attract someone to traverse farther into the building without an escape plan.
Of course if he did so then there would be no extinguishers in the picture and then we would also bitch about it.
The only purpose of a fire extinguisher is to allow you to get out. They don't contain enough extinguishing agent to adequately put out any real-life fire (especially not an electrical one like this).
If he can't reach to grab it because it's too hot, he should have already left.
Dry Powder or CO2 is what you need for energized electrical equipment. And considering there's potential lithium involvement, you might want something more specialized (e.g. F-500 Encapsulator Agent). I agree anything more than a small-scale incident you're just getting the heck out of dodge. I'd have built something along the lines of a concrete bunker, with an automated suppression system to buy time.
I think I'd rather have the extinguisher near the door.
If you are outside, it does not tempt you to cross the room. If you are inside, you run for the door, and then turn around and decide if maybe you should just keep running.
Everything worth doing is worth over-doing. He should start doing mad scientist experiments and produce ball lightning, the amperage could be sufficient.
It's interesting how the simulated view makes errors in the terrain data obvious. The view from Santa Cruz island (off the coast of California) is almost completely obscured by single-pointed errors in the height map along the coast that didn't get cleaned from the data: https://caltopo.com/view#ll=34.0505,-119.8665&e=30&t=n&z=3&c...
You reminded me of this scene from Battlestar Galactica that has stuck with me since I watched it (spoiler alert). Conversely, the character laments the sensory limitations of the human body :)
I was actually thinking about Tay-Sachs this morning on a walk, after I passed by a church with a billboard out front that says "Christ died for all, even babies in the womb".
I learned about Tay-Sachs in high school biology. I think we watched a short documentary on it, as an example of genetic inheritance, and the importance of enzyme function. I remember being so surprised that something so simple (absence of one protein) could be so horrible. A beyond-grim prognosis, and immeasurable/unavoidable suffering for everyone involved. Since then, it's been something that I can't reconcile with the existence of a higher benevolent being. I'm no expert, but it made a lasting impression on me.
I'm so uplifted that researchers have made progress on curing this senseless disease.
I don’t think GP is questioning whether those dying infants’ lives were valuable. I think their point is, those lives were valuable, so how could a benevolent, all-powerful creature allow them to suffer and die merely due to a genetic bug.
I’m a huge fan of Postgres. This one is “user error”, but we still got bit pretty hard.
A query plan changed, on a frequently-run query (~1k/sec) on a large table (~2B rows) without warning. Went from sub-millisecond to multi-second.
The PG query planner is generally very good, but also very opaque. The statistics collected during an ANALYZE and used by the planner are subject to some significant caveats. Essentially, the planner would sometimes wildly mis-estimate costs due to under-sampling, and would choose a bad plan. We fixed it in two different ways: 1) lower the auto-ANALYZE threshold; 2) increase the number of rows sampled when collecting statistics for the relevant column.
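For anyone curious what those two fixes look like in practice, here's a sketch; the table and column names are hypothetical, and the numbers are just examples to tune for your workload:

```sql
-- 1) Lower the auto-ANALYZE threshold for this table only:
--    fire after ~1% of rows change instead of the default 10%
--    (plus the flat autovacuum_analyze_threshold).
ALTER TABLE big_table SET (autovacuum_analyze_scale_factor = 0.01);

-- 2) Sample more rows for the problematic column. The default
--    statistics target is 100 (Postgres samples 300 x target rows);
--    the change takes effect on the next ANALYZE.
ALTER TABLE big_table ALTER COLUMN skewed_col SET STATISTICS 1000;
ANALYZE big_table;
```

Raising the statistics target costs a bit of ANALYZE time and planner memory, so it's best applied per-column rather than via the global default_statistics_target.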
Again, this was “user error”. That said, it will probably happen again on the same or another query, because it’s hard to know if/when a query plan is about to change, and pg_hint_plan and similar are very heavy-handed solutions.
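For reference, pg_hint_plan hints are special comments prepended to the query, which is part of why they feel heavy-handed: every call site has to carry them. The table and index names below are made up:

```sql
/*+ IndexScan(orders orders_created_at_idx) */
SELECT count(*)
FROM orders
WHERE created_at > now() - interval '1 hour';
```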
I'm not even too bothered by the opaqueness of the query planner (although I'd love better visibility into it). But the fact that the query plan can change any second is insane: you can't lock it, and you can't force another one as a short-term fix.
There's no option that I know of. If you reach an impossible-to-anticipate threshold and the query plan changes, your whole system can be down and you can only fix forward, which might take a _long_ time to figure out and is super dangerous as you'll pretty much have to experiment on your prod database.
It's insane. I've not yet been bitten too badly by it, but I know it's coming for me.
Well, what exactly would you expect for better visibility into the planner? I mean, you have the source code, and I'm not sure how to visualize the extreme number of combinations considered by the planner. Any examples of databases doing interesting things?
As for the "locking" of plans, I personally have rather serious doubts about that. Yes, I've heard it suggested as a viable solution, but knowing how vastly different plans may be "right" for the same query with just slightly different parameters ...
> what exactly would you expect for better visibility into the planner
Dunno. At the moment I need a fairly deep understanding of how the planner works (e.g. how it uses statistics or indexes) to optimise queries; I'd love to be able to _see_ that rather than guess. Not saying it's easy, I'm just wishing.
> As for the "locking" of plans, I personally have rather serious doubts about that. Yes, I've heard it suggested as a viable solution, but knowing how vastly different plans may be "right" for the same query with just slightly different parameters ...
What's the problem with vastly different plans being "right" for the same query?
All I am (and many other people are) asking for is a way to ensure PG doesn't bring down my entire system because it decided to change the query plan it uses without 1. any sort of warning 2. any way to revert it. It doesn't feel like it's asking for too much! Maybe locking plans is a good solution, maybe it's not, I'd just like _something_ that lets me sleep at night
Aurora PostgreSQL has something called Query Plan Management, which I like. It's meant to address exactly this type of issue, especially for large tables with key queries that could basically blow up the DB if their planning goes haywire.
Would def be a feature that would be nice to see in PostgreSQL itself.
It'd be nice if one could tell pg to only change its query plans during certain change windows, say, the first Saturday each month, with on call staff ready.
I'm currently having a similar issue where the query planner refuses to use the indexes on a search query (it was fine for a while, but one day it just started de-optimizing itself) and just does a seq scan instead. So rather than the execution taking ~40ms with indexes, the query planner thinks a ~1.5s seq scan is better...
I've re-indexed the DB and run ANALYZE on the table. It gets better for at most 30 minutes, then PG de-optimizes itself again.
I'm kinda stuck on it, any ideas what can I do to resolve it?
Try lowering the random_page_cost value; this is the cost the query planner assigns to random reads, which is usually too high if you're on an SSD, where random reads are cheap (on spinning disks they're expensive). Just setting it to 1 works well in my case.
This solves many "it does a slow seq scan even though there's an index"-cases.
If you're using an SSD or a similarly fast storage subsystem (or one that hides the higher random-access cost relative to sequential reads), you may indeed want to reduce random_page_cost so that random_page_cost / seq_page_cost lands in the 1.2-1.5 range.
But it's also wise to review the default_statistics_target being used, that autovacuum is running frequently enough (which does autoanalyze), that the analyze thresholds are also properly tuned...
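To make the suggestion concrete, here's roughly how you'd test and then persist the change; the default random_page_cost is 4.0 and seq_page_cost is 1.0, and the query is illustrative only:

```sql
-- Try it per-session first and compare the plans:
SET random_page_cost = 1.1;
EXPLAIN ANALYZE SELECT id FROM big_table WHERE indexed_col = 42;

-- Persist cluster-wide once verified:
ALTER SYSTEM SET random_page_cost = 1.1;
SELECT pg_reload_conf();
```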
Thank you for mentioning https://postgresqlco.nf. Team member here :) All the parameters mentioned in this thread are well documented there, with recommendations.
Is it an HSTORE column with a GIN index? The default "FASTUPDATE=ON" option will delay updates to the index until vacuum time, but if you don't vacuum soon enough, suddenly it can decide it should sequentially scan instead of reading through the delayed updates.
This is behaviour I've seen on 9.x on the Aurora variant; for that the solution was to use the FASTUPDATE=OFF index storage option. You can see the delayed tuples by using "pgstatginindex" function.
Using some of the extra options of EXPLAIN (ANALYZE, BUFFERS, COSTS) might give more hints.
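A sketch of checking for and fixing that, with hypothetical index/table names (pgstatginindex comes from the pgstattuple extension):

```sql
-- Inspect the GIN pending list; lots of pending_tuples suggests
-- the delayed-update problem described above.
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstatginindex('my_gin_idx');

-- Disable the pending list for future inserts; a VACUUM of the
-- table flushes whatever is already pending.
ALTER INDEX my_gin_idx SET (fastupdate = off);
VACUUM my_table;
```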
If it's not HSTORE/GIN, then it could be that the planner, after some auto-analyze of the table, thinks that what you are asking for will match a significant fraction of the rows. In that case there's no point in random-seeking through an index: if it believes it needs to read e.g. 50% of the table anyway, it might just as well not use the index.
Try `SET enable_seqscan = off` (or `SET LOCAL enable_seqscan = off` inside a transaction). This makes the pg query planner strongly prefer indexes. Experiment with it until you figure out why your query performance deteriorates. Are you doing a lot of updates/deletes? Could you increase the statistics sampling size, or autovacuum more frequently?
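A safe way to experiment with that, scoped to a single transaction so nothing leaks into normal traffic (the query itself is illustrative):

```sql
BEGIN;
SET LOCAL enable_seqscan = off;  -- reverts automatically at ROLLBACK
EXPLAIN (ANALYZE, BUFFERS)
SELECT id FROM my_table WHERE search_col = 'foo';
ROLLBACK;
```

Note that enable_seqscan = off doesn't forbid seq scans outright; it just assigns them a huge cost, so the planner only falls back to one when no index path exists at all.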
Because the user is querying values that are no longer covered by the statistics, for example an incrementing timestamp or id column.
If the stats are from yesterday and say nothing about the frequency of today's timestamps, the planner has to take a pessimistic view of the world. The data might have radically changed since the last stats run, or not; it needs to analyze the table to know and make optimal choices.
I'm not quite sure why you consider this "user error"? I work on the optimizer a bit, and I wouldn't say it's a fault of the user ...
OTOH I'm not sure it's a fault of the DB either :-( The statistics collected by ANALYZE are pretty much a lossy compressed version of the database, and so some details are missing - that's kinda the point of collecting the stats.
I'm not sure why lowering the autoanalyze threshold would fix this - it increases the frequency of stats updates, so my feeling is it makes it more likely to trigger a similar issue. OTOH increasing the statistics target seems like the right thing to do (it not only increases the sample size but also keeps more accurate stats).
I don't know if there are better solutions (both practical and in principle) :-(
Unfortunately, this is just the reality of using an RDBMS. I've seen similar behavior on Informix and SQL Server (with a smaller load than yours). They all occasionally generate suboptimal query plans. That's what your DBA is for.
SQL Server was somewhat notorious for it when migrating to 2014 because they rewrote the cardinality estimator. It generally worked better, but in some systems it really didn't. Some people ended up using a trace flag to use the legacy estimator. They have steadily improved the new estimator and it's no longer a problem, but it goes to show how much is going on under the surface.
It's the reality of Postgres, yes, but not all relational database. You mentioned SQL Server, which lets you lock in a query plan, specifically to cover the use case the parent described. When you have a Very Important frequently-run query that pulls from a monstrous table, it's nice to be able to sleep peacefully knowing the DB won't shit the bed because something completely unrelated changed someplace else in the database.
One fair criticism of Postgres (and many other open source projects) is that they can be a little too religious about how the thing should work in an ideal world (in this case, SQL being as declarative as possible), sometimes to the detriment of practicality and of making things easier for the business.
We update statistics weekly on SQL Server. One week we did a particularly aggressive data cleanup and then ran stats, which created a bad plan when the "tiny" table quickly grew.
> Unfortunately, this is just the reality of using an RDBMS. I've seen similar behavior on Informix and SQL Server (with a smaller load than yours). They all occasionally generate suboptimal query plans. That's what your DBA is for.
Other DBs let you lock in query plans or provide query hints, but postgres' developers are against either, which is not necessarily complete nonsense as it avoids users shooting themselves in the foot… but it also prevents users from digging themselves out of query planner stupidity.
One strategy I've found useful when asked for an estimate is to ask back: "What do you need the estimate for?" Often, this leads to a useful discussion, and we can discover things like:
- They don't actually need an estimate, because the task can very obviously be completed within the previously agreed window
- They're simply trying to prioritize two different features, so the estimate doesn't need to account for who will be working on the project, known vacations, meetings, etc.
- The business is trying to use the estimate for strategic planning, so a high-confidence estimate, or multiple estimates (optimistic/normal/conservative), is actually needed.
It's similar to when someone comes to engineering and asks "Please build this button for me" - it's always crucial to ask "Why?" and understand the problem they're trying to solve, since often what they've asked for is not what they need.
I've been loving my COROS. It's quite simple, and the battery life is astonishing. No music functionality though, if that's a dealbreaker for you. I got the APEX 42mm about 5 months ago.
I've been kicking around a related idea for a while. A static "death day" is great because you can internalize it and count down towards it, but it loses the distribution behind life expectancy. A single day is the distribution's expected value, but there's a huge range of days that could be your last - even today!
My idea is a daily (or weekly/monthly/etc) notification that re-computes a "death day" for you, on that recurring basis. Each notification would include a day, and together, they'd form a distribution that faithfully represents the variance in days you're likely to die.
For example: me; 28-year-old male. I'd regularly get "death days" that are ~50 years out. Occasionally, maybe they'd be 25, or 55 years away. And very rarely, they'd be things like 10, or even 5 years away. A perfect reminder to play hooky, or call up a friend, or tick off a bucket list item every once in a while!
Anyone interested in this? I'm kind of fishing for an excuse to build it...
There is a spiritual practice of going to bed every day as if on your deathbed, visualising completely and clearly that you will not wake up tomorrow morning: this is it, this is the end. It does change life profoundly, and no computation is needed :)
[1]: https://en.wikipedia.org/wiki/Metal%E2%80%93air_electrochemi...
[2]: https://formenergy.com/technology/battery-technology/