Though once s*** hits the fan, you can just tell the AI “I have no idea how any of this works and I don’t really even care, but I need rate limiting, so do what you must, I trust you”.
Here’s how I would do this task with Cursor, especially if there are more routes.
I would open a chat and refactor the template together with Cursor: I would tell it what I want, and if I don’t like something, I would help it understand what I like and why. Do this for one route and, when you are ready, ask Cursor to write a rules file based on the current chat that includes the examples you wanted changed and some rationale as to why you wanted it that way.
Then for the next route, you can basically just say “refactor” and that’s it. Whenever you find something you don’t like, tell it and remind Cursor to also update the rules file.
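For example, the rules file Cursor ends up writing might contain something like this (the rules and names below are made up for illustration; the real content would come out of your own chat):

```
# refactoring rules for route handlers (example content)

- A route handler only parses/validates input and calls a service function;
  business logic lives in the services layer, not in the route file.
- Apply the shared rate-limiting middleware to every public route instead of
  hand-rolling per-route counters.
- Prefer early returns for error cases over nested if/else.
- When refactoring a route, keep the existing response shape unchanged and
  update the route's tests in the same change.
```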
Solid approach. Don’t be shy about writing long prompts. We call that context engineering. The more you populate that context window with applicable knowledge and exactly what you want, the better the results. Also, having the model write the code while you talk to it is helpful, because the conversation itself has the side effect of context engineering: you’re building up relevant context with that history. And be acutely aware of how much context window you’ve used, how much is remaining, and when a compaction will happen. Clear context as early as you can per run, even if 90% is remaining.
Yes, I do things like this too. Planning works great, but the same is true for making write-ups after completing a task. In this case you'd ask the agent to write up a document with all the rules for your refactoring based on the current thread (where you have discovered the rules with the agent as you go). Let it include examples, and let it pause and report back to you if it comes across a situation that isn't fully covered by the current rules yet, so you can review it, update the rules, and then let it continue.
Sorry, I didn't mean to be taking shots at any airplane company. I just disagree that multi-module consensus is a reliable form of EDAC. I gave a human-factors example, but there are technical reasons too.
> I just disagree that multi-module consensus is a reliable form of EDAC.
I wonder why you disagree about this? The only reasons I can think of are:
- Same SW on the same HW with the same lifecycle would probably have the same issue (vendor diversity would fix this).
- The consensus building unit is still a possible single point of failure.
Any other reasons you might doubt it as a methodology? It seems to have worked pretty well for Airbus and the failure rate is pretty low, so... It obviously is functional.
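For what it's worth, the core voting idea is easy to sketch. Below is a toy 2-out-of-3 voter in Python; it is nothing like an actual flight control implementation (no timing, fault latching, or dissimilar hardware), and the tolerance is made up:

```python
# Toy sketch of 2-out-of-3 voting across redundant channels.
def vote_2oo3(a: float, b: float, c: float, tolerance: float = 0.01):
    """Return a value that two channels agree on, or None if no majority exists."""
    for x, y in [(a, b), (a, c), (b, c)]:
        if abs(x - y) <= tolerance:
            # Two channels agree: use their average, the third gets outvoted.
            return (x + y) / 2
    return None  # no two channels agree -> declare this stage failed, fall back

print(vote_2oo3(1.00, 1.37, 1.01))  # ~1.005, channel b is outvoted
print(vote_2oo3(1.0, 2.0, 3.0))     # None, no consensus at all
```

Note that the voter itself is the "consensus building unit" from the list above: if it (or whatever it runs on) fails, the redundancy behind it doesn't help, which is part of why a dissimilar backup set exists.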
Modern units, I'm sure, have ECC AND redundancy as well.
Yes exactly, birds of a feather fail together... an A380 has three primary flight control computers, but still carries another entirely dissimilar set of three flight control computers as backup.
Well, the diversity would cover the issue of random HW failures, not the case where your SW has a bug in it. As to the SW, they _sometimes_ have vendor diversity.
Regardless, there are multiple fronts you need to tackle to have high reliability so you should use all techniques at your disposal.
I thought the same until OpenAI rolled out a change that kept confronting me about hidden assumptions I didn’t even make, and telling me I was wrong even when I only asked a simple question.
Blockchain is a very inconvenient database, for sure, but there is a good reason Bitcoin uses it. It had to solve the double-spend problem and create trustless p2p digital cash, while being censorship-resistant and having no central authority.
Some people around a decade ago started using blockchain for everything where a SQLite db would have been better, because blockchain was the buzzword at the time, and they were charlatans who wanted funding and hype, or wanted to signal how cutting-edge they were (kind of like how, in the last two years, everybody became an AI company).
It doesn’t mean that Bitcoin using blockchain is stupid.
> and they were charlatans who wanted funding and hype, or signal how cutting edge they are
Interesting that those same hucksters and shysters who spread the gospel of the blockchain immediately jumped on the AI bandwagon when that became the shiny new thing.
Or, maybe, 40 years working in IT turned me slightly cynical.
> According to new projections published by Lawrence Berkeley National Laboratory in December, by 2028 more than half of the electricity going to data centers will be used for AI. At that point, AI alone could consume as much electricity annually as 22% of all US households.
Cash is not completely anonymous, but it is hard enough to trace and not enough parties track it. Bills are serialized, and you could take photos of coins and likely identify them based on scratch patterns.
Still, the whole thing is saved by the fact that not enough people actually track it to that level.
And on the other side, BTC tracks every single transaction ever. Which is also a detriment: we keep everything stored forever in a lot of places... which kind of seems like a massive waste.
Point taken about anonymity. However, its design (that of cash) is theoretically anonymous; it is reality which gets in the way. BTC, on the other hand, is "just" a huge ledger of transactions with giver and receiver perfectly "identified" (in a unique way, albeit just pseudonymously) and preserved forever.
Also, as you point out, BTC is a massive waste of resources and storage space.
> giver and receiver perfectly "identified" (in a unique way, albeit just pseudonymous)
Not perfectly. A lot of heuristics are needed to link a unique owner to multiple transactions. With bitcoin, it's recommended to use a new address for every transaction so, for example, in a basic transaction, it's not so easy to identify which output is the recipient and which is the change.
And there's Monero that tries to hide these links a lot more.
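To illustrate the change-output point above with a made-up transaction (addresses and amounts are invented, and real chain-analysis heuristics look at much more than a single transaction):

```python
# A simplified view of a typical bitcoin payment: one input, two outputs.
# Both outputs go to previously unseen addresses, so an outside observer
# cannot tell from the transaction alone which one is the payment and
# which one is the change going back to the sender.
tx = {
    "inputs":  [{"address": "bc1q...sender",  "amount": 1.0000}],
    "outputs": [
        {"address": "bc1q...fresh_a", "amount": 0.3000},  # payment? change?
        {"address": "bc1q...fresh_b", "amount": 0.6999},  # change? payment?
    ],
}
# fee = inputs - outputs = 0.0001; heuristics (round amounts, later spending
# patterns, address reuse) are guesses, not certainties.
```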
Which is why higher layers like the Lightning Network, Rootstock, and Liquid avoid storing everything on chain and offer speed/features Bitcoin natively can't, while resting on the stronger security model of their base layer.
With the Lightning network, L2 is basically a database between only 2 parties that are required to be online during the transaction. It's not possible to double spend in that situation.
There's the possibility of double spending by committing to the bitcoin blockchain an old version of your "database", but then you would face the penalty of having your entire balance confiscated by the other party.
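A heavily simplified model of that penalty rule (this is not real Lightning code; actual channels use commitment transactions, timelocks, and revocation keys, which are only hinted at here):

```python
# Toy model of a 2-party payment channel with a penalty for publishing old state.
class Channel:
    def __init__(self, alice: int, bob: int):
        self.state_num = 0
        self.balances = {"alice": alice, "bob": bob}
        self.revoked_states = set()  # every superseded state gets revoked

    def pay(self, frm: str, to: str, amount: int):
        """Off-chain update: both parties sign a new state and revoke the old one."""
        self.revoked_states.add(self.state_num)
        self.state_num += 1
        self.balances[frm] -= amount
        self.balances[to] += amount

    def close(self, published_state: int, publisher: str):
        """Settle on chain. Publishing a revoked (old) state forfeits your balance."""
        if published_state in self.revoked_states:
            honest = "bob" if publisher == "alice" else "alice"
            # The honest party sweeps the whole channel balance as the penalty.
            return {honest: sum(self.balances.values()), publisher: 0}
        return dict(self.balances)

ch = Channel(alice=5, bob=5)
ch.pay("alice", "bob", 3)             # latest state: alice 2, bob 8
print(ch.close(0, "alice"))           # alice publishes old state -> {'bob': 10, 'alice': 0}
print(ch.close(ch.state_num, "bob"))  # honest close -> {'alice': 2, 'bob': 8}
```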
> but then you would face the penalty of having your entire balance confiscated by the other party.
Only if the other party notices in time that you did this. You are reliant on active monitoring of the blockchain to know that your transactions actually happened. And the more you want to scale (i.e. the more transactions you do on a single Lightning channel without settling it on the BTC blockchain), the bigger the risk becomes.
Yes, but as long as you monitor, double spending is not possible. And it's possible to use tools to do that somewhat passively.
There are conditions on every payment system. With bitcoin you also have something to do to prevent double spending: wait for some number of confirmations (and make sure you're on the right chain).
And "double-spend protection guarantees of blockchains" is very dependent on the cost of doing a 51% attack, so it's not strong by itself. It's very strong in bitcoin only because the quantity of hashrate/money required to do one is astronomical. It's not so strong on small blockchains.
And I fail to see how the risk increases with more transactions on a single lightning channel.
My point is that Lightning has additional failure modes that BTC does not, and Lightning in itself does not offer the guarantees that Bitcoin does. It of course also suffers from all of BTC's failure points - if someone successfully does a 51% attack on BTC, they can implicitly also steal any Lightning funds as well. If you close a Lightning channel and then don't wait for enough confirmations, or you broadcast your cheating transaction and don't wait for enough confirmations, you can clearly lose your money.
The risk doesn't increase with the number of transactions on a channel, that was a wrong statement from my side. What I was thinking of is that the risk increases the more you transact through Lightning instead of regular BTC. Basically, the more of your BTC is caught up in Lightning channels, the higher the value of attacking you with a double-spend attempt.
This is automated, no one is proposing to manually look at BTC blocks to see if you are getting cheated. The problem is that you need to explicitly run code constantly to check if this happens - which means that if your monitoring agent goes offline for any reason (which an attacker could perhaps force), your BTC that you received in a Lightning channel may be stolen.
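Roughly what that constant checking looks like, building on the toy channel above (broadcast_penalty_tx is a stand-in for publishing the pre-signed penalty transaction, not a real library call):

```python
def broadcast_penalty_tx(channel, cheating_tx):
    # Placeholder: in reality this publishes the pre-signed "justice" transaction
    # that sweeps the entire channel balance to the honest party.
    print(f"claiming full balance, cheating close detected: {cheating_tx}")

def check_new_blocks(channels, new_blocks):
    """Scan freshly mined blocks for revoked (old) channel states being published."""
    for block in new_blocks:
        for tx in block["transactions"]:
            for ch in channels:
                if tx.get("channel_state") in ch.revoked_states:
                    broadcast_penalty_tx(ch, tx)
```

If this check (or a watchtower doing it for you) is offline while the cheating close confirms and its timelock expires, the funds are gone.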
Okay, so it's an attack vector, but one that can be mitigated by implementing redundancy.
I would argue that Lightning's biggest security issue is having to store your private keys on an Internet connected device. I don't know if further improvements can be made in this area, for example allowing for some kind of 2FA, like multi-sig on the base layer.
I thought it was interesting that BSV seemed to scale just fine, and you could also store entire files on it, including JSON, HTML or even music or videos.
This seemed like an amazing innovation to me, made even more amazing by the fact that it was, by all accounts, the original protocol.
You could do some pretty amazing stuff with it, for example store a SPA on chain and then store individual posts on chain, and have the SPA read the posts.
Unfortunately, the ecosystem was completely greed focused, and nobody is interested in technological advancement in the slightest.
> BSV seemed to scale just fine, and you could also store entire files on it, including JSON, HTML or even music or videos
This doesn't pass the sniff test. Everyone must store the full blockchain in order to verify it. So to run a full node you would have to store everyone's JSON, HTML, music, videos. Full mirroring for every node in a distributed system is about as close as you can get to the definition of doesn't scale.
I should note, the scaling I was referring to was transaction processing. Data storage is a little different.
The architecture which I heard described or hypothesized was more akin to Amazon deep storage. More frequently accessed data would be more accessible on "hot" nodes.
Full nodes would effectively, under this paradigm, become cloud storage providers. As a bonus, the problem of how to charge for access is basically already solved, and does not require a complex corporate payment scheme.
Indeed. Bitcoin's blockchain grows at a laughable 3 kB/s, yet is an unwieldy 700 GB.
A blockchain that allowed you store one song per second would be hundreds of TB before long. There are other architectures for that sort of thing for a reason.
Looks like BSV is about 7 TB and grows at about 4 GB a day. I have no clue what those guys are up to these days. This may be unwieldy for a home PC but really is still pretty trivial for a data center.
500 hours of video are uploaded to YouTube per minute, which is... if my napkin math is right, about a petabyte a day.
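The napkin math roughly checks out; all the figures below are ballpark assumptions (e.g. ~1.5 GB per stored hour of video), not measurements:

```python
# Bitcoin: ~700 GB accumulated over roughly 15 years of blocks.
btc_kb_per_s = 700 * 1e6 / (15 * 365 * 24 * 3600)
print(f"Bitcoin average growth: ~{btc_kb_per_s:.1f} kB/s")  # ~1.5 kB/s average;
# a ~2 MB block every ~10 minutes puts the current rate nearer 3 kB/s.

# BSV at the ~4 GB/day mentioned above:
print(f"BSV growth: ~{4 * 365 / 1000:.1f} TB/year")          # ~1.5 TB/year

# YouTube: ~500 hours uploaded per minute, assuming ~1.5 GB per stored hour.
gb_per_day = 500 * 60 * 24 * 1.5
print(f"YouTube ingest: ~{gb_per_day / 1e6:.1f} PB/day")     # ~1.1 PB/day
```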
Looks like the max they've done is something like 22k TPS. No idea how accurate this is, I don't follow the ecosystem. There's a lot of different numbers like "maximum theoretical potential" that probably mean nothing.
Why is overlapping content an issue? Isn't that good?
Let's say I like Show A and Show B. Show A is available on Provider 1 and Provider 2, Show B is available at Provider 2 and Provider 3. Thanks to overlapping content, I can subscribe to Provider 2 and I can watch both of my favorite shows.
I used ChatGPT for everyday stuff, but in my experience their responses got worse and I had to wait much longer to get them. I switched to Gemini and their answers were better and came much faster.
I don’t have any loyalty to Gemini though. If it gets slow or another provider gives better answers, I’ll change. They all have the same UI and they all work the same (from a user’s perspective).
There is no moat for consumer genAI. And did I mention I’m not paying for any of it?
It’s like quick commerce: sure, it’s easy to get users by offering them something expensive off of VC money. The second they raise prices or offer a degraded experience to make the service profitable, the users will leave for another alternative.