Hacker News | mbreese's comments

Arguably better quality, but at the cost of being shorter. In the great trade off of time, size, and quality, I think VHS chose a better combination.

Sometimes the code must be received through the bank’s app. I went through this process recently to open a new account (at a bank where I already had other accounts). I didn’t think much of it at the time, but if you didn’t have or want a smartphone, this could be a major problem.

I’ve been doing something similar with a RAG system where in addition to storing the documents, we use an LLM to pull out “facts”. We’re using the LLM to look for relationships between different entities. This is then also returned when we query the database.

But I like the idea of an LLM generated/maintained wiki. That might be a useful addition to allow for more interactive exploration of a document database.
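The store-documents-plus-extracted-facts pattern described above can be sketched in a few lines. This is a toy illustration, not the actual system: all names are invented, and the fact extraction that an LLM would perform is stubbed out as a plain argument.

```python
from dataclasses import dataclass, field


@dataclass
class FactAugmentedStore:
    """Toy sketch: documents stored alongside (subject, relation, object)
    facts; a query returns matching documents plus any facts that
    mention the queried term."""
    docs: list = field(default_factory=list)
    facts: list = field(default_factory=list)  # (subject, relation, object) triples

    def add(self, doc, extracted_facts):
        # In the real system, an LLM would extract these facts from `doc`.
        self.docs.append(doc)
        self.facts.extend(extracted_facts)

    def query(self, term):
        term = term.lower()
        hits = [d for d in self.docs if term in d.lower()]
        related = [f for f in self.facts
                   if any(term in part.lower() for part in f)]
        return hits, related
```

At query time both the matching documents and the related facts come back together, which is what enables the more structured, entity-centric answers.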


I think the easy answer is: because there are customers there. It’s a region full of major commercial and industrial companies. I can imagine that you’d want data centers close to where those customers are.

Technically, I can see challenges in power and cooling, but those can be overcome. The real question is: are there enough customers in the region to support local data centers? I think the answer is clearly yes.


I thought PPC was supposed to be highly performant, but not very efficient. I didn’t think ARM (at least non-Apple ARM) was hitting that level of performance yet. I thought ARM was by far more efficient, but not quite there in terms of raw performance.

But I could be wrong… I’m going from a historical perspective. I haven’t checked PPC benchmarks in quite a while.


Are you guys sure you're not confusing product lines? PPC is a PowerISA architecture, but hasn't been pushing desktop/server-level performance for, what, almost 20 years? It's an embedded chip now, and AFAIK IBM doesn't even make them any more. Power (currently "10th gen"(-ish)) is the performant architecture, used in the computers formerly known as i-Series, formerly known as RS/6000. It's pretty fast, but not price competitive. They aren't really the same thing.

"PowerPC" was a modification of the original IBM POWER ISA, developed in cooperation by IBM, Motorola, and Apple.

Motorola made CPUs with this ISA. Apple used CPUs with this ISA, some made by IBM and some made by Motorola.

While Motorola and Apple used the name "PowerPC", IBM continued to use the original name "POWER" for its server and workstation CPUs. Later IBM sold its division that made CPUs for embedded applications and for PCs, retaining only the server/workstation CPUs.

However, nowadays, even if the official IBM name is "POWER", calling it "PowerPC" is not a serious mistake, because all the "PowerPC" ISA changes were incorporated into the POWER ISA many years ago.

So the current POWER ISA is an evolution of the PowerPC ISA, which was an evolution of the original 1990 POWER ISA.

It is better to call it POWER, as saying "PowerPC" may imply a reference to an older version of the ISA instead of the current one, but the two names refer to the same thing. PowerPC was an attempt at rebranding, but they later returned to the original name.


Thanks for the lecture. My point is that people often confuse PPC in the embedded space (still in production) with Power in the enterprise space, where no one I know has called it 'PPC' since the G5 days, outside historical artifacts like 'ppc64le' (we run mostly AIX). Same/similar ISA, very, very different performance expectations. YMMV.

I think they see customers wanting to have the flexibility to move to ARM and this is the fastest way to say they support ARM workloads. Maybe this is a path for IBM to eventually use ARM chips down the road, but I see this as being more about meeting customers where they think the demand is today rather than an explicit guess for tomorrow.

I’ve said it before here, but my mind was swayed after talking with a product manager about AI coding. He offhandedly commented that “he’s been vibe coding for years, just with people”. He wasn’t thinking much about it at the time, but it resonated with me.

To some, agents are tools. To others, they are employees.


I had a similar realisation in IT support. I regularly discover that the answers I get from junior- to mid-level engineers need to be verified, are based on false assumptions, or are wildly wrong, so why am I being so critical of LLM responses? Hopefully some day they’ll make it to senior-engineer levels of reasoning, but in the meantime they’re just as good as many on the teams I work with, and so have their place.

While we’re talking about filtering — is there a way to set a WHERE clause when you’re setting up the index? I’ve been working on this a lot recently for a hybrid vector search in pg. One of the things that I’m running up against is setting up a good BM25 index for a subset of a table (hence the WHERE clause). I have document subsets with very different word frequencies, so I’m trying to make sure that the search works well within a given subset.

I think I could also set up partitions for this, but while you’re here… I’m very excited to start rolling this out.
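The subset concern above can be illustrated with a toy BM25 scorer in plain Python (this is the standard Okapi BM25 formula, not the pg_textsearch implementation): document-frequency and average-length statistics are computed only over the documents passed in, which is exactly why an index scoped to a subset behaves differently from one built over the whole table.

```python
import math
from collections import Counter


def bm25_scores(query, docs, k1=1.2, b=0.75):
    """Score each doc in `docs` against `query` using Okapi BM25.

    All corpus statistics (IDF, average document length) come from
    `docs` alone, so scoring over a subset differs from scoring over
    the full collection."""
    tokenized = [d.lower().split() for d in docs]
    n = len(tokenized)
    avgdl = sum(len(t) for t in tokenized) / n
    df = Counter()  # document frequency per term
    for toks in tokenized:
        df.update(set(toks))
    scores = []
    for toks in tokenized:
        tf = Counter(toks)
        score = 0.0
        for term in query.lower().split():
            f = tf[term]
            if f == 0:
                continue
            idf = math.log((n - df[term] + 0.5) / (df[term] + 0.5) + 1)
            score += idf * f * (k1 + 1) / (
                f + k1 * (1 - b + b * len(toks) / avgdl))
        scores.append(score)
    return scores
```

A partial index (or a per-subset partition) effectively restricts `docs` to one subset, so terms that are rare overall but common within the subset get scored with subset-local statistics.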


Partitions would be one option, and we've got pretty robust partitioned table support in the extension. (TimescaleDB uses partitioning for hypertables, so we had to front-load that support.) Expression indexes would be another option — not yet done, but there is a community PR in flight: https://github.com/timescale/pg_textsearch/pull/154

I think it comes down to the standard argument against ZFS on linux -- uncertainty. It works *now*. Will it continue to work? Will any upstream changes in the Linux kernel cause issues with the ZFS modules bolted on top?

Issues between ZFS and Linux are unlikely; it's too widely used now. But because it's not included in the mainline kernel tree, it's not explicitly tested against kernel changes.

So, it's a low risk, but not zero risk.

More to the point here, when working with FreeBSD, ZFS is a first-class citizen (more so, even), so working with it *should* be more integrated in a FreeBSD solution than in Proxmox. But how much more (and whether that's meaningful) is probably more a qualitative feel than a quantitative fact.


If you're evaluating VM hosts (Proxmox, Hyper-V, VMware, etc.), you need support for nested virtualization all the way down. Otherwise, if you want to evaluate a VM infrastructure, you need to start with bare metal. Really, you just need to make sure that your top level supports nested virtualization, but I understand their point.

However, I think the point about Firecracker VMs in place of containers is a really good use case. Firecracker can provide better isolation, so it would be great to be able to run Firecracker VMs for workloads, which would require that the host (and any VM host above it) support nested virtualization.


