From listening to their podcasts / Twitter Spaces, it sounds like they're building machines from the ground up. I get the impression they'd hate the analogy, but it's an Apple-style take on owning the hardware and software together.
For a dip into how deep in the weeds they are, https://www.youtube.com/watch?v=HCkuCkp3Zoo (they also do this as "oxide and friends" wherever you find podcasts, which is a nuisance to link to but works better)
For what it’s worth, I often use an Apple analogy. Sun works too. Both share that same idea: if you want to build a great computer, you should write software as well as build hardware.
Not a blade. They're selling whole racks because that's the level that the hyperscalers work at, and Oxide are trying to bring the same class of systems to on-prem deployments. The closest comparison would be to something like modern mainframes, though the provided feature set might be subtly different. (For one thing, Oxide systems are ordinary x86 systems at the CPU architecture level; it's just "everything else" that's potentially custom.)
Hyperscalers design their own boards, chassis, BMCs, racks, etc., though, and can write their own firmware (or their own modifications to it). Several are even getting into their own CPU designs. They don't want any boutique designs controlled by other companies in there, and they don't want to pay premiums for it. They certainly don't want to run a proprietary hypervisor / host OS stack on it.
You're assuming hyperscalers would be our target buyer. That's not the case; in fact, it's almost backwards. Hyperscalers build these things for themselves, but do not sell them to others. There are other organizations that could take advantage of this technology, but literally cannot buy it. That's where Oxide fits in.
They may have been. Regardless, that’s not the case. No harm no foul either way! I’d absolutely agree with you that trying to sell to them wouldn’t make sense.
It just doesn't seem to offer much, for all the grandiose claims. Who is going to pay a premium to have "reimagined" firmware and ROMs? Or yet another proprietary hypervisor? Or a stack that seems to be built around Solaris?
It looks like a vanity or nostalgia project started by old Solaris folk, as far as I've seen.
It's mostly the same sort of handwaving though, which I still don't understand.
"anyone who has at least a rack full of Dell/HPE hardware"
Big enterprise and government customers want white-glove support, 10+ year continuity, and ISV certification for their hardware. They will pay a premium, but in return they expect premium value: not reimagined firmware, but real support. I can't see how Oxide could compete there or why it would want to.
"If Oxide computers had existed when I was working on sorting out the infrastructure for in-country domestic payments at Visa, I'd have lobbied pretty hard to adopt them. Assembling basically the same thing from a bunch of random vendors and terribly integrated software was an infuriating, expensive process that only achieved great results because we invested the time and money into it that I'd have rather budget allocated to writing core payments & settlement applications."
I'm not doubting this person's story, but perhaps he doesn't know exactly what Oxide would give either. This kind of thing is exactly what you can write a check to Dell or HPE for: they've already assembled those things with their boxes and their hardware partners, they've got the firmware and software working, and the solution is all certified by SAP and Oracle and Microsoft and whoever else.
But Visa didn't go that way; instead they paid the premium for experts, on staff or contracted, to do more of it in-house. And they got a great result.
Between the two, I'm having a hard time seeing where Oxide would fit. Pay a premium for the hardware solution and for the in-house IT team?
> in return they expect premium value: not reimagined firmware, but real support. I can't see how Oxide could compete there or why it would want to.
One aspect of being responsible for the entire stack is that we can give excellent support. It’s never some other company’s problem.
> This kind of thing is exactly what you can write a check to Dell or HPE for
See my other link elsewhere in the thread for why we’re different than writing that check to Dell.
But even then, this means you do know there is a market for this kind of server. It’s now “why Oxide instead of Dell” instead of “why does Oxide exist.” I’m sure lots of people will still buy servers from other folks for a very long time, but some will also choose us. That’s the joy of startups, we’ll all just have to wait and see!
> One aspect of being responsible for the entire stack is that we can give excellent support. It’s never some other company’s problem.
You aren't responsible for the entire stack though. You have Intel, Mellanox, Microsoft, Red Hat, Oracle, SAP, InterSystems, etc.
And this type of "support" is surely a huge burden in terms of the manpower and infrastructure required. You can't scale this and have the core developers on call solving customer problems. You could have a hypervisor architecture which is technically better and requires less support overhead than KVM, for example, but can you really compete with Red Hat for government and enterprise white-glove support on that advantage alone?
> See my other link elsewhere in the thread for why we’re different than writing that check to Dell.
The problem is once you're talking a real solution, the system management and provisioning and BMC and partitioning stuff is not the really hard part.
Plug in and power on another rack and the systems show up in the management console and you can partition and provision them, sure. IBM and other "enterprise" vendors have that. They're clunky but they have procedure manuals and training and they just work for the most part.
What you actually do with those racks is the difficult part. Enterprise isn't as smooth or scalable as "hyperscale" here. It's a lot of old, crufty, legacy in-house and ISV software.
Perhaps, but if their secret sauce produces a much better product, I'm not terribly concerned that the value-add is software-based and not hardware-based.
Yeah I'm not arguing for or against their product, just genuinely curious to know what differentiates them from existing blade servers hosting kubernetes or something.
Hmm, not sure how this benefits over something like VxRail, which is a supported turnkey solution, but I guess I'll wait for the third-party reviews to come out once it ships.