Hacker Newsnew | past | comments | ask | show | jobs | submit | bradfa's commentslogin

Have a look at comma ai

I would hope geohot is exploring options to partner with one of the automakers. Because it sure looks like the future is not bright for their device. Cars are steadily switching to encrypted canbus and don't work with Comma. It's a dead end unless they work a deal with someone to be allowed on the bus.

George Hotz has done some interesting work, but Comma is far too indie/hacker. It's not at a scale where it can be 100% autonomous.

I think a fully autonomous car has to be designed around LiDAR and autonomy from the ground up. That's a hugely capital intensive task that integrates a lot of domains and data. And so much money and talent.

This is more in the ballpark of Google Waymo, Amazon Zoox, Tesla/xAI, Rivian, Apple, etc.

And as the other folks have mentioned, this becomes a really good prospect if one company can manage the autonomy, insurance, maintenance, updates, etc. A fully vertically integrated subscription offering on top of specially purposed hardware you either lease or purchase.


It’s interesting that this implies that building natural gas pipelines to data centers is easy, at least easier than building out substations and transmission lines. Because you don’t run a (or several) 42MW natural gas generator without a big fat natural gas pipe.

Why is it so much easier to build the pipelines than to bring in electric lines?


In Texas a lot of natural gas is wasted/burned away as it is not profitable to collect and transport it from all oil fields. These days quite a few places put small turbines to generate electricity to do cryptocurrency mining.

This will serve a similar use case just on a bigger scale.


That is the most William Gibson thing I've read today, at least. Wow.

I have also heard about deployment of inference (LLMs etc) instead of crypto at these gas turbines. If you already have internet there, why not I guess.

a 16" natural gas pipeline moving 200 MMscf/d at pressures >1000 psi (relatively standard numbers, 16" can go up to ~600 MMscf/d) has a power transport capability of 2.5 GW (thermal). burn that in 60% efficient combined cycle turbines and you get 1 GW of electricity. that's a lot easier than building 1 GW of electric transmission lines.

You can do 1GW with a single 525kV hvdc bundle (maybe 300mm across).

The market seems to be settling on a 5 cable setup and 2GW power transmission (Tenet & other European TSOs have a 2GW Platform build out…).


1GW at 500kV three-phase AC is 1154 amps; 1 billion divided by 500,000 divided by sqrt(3).

You could handle that with one set of 1272kcmil aluminum conductors, or two sets of 300kcmil conductors, based off of this wire submittal: https://www.prioritywire.com/specs/acsr.pdf

The voltage drop will be higher than HVDC, but AC transformers are probably an order of magnitude cheaper than HVDC switchgear. That’s the main issue with HVDC, interrupting HVDC current is very difficult since there’s no zero point like with an AC sine wave. High voltage AC breakers use SF-6 to extinguish the arc at the zero point, which happens 120 times per second at 60 hz (100 times per second for 50 hz)


>Because you don’t run a (or several) 42MW natural gas generator without a big fat natural gas pipe.

at 40KWh/kg and 50% efficiency you'd need 2 tons/hour for a 42MW generator, which is a one large tanker per day. Thus you can do without gas pipeline which is a big advantage over electric wires and other static infra when you need to scale power quickly.

Sidenote - it all brings memories of how 34 years ago i worked couple months in a Siberia village powered by working 24x7 gas turbine from a helicopter.

Vs. the original article - i doubt that supersonic core is the best. Supersonic engine is designed to get a significant pressure from ram effect. Until supersonic speed reached, such an engine has bad efficiency due to low compression - that is why Concorde was accelerating to supersonic speed on afterburners (atrocious efficiency just to get to efficient speed as fast as possible). The modern engines from say 787 - they have high compression and best high temp mono-crystal blades, etc. - would be much better.


So AI comes on a truck, not through a series of tubes. Got it! :)

Ultimately AI comes from space :) Just wait until Starship starts flying - with its projected $/kg launch costs the placement of solar + GPUs in space would be the cheapest option (10-20kg is 1 GPU, 5 m2 solar and 2 m2 radiator (at 70C radiating away 1.5 KW) - total launch cost would be around paltry $1-2K) .

Are people seriously still on this “datacenter in space” thing? It has been beaten down with hard facts endlessly the past few months.

you see my numbers in my comment. I don't see any numbers in your comment (please don't give any links to the articles whose authors don't understand black body radiation formula and thus reference ISS cooling in datacenter discussion :)

Facts don't matter when Elon's feelings are at stake

I'd really like to see the math on this one. It implies that building wind and solar on Earth is somehow worse than building it in space _and_ moving the data center there? It's not just counter-intuitive, it's bonkers.

>It's not just counter-intuitive, it's bonkers.

you need to look at numbers instead of intuition - intuition naturally gets us wrong when we deal with unfamiliar things like space.

A ready 10 ton house will cost $500K-1M to place in a well developed area - the cost of land itself, communications, permits, also delays and unpredictability of process, etc. Add to that yearly taxes. Add tremendous electricity costs in case of the datacenter. And all those costs have and will be growing.

Launching 10 tons on Starship - sub-$1M and that cost will drop down to about $100K-200K.

> It implies that building wind and solar on Earth is somehow worse than building it in space _and_ moving the data center there?

exactly. Space is cheap and plentiful :)


> Why is it so much easier to build the pipelines than to bring in electric lines?

It's not necessarily easier to do one or the other. It's about which one is faster.


In Texas the electric grid is regulated by ERCOT and gas pipelines are regulated (by accident of history) by the Texas Railroad Commission. ERCOT has a big network of producers who put energy into the grid, local companies like CenterPoint Energy who distribute it to customer sites, then retail electric companies sell the power and pay a line usage fee to the line owner (which tends to be a fixed monthly cost to the customer, listed separately on the bill from other fees or usage bills). The TRC deals with companies that own their own pipelines and bill the end customer directly.

A lot of the natural gas in the US is in Texas, and a lot of it is flared while pumping out crude. Putting data centers on turbines near the extraction fields out in the Permian Basin makes sense for power. You can build short pipelines or hook into the ones already there.


They want to build them near the oil fields in texas. As of now most of those fields already run without much if any power infra in place on top of that they would be right by the natural gas generation.

Add that the manpower and expertise of running generators is abundant there and it's a prettt solid idea if they can actually make it.


WA state has the advantage of cheap electricity due to hydro projects, and before they were able to ship off their surplus to CA, they did a lot of aluminum production here to take advantage of it. I can see natural gas working similar, but I’ve also heard data centers want to take advantage of cheap hydro and wind power in western states.

Transmission loss in gas pipes is probably lower than electric transmission? Underground probably easier than above ground. Lastly I think they are building data centers near natural gas fields...

I wouldn't expect so, because it's not just fugitive emissions we're talking about, but that you need to run a lot of big compressors to run pipelines. But often that cost isn't really counted because they just burn more gas to power them.

It's amortized for sure, gas is relatively dense in energy and can be transported long distance with minimal loss, unlike electricity. High voltage DC power lines are used in some places for long distance transport, but that's nowhere near the continent-spanning oil and gas pipes.

I'm guessing it's not just the overhead lines, but you need the actual power plants somewhere.

We have an abundance of natural gas and a shortage of electricity.

This fuse blows because a crash was detected and it is to protect the people inside the car and rescuers. The article argument is that it can blow even for small crashes where no damage to the battery occurs but rehabilitating the vehicle still incurs an outrageous cost. This is not a simple over current protection fuse.

$1000 for the module with the fuse seems ok to me. Another $3000 to link the module to the vehicle is the outrageous part.


They are not only linking it to the vehicle, they are doing a LOT of other checks on the battery - that it's not damaged in non-obvious ways. For that you need trained people (it's really high voltage and amperage stuff), tooling AND you really need to be sure you guarantee everything is OK.

Even the basic mechanical disconnect and lowering of the battery is far from simple (and requires A LOT more expensive tools than changing a wet belt - not because they are greedy, but because a lift that can lower such hevy battery costs a lot of money, mostly in materials), and that's not even opening it, making sure you don't get electrocuted when you work on it ect.


The lift costs about as much as one of their repair bills. That can hardly justify the cost.

I feel that my highest productivity was the 4 years I spent on the same team working remotely but having many interactions per day with my coworkers and manager. I only physically was in-person with my team for 1 week during that 4 year span. But every day I was working WITH my teammates, interactively. My manager was open and honest about things and the company culture embraced discussing "What if we did X?" to debate how we could improve things and dream up new ideas.

Prior to that I worked in-person in offices doing similar types of engineering. I was never as productive there but I did see more sides of the business and I got to do more varied tasks. Having lunch or going for a short walk physically with teammates and non-teammates definitely spawns opportunities which otherwise don't naturally happen.

Now, I do consulting/contracting remotely. Often I'm working on weeks to months long contracts. All my customers are remote. It's very clear that my value is in short term results, to get the customer past their current problem. If any planning for the future is found, I recommend it, but unless the customer wants me to pursue it then the recommendation is all I give.

All 3 kinds of work have pros and cons. I do miss regularly having lunch with coworkers. I MUCH prefer my remote work commute, flexibility, and work/life balance.


Not to simplify too much, but I think it comes down to accountability and responsibility.

I've worked remotely with a team where everyone was very engaged, saw similar shared goals to work towards and everyone took accountability for doing work to reach that.

I've also worked remotely with people who basically barely noticed the work everyone else even did, nor cared, nor appreciated it. There was a sense of, let's just get any interactions over with and go back to our doing the minimum we can to not get fired.


If OpenAI fails, wouldn't their customers just move to other AI providers? So the total hardware demand by AI companies wouldn't dramatically change over a reasonable time frame?


If OpenAI fails, the assumed case is overcapitalisation relative to demand. In such a world, if the reason OpenAI has no liquidity is because there's insufficient demand, there's no customers to move across.


What happens if the OpenAI failure is because of lack of demand from their customers and the market in general? None of the AI providers or hardware providers will survive, no?


Because manufacturers transitioned fab and assembly lines from low margin dram to higher margin products like hbm, hence reducing dram supply. But the demand for consumer grade dram hasn’t changed much so prices for it go up.


Soon we will get promoted tools who want to show their brand to the human and agent. Pay a little extra and you can have your promotion retained in context!


Back when ChatGPT Plugins were a thing a built a small framework for auto-generating plugins that would make ChatGPT incessantly plug (hehe) a given movie:

https://chatgpt.com/share/6924d192-46c4-8004-966c-cc0e7720e5...

https://chatgpt.com/share/6924d16f-78a8-8004-8b44-54551a7a26...

https://chatgpt.com/share/6924d2be-e1ac-8004-8ed3-2497b17bf6...

They would also modify other plugins/tools just by being in the context window. Like the user asking for 'snacks' would cause the shopping plugin to be called, but with a search for 'mario themed snacks' instead of 'snacks'


Got any links to explanations of why fine tuning open models isn’t a productive solution? Besides renting the GPU time, what other downsides exist on today’s SOTA open models for doing this?


When the new pre-trained parameters come out in a new model generation, your old fine tuning doesn't apply to them.


Home automation customers (the end users) probably are going to balk at the yearly subscription price of Ubuntu Pro. Especially for gadgets that likely cost less to buy upfront than a single year of Ubuntu Pro.


That there are so many indicates to me the opposite. There are lots of people who want to work on that kind of thing, just they all have slightly different opinions as to which part is the part they’re fighting against, hence so many different forks.

The forks are all volunteer projects (except Ubuntu), so having slightly different opinions isn’t considering capitalism as a driving force.


Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: