Hacker News | dash2's comments

So the time series are provided with no context? It's just trained on lots of sets of numbers? Then you give it a new set of numbers and it guesses the rest, again with no context?

My guess as to how this would work: the model will first guess from the data alone whether this is one of the categories it has already seen or inferred (share prices, Google Trends searches for cats, etc.). Then it'll output a plausible completion for that category.

That doesn't seem as if it will work well for any categories outside the training data. I would rather just use either a simple model (ARIMA or whatever) or a theoretically-informed model. But what do I know.


I think I'm in the same boat as you are, in preferring more conventional approaches to time series analysis.

I'm curious as to how this would compare to having an actual statistician work on your data, because I feel that time series work is as much an art as it is a science. To start, selection of an appropriate timeframe is always important, to ensure that our data doesn't resemble either white noise or a random walk, and that we've given the response time of our data appropriate consideration! I find that people unfamiliar with statistics miss this point - I get people asking why I might use a weekly or biweekly timeframe for data when they reckon I should be using hourly or daily data. Selection of appropriate predictors is also important for multivariate time series, and I have no idea how this model approaches that.

I also have questions about how interpretable the results output by this model are. With a more "traditional" model, I can easily look at polyroot or the ACF/PACF, as well as various other diagnostic tools, and select a relatively simple model that results in a decent 95% prediction interval. I've always been very wary of black-box models simply because I wouldn't be able to explain any findings derived from them well.
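For the ACF diagnostic mentioned above, here's a minimal sketch in plain Python. The AR(1) toy series and the 0.8 coefficient are made up purely for illustration; a real analysis would use R's acf/pacf or a library like statsmodels rather than hand-rolling this.

```python
import random

def acf(series, max_lag):
    """Sample autocorrelation for lags 0..max_lag (biased estimator)."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    return [
        sum((series[t] - mean) * (series[t - lag] - mean)
            for t in range(lag, n)) / var
        for lag in range(max_lag + 1)
    ]

# Toy AR(1) series x[t] = 0.8 * x[t-1] + noise, seeded for reproducibility.
random.seed(0)
x = [0.0]
for _ in range(299):
    x.append(0.8 * x[-1] + random.gauss(0, 1))

r = acf(x, 5)
# r[0] is exactly 1; for an AR(1) process the sample ACF decays roughly
# like 0.8**lag, which is the kind of signature you read off a correlogram
# before picking a simple model.
```

The point of looking at this (and the PACF) is exactly the interpretability argument above: the decay pattern itself tells you which simple model family is plausible.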

From skimming the blog post, is MAE all they're using for measuring the output quality?


If it works for predicting the next token in a very long stream of tokens, why not. The question is what architecture and training regimen it needs to generalize.

Is that right? I think you can serve tokens without training the next models. It would be bad strategy, but it would work. So the important question is: are they covering their operating expenditure? If they are, the business has legs (and it will be worth spending a lot to train the next models). If not, maybe not.

If a major model provider were to just halt progress on developing new and improved models, the open weight alternatives would catch up in a couple years.

They would have a period of great margin, followed by possibly zero margin as enterprises move to free options.

They would have to come up with a lot of great products around the inferior models to justify charging at that point.


Also, an out-of-date model which doesn't know about last year's world events, hit songs and new JS libraries is a depreciating asset even before you consider low-cost competitors catching up. So you'd presumably have to do some training just to keep the model up to date at the current quality level (unless you completely give up and just sweat the assets). And on the other side of that coin: over the next few years, do the latest, biggest models continue to generate user-perceived real-world improvements sufficient to keep users wanting the latest and greatest?

> If a major model provider were to just halt progress on developing new and improved models, the open weight alternatives would catch up in a couple years.

That's why it would be bad strategy.


There are companies that already do nothing but serve tokens using models trained by others. Just running infrastructure and collecting a reasonable fee for their troubles. It's only a bad strategy if you want to claim to investors that you'll gain monopoly market share if only they could give you a few more billion dollars.

I don't think it will work; it's too easy to switch models. When Google comes out with a new model, people will just switch. I think Google wins in the long run: they have the money to just wait until everyone else goes bankrupt, and they also have the Apple contract and therefore the mobile market.

And apparently the most efficient training and inference thanks to their TPUs, IIUC?

What are the equivalents to "you're absolutely right" or "it's not X. It's Y." for vibe-coded apps? I think: verbose design with lots of subtitles; lots of boasting; and a particular visual style.

An argument not mentioned here, and which I didn't appreciate until I actually took part in these markets myself, is that you need a supply of stupid/uninformed people to take the other side of the informed people's bets. (In economic terms, no-trade theorems apply.) That suggests to me that the dream of perfect information revelation isn't going to come true. Instead, the liquid markets will be those with a large supply of marks, who bet for identity reasons or who are simply ignorant and naive. (Currently Polymarket gives a 16%-ish probability that Trump will lose office this year. Sounds like wishful thinking to me?)

Polymarket "odds" I think are just the price of the contract * 100 right?

That's not actually the market's predicted odds, because every single bet is also a bet on interest rates.

A contract that would literally always resolve to "true" in 1 year would have a non-zero price for the "false" side, because selling that option (and thus taking the "false" side) means you get ~$97 today and then pay $100 in a year.

Polymarket's 4% chance of Jesus returning by 2026 actually represents a market consensus of basically a 0% chance.
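The discounting point above is just present-value arithmetic. A sketch, where the 4% one-year rate is an assumption for illustration, not a quoted market figure:

```python
# A contract certain to pay $1 in one year is worth less than $1 today,
# so the "yes" price understates the true probability, and the "no" price
# can be positive even when the real chance is zero.
rate = 0.04                           # assumed one-year risk-free rate
fair_yes_price = 1.0 / (1 + rate)     # present value of a sure $1, ~0.9615

naive_no_chance = 1.0 - fair_yes_price   # ~3.8%, read naively as "odds"
# So a ~4% "no" price is consistent with a ~0% actual probability,
# purely from the time value of money.
```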

For Trump losing office, there might also be some bets predicated on his losing office being correlated with a higher-interest-rate outcome.


If the odds are set correctly, you should have the same EV on either side of a bet.

That's why it's hard to beat Vegas at sports betting--they set the correct odds way too often.

When regular folks make up their own odds, they're not very good at it, but in theory the market just buys up any +EV position, even if it's a longshot.
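The zero-EV point above can be made concrete with toy numbers (the 16% price and the 6% estimate are illustrative, not real market figures):

```python
def expected_value(price, prob_win, payout=1.0):
    """EV of buying one contract at `price` that pays `payout` if it wins."""
    return prob_win * payout - price

# A correctly priced 16% contract: zero EV on both sides.
ev_yes = expected_value(0.16, 0.16)
ev_no = expected_value(0.84, 0.84)

# If your true estimate is 6% while the market charges 16%, buying "yes"
# is -EV, so "no" is the +EV position a rational market would buy up.
edge = expected_value(0.16, 0.06)
```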


I agree with you overall but I'm not sure the Trump bet is the best example.

He certainly isn't going to be thrown out of office (unfortunately), but those 16% of bettors also win if he dies this year, and he's 80, fat, still gobbling burgers, and shows signs of someone who has had at least one stroke so far.

On the other hand he has access to the best medical care, but even still a 16% chance he drops dead before the end of the year isn't that outrageous.


So for example "shows signs of someone that has had at least one stroke so far" sounds like the kind of "information" that has not yet made it into a mainstream news outlet, probably for good reason. And indeed when I searched, I found it on The Daily Beast, whose reliability I doubt.

But maybe I'm wrong. If so, you could always take the opposite side of the bet. For sure, the correct probability is not zero!


Checks annuity tables for an 80-year-old making it to 81... you're giving 1:6 odds? Sure, I'll take that bet, because the actual odds are only a small fraction of that. If I spread the risk across multiple positions, I can make a very good return taking those types of bets.
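Back-of-the-envelope version of the annuity-table argument, following the framing that the 16% is mostly a death bet. The ~6% one-year mortality for an 80-year-old male is an assumed ballpark (roughly SSA period-life-table territory), not an actuarial quote:

```python
true_death_prob = 0.06     # assumed one-year mortality for an 80-year-old
market_yes_price = 0.16    # "loses office" price implied by the 16% market

# Buy "no" (he stays in office) at 1 - 0.16 = 0.84; under this framing it
# pays $1 with probability 0.94.
no_price = 1 - market_yes_price
ev_per_contract = (1 - true_death_prob) - no_price   # ~$0.10 per $0.84 staked
return_on_stake = ev_per_contract / no_price         # ~12% expected return
```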

People who place bets on political outcomes on Polymarket come from one of three groups: 1) insiders who think they have an edge (but probably don't), 2) fools who believe what their media of choice tells them, and 3) people who make money on the first two groups.

We both know that the Trump market and people who believe everything they see on MSNBC (or whatever it is called now) have a big overlap. It's basically a way to print money, because there are always people who are out of touch enough to believe their side is right 100% of the time and a <insert color here> wave is coming in the next election. Is this taking advantage of them? Maybe, but they are a walking negative externality in every other way in life, so why not. Consider it a tax on political extremism and partisanship, which I think is a good thing. Prove me wrong...


You put your own case powerfully, but you don't seem to have reacted to Derek Thompson's case, except to say that you're not bothered about gambling addiction. (And why not? If people predictably do things that are bad for themselves, that damages the efficiency case for free markets and everything.)

I did read his article, and there are the geopolitical events and the sporting events he talks about.

I don't really understand why sports leagues require faith in their institution. Is the economy overleveraged on credit default swaps on league merchandise sales? Is our economy built on preteens in Nebraska believing their only way out of there is a worthwhile pursuit?

I'm not sure why I'm supposed to care about the sanctity of that market. What are the consequences of it feeling rigged? And the FBI was on those insider trades instantly, so the sports side already seems tightly regulated, whether or not I understand why a segment of that market needs certain assurances.

As for the non-sporting trades, I recognize the danger: liquidity in the market altering the outcome, as someone in control of the outcome does something selfish. I say do what we can to avoid the death markets and the nuclear ones, but distributed bounties otherwise are very transparent and efficient wealth distribution mechanisms that fulfill other goals of compensating labor more correctly.


Surely a big point of sport is to get young people to do healthy, character-building team activities. That requires sporting heroes they can look up to, rather than cheats who will throw a game for money.

It doesn’t require that.

Representation is a powerful driver for a large swath of humans; there are many others who get inspired for other reasons, or inspired by fictional characters.

I’m fine with those other traits being expressed more frequently.


Management Studies is the top management journal; it is highly regarded and would count as fairly prestigious in e.g. tenure applications.

Here's mine. It's not big or important (at all!) but I think it is a perfectly valid app that might be useful to some people. It's entirely vibe-coded including code, art and sounds. Only the idea was mine.

https://apps.apple.com/us/app/kaien/id6759458971


This is horrible. Children of that age should not be glued to a computer screen. If handing your kids over to the care of a bot is your idea of parenthood, I'm sure glad I'm not your kid.

The exact point of the app is to be as un-sticky as possible. I deliberately used calm colours, slow transitions, and a simple gameplay routine with a limited shelf life, after seeing how other apps for kids were designed like fruit machines.

If you simply think that children should never be exposed to screens, then I can sympathise with that point of view, but I think it's better to introduce them in a thoughtful and limited way.

Your last sentence is unnecessarily overblown and inflammatory, and adds nothing useful to the discussion.


Not in this case: the LLM wrote the entire paper, and anyway the proof was the answer.

Maybe there's a positive externality: your individual learning percolates to others and benefits the firm as a whole.

What is there to learn? If anything, developers are still the ones training and enhancing the models, by giving them more feedback cycles on what works and what doesn't.

Is there more to the inspiration than "3d isometric with a lot of staircases"?
