Why does that matter if he's not seeing ads? A severely crippled adblocker means that you would see ads during regular usage.
Additionally, Brave, a Chromium-based browser, has adblocking built into the browser itself, meaning it is not affected by WebExtension changes and does not require trusting an additional third party.
>the present generation of automated systems, which are monitored by former manual operators, are riding on their skills, which later generations of operators cannot be expected to have.
But we are in the later generation now. All the 1983 operators are now retired, and today's factory operators have never had the experience of 'doing it by hand'.
Operators still have skills, but it's 'what to do when the machine fails' rather than 'how to operate fully manually'. Many systems cannot be operated fully manually under any conditions.
And yet they're still doing great. Factory automation has been wildly successful and is a big part of why manufactured goods are so plentiful and inexpensive today.
It's not so simple. The knowledge hasn't been transferred to future operators, but to process engineers who are now in charge of making the processes work reliably through even more advanced automation, which requires more complex skills and technology to develop and produce.
No doubt, there are people that still have knowledge of how the system works.
But operator inexperience didn't turn out to be a substantial barrier to automation, and they were still able to achieve the end goal of producing more things at lower cost.
Google made some very large n-gram models around twenty years ago. This being before the era of ultra-high-speed internet, the data was distributed as a set of 6 DVDs.
It achieved state-of-the-art performance at tasks like spelling correction at the time. However, unlike an LLM, it can't generalize at all; if an n-gram isn't in the training corpus it has no idea how to handle it.
I have this DVD set in my basement. Technically, there are still methods for estimating the probability of unseen n-grams. Backoff (interpolating with lower-order n-grams) is one option. You can also impose prior distributions, Bayesian-style, so that you can make "rational" guesses.
N-grams are surprisingly powerful for how little computation they require. They can be trained in seconds even with tons of data.
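A tiny sketch of the backoff idea (this is the "stupid backoff" flavour; the toy corpus and the 0.4 discount factor are illustrative, not anything Google shipped):

```python
# Bigram scores with backoff to unigrams for unseen pairs.
from collections import Counter

corpus = "the cat sat on the mat the cat ate".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
total = sum(unigrams.values())

def score(prev, word, alpha=0.4):
    """Bigram relative frequency, backing off to the unigram when unseen."""
    if (prev, word) in bigrams:
        return bigrams[(prev, word)] / unigrams[prev]
    return alpha * unigrams[word] / total  # lower-order estimate, discounted

print(score("the", "cat"))  # seen bigram
print(score("mat", "ate"))  # unseen bigram -> unigram backoff
```

"Training" here is literally just counting, which is why it scales to huge corpora so cheaply.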
>You can see this in retirement, actually. There's real data showing mortality spikes in the years after people stop working. The structure of striving, even when it felt like a burden, was providing something that leisure alone can't replace. People who stop pursuing things often just... decline.
Or maybe people stop working because their health was declining?
The counterpoint is all the people who pursue daily goals intensely at high ages. POTUSes and SCOTUS justices, for example, tend to outlive most USians, and tend to stay active with projects or jobs long beyond normal retirement.
Per the article, it did succeed. AI radiology tools are being widely adopted, and they work very well.
But they are being used by radiologists, not instead of radiologists. And because scans can be interpreted more quickly and cheaply, more scans are ordered, which has increased the demand for radiologists overall.
Computation does not necessarily need to be quantized and discrete; there are fully continuous models of computation, like ODEs or continuous cellular automata.
That's true, but we already know that a bunch of stuff about the universe is quantized. The question is whether that holds true for everything.

And all 'fully continuous models of computation' in the end rely on a representation that is a quantized approximation of an ideal. In other words: any practical implementation of such a model that does not end up being a noise generator or an oscillator, and that can be used for reliable computation, is - as far as I know - based on some quantized model. And then there are still the cells themselves (arguably quanta) and their location (usually on a grid, though you could use a continuous representation for that as well).

Now, 23 or 52 bits of mantissa (depending on the size of the float representation you use for the 'continuous' values) is a lot, but it is not actually continuous. That's an analog concept, and you can't implement it with high enough fidelity on a digital computer.
You could do it on an analog computer but then you'd be into the noise very quickly.
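To put a rough number on that (a tiny sketch, assuming numpy is available): the same chaotic map iterated in float32 and float64 drifts apart after a few dozen steps, purely because of the 23- vs 52-bit mantissa.

```python
import numpy as np

# Logistic map x -> 3.9 * x * (1 - x), run in two float precisions.
x32 = np.float32(0.4)
x64 = np.float64(0.4)
for i in range(60):
    x32 = np.float32(3.9) * x32 * (np.float32(1.0) - x32)
    x64 = 3.9 * x64 * (1.0 - x64)
    if i % 10 == 9:
        print(i + 1, float(x32), float(x64))
```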
In theory you can, but in practice this is super hard to do.
If your underlying system is linear and stable, you can pick any arbitrary precision you are interested in and compute all future behaviour to that precision on a digital computer.
Btw, quantum mechanics is both linear and stable, and even deterministic. Admittedly it's a bit of a mystery how the observed chaotic nature of e.g. Newtonian billiard balls emerges from quantum mechanics.
'Stable' in this case means that small perturbations in the input only lead to small perturbations in the output. You can insert your favourite epsilon-delta formalisation of that concept, if you wish.
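One concrete way to spell that out (just one formalisation among many, with f the map from input to output):

$$\forall \varepsilon > 0\ \exists \delta > 0 : \lVert x - x' \rVert < \delta \implies \lVert f(x) - f(x') \rVert < \varepsilon$$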
To get back to the meat of your comment:
You can simulate such a stable system 'lazily'. Ie you simulate it with any given fixed precision at first, and (only) when someone zooms in to have a closer look at a specific part, you increase the precision of the numbers in your simulation. (Thanks to the finite speed of light, you might even get away with only re-simulating that part of your system with higher fidelity. But I'm not quite sure.)
Remember those fractal explorers like Fractint that used to be all the rage? They were digital at heart, obviously, but you could zoom in arbitrarily as if they had infinite continuous precision.
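As a rough sketch of that lazy-precision idea (using Python's standard-library Decimal; the centre coordinates, iteration budget and precision schedule are just illustrative):

```python
from decimal import Decimal, getcontext

def mandelbrot_escape(cx, cy, max_iter=200):
    """Iterate z -> z^2 + c in Decimal arithmetic; return the escape iteration."""
    zx, zy = Decimal(0), Decimal(0)
    for i in range(max_iter):
        zx, zy = zx * zx - zy * zy + cx, 2 * zx * zy + cy
        if zx * zx + zy * zy > 4:
            return i
    return max_iter

center_x = Decimal("-0.743643887037151")
center_y = Decimal("0.131825904205330")
for zoom in range(1, 6):
    # Each zoom level shrinks the window and bumps the working precision,
    # so round-off stays well below the "pixel" size.
    getcontext().prec = 20 + 15 * zoom
    step = Decimal(10) ** (-3 * zoom)
    print(zoom, mandelbrot_escape(center_x + step, center_y + step))
```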
Sure, but that 'If' isn't true for all but the simplest analog systems. Non-linearities are present in the most unexpected places and just about every system can be made to oscillate.
That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior. Of course with LLMs we're throwing all of that gain overboard again but the basic premise still holds: if you don't quantize you drown in an accumulation of noise.
> Sure, but that 'If' isn't true for all but the simplest analog systems.
Quantum mechanics is linear and stable. Quantum mechanics is behind all systems (analog or otherwise), unless they become big enough that gravity becomes important.
> That's the whole reason digital won out: not because we can't make analog computers but because it is impossible to make analog computers beyond a certain level of complexity if you want deterministic behavior.
It's more to do with precision: analog computers have tolerances. It's easier and cheaper to get to high precision with digital computers. Digital computers are also much easier to make programmable. And in the case of analog vs digital electronic computers: digital uses less energy than analog.
One problem is that, even though it is Turing-complete, many practical operations are very difficult. Patterns tend either to dissolve into chaos or to fade out, neither of which is a good property for useful computation. Simply moving information from one part of the grid to another requires complex structures like spaceships.
You might have better luck with other variants. Reversible cellular automata have a sort of 'conservation of mass' where cells act more like particles. Continuous cellular automata (like Lenia) have less chaotic behavior. Neural cellular automata can be trained with gradient descent.
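For a feel of what a continuous-CA update looks like, here is a minimal Lenia-flavoured sketch (the ring kernel and growth parameters are illustrative choices, not the published Lenia values; assumes numpy and scipy):

```python
import numpy as np
from scipy.signal import convolve2d

def growth(u, mu=0.15, sigma=0.015):
    """Smooth growth mapping: positive near mu, negative elsewhere."""
    return 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0

def step(state, kernel, dt=0.1):
    """One update: convolve with the kernel, apply growth, clip to [0, 1]."""
    u = convolve2d(state, kernel, mode="same", boundary="wrap")
    return np.clip(state + dt * growth(u), 0.0, 1.0)

# Ring-shaped neighbourhood kernel, normalised to sum to 1.
r = 13
y, x = np.ogrid[-r:r + 1, -r:r + 1]
d = np.sqrt(x * x + y * y) / r
kernel = np.exp(-((d - 0.5) ** 2) / 0.02) * (d <= 1)
kernel /= kernel.sum()

state = np.random.rand(128, 128)
for _ in range(50):
    state = step(state, kernel)
```

The continuous state and smooth growth function are what make the dynamics less brittle than Life-style rules.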
> Copyright law does not protect typeface or mere variations of typographical ornamentation or lettering. A typeface is a set of letters, numbers, or other characters with repeating design elements that is intended to be used in composing text or other combinations of characters, including calligraphy. Generally, typeface, fonts, and lettering are building blocks of expression that are used to create works of authorship. The Office cannot register a claim to copyright in typeface or mere variations of typographic ornamentation or lettering, regardless of whether the typeface is commonly used or unique.
Given the incredible amount of work that goes into designing a typeface I find that really surprising.
Apparently you CAN protect the implementation of a typeface, e.g. the font file itself. Wikipedia says:
> Typefaces and their letter forms are considered utilitarian objects whose public utility outweighs any private interest in protecting their creative elements under US law, but the computer program that is used to display a typeface, a font file[a] of computer instructions in a domain-specific programming language may be protectable by copyright. In 1992, the US Copyright Office determined that digital outline fonts had elements that could be protected as software[13] if the source code of the font file "contains a sufficient amount of original authorship".
Likewise I think it is extremely dubious that models can be copyrighted at all, for the exact same reason you can't copyright a phonebook or database. The entire regime of claiming to release models under various licenses is bullshit, because you can't copyright rote transformations of things either.
It's completely different from a phonebook or database, which are mere compilations.
If something is considered sufficiently transformative, then it can be copyrighted. If you do a bunch of non-trivial processing on a database to generate something new, you can copyright that.
And LLM training is in no universe a "rote transformation". It is incredibly sophisticated, carefully tuned, and results in a final product that could not possibly be more different.
This is definitely going to be argued over in court at some point, along with many other questions about AI and copyright.
Speaking of which, why is it taking so long to get a Supreme Court decision on whether or not training counts as copyright infringement? The only court cases that have been resolved so far have settled on unrelated grounds without touching the core issue.
I don’t think benchmark overfitting is as common as people think. Benchmark scores are highly correlated with the subjective “intelligence” of the model. So is pretraining loss.
The only exception I can think of is models trained on synthetic data like Phi.