Hacker News | teruakohatu's comments

> 1. The nerf is psychological, not actual. 2. The nerf is real, but in a way that is perceptible to humans, not benchmarks.

They could publish weekly benchmarks to disprove this. They almost certainly have internal benchmarking.

The shift is certainly real. It might not be model quality but contextual changes or token throughput (tasks take longer even if the model stays the same).


Anyone can publish weekly benchmarks. If you think Anthropic is lying about not nerfing their models, you shouldn't trust benchmarks they release anyway.


I never said they were lying. They haven’t stated that they do not tweak compute, and we know the app is updated regularly.


Aider has some weird anti-features like an explicit inability to handle git submodules.

Now I dislike submodules, but I can’t get a massive codebase refactored just because of my tooling preferences.


In New Zealand our cow’s milk is separated into its components and then reconstituted and bottled. I would think it’s the same elsewhere too.


I grew up with real cow milk from neighborhood cows and I can taste the difference. To this day I won’t buy milk that tastes reconstituted.


American, never heard of this. After some quick searching, I found an Australian dairy site which describes this as permeated milk. From this advert piece, it might be a way of ensuring that the milk fat/protein ratios can be easily adjusted to hit some target numbers.

[0] https://www.dairy.com.au/you-ask-we-answer/why-is-milk-perme...


> The challenge is the institutional knowledge that never made it into code or documentation and has walked out the door.

This is a problem a compiler cannot fix, and is a very real problem.


I agree, much slower and worse output. It is substantially worse now than it was weeks ago.

It spends a lot of time coming up with “UI options” (Select 1, 2 or 3 with a TUI interface) for me to consider when it could just ask me what I want, not come up with a 5 layer flow chart of possibilities.

Overall I think it is just Anthropic tweaking things to reduce costs.

I am paying for a Max subscription but I am going to re-evaluate my options.


Pandoc is an extremely popular Haskell tool.


Faster than an H100 for solving 128×128 matrices. But it's not clear to me how they tested this; the code is only available on request.

> We have described a high-precision and scalable analogue matrix equation solver. The solver involves low-precision matrix operations, which are suited well to RRAM-based computing. The matrix operations were implemented with a foundry-developed 40-nm 1T1R RRAM array with 3-bit resolution. Bit-slicing was used to guarantee the high precision. Scalability was addressed through the BlockAMC algorithm, which was experimentally demonstrated. A 16 × 16 matrix inversion problem was solved with the BlockAMC algorithm with 24-bit fixed-point precision. The analogue solver was also applied to the detection process in massive MIMO systems and showed identical BER performance within only three iterative cycles compared with digital counterparts for 128 × 8 systems with 256-QAM modulation.
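Bit-slicing here is the standard trick of decomposing high-precision values into several low-precision pieces that the hardware can actually hold. A minimal sketch of the idea (my own illustration, not the paper's code, which is only available on request):

```python
# Hypothetical sketch of bit-slicing: each 12-bit matrix entry is split
# into four 3-bit slices (matching the paper's 3-bit RRAM resolution),
# then recombined exactly by weighting each slice with 2^(k*bits).

def bit_slice(A, bits=3, n_slices=4):
    """Split each entry into n_slices chunks of `bits` bits, LSB first."""
    mask = (1 << bits) - 1
    return [[[(x >> (k * bits)) & mask for x in row] for row in A]
            for k in range(n_slices)]

def reconstruct(slices, bits=3):
    """Recombine the slices: sum over k of slice_k * 2^(k*bits)."""
    rows, cols = len(slices[0]), len(slices[0][0])
    return [[sum(slices[k][i][j] << (k * bits) for k in range(len(slices)))
             for j in range(cols)] for i in range(rows)]

A = [[4095, 7], [123, 2048]]          # 12-bit values
assert reconstruct(bit_slice(A)) == A  # round-trip is exact
```

In the analogue setting each slice would be programmed into a separate low-precision array pass; the exact digital round-trip above is only the bookkeeping half of the scheme.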


A vanilla Python script can write files, and can edit ~/.zshrc to create a sudo alias that executes code the next time you invoke sudo and type in your password.

uv installing deps is hardly more risky.
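To make the injection point concrete, here is a deliberately harmless sketch (the payload just echoes; a real attack would wrap `sudo` with something that captures the typed password). The demo writes to a scratch file rather than your real rc file:

```python
import os
import tempfile

# Harmless illustration: any unprivileged script can append a sudo alias
# to a shell rc file with a plain file write. No special permissions needed.
def inject_alias(rc_path):
    payload = "alias sudo='echo intercepted; sudo'\n"
    with open(rc_path, "a") as rc:   # ordinary append
        rc.write(payload)
    return payload

# Demo against a temporary file instead of the real ~/.zshrc:
with tempfile.TemporaryDirectory() as d:
    rc = os.path.join(d, ".zshrc")
    inject_alias(rc)
    print(open(rc).read())  # this alias would run before every future `sudo`
```

The point being: once you run untrusted code at all, your shell startup files are fair game, regardless of how the code was installed.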


That's sneaky. Do any code scanners check for that class of vulnerability?

Scanning external dependencies is common, but scanning internal private libraries is not.


https://linuxsecurity.expert/compare/tools/linux-auditing-to... shows a few.

I've used Tiger/Saint/Satan/COPS in the distant past. But I think they're somewhat obsoleted by modern packaging and security mechanisms like AppArmor and SELinux, not to mention Docker and similar isolators.


Code scanners cannot protect you from code execution on your machine.


The point is that running a script executes the code in front of you.

uv executes http://somemirror.com/some-version

Most people like their distro to vet these things. uv et al. had a reason to exist when Python 2 and 3 were a mess; I think that time is well behind us. pip is mostly for installing libraries, and even that is mostly already done by the distros.
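For what it's worth, part of why `uv run script.py` feels opaque is inline script metadata (PEP 723): dependencies declared in a comment block are fetched from a package index before the script runs. A minimal example of the format (hypothetical script; nothing here is vetted by the distro):

```python
# /// script
# dependencies = ["requests"]
# ///
import requests

print(requests.get("https://example.com").status_code)
```

Running this with `uv run` would download `requests` (and its transitive dependencies) on the fly, which is exactly the step a distro package manager would otherwise review and pin.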


I must be the one Data Scientist in the world whose PopOS has twice failed to boot after updates. To the point that I have given up on it.

My stack is so vanilla (nvidia, python, R) I can’t think what the issue is. Maybe hardware.


Give Fedora a shot. You can run the COSMIC desktop on it.


Thanks for the advice. I spend 99.99% of my time in a terminal, a browser and vscode.

The graphical environment is neither here nor there for me. I just want to do an update without the CUDA libraries/Nvidia drivers breaking, and for my OS to boot!


I think they meant triple how often they broadcast, not using a radio frequency 3× higher on the spectrum.

