Hacker News | arthur-st's comments

They have a healthy enterprise customer base, and an engineering team that clearly knows how to work with power users (which OpenAI is bad at).


They have an old-school enterprise sales operation that is doing superb work. Apart from that, ChatGPT's projects are useless crap (you can't read other convos in a project; you can't generate project documents from a convo), so they would clearly get value out of just hiring some developers who have built anything of use to a power user.


Having digitized a university physics textbook by hand, I can say this is a very nice LaTeX guide for everyone interested. One thing worth noting from a 2025 perspective: the "default" local setup is most likely going to be VSCode with the LaTeX Workshop[1] and LTeX+[2] extensions, and you should use TeX Live on every platform it supports (since MiKTeX and friends can lag). Also, use LuaTeX, as it has been the officially recommended[3] engine since November 2024.

[1] https://marketplace.visualstudio.com/items?itemName=James-Yu...

[2] https://marketplace.visualstudio.com/items?itemName=ltex-plu...

[3] https://www.texdev.net/2024/11/05/engine-news-from-the-latex...
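For the curious, a minimal sketch of a document set up for LuaLaTeX (file name and content are just placeholders; compile with `lualatex main.tex`):

```latex
% LuaTeX/LuaLaTeX handles UTF-8 and system fonts natively,
% so no inputenc/fontenc boilerplate is needed.
\documentclass{article}
\usepackage{fontspec}  % modern font selection, works under LuaTeX
\usepackage{amsmath}
\begin{document}
The time-dependent Schr\"odinger equation:
\[ i\hbar \frac{\partial}{\partial t}\Psi = \hat{H}\Psi \]
\end{document}
```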


No, the parent clearly indicates that they consider XeTeX a worse choice than LuaTeX.


My mistake, apologies!


> They weren't contributing before

That is not true. Companies like AWS had paid staff working as OSS Redis core maintainers before the licensing schism. This talk of "achieving their goals" is just bluster serving no purpose other than damage control.


Check if your configuration allows pyright to use other files in the workspace.
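In case it helps, a sketch of what a `pyrightconfig.json` at the workspace root might look like so pyright picks up the other files (paths and version here are placeholders, not your actual setup):

```json
{
  "include": ["src", "tests"],
  "exclude": ["**/__pycache__"],
  "pythonVersion": "3.12",
  "typeCheckingMode": "basic"
}
```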


To round out the big 4, there's also Pyre from Meta. I haven't used it myself, since when I last checked it covered a low number of typing PEPs, but I've heard good things about it.


MyPy's rules are reference-grade, being as close to an official spec as we get until the Typing Council is done establishing their moat.

To understand the shortcomings of MyPy, I strongly suggest reading pyright's documentation on how the two compare: https://github.com/microsoft/pyright/blob/main/docs/mypy-com...

Quoting the pertinent part:

> Pyright was designed with performance in mind. It is not unusual for pyright to be 3x to 5x faster than mypy when type checking large code bases. Some of its design decisions were motivated by this goal.

> Pyright was also designed to be used as the foundation for a Python language server. Language servers provide interactive programming features such as completion suggestions, function signature help, type information on hover, semantic-aware search, semantic-aware renaming, semantic token coloring, refactoring tools, etc. For a good user experience, these features require highly responsive type evaluation performance during interactive code modification. They also require type evaluation to work on code that is incomplete and contains syntax errors.

> To achieve these design goals, pyright is implemented as a “lazy” or “just-in-time” type evaluator. Rather than analyzing all code in a module from top to bottom, it is able to evaluate the type of an arbitrary identifier anywhere within a module. If the type of that identifier depends on the types of other expressions or symbols, pyright recursively evaluates those in turn until it has enough information to determine the type of the target identifier. By comparison, mypy uses a more traditional multi-pass architecture where semantic analysis is performed multiple times on a module from the top to the bottom until all types converge.

> Pyright implements its own parser, which recovers gracefully from syntax errors and continues parsing the remainder of the source file. By comparison, mypy uses the parser built in to the Python interpreter, and it does not support recovery after a syntax error. This also means that when you run mypy on an older version of Python, it cannot support newer language features that require grammar changes.

Astral's type checker seems to be an exercise in speeding up Pyright's approach to designing a type checker, and in removing the Node dependency from it.
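The lazy-evaluation idea in that quote can be shown with a toy sketch (nothing like pyright's actual implementation; the symbol table and names here are invented): types are resolved on demand, recursively, and memoized, so asking about one identifier never forces a full top-to-bottom pass over the module.

```python
# Toy lazy type evaluator: each symbol maps to either a concrete
# type or to the name of the symbol it aliases.
symbols = {
    "x": "int",           # x: int
    "y": ("alias", "x"),  # y = x
    "z": ("alias", "y"),  # z = y
    "w": "str",           # w: str  (never queried below)
}

cache: dict[str, str] = {}

def type_of(name: str) -> str:
    """Resolve a symbol's type on demand, memoizing the result."""
    if name in cache:
        return cache[name]
    entry = symbols[name]
    if isinstance(entry, tuple):      # alias: recurse into the dependency
        result = type_of(entry[1])
    else:                             # concrete type: no recursion needed
        result = entry
    cache[name] = result
    return result

print(type_of("z"))   # resolves z -> y -> x on demand; prints "int"
print(sorted(cache))  # "w" was never needed, so it was never evaluated
```

Asking for `z` evaluates exactly the chain it depends on (`z`, `y`, `x`) and nothing else, which is the property that makes hover/completion in an editor cheap.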


I haven't had any speed issues with MyPy; performance was never a problem in any project where I used it. I'm also not sure why I need incremental anything: I save a file and then I want it to be checked.

If I am not implementing a language server, how is it of any importance that the type checker was designed with a language server in mind? How does that benefit me in my normal projects?

If there are no semantic improvements that allow more type inference than MyPy does, I don't see much going for Pyright. Sounds like an "ours is blazingly faster than theirs" kind of sales pitch.


In a medium-size codebase (~100 Python modules of ~200 lines each), mypy takes 5 minutes to type check. This can be a problem for CI.


Just to throw my anecdote in: I used to work at the mypy shop - our client code base was on the order of millions of lines of very thorny Python code. This was several years ago, but to the best of my recollection, even at that scale, mypy was nowhere near that slow.

Like I said, this was many years ago - mypy might've gotten slower, but computers have also gotten faster, so who knows. My hunch is still that you have an issue with misconfiguration, or perhaps you're hitting a bug.


My current company is a Python shop, 1M+ LOC. My CI run earlier today completed mypy typechecking in 9 minutes 5 seconds. Take from that what you will.


Ditto, same order of magnitude experience; at least for --no-incremental runs.

Part of the problem for me is how easily caches get invalidated. A type error somewhere will invalidate the cache for that file and everything in its dependency tree, which blows a huge hole in the runtime.

Checking 1 file in a big repo can take 10 seconds, or more than a minute as a result.


I guess there is something we're not doing right with the cache. Thanks for your feedback.


Never happened to me. Similarly sized code base, done in seconds, if not one. Guess we all have our anecdotes.


I think you have something misconfigured, or are timing incorrectly. I'm working on a project right now with ~10K LOC. I haven't timed it, but it's easily <= 2 seconds. Even if I nuke MyPy's cache, it's at most 5 seconds. This is on an M3 MBP FWIW.


And with dmypy (included with mypy) it's even faster.


I've found dmypy very underbaked. It's very easy to get it to regularly crash or pin a CPU indefinitely in my codebase.


Yeah it’s far from perfect, but speed is usually not its biggest fault.

I’ll still be switching to the astral offering as soon as it’s production ready.


Pyright has semantic improvements (and also some differences) over MyPy. As for using the type checker as a language server, it's difficult to go back to "it's compiling" after you've had one stop you from typing out a bug in-flight.


This depends on the model, e.g., the non-entry-level Forerunner series watches have had training load-related metrics for a while now. That said, Garmin is mainly a cardio watch, and by extension the Apple training load metrics seem to give a more presentable UX to the "default"[1] approach to cardio training load.

[1] https://evokeendurance.com/a-new-and-better-look-at-training...


> especially after ... the white house praise for rust

What's the threat model here, that Rust is a trojan language from the feds?


I recommend reading this paper, as it gives some understanding of the things that are possible with an infected toolchain: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...

Some modern compiled languages such as Zig and Go can be officially bootstrapped from a C toolchain. And a C toolchain can in turn be bootstrapped with Guix using only a 357-byte blob. This gives good confidence that you can bootstrap a malware-free toolchain using auditable source artifacts.

Rust, however, has no official way to be bootstrapped from a C compiler, which means developers must use a previous version of the compiler to build a new version. In this situation, you can never be sure malware was not injected into a previous version of the compiler (see the Ken Thompson paper for an example). There's no way to know, because you are using an unauditable blob to create another blob.

This is why someone created mrustc, a Rust compiler implemented in pure C++, so that Rust can be bootstrapped from a C toolchain (see also: https://users.rust-lang.org/t/understanding-how-the-rust-com...).

The mrustc solution is not great, because there are essentially two implementations of the same compiler that have to be kept in sync. It would be much better if Rust used a solution like Zig's: https://ziglang.org/news/goodbye-cpp/


This was interesting, cheers!

