
Do you know what's completely missing from all of these products, like AnythingLLM and Onyx...

a mobile application that has feature parity with what ChatGPT and Claude offer...


Hmm, yeah, that's a great callout. It's something we definitely have in our sights longer term (the focus for now is making sure the desktop chat experience is truly amazing).


I hope you mean "parity" no?


For those interested in this type of single entry accounting (and by extension double entry)

Here are some other ones I've tried and used in the past:

https://copilot.money

https://lunchmoney.app

https://ynab.com

https://beancount.io

https://hledger.org


But why one or the other? Don't get me wrong, I appreciate a curated list of suggestions, but it would really be useful to have some tips or comments on the experience of each one, their shortcomings or advantages. Otherwise, it's not much better than just checking out a list of names from Google :)


I will use hledger if I'm handling someone else's money, like as a trustee. Double entry accounting is nice for being precise about things. But for my own accounts it's too much overhead to deal with reconciliation. Don't have time for that.


A lot of it is going to be needs- and vibes-based. Some have more in-depth, niche features in certain areas, like transaction splitting or categorization, while others go for a simple, clean UI for ease of use.



I use Monarch. I don't think it is very good as an 'investment tracker' (what Wealthfolio claims to be), but it's fantastic as a more general personal finance/budget tracker.

For example - I have to reclassify loads of transactions for it to track close to correctly. Take Treasuries: you purchase at a discount, then at maturity redeem for the full amount. You can enter them as buy/sell, but then it won't properly report to cash flow, or give you a good classification of what type of income that was.

Similar with stocks and short-term/long-term gains. It means that even though all the info is there, it's not as useful as I'd like for showing total income broken out by type of income to help with tax planning, etc.

I still use it so the annoyances are not too extreme, but if there were a tool that did a better job of the investment side I'd switch.

Looking at the Wealthfolio features, I'm not sure it handles any of that any better, though it does seem to break out dividend/interest income.


I use Monarch and I've been happy enough with it. I'd probably consider self-hosting with Actual in the future, but I wanted an easy on-ramp to actually get in the habit of budgeting.


Will shamelessly promote ours as well: https://www.fulfilledwealth.co/

We're entering the same market but with a tilt towards investment & actionable guidance. Same read-only capabilities on the account sync side (although our budgeting + spending side is still heavily in development), except we're an RIA that can provide professional advice (for free).


I still use GnuCash [1]. The only drawback is comparatively poor handling of equities, with no good way to view historic portfolio value / net worth. Great for general-purpose accounting though.

[1] https://www.gnucash.org/


Does the net worth report not work correctly?


As I recall, it doesn't incorporate the historic price of stocks. So if I bought 1 share of Nvidia for $10 ten years ago, it'd say I had a net worth of $180 then, not $10 (since it uses today's stock price).
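The quirk can be sketched like this (a minimal illustration using the numbers from the comment above; `position_value` is a hypothetical name, not GnuCash code):

```rust
// Valuing a historic position: the reported behavior reuses today's price
// for past report dates instead of the price on that date.
fn position_value(shares: f64, price_per_share: f64) -> f64 {
    shares * price_per_share
}

fn main() {
    let shares = 1.0;
    let price_ten_years_ago = 10.0; // purchase price back then
    let price_today = 180.0;        // current quote

    // What the report reportedly shows for "net worth ten years ago":
    assert_eq!(position_value(shares, price_today), 180.0);
    // What a historically accurate report would show:
    assert_eq!(position_value(shares, price_ten_years_ago), 10.0);
}
```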

Not mentioned yet in this subthread, but worth checking out because it runs fully local: https://play.google.com/store/apps/details?id=com.stoegerit....

It's not perfect, for example its monthly/yearly subscription detection didn't work great for me, but compared to all those apps that involve trusting a third party with your banking data it's worth a look.


There's also Actual Budget (https://actualbudget.com) which you can self-host.



beancount, plus its web UI Fava, is what I end up going back to whenever I look for this sort of tool. The downside is I'm way behind on my ledger and don't _really_ want to spend the effort inputting everything to catch up.


I am a big fan of Lunch Money; I built a TUI for it with their API too.

https://github.com/Rshep3087/lunchtui


Which do you like for which purpose?

Also, it seems like Empower (not listed) is the big one.


For Germans, I found https://parqet.com/ very good.

Generous free tier and auto-sync from some common German banks.



Which you appear to be the developer of from other comments in this thread. Not saying it's bad, but it's self-promotion rather than organic preference.


I think if it's strongly relevant and someone else started the conversation, it's fine as long as there's disclosure.


HN is for self promotion


> Please don't use HN primarily for promotion. It's ok to post your own stuff part of the time, but the primary use of the site should be for curiosity.

https://news.ycombinator.com/newsguidelines.html


You missed https://tiller.com which uses the same financial connectors as others but dumps the data into a Google/Office365 spreadsheet that you control.


Typo fix: ynab.com


Clickable link: https://ynab.com

I'm a huge fan of You Need A Budget; it was instrumental in giving me control over my finances. It feels like a superpower to see all my money in one place and not care which bank account the dollars actually reside in. It also makes it easier to take advantage of various offers (credit card or things like HYSAs), since I know all the records will live in YNAB and I have full control there, even if the individual banks I use have terrible UIs.


Someone else mentioned this up the thread. I am a huge fan of YNAB too, but I just gave Actual Budget a try and I'm hooked. Some things are better and some things worse than YNAB, but it's open source and self-hosted. I'd recommend either.


Fixed, thanks!


There's a higher level of abstraction:

https://www.modular.com/mojo


So if CUDA could be ported to Mojo w/ AI then it would be basically available for any GPU/accelerator vendor. Seems like the right kind of approach towards making CUDA a non-issue.


Chris Lattner, of Apple Swift and Tesla fame, is running a company entirely predicated on this, but at the deterministic language-design level rather than the inference level.

https://www.modular.com/mojo

If a beam search with an iterative plan-and-execute phase is more effective than better tooling in a deterministic programming language, then this will clearly take the lead.


Thanks for the link! I am not familiar with the company, but it reminds me of the whole formal methods debate in distributed systems. Sure, writing TLA+ specs is the 'correct' deterministic way to build a Raft implementation, but in reality everyone just writes messy Go/Java and patches bugs as they pop up because it's faster.


Was with you up to

> because its faster


That's correct - however, as other commenters have noted, doing this by hand is extremely challenging for human engineers working on tensor kernels.

The expense calculation might be:

expense of improvement = (time taken per optimization step * cost of unit time) / (speedup - 1)

The expensive heuristic function saves wall time while also being cheaper per unit of time. And as the paper shows, the speedup provided for each unit of time, multiplied by the unit cost of time, is large.
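The model above can be sketched in a few lines (a back-of-envelope illustration; the function name and example values are mine, not from the paper):

```rust
// Amortized expense of an optimization effort, per the formula above:
// (time per step * cost per unit time) / (speedup - 1).
fn expense_of_improvement(step_time_hours: f64, cost_per_hour: f64, speedup: f64) -> f64 {
    assert!(speedup > 1.0, "only meaningful when there is an actual speedup");
    (step_time_hours * cost_per_hour) / (speedup - 1.0)
}

fn main() {
    // e.g. a 2-hour optimization step at $300/hour yielding a 17x speedup
    let expense = expense_of_improvement(2.0, 300.0, 17.0);
    println!("amortized expense: ${expense:.2}"); // prints "amortized expense: $37.50"
}
```

The larger the speedup, the smaller the denominator's contribution, so expensive search time is quickly amortized.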


Usually the rate of overall improvement for this type of optimization is less than the Moore's-law rate of improvement, and thus not worth the company's investment. 17x micro-benchmarks don't count. Real improvements come from architectural changes, for example MoE, speculative multi-token prediction, etc.



I think there's a very important nugget here unrelated to agents: Kagi as a search engine is a higher-signal source of information than Google's PageRank- and AdSense-funded model, primarily because Google as it is today includes a massive amount of noise and suffers from blowback/cross-contamination as more LLM-generated content pollutes its sources.

> We found many, many examples of benchmark tasks where the same model using Kagi Search as a backend outperformed other search engines, simply because Kagi Search either returned the relevant Wikipedia page higher, or because the other results were not polluting the model’s context window with more irrelevant data.

> This benchmark unwittingly showed us that Kagi Search is a better backend for LLM-based search than Google/Bing because we filter out the noise that confuses other models.


Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

Hey Google, Pinterest results are probably messing with AI crawlers pretty badly. I bet it would really help the AI if that site was deranked :)

Also if this really is the case, I wonder what an AI using Marginalia for reference would be like.


> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

It's likely they can filter the results for their own agents but will leave other results as they are. Half the issue with normal results is their ads - that's not going away.


“Show me the incentive and I’ll show you the outcome” - Charlie Munger

Kagi works better and will continue to do so as long as Kagi’s interests are aligned with users’ needs and Google’s aren’t.


>Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

Unlikely. There are very few people willing to pay for Kagi. The HN audience is not at all representative of the overall population.

Google can have really miserable search results and people will still use it. It's not enough to be as good as Google; you have to be 30% better than Google and still free in order to convert users.

I use Kagi and it's one of the few services I am OK with a recurring charge from, because I trust the brand for whatever reason. Until they find a way to make it free, though, it can't replace Google.


They are transparent about their growth in paying customers - do you feel this fairly consistent, linear rate of growth will never be enough to be meaningful?

https://kagi.com/stats


> Maybe if Google hears this they will finally lift a finger towards removing garbage from search results.

They spent the last decade and a half encouraging the proliferation of garbage via "SEO". I don't see this reversing.


Google wants there to be garbage in search results, because most of the garbage sites are full of Google ads.


There are several startups providing web search solely for ai agents. Not sure any agent uses Google for this.


Maybe we should learn to pass reverse-turing tests and pretend to be LLMs so we can use this stuff lol.


Makes me wish the Kagi API was available and not prohibitively priced: https://help.kagi.com/kagi/api/search.html


> Primarily because google as it is today includes a massive amount of noise and suffered from blowback/cross-contamination as more LLM generated content pollute information truth.

I'm not convinced about this. If the strategy is "let's return wikipedia.org as the most relevant result", that's not sophisticated at all. In fact, it only worked for a very narrow subset of queries. If I search for 'top luggage for solo travel', I don't want to see Wikipedia, and I don't know how Kagi will be any better.


They wrote "returned the relevant Wikipedia page higher", not "wikipedia.org as the most relevant result" - that's an important distinction. There are many irrelevant Wikipedia pages.


(Kagi staff here)

Generally we do particularly better on product research queries [1] than other categories, because most poor review sites are full of trackers and other stuff we downrank.

However, there aren't public benchmarks for us to brag about on product search, and frankly the SimpleQA digression in this post made it long enough that it was almost cut.

1. (Except hyper local search like local restaurants)


do you use pinned/deranked sites as an indicator for quality?


I don't think we share them across accounts, no, but we do use your personal kagi search config in assistant searches.


Why would lefthook not be a more reliable tool (in design)

https://github.com/evilmartians/lefthook


I tried to use it early on, but it hasn't moved much over time. It was opinionated and neglected. Meanwhile, pre-commit is supported by everyone. There are other alternatives, such as Husky, hk, and git-hooks, but they don't offer the out-of-the-box support that pre-commit does.


You misunderstand what Rust's guarantees are. Rust has never promised to protect programmers from logic errors or poor programming. In fact, no language can do that, not even Haskell.

Unwrapping is a very powerful and important assertion in Rust whereby the programmer explicitly states that the value within will not be an error - otherwise, panic. This is a contract between the author and the runtime. As you mentioned, this is a human failure, not a language failure.
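For illustration, the contract looks like this (a minimal sketch, not Cloudflare's code; the function names are mine):

```rust
use std::num::ParseIntError;

// .unwrap() is the author's assertion that the error case cannot happen;
// if that assertion is ever wrong at runtime, the thread panics rather
// than continuing with invalid state.
fn parse_port_or_panic(s: &str) -> u16 {
    s.parse::<u16>().unwrap() // contract: caller guarantees `s` is a valid port
}

// The alternative is to propagate the error and let the caller decide.
fn parse_port(s: &str) -> Result<u16, ParseIntError> {
    s.parse::<u16>()
}

fn main() {
    assert_eq!(parse_port_or_panic("8080"), 8080);
    assert!(parse_port("not-a-port").is_err()); // handled without panicking
}
```

Both versions are explicit in the source; the question in this thread is whether the first should ever appear on a production hot path.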

Pause for a moment and think about what a C++ implementation of a globally distributed network ingress proxy service would look like - and how many memory vulnerabilities there would be… I shudder at the thought… (n.b. nginx)

This is the classic case where, when something fails, the failure cause gets over-indexed on, while the quadrillions of memory accesses that went off without a single hitch thanks to the borrow checker get under-indexed.

I postulate that whatever this Cloudflare outage cost - millions or hundreds of millions of dollars - has been more than paid for by the savings from safe memory access.

See: https://en.wikipedia.org/wiki/Survivorship_bias


> Pause for a moment and think about what a C++ implementation of a globally distributed network ingress proxy service would look like - and how many memory vulnerabilities there would be… I shudder at the thought

I mean, that's an unfalsifiable statement - not really fair. C is used to successfully launch spaceships.

Whereas we have a real Rust bug that crashed a good portion of the internet for a significant amount of time. If this were a C++ service, everyone would be blaming the language, but somehow Rust evangelists are quick to blame it on "unidiomatic Rust code".

A language that lets this easily happen is a poorly designed language. Saying you need to ban a commonly used method in all production code is broken.


Only formal proof languages are immune to such failures. Therefore, all languages are poorly designed by your metric.

Consider that the set of possible failures enabled by language design should be as small as possible.

Rust's set is small enough while remaining productive. Until another breakthrough in language design as impactful as the borrow checker is invented, I don't imagine more programmers will be able to write such a large amount of safe code.


I would say the impact of the borrow checker is exaggerated.


> You misunderstand what Rust’s guarantees are.

Well, no, most Rust programmers misunderstand what the guarantees are because they keep parroting this quote. Obviously the language does not protect you from logic errors, so saying "if it compiles, it works" is disingenuous, when really what they mean is "if it compiles, it's probably free of memory errors".


No, the "if it compiles, it works" is genuinely about the program being correct rather than just free of memory errors, but it's more of a hyperbolic statement than a statement of fact.

It's a common thing I've experienced, and I've seen a lot of others say it too: the stricter the language is in what it accepts, the more likely the program is to be correct by the time you get it to run. It's not just a Rust thing (although I think Rust is _stricter_, and therefore this holds true more often); it's something I've also experienced with C++ and Haskell.

So no, it's not a guarantee, but that quote was never about Rust's guarantees.


Everyone understands Rust doesn't offer such guarantees.

Even more now after this outage.

But it's a fact that "if it compiles, it runs" is often associated with Rust, on HN at least. A quick Algolia search tells me that.


I have definitely noticed this when doing Advent of Code in Rust - by the time my code compiles, it typically spits out the right answer. It doesn't help when I don't know which algorithm I need to reach for in order to solve the problem before the heat death of the universe, but it's a somewhat magical feeling while it lasts.


Glad to see that the Go development team is focusing on deterministic tooling like the Language Server Protocol in gopls, and on using static analysis for automatically rewriting code with go fix.

Recently I made the same assertions about Go's advantages for LLM/AI orchestration.

https://news.ycombinator.com/item?id=45895897

It would not surprise me if Google (being the massive services company that it is) had sent an internal memo instructing teams not to use the Python toolchain to produce production agents or tooling, and to use Go instead.


Even 15 years ago or so, when Guido was still there, I recall being told "we aren't supposed to write any new services in Python. It starts easy, then things get messy and end up needing to be rewritten." I recall it mostly being about perf and tooling support, but also the lack of typing, which has changed since then, so maybe they've gotten more accepting.

