
As a 15+ year Emacs user, the only item on my wishlist is a client-server remote editing mode similar to that of VS Code. Then I could go back to using Emacs on cloud VMs. Does anyone know a solution to this that works as well as VS Code even when your latency is high? Hopefully VS Code's weird configuration flags will annoy me enough to write one myself ;-) To be fair, its Python integration is quite good, at least for the usual stuff.


Two approaches might work here:

  1) Run Emacs on your local machine and use Tramp to edit the remote files

  2) Run Emacs on the remote machine with the files you're editing. This likely means running in the terminal itself (emacs -nw, or equivalently emacs -t).
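
For example, a minimal sketch of both approaches (user, host, and paths are placeholders):

  Approach 1: from local Emacs, open a remote file via Tramp:
    C-x C-f /ssh:user@host:/path/to/file
  Approach 2: SSH in and run Emacs in the terminal:
    ssh user@host
    emacs -nw some-file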


In certain contexts 20% is a lot of bucks; leaving that on the table would be very wasteful ;-)


Yes, that 20% would be wasted. But giving up freedom can be more costly.

Also, the 20% would be open to further optimization by the community, so it wouldn't be that bad in practice, probably.


In some commercial contexts, the savings from that 20% can buy a lot of freedom, and with the freedom you bought you can make more free things :)


Could all be true, maybe, somehow. But I sleep better when my castle is not in someone else's kingdom. That alone is enough for me to accept the small performance penalty.


Google DeepMind is the closest lab to that idea, because Google is the only entity big enough to get close to the scale of AT&T. I was skeptical that the DeepMind and Google Brain merger would be successful, but it seems to have worked surprisingly well. They are killing it with LLMs and image editing models. They are also backing the fastest-growing cloud business in the world and collecting Nobel prizes along the way.


Sadly this naturally happens in any field that ends up expanding due to its success. Suddenly the number of new practitioners outnumbers the number of competent educators. I think it is a fundamental human resources problem with no easy fix. Maybe LLMs will help with this, but they seem to reinforce the convergence to the mean in many cases, as those being educated are not in a position to ask the deeper questions.


> Sadly this naturally happens in any field that ends up expanding due to its success. Suddenly the number of new practitioners outnumbers the number of competent educators. I think it is a fundamental human resources problem with no easy fix.

In my observation, the problem is rather that many of the people who want to "learn" computer science actually just want a certification to get a cushy job at some MAGNA company, and then they complain about the "academic ivory tower" stuff they learned at university.

So the big problem is not a lack of competent educators, but practitioners actively sabotaging the teaching of topics that they don't consider relevant for a job at a MAGNA company. The same holds for the bigwigs at such companies.

I sometimes even entertain the conspiracy theory that if graduates saw how much of their work at these MAGNA companies comes from the history of computer science, often decades old and reinvented multiple times over, it might demotivate employees who are supposed to believe they are working on the "most important, soon to be world-changing" thing.


I agree, it is another important factor. Pandemic-era pay and hiring rates certainly accentuated this.


And that last 5% is the toughest nut to crack. There is a reason Waymo is way ahead, even if they cannot scale. Cameras are passive devices with relatively poor dynamic range and low-light behavior; they are nowhere near a match for, or a replacement of, the human eye. Just try to photograph a five-year-old at dusk or indoors: what you see will not be what you get.


Agree that the last few percentage points are exponentially more difficult each step of the way. What's your metric for saying Waymo is ahead in terms of tech? They are strictly geofenced, limited to specific road types, and often get stuck or confused. Also, their system is very expensive and does not scale to millions of cars. Your point about cameras seems odd: cameras have much better low-light performance than human eyes, and cars have headlights.


Waymo already has a driverless taxi service in a major US city and is expanding; Tesla is in the process. Again, this is if they cover the last 5%. Scalability arguments won't matter if they cannot launch such a service. And no, CMOS cameras are close, but they are not better than the human eye in low light unless you have an IR camera and can flood everywhere with active IR lights; they are certainly inferior in dynamic range. I have been doing vision for more than two decades, and I would not be comfortable in a camera-only robotaxi at high speed, certainly not at night or under adverse weather conditions. But this is all speculation of course. Considering that fully autonomous driving at scale has been a major unrealised promise for the past 10 years, I stand by my assessment until I see a major advancement in camera technology or affordable active sensors.
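
To put rough numbers on the dynamic range point, a back-of-the-envelope sketch in Python (the dB figures are commonly quoted ballpark values, not measurements): sensor dynamic range is usually specified in dB, and every ~6 dB is one photographic stop, i.e. a doubling of the bright-to-dark intensity ratio.

  import math

  def db_to_stops(db):
      # One stop doubles the intensity ratio: stops = dB / (20 * log10(2)).
      return db / (20 * math.log10(2))

  # Ballpark assumptions, not measurements:
  print(round(db_to_stops(72), 1))   # ~12 stops: a typical CMOS sensor
  print(round(db_to_stops(120), 1))  # ~20 stops: high-end HDR automotive sensor
  # The human eye spans roughly 20+ stops across a scene once adaptation kicks in.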


Honestly, if you have any actual interest in LLMs or other generative AI variants, just go after a concrete goalpost that you yourself set, with measurable metrics to gauge your progress. Then the predicted timelines from podcasts and blog posts will become irrelevant. Experts and non-experts alike have been terrible at predicting timelines since the dawn of AI; self-driving cars and LLMs are no exception. When you make predictions based solely on intuition and experience, it is mostly extrapolation. That is not useless: it always helps to ask questions and to try to frame the future within the bounds of our current understanding. But at the same time it is important to remember that this is speculation, not empirical science. That is also why there are such varied opinions on the topic of AI timelines. Relax and enjoy witnessing a major leap in our understanding of natural language, vision, and high-dimensional probabilistic vector spaces ;-)


While I agree with the general sentiment on throwaway compute infra, the know-how generated by large-scale experiments is not thrown away. I think a lot hinges on the scaling laws and on whether you hit the jackpot at a certain scale before everyone else. This is hard to guesstimate, so someone has to do it in the spirit of empiricism. That might sound like gambling or like exploring, depending on your sentiment. So I think it is more justified to criticize the scale and the risks of these investments than their spirit.


Scaling laws are only empirical curve fitting and extrapolation, though, and none of them predicts a discontinuous "jackpot effect."
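
To make "empirical curve fitting" concrete, here is a minimal sketch (the loss numbers are made up, and the L(N) = a * N^-b + c form is an assumption loosely modeled on published scaling-law fits):

  import numpy as np
  from scipy.optimize import curve_fit

  # Hypothetical (parameter count, validation loss) observations.
  n = np.array([1e7, 1e8, 1e9, 1e10])
  loss = np.array([4.2, 3.4, 2.8, 2.4])

  # Power law with an irreducible-loss floor: L(N) = a * N**-b + c.
  def power_law(n, a, b, c):
      return a * n ** -b + c

  (a, b, c), _ = curve_fit(power_law, n, loss, p0=(30.0, 0.2, 1.5), maxfev=20000)

  # Extrapolation is smooth by construction: whatever a, b, c come out,
  # the fitted curve cannot produce a discontinuous jump at some larger N.
  print(power_law(1e12, a, b, c))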


AFAIK, certain abilities, such as doing arithmetic, manifest at discrete scale points even though there is a continuous build-up of potential underneath. There is also the more remote possibility of a discrete scale at which AI takes over its own training, or at least starts to contribute to it substantially. A lot of real-world leverage and arbitrage depends on such discrete surprises, which may not be visible during continuous, incremental evolution. I think this principle holds computationally as much as it does biologically.


I think conceptually Diátaxis is brilliant. However, it is not trivial to implement. Every project needs a varying ratio of each component and stacking all forms in a single website in the same format is very ineffective. The ratio also evolves with community adoption and expertise level. I really wish it were simply more actionable. Documentation is a hard problem; maybe we will figure out a better way one day. Until then, docs will only be as good as the amount of time and expertise spent on them, which is usually lower than it should be due to resource constraints.


I'm not sure what you mean by

> Every project needs a varying ratio of each component and stacking all forms in a single website in the same format is very ineffective

I think Diátaxis _is_ mostly a conceptual framework. It helps me tremendously, and it is totally implementation-agnostic. From the site itself:

> Diátaxis strongly prescribes a structure, but whatever the state of your existing documentation - even if it’s a complete mess by any standards - it’s always possible to improve it, iteratively.


I totally agree with your assessment. I just wanted to highlight that “implementation agnostic” is both a blessing and a curse. You can always apply the principles of Diátaxis, but it provides near-zero guidance on how to actually build the documentation for a specific project. This does not reduce its conceptual value, but I wish there were another framework with complementary practical value.
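
For what it's worth, the nearest thing to actionable guidance is the four-quadrant split itself; a starting docs layout might look like this (the file names are hypothetical, not something Diátaxis prescribes):

  docs/
    tutorials/       learning-oriented: a guided first project
      getting-started.md
    how-to/          task-oriented: recipes for specific goals
      deploy.md
    reference/       information-oriented: API and configuration facts
      api.md
    explanation/     understanding-oriented: background and design rationale
      architecture.md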


Brilliant. I have always felt that one of the major problems with machine learning, and consequently with LLMs, is the boring average-based loss functions that under-represent the unique and the rare. It seems our collective civilization is using a similar function and heading in the same direction, optimizing for the average.
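
A toy illustration of the "average-based" point (made-up data, a sketch only): under squared-error loss, the best constant prediction for a bimodal target is the mean, a value that resembles neither mode.

  import numpy as np

  # Two distinct "styles" in the targets, nothing in between.
  y = np.array([0.0] * 50 + [10.0] * 50)

  # The constant prediction c that minimizes mean((y - c)**2) is the mean:
  print(y.mean())      # 5.0 -- unlike any actual example
  # Absolute error would give the median instead, still a compromise here:
  print(np.median(y))  # 5.0 for this symmetric split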


The world is rapidly homogenising. You see it with “air space” interior design - coffee shops have the same aesthetic in every major city in the world. You see it in local fashions. You see it as a tourist - travel anywhere in the world and the chances are you’ll find the same kind of shop selling the same kind of trinket. Made in China with a subtly different graphic on it to represent the country you’re in.


This has been happening ever since trade routes were established across Eurasia (the Silk Road) or the Americas were discovered. It only keeps accelerating as movement and trade become easier.

If pockets of humanity could isolate themselves from the rest, we could get diversity growing again; that one Sentinel island might be our only hope.


That is a fine point. However, I am not sure whether replacing the GPUs themselves will be the bottleneck investment in datacenter costs; after all, there is so much more infrastructure in a datacenter (cooling and networking). Plus, custom chips like TPUs might eventually catch up at lower cost. I think the bigger question is whether demand for compute will evaporate or not.

