Hacker News | jackcviers3's comments

Isn't that true of Python as well? I would argue that GitHub's decision to use Markdown for formatting, more than any other factor, is what resulted in its widespread adoption for other use cases. The simple tool for sharing code ate the world.

I'm continually surprised that Microsoft hasn't completely cornered the market on LLM code generation, given their head start with copilot and ready access to source code on a scale that nobody else really has.


Python has a spec and multiple healthy implementations, and is overwhelmingly more popular than org mode, so I don't really think that's a rebuttal.


The last one is fairly simple to solve. Set up a microphone in any busy location where conversations are occurring. In an agentic loop, send random snippets of the recorded audio to be transcribed to text. Randomly send that to an LLM, appending to a conversational context. Then, also hook up a chat interface to discuss topics with the output from the LLM. The random background noise and the context output in response serve as a confounding internal dialog to the conversation it is having with the user via the chat interface. It will affect the outputs in response to the user.

It might interrupt the user's chain of thought with random questions about what it is hearing in the background, and so on. If given tools for web search or generating an image, it might do unprompted things. Of course, this is a trick, but you could argue that the sensory input living sentient beings receive is the same sort of trick, I think.

I think the conversation will derail pretty quickly, but it would be interesting to see what impact uncontrolled input has on the chat.
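A rough sketch of the kind of loop I mean, in Python (every function here is a hypothetical placeholder for a real microphone, speech-to-text, or chat-completion API):

```python
import random
import time

def record_audio_snippet(seconds):
    # Placeholder: capture `seconds` of audio from a mic in a busy room.
    return b"<raw audio bytes>"

def transcribe(audio):
    # Placeholder: any speech-to-text service goes here.
    return "...two strangers arguing about lunch plans..."

def llm_complete(messages):
    # Placeholder: any chat-completion API goes here.
    return "(model reply, colored by whatever is in `messages`)"

context = []     # the model's running "internal" conversation
user_inbox = []  # messages arriving from the user-facing chat interface

while True:
    # Randomly inject overheard background conversation into the context.
    if random.random() < 0.3:
        overheard = transcribe(record_audio_snippet(seconds=10))
        context.append({"role": "user", "content": f"(overheard) {overheard}"})
        context.append({"role": "assistant", "content": llm_complete(context)})

    # The user's chat runs against the same context, so the background
    # "dialog" confounds the replies the user sees.
    if user_inbox:
        context.append({"role": "user", "content": user_inbox.pop(0)})
        reply = llm_complete(context)
        context.append({"role": "assistant", "content": reply})
        print(reply)

    time.sleep(1)
```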


I'll add to this - if you work on a software project to port an Excel spreadsheet to real software that has all those properties, and the spreadsheet is sophisticated enough to warrant the process, the creators won't be able to remember enough details about how they created it to tell you the requirements necessary to produce the software. You may do all the calculations right, and because they've always had a rounding error that they've worked around somewhere else, your software will show that calculations that have driven business decisions for decades were always wrong, and the business will insist that the new software is wrong instead of owning the mistake. It's never pretty, and it always governs something extremely important.
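As a concrete, made-up illustration of the rounding trap: Excel's ROUND rounds halves away from zero, while Python's built-in round() uses banker's rounding, so a faithful port can silently disagree with the spreadsheet the business has trusted for years.

```python
from decimal import Decimal, ROUND_HALF_UP

for v in ["0.5", "1.5", "2.5"]:
    excel_style = Decimal(v).quantize(Decimal("1"), rounding=ROUND_HALF_UP)
    python_default = round(float(v))  # banker's rounding: ties go to even
    print(v, "Excel ROUND ->", excel_style, "| Python round() ->", python_default)

# 0.5 -> Excel 1, Python 0; 2.5 -> Excel 3, Python 2. Differences this small
# compound across thousands of cells and years of downstream reports.
```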


Now, if we could give that Excel file to an LLM and have it create a design document that explains everything it does, that would be a great use of an LLM.


And adding ads into the responses is _child's play_: find the ad with the most semantic similarity to the content in the context. Insert it at the end of the response, or every N responses, with a convincing message that, based on our discussion, you might be interested in xyz.
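A minimal sketch of that first approach (the `embed` stub and the ad inventory below are made up for illustration; any real embedding model would slot in):

```python
import math

def embed(text):
    # Placeholder: swap in a real sentence-embedding model or API.
    # This dummy just hashes words into a small fixed-size vector.
    vec = [0.0] * 16
    for word in text.lower().split():
        vec[hash(word) % 16] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Hypothetical ad inventory, keyed by the copy the advertiser paid for.
ads = {
    "TrailBlazer hiking boots": embed("waterproof boots for mountain hiking"),
    "CloudNine web hosting": embed("fast managed web hosting for developers"),
}

def append_ad(conversation_context, response):
    # Pick the ad most semantically similar to the conversation so far.
    ctx_vec = embed(conversation_context)
    best = max(ads, key=lambda name: cosine(ads[name], ctx_vec))
    return (response
            + "\n\nBased on our discussion, you might be interested in "
            + best + ".")
```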

For a more subtle and slimier way of doing things, boost the relevance of brands and keywords, and when they are semantically similar to the most likely token, insert them into the response. Companies pay per impression.

When a guardrail blocks a response, play a political ad for a law-and-order candidate before delivering the rest of the message. I'm completely shocked nobody has offered free GPT use via an API supported by ad revenue yet.


I'll attempt to provide a reasonable argument for why speed of delivery is the most important thing in software development. I'll concede that I don't know if the below is true, that I haven't conducted formal experiments, that I have no real-world data to back up the claims, and that I haven't even defined all the terms in the argument beyond generally accepted terminology. The premise of the argument may therefore be incorrect.

Trivial software is software for which

- the value of the software solution is widely accepted and widely known in practice, and

- formal verification exists and is possible to automate, or

- only a single satisfying implementation is possible.

Most software is non-trivial.

There will always be:

- bugs in implementation

- missed requirements

- leaky abstractions

- incorrect features with no user or business value

- problems with integration

- problems with performance

- security problems

- complexity problems

- maintenance problems

in any non-trivial software no matter how "good" the engineer producing the code is or how "good" the code is.

These problems are surfaced and reduced to lie within acceptable operational tolerances via iterative development. It doesn't matter how formal our specifications are or how rigorous our verification procedures are if they are validated against an incorrect model of the problem we are attempting to solve with the software we write.

These problems can only be discovered through iterative acceptance testing, experimentation, and active use, maintenance, and constructive feedback on the quality of the software we write.

This means that the overall quality of any non-trivial software is dominated by the total number of quality feedback loops executed during its lifetime. The number of feedback loops during the software's lifetime is bounded by the time it takes to complete a single synchronous feedback loop. Multiple feedback loops may be executed in parallel, but Amdahl's law holds for overall delivery.

Therefore, time to delivery is the dominant factor to consider in order to produce valuable software products.

Your slower-to-produce, higher-quality code puts a bound on the duration of a single feedback loop iteration. The code you produce can perfectly solve the problem as you understand it within an iteration, but cannot guarantee that your understanding of the problem is not wrong. In that sense, many lower-quality iterations produce better software quality as the number of iterations approaches infinity.
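A back-of-the-envelope illustration of the iteration-count argument (every number here is invented purely for the sake of the example):

```python
# Two hypothetical teams over the same two-year product lifetime.
lifetime_weeks = 104

careful_loop_weeks = 6   # slower, higher-quality iterations
fast_loop_weeks = 1      # quicker, rougher iterations

careful_iterations = lifetime_weeks // careful_loop_weeks  # 17 chances to learn
fast_iterations = lifetime_weeks // fast_loop_weeks        # 104 chances to learn

# Running loops in parallel helps, but Amdahl's law caps the speedup by the
# serial fraction of each loop (review, acceptance testing, release).
def amdahl_speedup(serial_fraction, workers):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

print(careful_iterations, fast_iterations)          # 17 vs 104
print(round(amdahl_speedup(0.5, workers=8), 2))     # ~1.78x, not 8x
```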


>> Your slower-to-produce, higher-quality code puts a bound on the duration of a single feedback loop iteration. The code you produce can perfectly solve the problem as you understand it within an iteration, but cannot guarantee that your understanding of the problem is not wrong. In that sense, many lower-quality iterations produce better software quality as the number of iterations approaches infinity.

I'll reply just to that, as it's the tl;dr. First of all, tech debt is a thing, and it's the thing that accumulates mostly thanks to fast feedback iterations. In my experience, the better the communication, the better the implementation, and you can end up with solid features that you'll likely never touch again. User-base habit is also a thing: continuing to iterate on and change something a user already knows how to use is a bad thing. I'd also argue it's bad product/project management. But my whole original argument was about why we'd need greater speed in the first place; better tooling doesn't necessarily mean faster output, and productivity isn't measured as just faster output. Let me give a concrete example: if you ask an LLM to produce a UI with some features, most of them will default to using React. Why? Why can't we question the current state of the web instead of continuing to pile abstractions on top of abstractions? Even if I ask the LLM to create a vanilla web app with HTML, why can't we have better tooling for sharing apps over the internet? The web is stagnant, and instead of fixing it we're building castles upon castles on top of it.


Tech debt doesn't accrue because of fast feedback iterations. Tech debt accrues because it isn't paid down or is unrecognized during review. And like all working code, addressing it has a cost in terms of effort and verification. When the cost is too great, nobody is willing to pay it. So it accrues.

There aren't many features that you'll never touch again. There are some, but they usually don't reach that stage before they are retired. Things like curl, emacs, and ethernet adapters still exist and are still under active development after decades. Sure, maybe the one driver for an ethernet adapter that is no longer manufactured isn't very active, but adding support for OS upgrades still requires maintenance. New protocols, encryption libraries, and security patches have to be added to curl. emacs has to be specially maintained for the latest macOS and Windows versions. Maintenance occurs in most living features.

Tools exist to produce extra productivity. Compilers are a tool so that we don't have to write assembly. High-level interpreted languages are a tool so we don't have to write ports for every system. Tools themselves are abstractions.

Software is abstractions all the way down. Everything is a stack on everything else. Including, even, the hardware. Many are old, tried and true abstractions, but there are dozens of layers between the text editor we enter our code into and the hardware that executes it. Most of the time we accept this, unless one of the layers break. Most of the time they don't, but that is the result of decades of management and maintenance, and efforts sometimes measured in huge numbers of working hours by dozens of people.

A person can write a rudimentary web browser. A person cannot write chrome with all its features today. The effort to do so would be too great to finish. In addition, if finished, it would provide little value to the market, because the original chrome would still exist and have gained new features and maintenance patches that improve its behavior from the divergent clone the hypothetical engineer created.

LLMs output React because React dominates their training data. You have to reject their plan and force them to choose your preferred architecture when they attempt to generate what you asked for, but in a different way.

We can have better tooling for sharing apps than the web. First, it needs to be built. This takes effort, iteration, and time.

Second, it needs to be marketed and gain adoption. At one time, Netscape and the <blink> tag it implemented dominated the web. Now it is a historical footnote. Massive migrations and adoptions happen.

Build the world you want to work in. And use the tools you think make you more productive. Measure those against new tools that come along, and adopt the ones that are better. That's all you can do.


You _can_ largely ignore the toxicity. Don't give toxic individuals attention, and they go somewhere else to stir the pot.

Just debate the ideas on their merits in the source code, ignore the haters, and be kind and helpful. It's not difficult to do.


Oh for sure, I'm not too stressed by it -- but I think the ship has sailed on the chance for mainstream Scala adoption. Perhaps it was always delusional, but there was a period when it really seemed like Scala had something of a chance to be the Ruby replacement and become one of the main backend languages (after the Twitter rewrite to Scala, when Foursquare, Meetup, and various other startups of that generation were all in on the language); then there was a generation where it was at least the de facto language for data infra. Now, I'm not even sure many major companies are using it for the latter case.

Mainstream adoption isn't everything and I still mostly use Scala for personal projects, but it's such a different world working in a language where the major open source projects have industry backing. The Scala community, meanwhile, seems mostly stuck starting entirely new FP frameworks every other week. Nothing against that, but I don't see that much advantage to choosing Scala over OCaml at this point (if you don't need JVM integration).

Momentum appears to be behind Rust now, of course, but I've yet to be convinced. If it had a better GPU story and could replace C/C++ entirely I'd be on board, but otherwise I want my everyday language to be a bit closer to Python/Ruby on the scale against C/C++.


Same. It's easy to set up and use. As are gptel, aidermacs, and claude code ide.


A true inspiration


It's clearly a consistent typo and was probably part of the prompt used to generate the article.


With the size of the asteroid, is this one where we could use the gravity tractor[1] or the Yarkovsky effect[2] techniques for deflection, or is there not enough lead time?

1. https://en.wikipedia.org/wiki/Gravity_tractor 2. http://ui.adsabs.harvard.edu/abs/2021plde.confE.119A/abstrac...


Changing the orbit of an asteroid that large takes a huge amount of energy.

Unfortunately, humanity currently has only small power sources ready for missions that far out, just about 50 kW, so it would take decades to transfer that much energy.

The Russians have claimed to be developing a nuclear reactor for space at somewhere near 200 kW, but for now those are just words, very far from real hardware.
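For a sense of scale, a rough order-of-magnitude calculation of how much energy a 50 kW source delivers over a couple of decades (whether that is enough depends on the asteroid's mass and the available lead time, which aren't given here):

```python
power_w = 50e3               # the ~50 kW figure quoted above
seconds_per_year = 3.156e7
years = 20

energy_j = power_w * seconds_per_year * years
print(f"{energy_j:.2e} J delivered")                          # ~3.2e13 J
print(f"~{energy_j / 4.184e12:.1f} kilotons TNT equivalent")  # ~7.5 kt
```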

