
Ah, your mistake was going for ROS 2, which is by any objective metric a complete trainwreck compared to ROS 1. I'm a maintainer of a package in both, AMA.

The move_base stack in ROS 1 was pretty bad overall, but if you made your own navigation it ran really well and was pretty bulletproof as a middleware for the most part (changing network conditions ahem ahem). Nav2 in ROS 2 is better, but it's really hard to tell, because the DDS is so unreliable and rclpy is so slow that it's usually impossible to tell why it's only working half the time for no reason. Defaults are bad and pitfalls are everywhere. Incidentally this makes the entire thing unreliable even if you implement your own stuff, so... yeah. We'll see if Zenoh fixes any of this when it's finally out and common sense prevails over DDS cargo culting. Personally I'm not entirely convinced we won't see a community fork of Noetic gain some popularity after its EoL.

There is some effort to add Rust support, but it's just a community effort because OpenRobotics is completely understaffed. Google bought them and then they sort of forgot they exist. There are only like 4 core devs in total. That's mainly why they're treating Python as a second-class citizen.


Missed the edit window, but here's the command I use. Newlines added here for clarity.

  wget-mirror() {
    wget --mirror --convert-links --adjust-extension --page-requisites \
    --no-parent --content-disposition --content-on-error \
    --header="Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" \
    --user-agent="Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:129.0) Gecko/20100101 Firefox/129.0" \
    --restrict-file-names="windows,nocontrol" -e robots=off --no-check-certificate \
    --no-hsts --retry-connrefused --retry-on-host-error --reject-regex=".*\/\/\/.*" "$1"
  }

Some notes:

— This command hits servers as fast as possible. Not sorry. I have encountered a very small number of sites-I-care-to-mirror that have any sort of mitigation for this. The only site I'm IP banned from right now is http://elm-chan.org/ and that's just because I haven't cared to power-cycle my ISP box or bother with a VPN. If you want to be a better neighbor than me, look into wget's `--wait`/`--waitretry`/`--random-wait`.
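
Something like this, roughly (an untested sketch; the delay values are just placeholders to tune):

  wget --mirror --convert-links --adjust-extension --page-requisites \
    --no-parent --wait=1 --random-wait --waitretry=10 "$1"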

— The only part of this I'm actively unhappy with is the fixed version number in my fake User-Agent string. I go in and increment it to whatever version is current every once in a while. I am tempted to try automating it with an additional call to `date` assuming a six-week major-version cadence.
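
A rough, untested sketch of that idea (it assumes GNU date, that Firefox 129.0 shipped around 2024-08-06, and a six-week cadence, so it will drift):

  # guess the current Firefox major version by counting six-week intervals
  # since a known release, then splice it into the fake User-Agent string
  ff_ver=$(( 129 + ( ($(date +%s) - $(date -d 2024-08-06 +%s)) / (86400 * 42) ) ))
  ua="Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:${ff_ver}.0) Gecko/20100101 Firefox/${ff_ver}.0"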

— The `--reject-regex` is a hack to work around lots of CMSes I've encountered where it's possible to build up links with an infinite number of path separators, e.g. `www.example.com///whatever` containing a link to `www.example.com////whatever` containing a link to…

— I am using wget1 aka wget. There is a wget2 project, but last time I looked into it wget2 did not support something I needed. I don't remember what that something was lol

— I have avoided WARC because I usually prefer the ergonomics of having separate files and because WARC seems more focused on use cases where one does multiple archives over time (as is the case for Wayback Machine or a search engine), whereas my archiving style is more one-and-done. I don't tend to back up sites that are actively changing/maintained.

— However I do like to wrap my mirrored files in a store-only Zip archive when there are a great number of mostly-identical pages, like for web forums. I back up to a ZFS dataset with ZSTD compression, and the space savings can be quite substantial for certain sites. A TAR compresses just as well, but a `zip -0` will have a central directory that makes it much easier to browse later.
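
The wrapping step itself is just a store-only recursive zip (the directory name here is only an example):

  # -0 = store only (no DEFLATE), -r = recurse; let ZFS's ZSTD do the compressing
  zip -0 -r preserve.mactech.com.store.zip preserve.mactech.com/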

Here is an example of the file usage for http://preserve.mactech.com with separate files vs plain TAR vs DEFLATE Zip archive vs store-only Zip archive. These are all on the same ZSTD-compressed dataset and the DEFLATE example is here to show why one would want store-only when fs-level compression is enabled.

  982M    preserve.mactech.com.deflate.zip
  408M    preserve.mactech.com.store.zip
  410M    preserve.mactech.com.tar
  3.8G    preserve.mactech.com
Also I lied and don't have a full TiB yet ;)

  [lammy@popola#WWW] zfs list spinthedisc/Backups/WWW
  NAME                      USED  AVAIL     REFER  MOUNTPOINT
  spinthedisc/Backups/WWW   772G   299G      772G  /spinthedisc/Backups/WWW


  [lammy@popola#WWW] zfs get compression spinthedisc/Backups/WWW
  NAME                     PROPERTY     VALUE           SOURCE
  spinthedisc/Backups/WWW  compression  zstd            local



  [lammy@popola#WWW] ls 
  Academic                        DIY                             Medicine                        SA
  Animals                         Doujin                          Military                        Science
  Anime                           Electronics                     most_wanted.txt                 Space
  Appliance                       Fantasy                         Movies                          Sports
  Architecture                    Food                            Music                           Survivalism
  Art                             Games                           Personal                        Theology
  Books                           History                         Philosophy                      too_big_for_old_hdds.txt
  Business                        Hobby                           Photography                     Toys
  Cars                            Humor                           Politics                        Transportation
  Cartoons                        Kids                            Publications                    Travel
  Celebrity                       LGBT                            Radio                           Webcomics
  Communities                     Literature                      Railroad
  Computers                       Media                           README.txt


Some of this could stand to be re-organized. Since I've gotten more into it I've gotten better at anticipating an ideal directory depth/specificity at archive time instead of trying to come back to them later. For example, `DIY` (i.e. home improvement) should go into `Hobby`, which did not exist at the time; `SA` (SomethingAwful) should go into `Communities`, which also did not exist at the time; `Cars` into `Transportation`; etc.

`Personal` is the directory that's been hardest to sort, because personal sites are one of my fav things to back up but also one of the hardest things to try and organize when they reflect diverse interests. For now I've settled on a hybrid approach. If a site is geared toward one particular interest or subculture, it gets sorted into `Personal/<Interest>`, like `Academics`, `Authors`, `Artists`, `Goth` (loads of '90s goths had web pages for some reason). Sites reflecting The Style At The Time might get sorted into `1990s` for a blinking-construction-GIF Tripod/Angelfire site or `2000s` for an early blog. Sometimes I sort personal sites by generation, like `GenX` or `Boomer` (said in a loving way — Boomers did nothing wrong), when they reflect interests more typical of one particular generation.


The thing is, China as a culture does have a completely different attitude towards intellectual property. For them, copying is not theft; it's an acknowledgment of something's quality. Andrew "bunnie" Huang has written (at least) two articles with more details [1][2].

[1] https://www.bunniestudios.com/blog/2014/from-gongkai-to-open...

[2] https://www.bunniestudios.com/blog/2013/the-12-gongkai-phone...


Whoa, what a treasure trove! https://bernsteinbear.com/pl-resources/

>Alaska Airlines doesn't trust Boeing enough that they're sending their audit team to Boeing to check for quality.

Granted they are different divisions, but after the CST-100 debacle [1], NASA also sent a team to audit them [2]. Some of the off-channel remarks by NASA were striking.

[1] https://aerospaceamerica.aiaa.org/boeing-starliner-error-wou...

[2] https://blogs.nasa.gov/commercialcrew/2020/02/07/nasa-shares...


The difference between a cult and a religion is this: a cult wants it all but a religion just wants their cut.


And once you let the Lisp Machine teach you everything about itself, you had superhuman abilities!

One of my favorite Lisp Machine stories (I've fixed dead links to point to archive dot org, and archived the file from stanford's ftp to my https server):

https://news.ycombinator.com/item?id=30812434

Reposting this from the 2014 HN discussion of "Ergonomics of the Symbolics Lisp Machine":

https://news.ycombinator.com/item?id=7878679

http://lispm.de/symbolics-lisp-machine-ergonomics

https://news.ycombinator.com/item?id=7879364

eudox on June 11, 2014

Related: A huge collection of images showing the Symbolics UI and the software written for it: https://web.archive.org/web/20201216064308/http://lispm.de/s... ...

agumonkey on June 11, 2014

Nice, but I wouldn't confuse static images with the underlying semantic graph of live objects that's not visible in pictures.

DonHopkins on June 14, 2014

Precisely! When Lisp Machine programmers look at a screen dump, they see a lot more going on behind the scenes than meets the eye.

I'll attempt to explain the deep implications of what the article said about "Everything on the screen is an object, mouse-sensitive and reusable":

There's a legendary story about Gyro hacking away on a Lisp Machine, when he accidentally trashed the function cell of an important primitive like AREF (or something like that -- I can't remember the details -- do you, Scott? Or does Devon just make this stuff up? ;), and that totally crashed the operating system.

It dumped him into a "cold load stream" where he could poke around at the memory image, so he clambered around the display list, a graph of live objects (currently in suspended animation) behind the windows on the screen, and found an instance where the original value of the function pointer had been printed out in hex (which of course was a numeric object that let you click up a menu to change its presentation, etc).

He grabbed the value of the function pointer out of that numeric object, poked it back into the function cell where it belonged, pressed the "Please proceed, Governor" button, and was immediately back up and running where he left off before the crash, like nothing had ever happened!

Here's another example of someone pulling themselves back up by their bootstraps without actually cold rebooting, thanks to the real time help of the networked Lisp Machine user community:

ftp://ftp.ai.sri.com/pub/mailing-lists/slug/900531/msg00339.html => https://donhopkins.com/home/SlugMailingList/msg00339.html (I archived the whole directory so you can follow the "Followups" links to read the whole discussion.)

Also eudox posted this link:

Related: A huge collection of images showing the Symbolics UI and the software written for it:

http://lispm.de/symbolics-ui-examples/symbolics-ui-examples....


These are surprisingly common for local governments these days - this site lists 598 of them: https://dataportals.org/

Here's another collection with more than 3,700: https://data.opendatasoft.com/explore/dataset/open-data-sour...


I got called paranoid a couple of years ago because I said that my provider slows down long downloads: every download starts fast, but after a few megabytes it becomes slower and slower.

I tried asciinema and I forget which others, but I eventually used t-rec because it's able to easily compress to a small file that I can upload to GitHub (see the demo here: https://GitHub.com/Langroid/Langroid) and it also creates mp4 files that I can upload to Loom etc.

https://github.com/sassman/t-rec-rs


Most of that complexity isn't needed, though.

Getting 5V out of USB-C is pretty trivial, all the way up to 3A. The vast majority of hobbyist projects run on 5V, and very few of them need more than 15W.

Most of the rest seems to be over-engineering. You can get a ready-made "PD Trigger" board for literally $1 [0], which will allow you to select the desired voltage using jumpers. It even has support for the non-standard (although still common) 12V profile. Or, if you really want to do it using software, grab a $6 one from Adafruit [1] which comes with a tutorial and Arduino library.

If you really want to yak-shave, grab an FUSB302B. They are quite affordable, widely available, and provide easy access to protocol-level PD comms. It's only really needed once you want to implement something like Alternate Modes, though.

[0]: https://aliexpress.com/item/1005003828144045.html

[1]: https://www.adafruit.com/product/5807


Not just AI, it applies to any form of automated decision making (https://gdpr-info.eu/art-22-gdpr/):

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

But there's the usual "explicit consent" caveat, which Meta will argue they have because the user (as in: habitual consumer of harmful substances) has agreed to it in their TOS.

But then there is also a transparency requirement on automated decision making (https://gdpr-info.eu/art-15-gdpr/):

[data subject has access to] the existence of automated decision-making [and] meaningful information about the logic involved, as well as the significance and the envisaged consequences of such processing for the data subject.

This explicitly says that the user has the right to know what logic is employed by automated decision making systems.


The initiative seems to have been abandoned. The results they managed to achieve were nowhere near adequate for live performance. The best results they got on the best possible hardware were about 14ms, which -- as you know -- isn't adequate for real-time audio performance.

My guess, based on experience with Raspberry Pis: to do better than that, you need all drivers on the phone to have PREEMPT_RT patches, and contention with GPUs may be a deal-killer.

You might be interested in this approach:

    https://play.google.com/store/apps/details?id=com.twoplay.pipedal

    https://rerdavies.github.io/pipedal/

PiPedal delivers sub-4ms latency (measured using loopback) without dropouts on a Raspberry Pi that's accessed from a phone over Wi-Fi Direct. (Disclosure: I'm the author.)

A few observations:

- An SMP_PREEMPT or SMP_RT Linux kernel is mandatory. Audio threads need to run with high real-time priority (see the sketch at the end of this comment).

- Real-time audio and GPUs don't seem to play well together. PiPedal delivers stable low-latency audio -- I think -- primarily because it runs headless.

- On Raspberry Pi, GPUs have a dire effect on low-latency audio. Using non-GPU graphics drivers helps.

- SD card i/o has a moderately dire effect on low-latency audio. Ethernet and Wi-Fi i/o seem to have little to no effect on low-latency audio. Drive i/o to USB-connected storage devices is less of a problem, with the caveat that the USB 2.0 ports share a bus, and USB 3.0 ports share a separate bus. The storage device should be on a USB 3.0 port, and the USB audio device should be on a USB 2.0 port.

- All of this requires a Linux kernel later than ~5.15. Round about that time, the kernel ingested a significant portion of the PREEMPT_RT patches; and USB audio is essentially unusable prior to 5.10. This disqualifies a lot of ARM SOCs which still have 4.x kernels!

Contention with SD card i/o may just reflect the fact that full PREEMPT_RT patches haven't yet been propagated into mainline device drivers for that i/o path. Or that the patch set doesn't exist at all. A PREEMPT_RT kernel doesn't fix the problem.

Significantly, the Linux 5.1x audio and SMP_PREEMPT improvements post-date the Android real-time audio initiative. It might be time for Google to take another kick at the can. But in the meantime, real-time audio on Android remains borked.
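
To make the real-time-priority point concrete, here's a minimal sketch of the general idea on a Debian-ish system (not PiPedal's actual setup; the priority value, group name, and process name are just examples):

  # Is this kernel preemptible / PREEMPT_RT?
  uname -v | grep -o 'PREEMPT[_RT]*'

  # Let members of the audio group request RT priority and lock memory.
  # Add to /etc/security/limits.conf (then log in again):
  #   @audio - rtprio 95
  #   @audio - memlock unlimited

  # Bump an already-running audio process to SCHED_FIFO priority 80:
  sudo chrt -f -p 80 "$(pidof jackd)"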


I agree completely, and the best antidote is getting a little knowledge about how home solar works. Something that should be well within reach of most of the HN crowd.

A typical grid-tied home solar system has 3 big areas of component costs:

1. The actual solar panels.

2. The thing that mounts the solar panels in place (called racking)

3. The things that turn the solar panel's electricity into electricity that can be used in your home and by the grid. Typically inverters and wiring.

The good news is that you can price this all out yourself, to get an idea of what your system SHOULD actually cost. Then you could theoretically do it yourself, or be a more informed consumer when shopping around to have someone do it for you.

I haven't pulled the trigger yet, but I've been planning, revising and tracking prices on a DIY install for the past couple years.

Here are some of the resources I like:

1: Unbound Solar is a good resource for ready-to-install DIY kits. Their kits are a good resource for "this just works" - and you can then price out individual components as-needed. - https://unboundsolar.com/shop/solar-kits?product-category=gr...

2: For buying actual panels, I like A1 solar. They seem to have the best selection/pricing I've found: https://a1solarstore.com/

3: OpenSolar is a free tool designed for solar installers, but available to DIY'ers. It lets you specify your panels, racking, inverters etc., and then lay them out on your roof or the ground. https://www.opensolar.com/ - It's very likely the tool that the contractors you're getting bids from are using.

The last bit of info I'll share is that in general, microinverters don't make sense from a cost/benefit standpoint. Panels have gotten really cheap, microinverters haven't. You're probably better off adding more panels with a traditional inverter system vs. paying for microinverters to get marginal efficiency gains from a smaller number of panels.


How does RestGPT differ from ToolLLM or Gorilla?

papers:

1. ToolLLM: Facilitating Large Language Models to Master 16000+ Real-world APIs https://arxiv.org/abs/2307.16789

2. Gorilla: Large Language Model Connected with Massive APIs https://arxiv.org/abs/2305.15334


Here's a class at MIT that takes undergraduates who don't even know Verilog, and at the end of the class they have a mostly passable RV32I. Admittedly, pushing that through VLSI CAD will not be push-button easy, but many universities also have an undergraduate tape-out class, including MIT...

https://ocw.mit.edu/courses/6-004-computation-structures-spr...

Via an MPW run and assembly service one can easily have that die manufactured, packaged and mounted on custom PCBs for under $50K, i.e. much less than you'll pay in salary to have it designed, fabricated and tested. Even at non-American labor rates.

In 2023 producing a computing core, albeit not a state-of-the-art one, is just not the moat it once was...


So, here is my take. I've built my own battery pack, a pretty big one (170 cells of the largest capacity available), and it has served me well. But: I'm not your average DIYer and have spent a ton of time reading up on the do's and don'ts of pack construction, made a smaller one first to experiment with and have researched pack safety until my eyes hurt.

My short take: if you are someone who wants to revive their e-bike battery pack after it fails and who does not have extensive experience yet, it is going to be much cheaper and safer to just go out and buy a replacement pack from the same brand. I've seen multiple packs that had burned out wiring (usually: balancing wires, but also the main current carrying conductor) and burned out BMS boards. Rarely did I open a pack that had been in service after developing a fault where the fault could not have potentially propagated. That it did not was in most cases sheer luck. Even for brand name gear!

It is super easy to mess up with these. Drop a pack, drop a sliver of wire or interconnect ribbon, misplace a tool and you have a thermal runaway on your hands which you will not be able to stop. Doing this in a facility that is not set up for it (say, your kitchen or living room, especially if it is in shared accommodation such as an apartment building) is super dangerous.

If you're going to do this anyway, prepare to spend a grand or so on tools and a good month or so on creating a safe setup. After that you might as well go into business. I'm all for DIY stuff, but this isn't one of those.

Finally: manufacturers could help with this: they could design their packs to be more reliable and easier to repair. They also could stop price-gouging the consumers on the packs, which is one of the reasons people want to repair them to begin with. But the packs are so chock full of DRM and other nonsense that repairing them properly is specialist work and they are so expensive that there will be a number of people that will try to repair a pack themselves anyway. Just disassembling a pack is not without risk.

If you go down this route feel free to drop me an email and I'll send you a document that I have been working on for a long time to make working on these safer. Some ground rules:

- replace all cells at once

- weld your connections (NEVER solder them)

- use only the same cells

- use only cells that have all been charged to the same voltage

- make a thermal scan of the pack after the initial build during high speed charging and discharging to make sure you don't have a cell in there that somehow stands out (it will surely fail)

- if you drop a pack or a cell consider it lost, even if it seems to still work

- only use A brand cells from a reputable vendor

- do not use recycled cells

- only use the original BMS

- make sure none of your sense wires are crossed or loose enough that they can cross

- spend as much time on the mechanical part of your build as on the electrical part

hth


Is there a Llama2 65b quantized version for Mac M2?

Home infrastructure for power and data is due for a switch. Copper is expensive, and running 120V at 15A everywhere is really overkill with a 3W LED bulb being blindingly bright and needing DC. Electronics largely are all hooked to USB-C chargers at this point. The PoE to USB-C splitters are largely 5V at this point, but PoE can produce over 70W depending on type, so it doesn't have to stay that way. Ideally, data could be bridged as well: plug in USB-C, start charging, and it looks like a network adapter too. The number of things that need more than 70W isn't that high. Kitchen mostly. Bigger TVs. Really beefy laptops or desktop computers. If they made a new PoE standard that could support USB PD 3.1 you could run anything outside the kitchen other than the A/C. That would probably mean a new cable standard though. Even just as things stand, PoE could largely replace electrical wiring outside the kitchen.

CLOS is more than fast enough for games (see e.g. Kandria; there are comments on HN about it, though it's easy enough to do one's own benchmark and see how easy it is to get hundreds of frames per second even with thousands of objects updated and drawn through methods every frame), and if you do have some numerical generic functions in a tight loop you can optimize them further with libraries like static-dispatch (https://github.com/alex-gutev/static-dispatch). The book Object-Oriented Programming: The CLOS Perspective (1993) even includes as its final chapter/paper "Efficient Method Dispatch in PCL", which is essentially a memoization technique, so it's been a concern (or, because of such techniques, not the biggest concern) for a long time.

The dev experience of using defstruct is bad enough when I want to change something about it that I just default to defclass. I can see some people not liking the somewhat more verbose syntax (I don't mind myself) but of course that's a barrier trivially removed...


>Would love recommendations for other sauces that allow me to squeeze a nationality of cuisine over my template to make it taste like that country's food (kinda).

Yeah I got you:

We're going to use two different basic techniques. The first is extremely simple: throw everything into a blender and then blend until your desired consistency (normally smooth).

Mexican: Combine tomatoes, jalapeños, cilantro, lime juice, and salt for a fresh salsa.

Mediterranean: Blend olives, capers, lemon juice, and olive oil for a tapenade.

South American: Mix parsley, garlic, vinegar, oil, and chili flakes for chimichurri.

Greek: Yogurt, cucumber, garlic, lemon juice, and dill for tzatziki.

Middle Eastern: Blend tahini, garlic, lemon juice, and water for a basic tahini sauce.

Technique 2 is a little more complicated but once you get the hang of it trust me it's worth it. Call it stovetop simmering.

In a saucepan, combine the base ingredients and bring to a simmer. Add primary flavor agents/spices and continue to simmer for the desired time until flavors meld. Adjust consistency if needed (e.g., with a slurry or additional liquid).

Italian: Start with crushed tomatoes, add garlic, basil, and oregano for a marinara sauce.

Chinese: Combine soy sauce, vinegar, sugar, garlic, and ginger; thicken with a cornstarch slurry for a basic stir-fry sauce.

Indian: Start with tomatoes and onions, add garam masala, turmeric, and cumin for a basic curry sauce.

French: Start with a roux (butter + flour), then add broth and reduce; season with herbs for a basic velouté.

Thai: Coconut milk with red curry paste, simmer and season with fish sauce and sugar for a basic Thai curry.


Even more annoying: why aren't there decent USB-only UPS systems?

Just ditch the 110V outlets for USB-A or USB-C ports and skip the inverter.

I'd love to power a Pi or three from UPS-based USB ports.


Check out pdftotext, a CLI tool (maybe a library too) that makes PDF text selectable. Oh sorry, I meant to say ocrmypdf. But hey, maybe it's worth checking both.
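
In case it helps, basic usage of both looks something like this (file names made up):

  # ocrmypdf: add an OCR'd, selectable/searchable text layer to a scanned PDF
  ocrmypdf scanned.pdf searchable.pdf

  # pdftotext (poppler-utils): dump the text layer out to a plain-text file
  pdftotext searchable.pdf searchable.txt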

Totally unrelated, but since we are talking about QOL tools on macOS, I thoroughly recommend BetterDisplay[0].

It enables Retina scaling functionality on any external monitor, regardless of the resolution or Apple's compatibility list.

It's great for 2K monitors that are perfectly capable of HiDPI but aren't deemed so by Apple, and even for FHD secondary displays that don't need that much screen real estate, so you can use that real estate to scale everything nicely.

[0]: https://github.com/waydabber/BetterDisplay


As the article mentions, Donald was featured in a documentary called "In A Different Key". I highly recommend it. Very informative about autism in general, the good, the bad, and everything in between.

useful context is that in most programs a very significant fraction of the instruction mix consists of reads of local variables; another significant fraction consists of passing arguments to subroutines and calling them. so the efficiency of these two operations is usually what is relevant

with dynamic scoping, the current value of a given variable is always in the same location in memory, so you can just fetch it from that constant location; when you enter or exit a scope that binds that variable, you push or pop that value onto a stack. a function pointer is just a (tagged) pointer to a piece of code, typically in machine language but sometimes an s-expression or something

with lexical scoping, in an environment that supports recursion, the usual implementation is that you have to index off the frame pointer to find the current location of the variable. worse, if your lexical scoping supports nested scopes (which is generally necessary for higher-order programming), you may need to index off the frame pointer to find the closure display pointer, then index off the closure pointer to find the variable's value. and then, if it's mutable, it probably needs to be separately boxed (rather than just copied into the closure when the closure is created, which would mean changes made to its value by the inner scope wouldn't be visible to the outer scope that originally created it), which means that accessing its value involves indexing off the frame pointer to find the context or closure display pointer, indexing off the closure display pointer to find the pointer to the variable, and then dereferencing that pointer to get the actual value. also, supporting closures in this way means that function pointers aren't just pointers to compiled code; they're a dynamically allocated record containing a pointer to the compiled code and a context pointer, like in pascal, with a corresponding extra indirection (and hidden argument, but that's free) every time you call a function.

there's a tradeoff available where they can just be those two pointers, instead of taking the approach I described earlier where the closure display is a potentially arbitrarily large object; but in that case accessing a variable captured from a surrounding scope can involve following an arbitrarily long chain of context pointers to outer scopes, and it also kind of requires you to heap-allocate your stack frames and not garbage-collect them until the last closure from within them dies, so it's a common approach in pascal (and gcc uses it for nested functions in c, with a little bit of dynamic code generation on the stack to keep its function pointers to a single word) but not in lisps

dybvig's dissertation about how he wrote chez scheme is a good source for explanations on this, and also explains how to implement call/cc with reasonable efficiency. http://agl.cs.unm.edu/~williams/cs491/three-imp.pdf

current cpus and operating systems favor lexical scoping more than those in the late 01970s and early 01980s for a variety of reasons. one is that they are, of course, just much faster and have much more memory. also, though, address sizes are larger, and instruction sizes are smaller, so it's no longer such a slam-dunk trivial thing to just fetch a word from some absolute address; the absolute address doesn't fit inside a single instruction, so you have to compute it by offsetting from a base register, fetch it from memory (perhaps with a pc-relative load instruction), or compose it over the course of two or more instructions. and currently popular operating systems prefer all your code to be position-independent, so it may be inconvenient to put the variable value at a fixed absolute address anyway; if you try, you may find upon disassembling the code that you're actually indexing into a got or something. finally, though i don't know if this figured into the lispers' world, 8-bit cpus like the 6502 and 8080 were especially terrible at indexing, in a way that computers hadn't been since the 01950s and haven't been since, which made the performance gain of absolute addressing more significant

i don't know enough about addressing modes on machines like the vax, the dorado, and the pdp-10 to know what the cost was, but on arm and risc-v (and so i assume mips) you can index off the stack pointer with an immediate constant for free; it doesn't even require an extra clock cycle

a couple of points in favor of the efficiency of lexical scoping: when you have a cache, having all your local variables packed together into a stack frame is better than having them scattered all over the symbol table; popping a stack frame is a constant-time operation (adding a constant to the stack pointer, typically), while restoring n values on exiting a dynamic scope requires n operations; and the space usage of local variables on a call stack only includes the variables currently in scope, rather than all the distinct variable names used anywhere in the program, as is the case with dynamic scoping

in a sense, the way you handle callee-saved registers in assembly language is pretty much the same as how you handle dynamically-scoped variables: on entry to a subroutine that changes r4 or rbp, you save its current value on a stack, and on exit you restore that value, and any value you put there can be seen by your callees. oddly enough, this turns out to be another efficiency advantage for lexical scoping; with dynamic scoping, when you call another function, the language semantics are that it could change any of your local variable values before it returns, so it's not safe for the compiler to cache its value in a cpu register across the call. (unless the variable is assigned to that cpu register for your entire program, that is, which requires whole-program optimizations that run counter to the lisp zeitgeist.) this has become an increasingly big deal as the relative cost of accessing memory instead of a cpu register has steadily increased over the last 40 years

i'm no expert in this stuff, though i did implement a subset of scheme with recursion, lexical scoping, closures, and enough power to run its own compiler (and basically nothing else): http://canonical.org/~kragen/sw/urscheme so i could totally be wrong about any of this stuff


I was in Istanbul recently, and what surprised me was the absolute depth of the city's attractions. I stayed in the Kadıköy and Sultanahmet districts, and it was so incredible to walk through a society which had been Monkey Patched through millennia. The runtime behavior of the objects, streets, buildings and the city had been adapted time and time again. Roman temples becoming Churches becoming Mosques becoming Museums, sometimes simply with some new tiles or a freshly laid down carpet. Each layer rich in artifacts. My favorite attraction was the Great Palace Mosaics Museum, some of the most detailed and vibrant mosaics I had ever seen. I hope to return soon.

Tasmota + CloudFree + Home Assistant and never make an account again and have full control over the devices in your home. No accounts and your lights still work when your internet is down.

I recently picked up a Linksys E8450 (twin sibling of the Belkin RT3200) and flashed it with OpenWrt, and it's been great: Wi-Fi 6 speeds on a router that is actually configurable.
