Hacker News | bluedino's comments

Interestingly enough, the Mac version (done by Burger Bill/Becky Heineman) was a port of the Super NES version (which was done by id Software).

https://github.com/Blzut3/Wolf3D-Mac


Funny enough, my favorite version has been the SNES version. Despite all the limitations, it's got built-in controller support and also has a map! Maybe I'll try to grab the Mac-for-PC version.

That caused me to go down a rabbit hole. The //GS version was released 3 years after the computer was discontinued and you basically had to have a sound card.

I remember it running OK on my LC II with a 40MHz 68030 accelerator and later on a PowerMac 6100/60.


Creating a Wolfenstein-style engine is a great programming project if you're learning a new language or graphics library.

You can smash out a flat shaded version in an afternoon, or go crazy adding all kinds of features and take it as far as you want to go.

https://youtu.be/gYRrGTC7GtA?si=o6-l7XAPpw3z_36t
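For a sense of how little it takes, here's a minimal flat-shaded sketch of the idea in Python. Everything here is illustrative: the tiny map, the naive ray-marching loop (a real Wolfenstein engine uses DDA grid stepping instead), and the ASCII "framebuffer".

```python
# Minimal flat-shaded raycaster sketch: march a ray per screen column,
# draw a wall slice whose height is inversely proportional to distance.
import math

MAP = ["#####",
       "#...#",
       "#...#",
       "#####"]

def cast(px, py, angle, max_dist=20.0, step=0.01):
    """Naively march a ray from (px, py) until it enters a wall cell."""
    dx, dy = math.cos(angle), math.sin(angle)
    d = 0.0
    while d < max_dist:
        x, y = px + dx * d, py + dy * d
        if MAP[int(y)][int(x)] == '#':
            return d
        d += step
    return max_dist

def render(px, py, heading, fov=math.pi / 3, width=40, height=12):
    """Render one ASCII frame: one ray per column, centered wall slices."""
    rows = [[' '] * width for _ in range(height)]
    for col in range(width):
        ang = heading - fov / 2 + fov * col / width
        dist = cast(px, py, ang)
        wall = min(height, int(height / (dist + 0.1)))  # nearer = taller
        for r in range(height // 2 - wall // 2, height // 2 + wall // 2):
            rows[r][col] = '#'
    return '\n'.join(''.join(r) for r in rows)

print(render(2.0, 2.0, 0.0))
```

Texture mapping, a proper DDA stepper, and sprites are where the "go crazy adding features" part comes in.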


Ceiling and floor casting is tricky, and pretty expensive at times. Getting vertical panning to work too is tricky but well worth it.

Love 3DSage’s videos! Wish he’d do a part 3 on his doom-ish engine, though.

All the chemical companies do it. They pair it with testing, but still.

This really takes me back. My first actual 'use' for Linux was making routers out of leftover computers.

The perfect machine back then was a 100MHz Pentium, in a slimline desktop case. At the time, the Pentium III was the current desktop chip, so you'd have a pile of early Pentium-class machines to use. And even a 10Mbit ISA network card (3Com if possible) would have plenty of power for the internet connections of the day. But 100Mbit PCI cards were still fairly cheap.

Install two NICs, load your favorite Linux distro, and then follow the IP-Masquerading HOWTO and you've got internet access for the whole apartment building, office, or LAN party.
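For flavor, the heart of that setup boils down to a few commands. This is a hedged sketch in modern iptables syntax rather than the ipfwadm/ipchains of the era, and the interface names (eth0 = WAN, eth1 = LAN) and subnet are assumptions:

```shell
# Two-NIC masquerading router sketch (run as root).

# Let the kernel forward packets between the two NICs
sysctl -w net.ipv4.ip_forward=1

# Rewrite outgoing LAN traffic to the router's WAN address
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# Permit LAN-to-WAN traffic and the related return traffic
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

The HOWTO of the day spelled out the same idea with the older tools, plus DHCP and DNS on the LAN side.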

Eventually I moved on to Linux Firewalls by Robert Ziegler for a base to build on.

After that I started piling other services on, like a spam filter, Squid cache, it was amazing to get so much use out of hardware that was going to just get thrown out.


That takes me back. I had the same trajectory, getting a newspaper's newsroom and offices online with a single computer sharing its ISDN connection. I think ours was also a 100MHz Gateway 2000 or some such.

That snowballed into "we want a website, do you know how to do that?" And, well, no, but the machine had Apache available and I ... figured things out enough to take the skills elsewhere.

Repeated the same trick with a place in Wisconsin, who initially shared a 56k dialup connection with all their dispatchers and were impressed the thing had stayed up for 900 days without even redialing. 90% of their work was done on an on-prem Wyse terminal anyway; dialup did the job for email or googling an address.

27, 28 years later I'm still dragged in front of them once in a while to be asked how they can accomplish something cheaply with Linux, bubble gum, paper clips, or whatever. The times and technology have changed, but not how cheap they are!


Sometimes if it's a client that isn't too difficult, they are worth keeping if they come at you with projects that expand your knowledge.

Squid caching takes me back. I was dealing with a network for a large car dealership (2006), and they were having issues with pages appearing out of date, as well as salespeople who couldn't help themselves from looking at adult websites. I had to figure out the entire network (it was put in place before I ever showed up to provide support), both the physical and software layers. Not only was I on ladders in the service area with a network tone device (for those who don't know: you connect a device that pushes a tone down the line, then run a probe along the candidate wires and listen for the tone to find the correct one), but I also had to figure out this server running a Squid cache that stood in front of everything.

Eventually I got all the devices labeled from origin to their patch cables in the server room, and I started looking into the Squid cache. It turns out they were caching everything, as well as blocking websites. I figured out what websites they needed to do their job and turned off caching for those, while also learning the ACLs for blocking websites. Anything else was allowed, but the Squid cache would hold a copy for some set amount of time (I think it was 24 hours, so if it was legitimate they only had to wait a day, and it also saved on bandwidth by quite a bit; although I think this was used more to monitor user activity).
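That kind of setup is only a handful of Squid directives. A hypothetical squid.conf fragment in the spirit of what's described; the ACL names, the domain list path, and the address range are all made up for illustration:

```
# Hypothetical squid.conf fragment (names and paths are illustrative)
acl localnet src 192.168.0.0/16
acl blocked_sites dstdomain "/etc/squid/blocked_domains.txt"

# Deny the blocklist, allow the LAN, deny everyone else
http_access deny blocked_sites
http_access allow localnet
http_access deny all

# Cache objects for at most a day (1440 minutes), roughly the
# 24-hour staleness window described above
refresh_pattern . 0 20% 1440
```

The access log Squid writes as a side effect is likely what made it useful for monitoring user activity.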

It was frustrating as someone new to large LANs, as well as to in-house caching, but I had been using Linux since an early version of Slackware in the late 1990s. Even to this day, as someone who writes software and does DevOps, that knowledge has helped my debugging skills tremendously. Dealing with caching is a skill I feel you need to be burned by in order to finally understand it and recognize when it's occurring. I cut my teeth on Linux through a teacher who set up a web server in 1997 and not only gave students access to upload their web files, but also a terminal to message each other and see who was online.


I briefly put a Pentium MMX 200MHz system in service a few years back to bridge my parents to their neighbor's WiFi (with consent of course) when their DSL line was down for a few days. I installed a PCI Ethernet and WiFi card, booted into OpenBSD, and amazingly it was fast enough to get them through the downtime. :)

> The perfect machine back then was a 100MHz Pentium, in a slimline desktop case. At the time, the Pentium III was the current desktop chip, so you'd have a pile of early Pentium-class machines to use. And even a 10Mbit ISA network card (3Com if possible) would have plenty of power for the internet connections of the day.

I was doing the same: router and firewall on old Pentium CPUs. I don't have those machines anymore, but I still have HDDs from back then with Post-it notes on them saying things like "Linux firewall / HDD 120 GB". For whatever reason my HDD adapter, which can read just about everything, doesn't have the correct pinout for those drives. It would be a blast if they still booted: at some point I'll buy a compatible adapter and see what I can find on them. I was very likely also saving some backups there.

But really my best memory is from years (I think) before 120 GB HDDs became an affordable thing, in the super early Slackware days, on a dial-up connection: I had a 486 desktop computer and I'd share the Internet connection with a very old laptop (!) using... PLIP. A printer cable and the Parallel Line Internet Protocol. Amazing hack: my brother and I could both use Netscape at the same time, and to us this felt like a glimpse into the future.


Someone needs to write a new book on Linux routers.

The old one is getting really old now, nearly 25 years ago [1].

[1] Book Review: Linux Routers - A Primer for Network Administrators, 2nd Ed:

https://www.linuxjournal.com/article/6314


Inverted case here: my first real use case for Linux was flashing routers with OpenWrt and doing fun stuff!

Ha, that's very close to my story as well. I had a 166MHz Pentium and it was all PCI cards and 100Mbit by then. That was essentially the start of my career.

Reminds me of a Pentium Pro router put into a datacenter: two 2GB mirrored SCSI drives, two NICs, happily running a hardened pfSense. It ran with zero issues for the better part of a decade.

It just wouldn't die.

The suspicion was that because the electricity going to it was cleaner than average, in a datacenter, the normal wear and tear on the electronics may have been reduced.

Respect was paid at its decommissioning by converting it into a VM; knowing its luck, chances are it would still boot up and keep on running.


Hell, you could do this with a single NIC if you have a VLAN-aware switch.
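A sketch of that single-NIC "router on a stick" idea with iproute2; the interface name (eth0), VLAN IDs, and addressing are assumptions, and the switch must carry both VLANs tagged on the router's port:

```shell
# Split one physical NIC into WAN and LAN via 802.1Q subinterfaces
# (run as root; eth0, VLAN IDs 10/20, and the subnet are illustrative).
ip link add link eth0 name eth0.10 type vlan id 10   # WAN side
ip link add link eth0 name eth0.20 type vlan id 20   # LAN side
ip addr add 192.168.1.1/24 dev eth0.20               # router's LAN address
ip link set eth0.10 up
ip link set eth0.20 up
```

From there the forwarding and masquerading rules are the same as the two-NIC case, just using eth0.10 and eth0.20 in place of separate cards.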

Far from my first, but early on I set up some 386s to bridge cheapernet segments and TP Ethernet. No budget for new hardware, but there were some computers that were way too old for Windows or even Linux; the users had 486s or even Pentiums. Scavenged ISA network cards for both sides. It was a bit sketchy with the low RAM and old architecture, but it worked.

IIRC, there were some Macs that got confused if there was a bridge in the network, so I had to change the segmentation and run masquerading, but that was still better than not having internet. And there was no need to allocate those precious public IPs, though you could still get them.

Masq was one of the first killer features for Linux.


This unlocked a very specific kind of nostalgia

You guys with your dedicated hardware. :)

I did routing duties for my LAN with my primary desktop for about a decade, variously with Linux, OS/2 (anyone remember InJoy?), and FreeBSD -- starting with 486 hardware. Most of that decade was with dial-up.

The first iteration involved keying in ipfwadm commands from, IIRC, Matt Welsh's very fine Running Linux book.

WAN speeds were low; doing routing with my desktop box wasn't a burden for it at all. And household LANs weren't stuffed full of always-on connected devices as they are today; if the Internet dipped out for a few minutes for a reboot, that wasn't a big deal at all.

I stayed away from dedicated hardware until two things happened: I started getting more devices on the LAN, and I saw that Linksys WRT54G boxes were getting properly, maturely hackable.

So around 2004 I bought a WRT54GS (for the extra RAM and flash) and immediately put OpenWRT on it. This led to a long rabbit hole of hacks (just find some GPIO lines and an edge connector for a floppy drive, and zang! ye olde Linksys box now has an SD card slot for some crazy-expensive local storage!).

I goofed around with different consumer router-boxes and custom firmware for a long number of years, and it all worked great. Bufferbloat was a solved problem in my world before the term entered the vernacular.

And I was happy with that kind of thing at home, with old Linksys or Asus boxes doing routing+wifi or sometimes acting as extra access points... until the grade of cheap routers I was playing with started getting relatively slower (because my internet was getting relatively faster) and newer ones were becoming less-hackable (thanks, binary blob wifi drivers).

---

I decided to solve that problem early in 2020. Part of the roadmap involved divorcing the routing from the wifi completely -- to treat the steering of packets and the wireless transmission of data as two completely distinct problems.

I used a cheap Raspberry Pi 4 kit to get this done. The Pi4 just does router/DNS/NTP/etc duties like it's 1996 again. Dedicated access points (currently inexpensive Mikrotik devices) handle all wifi duties.

That still works very well. Pi4 is fast enough for me with the WAN connections available here (which top out at 400Mbps) even while using SQM CAKE for managing buffers, and power consumption of the whole kit is too low to care about.

The whole OpenWRT stack just plods along using right around 64MB of RAM. VLANs are used to multiply the Ethernet interface into more physical ports (VLANs were used to do this inside the OG WRT54G, too).

It's sleepy, reliable, and performant.

---

And it'll keep being fine until I get a substantially-faster WAN connection. For that, maybe one of the China-sourced N150 boxes, with 10Gb SFP+ ports, will be appropriate -- after all, OpenWRT runs on almost anything, including AMD64, and the UI is friendly enough.

But there's no need to upgrade the router hardware until that time. Right now, all of my routing problems are still completely solved.


>Part of the roadmap involved divorcing the routing from the wifi completely

This is the move. Lets you upgrade the different parts of the network separately. I have 3 components: an N150 router/fw/DNS/VPN box with 2.5GbE NICs running OPNsense, a cheap but surprisingly good 2.5GbE managed switch, and a cheap VLAN-tag-capable Wi-Fi 6 access point.


Yes, it definitely is the right way to do stuff. It's not arduous and it represents a highly functional and sustainable level of separation.

It wasn't always practical (dedicated, plain PoE access points of unobtrusive shapes were once rather expensive), but these days it's completely approachable and usable.

If I may ask: why a 2.5GbE switch instead of, say, 10GbE? I know 10GbE over copper is a mess due to the heat generation, but my own perfect vision of an upgrade involves using optics instead.


Still have some accounts tied to Netscape email addresses. I think they gave them out free during the Hotmail days. Most of those addresses are for accounts for the couple websites from those days that are still around. eBay, etc.

That was probably an incredible amount of memory back then. And it probably cost $1,000 USD for 1KB. Who knows how much radiation-hardened space memory was. 10 times that?

In consumer space, 69 KB of RAM (138 x 4 kbit chips) would have cost around $1,700 in 1970s dollars for the entire package, ~$10k in modern dollars.

Radiation-hardened for space, though: $50k-$100k in 1970s dollars, roughly the price of a Silicon Valley house back then ($300k-$600k in today's money).


consumer space and space space chips.

Wouldn't they just put a lead plate around the computer? That can't cost $100k in 1970s dollars.

Lead makes things worse, not better. High-energy particles go straight through a couple mm of lead no problem, and lead itself is radioactive anyways. The problem is when a particle punches straight through a chip, leaving some energetic charge behind. You won't stop that with a paper-thin layer of lead.

Also, lead is extremely dense.


"and lead itself is radioactive anyways"

Um, I guess there are naturally occurring isotopes of lead that are unstable. But those are very rare and can be removed. By this standard, the chips themselves are radioactive, since they are made of silicon. By this standard you, everything you eat, every plant and animal is radioactive, since there are trace isotopes of carbon that are radioactive (that's how carbon dating works). And sunlight is very radioactive by this standard.

However, the materials we actually use in chips are highly processed, and radioactive isotopes will likely have been removed if a centrifuge was used at any step of the concentration process. Likewise, the lead used in a space shield would probably have had similar processing.

"Also, lead is extremely dense."

This is the real reason lead isn't used. There are plenty of other elements that shield ionizing radiation quite well and are less dense. However, they are also more expensive than lead, which is why we use lead on Earth for shielding (it's cheap) and use something else in space (where density matters more than price).


No, the real reason it isn't used is because coating your chips in something doesn't really work

edit: ...when you don't have the protection of the atmosphere to begin with*


Because lead turns a single high-energy particle (that would disturb a single bit and punch through) into a shower of many low-energy particles (that would disturb many bits AND induce lattice damage over a wide area).

Lead being heavy, I wonder if that tradeoff wasn't worth it?

I thought the keys were replaceable now?

(first video I found on a search)

https://www.youtube.com/shorts/WYT7YIh00Xk

I know in the Butterfly days those keys would break when you removed them.


Far more likely it's an electrical contact issue.

I've also gotten my last few jobs there. It's great for that, even if it's 90% low-effort recruiter spam.

It's also full of "greatest team in the world", pizza parties, "incredible" training sessions, and "meetings of great minds". And now it's turned into a bunch of comedy reels. Blah.


You forgot "I am honoured and humbled to announce <insert mundane recognition>."

The comedy/tragedy of this is: whenever I talk to people outside of engineering at social gatherings, this is what they do. They tell me their resume and accomplishments. I'm like, can we just have a conversation, please?

I always ask the question "what keeps you busy?" People think my wife and I are retired because of how often we travel. I say I'm not, I work remotely, and I try to keep the conversation away from work.

I think one of the most objectively pathetic things in the world is trying to ride the counterculture wave against a thing, while shilling the exact same thing.

Hey kids, you know how influencer slop sucks? *proceeds to write influencer slop*


Is it really great for that? In my experience, LinkedIn makes finding a job easier about as much as Facebook makes it easy to find friends... LinkedIn encouraged and normalized gross exaggerations and overall dishonest discussions, relationships, and recommendations, which made it impossible to form any valuable opinion about anyone or anything.

Radio Shack, the short-lived Gateway Country stores, Sears (which had a ton of computers), regional electronics stores, JCPenney (which got out of it in the early 90s), Sam's Club...

> By way of comparison, Blackthorne for Super Nintendo was all 65816 assembly

Weren't almost all console games up to that era written in assembly? What high level languages were used? I recall hearing many Atari 7800 (?) games were written in Forth? May be mistaken.

