Personal Computing on an Amiga in 2021 (thedorkweb.substack.com)
214 points by shortformblog on Aug 5, 2021 | hide | past | favorite | 138 comments


Several of the key personnel at Commodore went on to create products that have touched the ears of most people on this planet in a way that Commodore didn't. They went on to found Ensoniq, a digital synthesizer company that later merged with E-mu Systems. Both Ensoniq and E-mu synths/samplers were used in a wide array of music crossing many genres in the '80s and early '90s. Kraftwerk, Depeche Mode, Madonna, and a ton more on top-100 radio all used them.

> Ensoniq was founded in 1982 by former MOS Technology engineers Robert "Bob" Yannes (designer of the MOS Technology SID chip for the Commodore 64 home computer), Bruce Crockett, and Al Charpentier. Their first product was a software drum machine that ran on a home computer.

> In January 1998, ENSONIQ Corp. was acquired by Creative Technology Ltd. for $77 million, and merged with E-mu Systems to form the E-Mu/Ensoniq division. The fusion with E-mu sealed Ensoniq's fate: after releasing an entry-level E-mu MK6/PK6 and Ensoniq Halo keyboards - essentially keyboard versions of the Proteus 2500 module - in 2002, the E-Mu/Ensoniq division was dissolved and support for legacy products was discontinued soon afterward.

https://www.keyboardkountry.com/blog/the-amazing-ensoniq/


I once emailed Bob Yannes in 1998 about the C64 SID chip. I mentioned my soft synth (SoftSID) and the Elektron Sidstation. But all he was bothered about was his Ensoniq Fizmo. I'd love to get my hands on one of those now.

I told him of the bad review in Future Music. He was a bit annoyed but explained what they had missed in the review.

A nice guy.


When I arrived at college in 1991 with a 286 and a broken hard drive, my roommate showed me his Amiga. Where I was stuck in DOS textland, he had a fully interactive windowing system with high-quality graphics, audio, and video capture. I stuck with PCs and I don't think I had anything that was nearly as "pretty" even 10 years later.


I arrived at college in 1990 with my trusty Amiga 2000 in hand. I then quickly discovered the school's Unix workstations. I have to admit that the Amiga soon after started collecting a lot of dust.


I arrived in 1987 with my Amiga 1000 in tow, quickly discovered the school's Unix workstations (Sun 3/50, 3/60) and started using my Amiga more. They had the same a.out format so you could compile on the Sun systems with the Unix fun stuff, link with a different library, side-load it onto the Amiga and you're good to go.

That A1000 ended up with 4MB, 14MHz 68020, SCSI controller, and 80MB of disk 'borrowed' from a Sun.

Sure the OS was a bit different, but it's basically a cut-rate Sun 3/60. So many early Amiga devs were Unix nerds sitting in front of Suns all day and wanting something close to that feeling for home. Macintosh (lol), IBM PC (lol DOS).

Pretty much wasn't until Linux that the PC actually became useful at all.


> Pretty much wasn't until Linux that the PC actually became useful at all.

This. I was lucky enough to experience an SGI Indy when it came out, and going from my beloved Amiga to a 386 running DOS "because CPU speed" (and because the PC was obviously where things were heading) was a really bitter move. Then a 486. Then I discovered Linux, and my 486 + Linux was my "poor boy's SGI Indy". Great memories (and still rocking desktop Linux on my workstation on a daily basis).


Are you sure? By the time Windows 95 came out, the Amiga was looking pretty long in the tooth.


Windows 95 ran like a dog compared to Amiga. PCs couldn't match a 25-MHz Amiga for responsiveness until they were in the hundreds of MHz in the late nineties.


Those are your memories. Win95 on a contemporary 1995 Pentium system would display an Explorer window full of icons in a split second. Workbench, when opening a HDD drawer, would paint icons one at a time, taking several seconds, even with the fastest accelerators possible and a fast CF IDE drive.


The reason Win95 showed icons much faster than any Amiga was the relatively slow filesystem used by the Amiga, not its hardware speed, which in fact was a bit faster than PCs in a MHz-to-MHz comparison.


The complete lack of a disk cache integrated with the OS on the Amiga hurt as well. Even MS-DOS was shipping with disk caching software to make use of high memory on 386 systems in the early 1990s when AmigaOS had just launched 2.04.


Also the Amiga was slow due to the simplistic way metadata / icons were handled. There was a separate "file.info" file, containing the icon and some other metadata, for each actual file on disk.


Most users didn't have Pentium systems when Win95 came out or for a long time afterwards.

At least the vast majority of customers we had to support had 386 DX's or 486's.


> 1995 Pentium system

Then bitwise is right.


Pentium 75-120 != hundreds of MHz in the late nineties


120 is still a hundred.

Windows 98 crawled on a Pentium@133 with 64MB of RAM (we had that in the library), but Windows 95 without that crapware called IE4 + ActiveX was fine.


Most people were still on 386 and 486 systems by then.


That’s so true. Amiga didn’t have an RTOS, but it sure felt like it. Windows had so much latency in comparison that I couldn’t bear to use it for the longest time.


The maneuver that always astounded me at the time was right clicking on the menu bar and pulling it down, to reveal whatever application was open on the "screen" behind it. (The Amiga had separate windows and screen abstractions, though I don't recall if I'm using the right terms.) No matter how heavily loaded the system was, whether it was doing floppy I/O at the time, etc., that pull down motion never became choppy. It did feel a lot like an RTOS in that respect.

You could overload the system in other ways though, particularly disk I/O in Workbench.


Those are the right terms.

Even more mind blowing is that the visible screens could have different resolutions. The display hardware could switch modes per scanline while keeping the same output frequencies, so you could have a low-res/high-color screen on the top half of the monitor with a high-res/low-color screen on the bottom half.


It had to do with whether you had 4MB of RAM or 8 or more. 4 was unusable.


That was 4 years later, which was an aeon in internet time.


Well, you said "10 years later" after 1991, which would put us at 2001-ish.


Duh, yes I see now. I actually went away from DOS/Windows between 1993 and 2001+, so I don't know much about what their UI was doing at that point. I really meant it took Linux desktops a long time to be as useful as Amigas (and even then you would have to find a compatible video capture board, compatible 3d graphics card, etc).


I have heard/read enough people sing high praises of AmigaOS that I wonder how intimately it is tied to the hardware platform.

In other words, would it make sense to port it to the PC or maybe Raspberry Pi? It sounds like it would be fun to play with it, but I lack the hardware, and I am not sure I'm willing to sink money into it without having an idea of what I am getting into.


You can experience something very close to it with AROS, which is free both as in speech and as in beer, and can be used either natively stand-alone or hosted, that is, as a process under other operating systems without requiring any installation. There's also an ARM port which runs natively on the Raspberry Pi or hosted under Android devices.

https://aros.sourceforge.io/


AmigaOS/Workbench relied pretty heavily on custom parts of the Amiga hardware, which is how it performed so well compared to other architectures with similar specs.

That can be overcome with enough brute force; although emulators like FS-UAE can still struggle to keep realtime with some workloads.

There are FPGA reimplementations of Amiga hardware, if you wanted something more 'realistic'.


That's mostly been overcome by now. Retargetable graphics for example let you use a graphics card instead of the built-in chipset, and AHI is an abstraction layer for audio that's not at all tied to any particular hardware.

The bigger issue is applications that bypass the hardware abstraction layer or throw away the OS entirely, typically games and quite a few productivity applications.

I suggest looking at MorphOS and AROS.



Yes and no. One problem is that it is kind of hard to describe: most features sound like something Linux sort-of has, and whenever you try to describe them people tend to disregard them as something that has been incorporated into Linux, and thus of little interest. What they can't know is just how much more convenient those things generally were on the Amiga, how much easier it all worked. You really have to experience that to know.

The number one thing any new OS should learn from the Amiga is its motto: "simple things should be easy, complex things should be possible." Not "simple things should be possible, complex things should be Dark Souls"!

The next biggest thing would be the use of pervasive inter-process scripting. ARexx (I wouldn't choose it today but it worked) gave you amazing power to combine applications and automate various tasks - a bit like the Linux command line does, except it works everywhere. To illustrate, there was an ARexx script that combined a mail application with a chess playing game to create a chess-playing robot: you could mail it a board and it would make the next move, and mail it back. You could get two Amigas to play chess against each other, and thus avert nuclear annihilation.
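To make the "port" idea concrete, here is a toy Ruby sketch (not real ARexx; every name here is invented) of what pervasive command ports amount to: each application registers a named port, and any script can address any port and send it commands.

```ruby
# Toy model of ARexx-style named command ports (all names invented).
# Each "application" registers a port under a well-known name; a script
# can then drive several applications through their ports.
PORTS = {}

class CommandPort
  def initialize(name, &handler)
    @handler = handler
    PORTS[name] = self
  end

  def send_command(cmd, *args)
    @handler.call(cmd, *args)
  end
end

# Two toy applications expose ports, as Amiga apps exposed ARexx ports.
CommandPort.new("CHESSPRO") { |cmd, *| cmd == "BESTMOVE" ? "e2e4" : nil }
CommandPort.new("MAILER")   { |cmd, *a| cmd == "SEND" ? "mailed: #{a.first}" : nil }

# A "script" glues them together, like the chess-by-mail example above.
move   = PORTS["CHESSPRO"].send_command("BESTMOVE")
status = PORTS["MAILER"].send_command("SEND", "my move is #{move}")
```

The point isn't the mechanism, which is trivial, but the convention: once "everything" has a port, scripts can combine applications that never heard of each other.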

"Screens" (we would now call them virtual desktops) were way better than virtual desktops, because applications are aware of them and can be configured to run on a specific screen. Thus you don't waste time setting up new virtual desktops, moving windows across, etc.; it's all automatic.

Assigns let you add globally valid short names for directories and devices. An assign can refer to multiple directories, and it can be deferred, meaning it is hidden until you refer to it. I really miss those on other OSes. Yes, you get links. No, they aren't the same. You don't have to create an assign in a location (they are always global), you don't need weird syntax for creating links, and you can delete them without fearing for your data.
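A rough Ruby sketch of the multi-assign part (names invented; deferred assigns are omitted): one logical name maps to several directories, and lookups search each target in order.

```ruby
# Toy model of AmigaDOS assigns: a logical volume name maps to one or
# more real directories; resolving a path searches each target in order.
# All names here are invented for illustration.
class AssignTable
  def initialize
    @assigns = Hash.new { |h, k| h[k] = [] }
  end

  # Like 'Assign FONTS: dir' -- repeated calls add targets (multi-assign).
  def add(name, dir)
    @assigns[name] << dir
  end

  # Resolve "FONTS:topaz" against each target until a file is found.
  def resolve(path)
    name, rest = path.split(":", 2)
    @assigns[name].map { |dir| File.join(dir, rest) }
                  .find { |candidate| File.exist?(candidate) }
  end
end
```

Because the table is global and purely logical, "deleting" an assign removes only the name, never the directories it pointed at, which is the property the comment contrasts with links.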

The OS is essentially a microkernel that uses message passing for asynchronous communication, and that uses dynamically loaded drivers for things like devices and file systems. For modern usage you would really have to add memory protection (it wasn't available on the hardware of the time), and resource tracking (it wasn't finished in time for the Amiga launch).

And its culture was just different. The Amiga had so many small, simple productivity tools (it also lacked big, complex productivity tools, but that's another story). You had simple, yet powerful programming languages, music creation tools, drawing tools, etc. The tools of today are orders of magnitude better, but that power has come at the cost of considerable complexity. DPaint 2 can't hold a candle to the Gimp, but at least I knew how to draw lines with it. And is there even a single music creation tool for modern OSes that isn't either a sample arranger or a fullblown score editor?

So it can certainly be done. The question is, will you end up with the same joyful platform and the same joyful culture? I sort of doubt that: without a captain on that particular ship, odds are it will quickly turn into a Linux reskin. The world already has enough of those, and even if that weren't to happen, part of its magic was in the applications it ran. Just ports of existing Linux tools wouldn't have that. So I'd love to see a proper AmigaOS port to modern PC hardware, and it would totally rock - but I just can't see it gain the same momentum it had back in the day.


With respect to ARexx, the language is certainly awful, but the ease with which even regular users could use it to script programs matters. Biggest of them all, though, was that we quickly got a cultural expectation that "everything" should have an ARexx port. It was brought up in reviews. Many applications were built around an IPC model where they took an event in without really caring whether it was an ARexx command, a GUI action, or a keyboard event, and dispatched commands, so it often became easy to expose "everything" to ARexx scripts.

> So I'd love to see a proper AmigaOS port to modern PC hardware, and it would totally rock - but I just can't see it gain the same momentum it had back in the day.

I've long toyed with the idea of trying to pull pieces out of AROS to do something like that which takes the good bits without the bad (e.g. losing memory protection is not something I'd like), but it's tricky. I think a "Linux reskin" by someone who actually is familiar with both would be the best bet. But it'd be a lot of work.

Maybe an "Amiga layer" to provide some of this. E.g. quite a bit of functionality could be done by a combination of FUSE filesystems to make things (e.g. assigns, datatypes) accessible to all apps, and then you can build libraries and tools separately to expose them in nicer ways where needed.

For screens, I think many of the programmable window managers would be a decent basis for providing APIs like that. E.g. bspwm, while predominantly a tiling WM, does support floating windows too, and pretty much everything can be controlled over IPC.

For ARexx, we have dbus, and while dbus has always seemed overengineered to me, providing a wrapper that provides a simplified interface might well be doable.

But you're absolutely right about the culture differences, and I think that's the biggest challenge. This worked on the Amiga because "everyone" coalesced around a set of standard tooling. To get something like that to work today I think you'd have to have a group of people willing and committed to expend a lot of time working to get support for these things into existing open source tools before people would start to see the benefits.


The Amiga was its own thing; it wasn't Linux + something. That gave it its own culture, and everyone just naturally went for that as well. Replicating that today would be a tough challenge. It was... I don't know, a different way of looking at things. Take starting a process: on Linux you copy the existing process, then throw everything away you just copied, and then load a new process. And apparently that strikes nobody as weird. The proper primitives, from an Amiga perspective, would be "create a memory space" followed by "load an executable into that memory space". Obviously the Amiga didn't have memory spaces so I'm just making this up, but it would give you the notion of memory spaces as something that exists separately from processes and threads. Also, it would remove the distinction between processes and threads entirely: there are only threads, running in one or more memory spaces. I mean, the existing primitives get the job done I suppose, but it just looks so... inside out, I guess.
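For what it's worth, that spawn-style primitive does exist today: libc offers posix_spawn, and Ruby's Process.spawn has the same one-call shape, sketched here.

```ruby
require "rbconfig"

# fork/exec vs. a spawn-style primitive, sketched in Ruby. Process.spawn
# is one call that creates a new process and loads an executable into it
# (libc exposes the same idea as posix_spawn), instead of copying the
# caller and throwing the copy away.
r, w = IO.pipe

# Child runs a fresh interpreter; its stdout is redirected to our pipe.
pid = Process.spawn(RbConfig.ruby, "-e", "print 6 * 7", out: w)
w.close
Process.wait(pid)
answer = r.read  # => "42"
```

Under the hood on Unix this may still be implemented with fork+exec, but the API no longer exposes the copy-then-discard dance to the caller.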

But rather than try to copy things like how Workbench looked (which I don't think anybody really cares about), it would perhaps be more useful to try to add the best Amiga parts into other systems. The pervasive scripting would be a good start: I'd love to be able to script as much as on the Amiga in Windows or Linux. Obviously it would require some thought on security, we don't need scripts running rampant.

I wouldn't couple it to a specific language, and in fact I don't think that was the case with ARexx anyway: the interface was open enough that other scripting languages could have been supported on the existing ARexx infrastructure.

I'm not going to do the work though ;-) Got enough going on ATM...


> Also, it would remove the distinction between processes and threads entirely: there are only threads, running in one or more memory spaces. I mean, the existing primitives get the job done I suppose, but it just looks so... inside out, I guess.

This distinction has been reduced to next to nothing in Linux. clone() is like fork() but allows selective sharing: both more sharing than fork (e.g. sharing the same memory space) and less (e.g. it can put the new process in new cgroup namespaces, etc.).

> (which I don't think anybody really cares about),

I don't care about the look, but I do care about the spatial layout of it. It really annoyed me when Gnome dropped support for that for Nautilus. My memory is very strongly spatial, and having windows reopen in the same spaces etc. was an awesome memory aid. Other than that I agree - I'd mostly use DiskMaster II for actual file management, and most "holdouts" today seems to favour Directory Opus Magellan (which can also be used as a Workbench replacement)

> The pervasive scripting would be a good start: I'd love to be able to script as much as on the Amiga in Windows or Linux. Obviously it would require some thought on security, we don't need scripts running rampant.

So I actually took a look at DBus again thanks to this discussion, and it obviously has had to deal with the security aspect; that seems to be solved largely by a separation into multiple buses. The problem as I see it with DBus, from what I've looked at, is that it wants to be a "proper" RPC system. That is, it's complex because you have to connect to a given bus, then to a given service, then find a specific object, then get the interface from the object. E.g. here's a Ruby example using ruby-dbus [1]:

    require "dbus"
    sysbus = DBus.system_bus
    upower_service   = sysbus["org.freedesktop.UPower"]
    upower_object    = upower_service["/org/freedesktop/UPower"]
    upower_object.introspect
    upower_interface = upower_object["org.freedesktop.UPower"]
    on_battery       = upower_interface["OnBattery"]
    if on_battery
      puts "The computer IS on battery power."
    else
      puts "The computer IS NOT on battery power."
    end
Whereas, adopting something closer to the ARexx way of doing things, we'd expect something more like:

    address "org.freedesktop.UPower" 
    if on_battery
       ...
This omits a lot of what DBus does, of course. DBus is clearly much more capable, in that applications can expose complex hierarchies of multiple buses which have multiple services which have multiple objects which have multiple interfaces, but it makes the "default case" far more complicated for users.

I think it's interesting to note that DBus is this complicated in large part because it imposes structure on the services that could have been built from a far simpler abstraction (allow messages directly to a service; provide the object/interface abstraction by using a reserved command/message name to query the service for capabilities; simple services would just process messages to the service endpoint - more complicated ones would work as before)
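As a toy illustration of that simpler abstraction (every name here is invented, and no real bus is involved), a service can be just a mailbox for textual commands, with the object/interface machinery reduced to a reserved introspection command:

```ruby
# Sketch of the simpler abstraction described above: a service is just a
# handler for textual commands. Structure is optional -- capabilities are
# discovered via a reserved "INTROSPECT" command rather than imposed on
# every service up front. All names are invented for illustration.
SERVICES = {}

def register_service(name, &handler)
  SERVICES[name] = handler
end

def send_to(service, command, *args)
  SERVICES.fetch(service).call(command, *args)
end

# A simple service processes plain commands; the reserved command name
# reports what it understands, so the default case stays simple.
register_service("upower.demo") do |cmd, *|
  case cmd
  when "INTROSPECT" then ["ON_BATTERY"]
  when "ON_BATTERY" then false
  end
end

caps       = send_to("upower.demo", "INTROSPECT")
on_battery = send_to("upower.demo", "ON_BATTERY")
```

A more complicated service could still layer objects and interfaces on top by dispatching on the command name, which is the point: complexity would be opt-in rather than mandatory.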

Reading the specification, it's not as bad as it looks. It has a lot of complexity that can be hidden, and e.g. the "interface" specification is optional (you "just" risk that the target object implements multiple interfaces with the same method names, so it's up to those writing services to tell you), but it seems many clients refuse to let you omit it easily (ruby-dbus allows you to choose the default interface to specify, but it doesn't look like it lets you omit it entirely).

It'd be interesting to try to tweak/wrap ruby-dbus to see if it's possible to hide enough of the complexity in a reasonable way. E.g. "guessing" the default object name and omitting interfaces unless specified would get you pretty close to decent defaults.

Running an introspection tool, there certainly are quite a few interfaces registered with dbus (and one of the things with dbus that ARexx doesn't have is the ability to register endpoints for programs that are not currently running that allows "activating" them), but if people see examples like the first one above you have to be really dedicated to want to keep diving into it.

> I wouldn't couple it to a specific language, and in fact I don't think that was the case with ARexx anyway: the interface was open enough that other scripting languages could have been supported on the existing ARexx infrastructure.

The port/command execution is open enough, certainly. A good example would be FrexxEd (one of the co-authors was/is Daniel Stenberg of CURL fame), which used "FPL" (a C-like language) as its main scripting language, but all the editor specific commands are shared between FPL and the ARexx port.

The language interpretation for ARexx is provided by ARexx - the hosts only need to provide the ability to execute commands.

In terms of providing the capability, I think you really only need the ability to simply send commands to a given port, and then shell scripts or whatever language people prefer can do the rest. An "rx" variant that either lets you specify the port to address or defaults to address whatever port is specified in an environment variable would make it almost seamless.

> I'm not going to do the work though ;-) Got enough going on ATM...

This is the problem. So many of us miss parts of what the Amiga does, but they're conveniences that are easy to miss and want but hard to justify putting in the effort on. Especially as many of them are just sufficiently dated that they'd need some rethinking, and suddenly it's a lot of effort.

[1] https://github.com/mvidner/ruby-dbus


That neat message passing design is also what made memory protection impossible, even on later versions of the OS. Sending a message from one task to another was really just sending a pointer, giving the receiver permission to use some of the sending task's memory temporarily. That's why it was so fast. Nothing was actually being sent. Still, it was cool stuff.


Using an OS with memory protection between processes doesn't make it impossible to use shared memory regions to pass messages between processes efficiently. Around twenty years ago, I worked on a project that built support for various telephony protocols on AIX. Amongst other things, we used System V IPC and shared memory to send signalling information and voice data between processes. We were able to achieve fairly low latencies using this approach; I don't remember the precise numbers but this was the sort of system where a few 10s of ms of latency would give the user a noticeably worse experience. Admittedly, this was on much more powerful hardware than the original Amiga (by this time, we had PowerPC CPUs running at hundreds of MHz) but I don't see why it couldn't have worked on older hardware.


Yes, sorry, I should've clarified that newer AmigaOS versions couldn't add memory protection without breaking compatibility with old apps, due to weaknesses in the APIs.


Got it, sounds like I misunderstood the point you were making there. Makes sense (and tbh that's a problem that afflicts most of the personal computer operating systems from that era).


Pointers and (library) code were shared all over the place, not just in the message passing system. Still, on a modern CPU you could certainly do an implementation that either copies small messages or passes an entire page for a longer message. You'd need some kind of special AllocMsg function that takes the size of the message you want to send, and let the OS decide whether it should be sent by remapping a page or by copying.


Don't modern OS's use page-based COW anyways for large copies?


Probably; it makes a lot of sense for large transfers.


Not really, QNX and Android do it quite easily with memory protection.


I should have clarified. It couldn't be done without breaking compatibility with existing apps. If I recall, Amiga Exec's "PutMsg" didn't even take a message length, just a pointer (message) and a port name.


Amiga broke compatibility with existing apps across hardware releases anyway, so?


Exec PutMsg / GetMsg is rather foundational. If they changed it, everything would break. With Amiga OS, if you wrote your apps to API spec, compatibility was rarely broken. I remember those big "Rom Kernel Reference Manual" books! I think I still have one collecting dust, at my parents house.

But yeah, if you did weird things like mucking with hardware registers directly or used undocumented fields of a struct, then yeah, you could have a problem.


Changing graphics and sound chips was more fundamental than OS APIs.


Indeed, that did happen with some games, especially with the newer CPUs. I remember my A3000 didn't run a bunch of stuff that worked on the 500. I wasn't sure if that was due to the ECS chips or the 68030.


No, that's not true. Both the hardware and the OS remained compatible, although enough differences crept in that applications could get into trouble anyway.

One common problem was when applications assumed all memory was reachable by the custom chips; the later addition of fast RAM (which was RAM that wasn't slowed down by the custom chips accessing it, thus starving the CPU of memory cycles) could break those. Another example is using the upper 8 bits of pointers, which was safe on the 68000 but not on the 68020 and up, which had a full 32-bit address bus. Making assumptions about OS structures was also a favorite of many hackers, and something that invariably broke on later OS releases. However, if you stuck to the API you were fine, and many applications run without problems on all OS releases from 1.2 to 3.1.


There were three generations of graphics hardware and sound chips, anyone doing any kind of serious use of them had to recode their applications to take advantage of the new capabilities.


AFAIK there was only ever one Paula.



rasz was still correct about Paula though. The sound chip didn't change.

Too bad AGA was too little, too late. If it was released a few years sooner, say with the A3000, it might've been different. ECS barely added anything over OCS, other than extra chip RAM. There were a few modes hardly anyone used.


In regards to the culture, I eventually found that it is easier to find common touch points alongside the Mac and PC cultures than trying to find it on Linux.

You put it quite well when describing the overall motto.


I don't want to bash the Amiga; actually, I really envy the history and capabilities of that system. Nevertheless, posts like this are almost always seen through a bit of rose-tinted glasses.

The part about DataTypes:

"Datatypes describe a file format and tell the OS how to handle it. Any datatype-aware program can use any file format as long as a relevant datatype is present."

Doesn't look that different from how codecs are used in modern OSes. AFAIK, as soon as ffmpeg or gstreamer learns new formats or how to use hardware codecs, so does every program using them, without the need of even recompiling. Of course, doing that 30 years ago is another thing.

And:

"RAD: is a fixed-size bootable RAM disk that survives soft reboots. Equivalents exist for most modern Operating Systems. They’re worth trying out. You don’t really know how useful they are till you lose them."

Maybe I'd change my mind if I used them, but I really don't feel the need. On my notebook, I simply close the lid, and when I open it, it is mostly in the same state I left it; some systems I use are stable enough to have months of uptime; using /tmp/ seems enough for me. I once had a power failure, and recovering whatever was on /tmp/ required booting from a keychain drive, but it was easy to do and a very rare occurrence.

The "RAD:" may be interesting on not especially stable, low-uptime, /tmp/-lacking systems, but I can't see how it can be relevant on modern OSes.


There is a fundamental difference in that you can drop in a new datatype without any program like ffmpeg or gstreamer having to learn new formats.

More importantly, how many applications actually use these as their primary means of accessing file types?

Chrome doesn't learn a new image type if I drop in a file. LibreOffice doesn't. Certainly not from the same one. Gimp doesn't. There's no reason why all of those should not be able to load a new image type without updating the applications themselves, just by dropping in a datatype/codec.

The concept isn't anything magical - or new - but it's still largely forgotten and ignored outside of small islands of use.

We have all kinds of little APIs to do this for specific types of files, e.g. ImageMagick's "convert" for images, ffmpeg for video, etc., as you mention, but what we lack is a unified API, the ability to drop in plugins for new file types, and for applications to actually unify around using a specific standard.
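As a sketch of what such a unified API could look like (a toy in Ruby, with invented format names), a datatype registry only needs signature sniffing plus drop-in loaders:

```ruby
# Toy datatype registry: each loader registers a magic-byte signature,
# and applications ask the registry to decode data without knowing its
# format up front. "Dropping in" a new datatype is just registering one
# more loader; every client of the registry gains it automatically.
# Format names are invented for illustration.
DATATYPES = []

def add_datatype(magic, &loader)
  DATATYPES << [magic, loader]
end

def load_any(bytes)
  _, loader = DATATYPES.find { |magic, _| bytes.start_with?(magic) }
  raise "no datatype for this file" unless loader
  loader.call(bytes)
end

# The system ships with one picture format...
add_datatype("ILBM") { |b| { format: :ilbm, size: b.bytesize } }

# ...and a user later drops in another; no application updates needed.
add_datatype("PNGX") { |b| { format: :pngx, size: b.bytesize } }
```

The real Amiga datatypes also handled saving and in-place display gadgets, but the load/convert half above is the part that is trivially portable.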

Heck, we even have a workaround to avoid needing application support: A FUSE filesystem to trigger conversion to suitable target filetypes.

This is the same issue with a lot of the things we're missing from AmigaOS. They're not hard to do. E.g. "assigns" are conceptually largely an OS-provided way of creating PATHs for everything you'd like - they can also easily be provided via a FUSE filesystem. DBus "almost" gets us to what ARexx provided, but is just not supported as widely, nor as simple for regular users to get started with.

What was special with the Amiga in that respect was that these things were embraced fully across the board, and integrated into the culture surrounding the Amiga.


Indeed. Systems like the Amiga show us what we have lost from the world of personal computing. Things we could do in the 80s and 90s have disappeared and been replaced by hodgepodges of barely interacting components. It is, frankly, disgraceful and we should be ashamed as an industry that we've regressed so far.


The idea lives on as COM, XPC and DBUS.

The problem is that many kind of ignore using them.


> What was special with the Amiga in that respect was that these things were embraced fully across the board

That’s just because it’s part of the OS. On a Mac you have lots of high level APIs to interact with programs and data that form the visible layers of the OS. On Linux there is a hard border between GUI and OS and a mechanism like this would be more at home in Gnome or other desktops.

Another factor that drove adoption is software scarcity. Back then it wasn’t trivial to just link to a library that reads GIF files and it was much easier to use OS services for that. The cost comes in portability. I can write a Mac or Linux app that can be compiled and runs on mostly anything, but the moment I use OS services, it’s no longer portable.


The thing is there's nothing preventing making a portable API providing the most important capabilities datatypes did. It's one tiny little library. The challenge is cultural, not technical. Without a reasonable hope of adoption, there's little value.

It's also not as simple as it being part of the OS, though space constraints certainly would favour OS libraries - the Amiga world is full of APIs that were adopted across the board despite not being part of the OS.

E.g. XPK is a good example: If you want to support compression in an Amiga app, you'll most likely use XPK, which gives you instant support for every compression algorithm someone wrote an XPK library for.

What is more likely to have mattered was, in general, whether or not a potential user would be likely to already have a library installed. My first hard drive was 20MB. What an application chose to install was a big deal, and that certainly encouraged people to think about sticking to options that were popular and likely to save space.

Regarding the "hard border", it's worth keeping in mind that while datatypes could be used for visual elements, it could also be used for "headless" load/save/conversion. Putting something like it in e.g. Gnome would be less valuable than providing the "headless" part, because the format conversion is the most valuable aspect. And that part is incidentally also the easiest part to make portable.


I mentioned Gnome because it’s an application platform as much as it’s a graphical environment. You write apps for it using its services to integrate with the desktop. Datatypes don’t make as much sense at the lower OS level because the OS is not supposed to be aware of the data types applications work with.


It makes little sense to talk about "lower OS level here". It's a library and a spec that can provide most of its functionality in an entirely portable manner. Datatypes basically consists of two separate parts: The class structure that provides the loading and saving and format conversion, and a widget ("gadget" in the Amiga world) that displays the loaded data. The latter would be hard to do on Linux without tying it to a desktop environment, but the former just needs a way to load data, save data and load and execute plugins.
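A rough sketch of what that portable, "headless" part could look like: a tiny registry that sniffs data and dispatches to drop-in plugins. All the names here are made up for illustration; this is not the AmigaOS datatypes API, just the shape of it.

```python
import importlib.util
import pathlib

class DatatypeRegistry:
    """Minimal sketch of a datatypes-like plugin registry.

    Each plugin is expected to expose:
      - can_handle(data: bytes) -> bool   (magic-byte sniffing)
      - load(data: bytes) -> object       (decode to a common model)
    All names are hypothetical.
    """
    def __init__(self):
        self.plugins = []

    def register(self, plugin):
        self.plugins.append(plugin)

    def load_plugins_from(self, directory):
        # Drop-in discovery: any .py file in the directory is a plugin.
        for path in pathlib.Path(directory).glob("*.py"):
            spec = importlib.util.spec_from_file_location(path.stem, path)
            mod = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(mod)
            self.register(mod)

    def load(self, data):
        # First plugin whose sniffer accepts the data wins.
        for plugin in self.plugins:
            if plugin.can_handle(data):
                return plugin.load(data)
        raise ValueError("no datatype recognises this data")
```

New formats get supported by dropping another plugin into the directory; no application that calls into the registry has to change.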


> unified API, the ability to drop plugins in for new file types, and for applications to actually unify around using a specific standard.

There have been attempts, such as OLE and ActiveX on Windows, or KParts in KDE 2 (and dcop from Qt 2) (or was it 3?), or Intents on Android, or WebIntents, but yeah, none of them became universal, and many of them had security problems.


I think all of these illustrate part of the challenge: They're all tremendously complex compared to the basic functionality of datatypes. That is part of the problem with many of the attempts to do the same again. A lot of the constraints AmigaOS had to deal with became strengths in that these APIs had to be simple because they had to work on machines with memory measured in kilobytes or megabytes.

Of course it's also more complex to do on a system where you expect security (the lack of memory protection on Amigas made that rather moot), but I think a targeted attempt to do just the minimum datatypes does - or even less - would still be valuable. E.g. many of the security issues are related to embedding a widget, but a unified mechanism for just loading and saving could easily be sandboxed and interfaced to via a dumb pipe (there are still security issues in validating what's passed over that, of course).


Windows has followed up the idea with COM, for example, the datatypes version would be WIC.

https://docs.microsoft.com/en-us/windows/win32/wic/-wic-lh

However it is exactly as you put it, it is exposed at application level, so unless an application calls into WIC, it will never be made aware of new WIC components deployed into the system.


> There is a fundamental difference in that you can drop in a new datatype without any program like ffmpeg or gstreamer having to learn new formats.

It's not an interesting or useful difference. The thing Datatypes lets you do is pointlessly spell things differently. Once you spot that you ask yourself, "Why am I doing this?" and the answer is only that you need to interoperate with other people who hadn't settled on one way to spell things so the best route forward is consolidation.

Most applications gain nothing from having, say, yet another mediocre bitmap graphics format. You just made your system more complicated without any improvements whatsoever. Clearly we should agree to just stop doing this. And we did.

> Gimp doesn't

Actually Gimp does. Gimp's file format handling is all via plug-ins. As with ImageMagick in this context maybe you actually needed to rescue crappy 1980s files you found on a floppy disk or whatever. But this helps us see what's going on - because Gimp's image file format handling is only for import and export of necessity Gimp itself is richer than any of these formats. Its native XCF format reflects a superset of the features you can care about.

But anything Gimp's native images can't do, you can't "add" using these plug-ins. The SVG plug-in for example can't magically make Gimp understand vector images, it just has to render the vector image as pixels because that's all Gimp understands.

So in the end any innovation is rendered pointless by this abstraction, the only way for an innovative idea to flourish is to sidestep a "datatypes"-like abstraction altogether.

> actually unify around using a specific standard.

You've almost understood. Instead of adding a layer of abstraction, and then unifying around this unnecessary layer, we instead chose to just use a specific standard format. The problem you're intent on solving vanished.

Didn't you ever wonder why Datatypes seemed so brilliant for image files, and kinda sorta OK for music, and then basically of no value at all for most other file formats? It's because there really were sixty different crappy pixmap formats that were completely interchangeable and half a dozen PCM audio formats likewise. But that's not what happened in other domains, because it's pointless.


> But anything Gimp's native images can't do, you can't "add" using these plug-ins. The SVG plug-in for example can't magically make Gimp understand vector images

No, but an SVG datatype can present itself as both bitmap and vector and an application that wants a bitmap can still read an SVG as if it were a bitmap.

> It's because there really were sixty different crappy pixmap formats that were completely interchangeable

Any data type that’s generic enough can present itself in multiple ways as long as it’s possible to translate (even if with losses) to those ways.
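A toy sketch of that idea (the class name and the rasteriser are made up; a real datatype would actually render the paths): one object, two views, so a bitmap-only application still gets something usable while a vector-aware one keeps full fidelity.

```python
class SVGDatatype:
    """Hypothetical sketch: one datatype, two presentations."""
    def __init__(self, svg_source, width, height):
        self.svg_source = svg_source
        self.width, self.height = width, height

    def as_vector(self):
        # Lossless view for vector-aware applications.
        return self.svg_source

    def as_bitmap(self):
        # Stand-in for a real rasteriser: a blank pixel buffer of the
        # declared size. A real datatype would render the SVG here,
        # which is a lossy but always-available fallback.
        return [[0] * self.width for _ in range(self.height)]
```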


And indeed there's an SVG datatype, with an associated generic vector superclass on Aminet, updated last month, as well as a Cairo based SVG class that renders to bitmaps.


Cairo on an Amiga? That’s seriously cool.


Cairo makes very few platform assumptions in general.


> Most applications gain nothing from having, say, yet another mediocre bitmap graphics format. You just made your system more complicated without any improvements whatsoever.

WebP is much newer than most Amiga applications. Since someone wrote a WebP datatype, an Amiga word processor written in 1994 can embed one in a document. An Amiga graphics app released in 1997 can open one and edit it. Literally every Amiga app that can use the datatypes API now supports it, without a single recompile or changed line of code, because someone wrote that loader. That’s pretty powerful.


> That’s pretty powerful.

That's literally just spelling. The exact same pictures could have been added to the document in 1994, only the file format changed.

People keep doing this, as others have observed. And they always get stuck in the same place: they can make different raster image formats work, although each time this happens they find there are fewer anybody cares about, and they can do the same trick with PCM audio, although again with few options anybody cares about (MP3 is about as exciting as you get these days), and then they run out of steam.

This is not a rich seam of unexplored possibilities, it's a small hole in the ground that people keep clambering down into - certain they'll find treasure and then disappointed when it is in fact just a small hole in the ground.

It's the Oak Island money pit of technologies.


No, it's not "literally just spelling". It is additional manual steps vs. having the computer do things for us.

You keep arguing for inconveniencing users and developers, who still need the conversion tools anyway, to avoid standardising the conversion tools. It makes absolutely no sense.


> No, it's not "literally just spelling"

Yes, it is. You don't even try to pretend otherwise, the format isn't delivering new semantics here. The existence of the extra formats that caused Datatypes to be created is an artefact of history, like TIFF, and best left there.

> It is additional manual steps vs. having the computer do things for us.

"Having the computer do it" for every format in every program incurs an unending maintenance and security burden for all systems across all time, whereas lift-and-shift averts that.

> You keep arguing for inconveniencing users and developers

You have chosen to inflict misery on yourself, I don't have any part in that.

> It makes absolutely no sense.

And yet I suppose that today you will continue as before, blaming others for things you choose, and perhaps lamenting that whichever bunch of crooks currently own "Amiga" aren't shovelling more money into the pit.


> Yes, it is. You don't even try to pretend otherwise, the format isn't delivering new semantics here. The existence of the extra formats that caused Datatypes to be created is an artefact of history, like TIFF, and best left there.

The formats exist and are being used, and new ones keep being created. That is the problem. You can keep pretending we don't need to deal with them. Maybe you don't, but I do have to deal with format conversion on a daily basis, as I don't live in a fantasy world where everyone chooses to use the formats I would prefer.

> "Having the computer do it" for every format in every program incurs an unending maintenance and security burden for all systems across all time, whereas lift-and-shift averts that.

Having the computer do it for every format in every program in terms of the actual conversion is exactly what we have today because of the lack of use of things like datatypes. On top of that we have piles of conversion tools to deal with moving data between programs that don't implement the same set of formats.

What we're lacking is automation and deduplication of effort.

There's no added security concern in automating the execution of code we already execute.

If anything avoiding crappy reimplementation of formats all over the place would be a substantial reduction in complexity and make it easier to actually put in the effort to produce something more robust.

Meanwhile what you're engaged in is meaningless sophistry given that your proposed solution of just getting rid of these formats is not an option available to us.

> And yet I suppose that today you will continue as before, blaming others for things you choose, and perhaps lamenting that whichever bunch of crooks currently own "Amiga" aren't shovelling more money into the pit.

I don't care who currently own Amiga. It's entirely irrelevant to this conversation.

But your non-solution does not become any more of a solution whether or not I get the time to do something about my workflows.


> Most applications gain nothing from having, say, yet another mediocre bitmap graphics format.

Applications gain a tremendous amount of usability for me by being able to read or write the formats I actually use. Instead we rely on a hodgepodge of scripts and tools to do format conversion manually.

Amiga applications released decades ago can load webp files for example, whether or not the people who wrote them are still releasing updates or are even around. It might not matter to you. It matters to me that files keep being easily accessible.

Today there are plenty of tools I use regularly that can't load or save various formats I regularly use directly. That is pointless, when providing a mechanism to reuse conversion was a solved problem decades ago.

> Actually Gimp does. Gimp's file format handling is all via plug-ins.

But it's not reusing system-wide plugins, which was the point.

> You've almost understood. Instead of adding a layer of abstraction, and then unifying around this unnecessary layer, we instead chose to just use a specific standard format. The problem you're intent on solving vanished.

Unfortunately you failed to understand. We haven't unified on a standard format. We never will, because we get new requirements regularly, and people also create new formats for arbitrary and stupid reasons as well. Format conversion will always be necessary to handle formats produced by different tools.

We can choose to do that "out of band", or we can choose to re-implement it for applications time and time again, or we can move towards a unified system. Today we do the first two. A tremendous amount of time is wasted implementing conversion and loading and saving for different applications, and a tremendous amount of time is wasted on doing out-of-band conversion. I've spent the better part of the last month dealing with crappy data conversion because a tool I need to feed data into implements its own conversion in a badly broken way instead of reusing a better one, forcing me to go via another format. The amount of developer effort wasted on this bullshit is massive, and the amount of inconvenience for users is too.

> Didn't you ever wonder why Datatypes seemed so brilliant for image files, and kinda sorta OK for music, and then basically of no value at all for most other file formats? It's because there really were sixty different crappy pixmap formats that were completely interchangeable and half a dozen PCM audio formats likewise. But that's not what happened in other domains, because it's pointless.

No, I didn't wonder that, because the limitation there was that Commodore went bankrupt and datatypes was not progressed to cover additional domains, and the Amiga community was not large enough to take on the task of covering more complex structured data options. Meanwhile we still depend on conversion filters for all kinds of other formats all the time for e.g. office suites. So no, it's not pointless.


The appeal of RAD: makes more intuitive sense if you think of it as a kind of clipboard that you access via filesystem operations, rather than clipboard operations (though the Amiga had a clipboard too). It did make more sense in an era when disks were slow and a huge proportion of Amigas had no hard disks. Thinking about it now, it might be interesting if someone developed a FUSE interface to the system clipboard on modern OSes.


The Amiga clipboard is actually quite interesting. When an application posts a clip to the clipboard device, it resides in that application's memory until the clipboard device requests it "post" that data somewhere. This makes sense on a low memory system - if an application can ensure the "clipped" data isn't modified, it can just keep it as is without making a copy until it's asked for it.

When "posted" to the clipboard (either in response to pasting, or because the application itself wants to, e.g. because it's being closed), the clip is written as an IFF file to the assign "CLIPS:". Like any other assign, CLIPS: can be reassigned anywhere, including RAD: or RAM: or disk. Unless I misremember badly, CLIPS: defaults to a directory in RAM: possibly indirectly via T:

Since AmigaOS will request volumes that aren't currently in a drive, you could even reassign the clipboard to a floppy with a given label, and the OS would pop up a requester asking for it to be inserted as needed.
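The deferred-copy behaviour described above can be sketched in a few lines (hypothetical names; this is obviously not the real clipboard.device API). The owning application registers a callback instead of copying data immediately; the data is only materialised ("posted") when someone actually pastes, or when the owner flushes it, e.g. on exit.

```python
class LazyClipboard:
    """Sketch of an Amiga-style lazy clipboard."""
    def __init__(self):
        self._provider = None   # callable returning the clip bytes
        self._posted = None     # materialised copy, if any

    def clip(self, provider):
        # Owner promises not to modify the data; no copy is made yet.
        self._provider = provider
        self._posted = None     # a new clip supersedes the old one

    def flush(self):
        # Owner is going away (or was asked to post): materialise now.
        if self._provider is not None:
            self._posted = self._provider()
            self._provider = None

    def paste(self):
        if self._posted is None:
            self.flush()
        return self._posted
```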


> Since AmigaOS will request volumes that aren't currently in a drive

That's a great feature and it should be forward-ported to modern OS's.


It was, but it's a feature that's much less important today with less tight integration of removable media. Though it meant e.g. software on multiple volumes could just try to access a file on one of the other volumes and let the OS deal with prompting for it, even that is a quite rare operation today.

You could do it with FUSE relatively easily: Make a FUSE filesystem that acts as an overlay, so, say when you access /<path to the fuse system>/foo it looks for /<path to mount>/foo and if it doesn't find it, pop up a window with "Please insert volume 'foo'"

Frankly there are a lot of these things that would be quite straightforward to support on Linux that'd be "nice to have" but are not quite essential enough to motivate someone to do it.
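As a sketch of just the lookup logic (the part that would sit inside a real FUSE Operations class; `resolve` and `prompt` are made-up names): if the backing path is missing, keep asking the user to "insert" the volume until it appears or they give up.

```python
import os

def resolve(mount_root, rel_path, prompt):
    """Overlay lookup: return the backing path, prompting for the
    missing volume like AmigaOS would. prompt(msg) returns False
    when the user cancels."""
    backing = os.path.join(mount_root, rel_path)
    while not os.path.exists(backing):
        volume = rel_path.split(os.sep)[0] or rel_path
        if not prompt(f"Please insert volume '{volume}'"):
            raise FileNotFoundError(backing)
    return backing
```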


RAD isn't really relevant anymore, but it was very handy on a floppy-based system without a hard drive.

Data types, on the other hand, were absolutely amazing. Sure, ffmpeg covers some of it, but datatype support was, and remains, a killer feature of AmigaOS. Want almost universal support for a new format, including support by legacy applications? Datatypes. Want to replace a common format's encoder with something else? Datatypes.

The other killer feature of AmigaOS applications was the AREXX support, which allowed advanced scripting far before it was available on other consumer platforms.


Another thing RAD was useful for: there is a reason you rebooted an Amiga more often than you'd reboot your modern laptop. Due to the lack of memory protection / virtual memory (no hardware MMU), a crashed application brought down the whole OS (the (in)famous Guru Meditation failures). After rebooting, the contents of RAD were still there (unless overwritten by a buggy program).


It was a different world then. RAD: was pretty amazing in 1989, when I added another couple megs of RAM to my Amiga 500. It took 30+ seconds to boot off of floppy, and a few seconds to boot off RAM disk.


A lot of computer history is filled with machines that were obviously a link in a chain between earlier times and later times. They felt "transitional", in a sense. To me, both the Amiga & the original Mac had a "ding an sich", a such-ness that made them feel timeless. They stand alone as objects (art?) that can still be enjoyed today in and of themselves, no excuses needed.

There are probably a bunch of other machine candidates that feel this way to someone out there!


> To me, both the Amiga & the original Mac had a "ding an sich", a such-ness that made them feel timeless.

I know the exact sensation you describe here. But I can never decide whether it's a property of the thing itself, or where I was in my life when I stumbled onto it. My hunch is that it's mostly the latter: artifacts feel seminal when they hit as a formative time when we first gained a new capability.

I'll always have a fondness for the Apple IIe and the early Macintosh because those were what I had when I first started using computers to create. I'll always love late 90s electronica because that's when I fully discovered my love of music. My Lodge Dutch oven will hold a special place in my heart because I got it right when I was really learning how to cook.


We're probably of very similar age, as I feel much the same.

That being said, I do think there is something special about that era of machines, even outside of our right-place/right-time nostalgia for them. I cut my early computing teeth first on a Timex 1000, later on a Commodore 128, and finally an Amiga.

The Timex was... ok. It was fun enough to faff about with, but so limited that it didn't really capture my attention in a special way. The Commodore? Amazing. It is responsible for my entire technical career trajectory. I loved exploring its edges. It was a powerful-enough system that Real Things™ could be done, yet still simple enough that the entire machine could live in your head, so to speak. The Amiga? It was amazing, but it was almost too much. I couldn't hold the whole machine in my mind any more, and had to rely on a greater level of abstraction to get things done. What was doable was incredible, but there was an undeniable sense of loss, too. At least for me.


I think some things are maybe just timeless. I was too young to fully appreciate computers around the time of the Amiga. My only real memories of computers of that time, apart from games, is being a kid and learning how to write loops that repeated text on my dad's Atari.

But, I've read a lot over the years about computers from then and have spent a lot of time on modern computers.

The Amiga, from what I can tell, was actually special, not just from a nostalgia sense.

They seemed to push the boundaries of computing at the time, gave a glimpse of what could be but were stuck in that inbetween phase and just couldn't quite make it.

I think they deserve all the praise they get. They seem like they were one hell of a machine for the time and despite the huge diversity of things like linux distros and such these days, things like the RPi and various comparable devices, a huge range of consoles, phones and other arm based things, there really isn't anything like the old days of computers where everything was different and devices were made to be tinkered with.

Not to say everything about that era sounded awesome or better or whatever, but something does seem to have been lost that machines like the Amiga seem to embody.


It's the thing itself.

Some designs are made by passionate talented people pushing the limits of their own skills and imaginations to explore what might be possible.

And others are made by mediocre people throwing some minimal-cost low-effort derivative crap together to make as much money as possible.

One is an inspiration. The other is kind of offensive.

I suspect a lot of computer nostalgia is really about chasing that first feeling. I also suspect that's a lost cause, because you can't find the limits of imagination on the trailing edge.

Of course you can get echoes of it. Which is almost reassuring - but I really wish there were more high-skill high-imagination projects in computing today.


Not long ago I dismantled my first ever computer. An old Macintosh Classic with 4 MB RAM and a 40 MB hard disk. It was broken beyond repair, and since then I have been looking for a way to connect the old disk to anything modern. Sadly I can't find a connector that is halfway decently priced.

I will keep looking, but breaking up the thing that was in one way or the other part of my life for >25 years felt very strange. So many memories. So many firsts. And I believe that this is a big part of that feeling of specialness such things have.

A new laptop doesn't provide any firsts for me. Nothing new. Maybe faster. Maybe a better keyboard. But nothing new.


I had similar feelings when breaking down my family's old Gateway 2000 for scrap. That was the computer on which I started learning about how operating systems work (or don't, if you delete system files to free up space). The experience from that computer and the Apple ][e that preceded it has really shaped my trajectory since then.

PCI SCSI controllers are around and fairly cheap, looks like there are even some PCIe ones out there (not as cheap). I've never gotten as far as the software side of it though. I imagine one might create an image of the disk, and use it with Mini vMac, but I think I still need to figure out the SCSI setup. (I have a PCI SCSI card that I was attempting to use to interface with some tape drives-- different story.)


To me the Amiga felt less transitional and more like a branch cut short. These days everything is better, but also nothing is.


Still using my heavily upgraded G4 Cube on a regular basis. The transitional part you're referring to is the Dot-Com times this Mac reminds me of — In addition to simply being a joy to look at.


vaxstation/microvax + 4.3BSD + printed USENIX manuals is pretty much the quintessential mid-80s internet machine


Just use SIMH, tun/tap, Xephyr and xhost.


I think it probably is age related because I feel the same way about the iMac G3. That machine was kicking around schools well beyond its actual useful life. Even when they upgraded them to OSX the OS9 login screen was still burned into the phospor.


The Xerox PARC and ETHZ workstations.


I cannot shake some fondness for those old blocky UIs. It's so less capable than 2020s js/css3 but still it's so grounded to me.


They are far more capable. UI is a strange type of visual functionality. The late 80s and 90s UIs manage to minimise ambiguity and make implied function painfully clear... never did anyone hunt for a button amongst a sea of non-interactive content in any of these UIs.

To be blunt, a lot of modern UIs are like pretty glass Norman doors... lest you dirty our pretty frosted glass with your filthy hands for the sake of usability, what will the stakeholders think!

I am not blind to aesthetics, but I believe form should emerge from function, as per Dieter Rams. Hopefully we will swing back towards prioritising function, but it doesn't have to look dated.


I agree, the current era got drunk on visuals and capabilities (animations that quickly stop providing any useful value). It's also a sign of the over-mainstreamification of computing. Old layouts required you to think just a little more upfront, I believe; now it's very playful but less efficient.


What's less capable about it? To me it looks less slick, but in terms of affordances it's way better.


dom + css (+ layered compositing) is an extremely versatile and free form UI system

the old ui widgets can't really (I assume) vary that much in properties or location (the layouting is stricter and will break if you push it too hard)


You'd be surprised how flexible some of the GUI toolkits on the Amiga actually were. MUI is based on the concept that layout is dynamic and flexible.

It would dynamically calculate a minimum window size based on the components inside and you could then resize the window as you wanted with things calculating and resizing as you go, adapting to font selections and themes.

The actual implementation is archaic by today's standards but it did use attributes on UI objects to determine min/max sizing and other things, similar to CSS in concept.
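The min-size propagation MUI does can be sketched like this (illustrative class names, not MUI's actual API): group containers derive their minimum size from their children, so the window's minimum size falls out of the layout tree.

```python
class Box:
    """Leaf widget with a fixed minimum size, loosely modelled on
    MUI-style min/max size attributes (names are hypothetical)."""
    def __init__(self, min_w, min_h):
        self.min_w, self.min_h = min_w, min_h
    def min_size(self):
        return (self.min_w, self.min_h)

class HGroup:
    """Horizontal group: widths add up, height is the tallest child."""
    def __init__(self, *children):
        self.children = children
    def min_size(self):
        sizes = [c.min_size() for c in self.children]
        return (sum(w for w, _ in sizes), max(h for _, h in sizes))

class VGroup:
    """Vertical group: heights add up, width is the widest child."""
    def __init__(self, *children):
        self.children = children
    def min_size(self):
        sizes = [c.min_size() for c in self.children]
        return (max(w for w, _ in sizes), sum(h for _, h in sizes))
```

Resizing then distributes any extra space among the children, but the minimum window size is just the root group's `min_size()`.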


oh ok, I assumed wrong. Is there any video I can see about that ?


https://www.youtube.com/watch?v=KNEwb7eRmXo

Sadly no font sensitivity on display (possibly because Amiga owners take it so much for granted that they don't think of it as special), but it shows what MUI can do in terms of GUI elements, window resizing, etc.

After leaving the Amiga for Windows, the three biggest things I hated about its GUI were all those endless non-resizeable windows, the complete lack of font sensitivity, and the horrible scrollbars that didn't show your actual content size (and that snapped back if you moved out of some invisible box). And here we are, twenty years later, and nothing has improved at all...


Just have to say I loved my Amigas back in the day. First an A500 and then a dual floppy 1500. Never quite recaptured that feeling with Linux.


If you ever have an opportunity play around with an SGI Irix box, an Indy or an Indigo.


I've got such opportunity. Now, every time somebody jokes about the "it's a UNIX system. I know that!" I have to explain that that file manager actually exists but still looks like sci-fi for mere mortals even today.


Irix was the most clean and integrated Unix for the desktop that I've ever used, even Sun didn't come close with their offerings.


The most shocking thing on the Suns back then compared to the SGIs was the mouse movement. While the SGIs had smooth movement, Sun's mouse ended up jumping dozens of pixels between refreshes.

Suns were not really designed for desktop use - they more or less aimed for the generic case. SGIs, however, were built to be experienced.


I remember the first time I used a Sun Workstation. From a PC background, I was impressed by the fact that the keyboard housed the beeper, the keyboard was a serial device and the bios equivalent had a built-in command line.

It is a shame that such a small set of features jumped from workstations to the PC.


The SPARCstation 1 was the first of the family to have more than a buzzer. Sun 3’s could only beep.


When I went to engineering school in 1995, most of the computers on campus were UNIX machines… Sun SPARC in the dorm, IBM AIX machines, a few others. But the fastest and shiniest by far were the SGI IRIX machines. They not only had a webcam built in, but ran at (IIRC) 200MHz. These all were my introduction to not only UNIX, but also the internet. I first got ImageMagick compiled and running on an Indy, and later, even played DOOM in the lab. <3


That webcam is the very reason why I wrote that first version of live streaming video on the web :)

It was the first camera that came with a computer and it had a pretty easy interface from C so that + a small embedded HTTP server and we were off to the races.

The funniest bit to me is still that people simply would not believe they were looking at a live image from the other side of the world. More than once I had to go in front of the cam and wave at people or show them some text :)

Eventually I automated that by putting a remote controlled fan + light (and a mobile of paper cranes) in front of it, but then people would claim that I was faking it. Tough crowd :)


Wow! I’m just a casual HN user but I find it genuinely cool that people as accomplished as yourself post here.


I went from an Amiga at home to an Indy at university (my uni got a massively good deal from the SGI distributor to outfit a couple of the computer science labs; nothing like rows of Indys placed next to tired old Sun terminals with monochrome screens to make SGI look like the hot new thing), and while I never particularly liked Irix, it certainly felt closer to "home" in a way that Windows or even Linux didn't at the time.


I had an SGI Indy on my desk, back in 1997 or so. Definitely a fun machine. I remember the web cam, and doing full video conferencing (and bandwidth hogging most of a T1!)


I can beat that :) We were bandwidth hogging the transcontinental backbone to the point that I got a testy email from the maintainers that they were going to block port 2047 in a couple of days so if I wanted to act it had better be quick. That's how we ended up with an office in Canada (300 meters from Front 151 with a nice fat fiber).


What this kind of UNIX systems have versus Linux, is that when you develop an application for a SGI Irix, you get a full stack of frameworks and every Irix is the same experience.


Me neither, because Linux is not an experience of combined hardware and software, and every couple of years everything gets rebooted because reasons.


There is a coffeetable style book on Amiga demoscene https://www.editions64k.fr/index.php/product/demoscene-the-a...

Volume 2 is on IndieGOGO


Dreaming about getting my own Macintosh plus and be productive on it, too. Wonder anyone doing the same? It's also a good way to learn low level programming on a 68000 chip.


I'd recommend a later machine with a 68030 at least, the plus is pretty damn slow

maybe a classic II or an SE/30 if you want the original form factor


The SE/30 is actually a pretty amazing machine. It could run A/UX, which meant a very decent Unix workstation in the classic Mac form factor, with X apps running alongside MacOS (classic; System 7-era) apps. One of my hallmates in uni had one.


And to test A/UX, there's the Shoebill emulator: https://github.com/pruten/shoebill


Thanks! Both look awesome. I'm going to run some emulation to see whether I'm comfortable with any and check out the price and inventory.

I guess most of the expense is spare parts and shipping?


SE/30's have become harder to find cheaply, because they now have the reputation as The Best Compact Mac. I've been looking for one here in the San Francisco area for a while now and had no luck; I don't want to get one shipped because I don't trust it to arrive intact.


I managed to get one for about $600 on ebay, a few years back. I always wanted one in my youth. It took almost 30 years to fulfill my dream.


I think the Amiga One is expensive considering it uses a low-end PowerPC chip. Some Mac Mini G4s can run AmigaOS 4.x, I heard.

If you want a Free AmigaOS try AROS: https://aros.sourceforge.io/ it can run on PCs and 68K Amigas and has a Kickstart ROM replacement.


Wait, so... Amiga OS 3.2 is newer than 3.9?


Yes. Both are updates to Amiga OS 3.1.

Amiga 3.5 and 3.9 were created by a German company back in 1999/2000 and require a 68020 or better processor.

https://en.wikipedia.org/wiki/AmigaOS#AmigaOS_3.5,_3.9

3.1.4 and 3.2 were created by Hyperion Entertainment. By the looks of it they support all the 68000-series processors, and as stated in the article they are more recent.


Yup.

They also were not developed by the same company. 3.5 and 3.9 were developed by Haage & Partner, 3.1.4 and 3.2 were developed by Hyperion Entertainment.

There's even an AmigaOS 4.0 from Hyperion which is PowerPC only.

For extra bonus points, there's MorphOS which is a heavily AmigaOS influenced OS. PowerPC only, runs on some of the older PPC Macs.

And there's also the open source AROS reimplementation of AmigaOS 3.1 (with a lot of extras).

(I'm sure I got a lot of details wrong, been years since I last did Amiga stuff.)


In that Unstable Radio sound set (which is great by the way!)[1], do you think the trackers on the 2 Amigas are synced somehow? Or he is just very skilfully starting the clips at the right time?

I guess it's the latter, but I was wondering if any of the classic mod trackers had external sync capabilities. Back when I was still using them I never investigated if I could sync.

[1] https://www.youtube.com/watch?v=JpTvMSA4m04


Does anyone have any idea how much of this could be done on stock Amiga hardware (without hardware upgrades)?

I’m not anti-upgrades—-just curious :)


Depends on the system. My A1200 (6 MB RAM, HD) is the same spec as it was back in the day and will run Wordsworth, Protext and DPaint IV just fine, but it doesn't really do web browsing. I added a basic PCMCIA WiFi card though and it copes with FTP okay over open WiFi.

Don’t expect a screen resolution anything like the ones shown in the article though, and RAM and the 68020 quickly become an issue when working with more recent files. The author has a lot more RAM than me and it definitely shows.


> Don’t expect a screen resolution anything like the ones shown in the article

Up to the 3000, IIRC, Amigas were tied to TV frequencies, so you wouldn’t get more than 640x200. You could do interlacing and get 400 lines, but you’d want to claw your eyes out within 10 minutes.
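A back-of-the-envelope sketch (my own numbers, not from the thread) of where those line counts come from: with the chipset locked to broadcast scan rates, lines per field is just the horizontal scan rate divided by the field rate, minus blanking/overscan.

```python
def lines_per_field(h_scan_hz, field_hz):
    """Total scanlines drawn in one field at fixed TV timing."""
    return h_scan_hz / field_hz

# Standard broadcast rates: NTSC ~15.734 kHz / 59.94 Hz, PAL 15.625 kHz / 50 Hz.
ntsc = lines_per_field(15734, 59.94)  # ~262.5 total lines; ~200 usable on screen
pal = lines_per_field(15625, 50.0)    # 312.5 total lines; ~256 usable on screen

print(round(ntsc, 1), round(pal, 1))  # prints: 262.5 312.5
```

Interlace paints two vertically offset fields per frame, doubling the visible lines to ~400/~512, but the full frame then only refreshes at 30/25 Hz, hence the flicker described above.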


I had an Amiga 1200 and got it working with a VGA monitor via an adapter soldered onto the connector (like the VGA adapters on eBay now) and a VGA setup file in Workbench 3. It looked great in 640x480, no chunkiness. So it is possible.


The 1200 launched after the 3000. I think it was the last model to be launched.


On my A600 with 1 MB RAM it was a challenge to use the internet, especially using a TCP/IP stack that had a MUI GUI (I don't recall the name). To have enough memory for AmIRC and the like after connecting to the internet, I had to use a custom startup config where Workbench did not load.


I love that demo. You can definitely see how much better something on the Amiga can become once you feed it better samples and images. It looks so much newer, while it's the same A500 I started with in the mid-80's!


Only Amiga Makes It Possible!



