> Nowadays it's almost impossible to uninstall an app completely, because most of them create files willy-nilly.
This has always been the case on Windows. In fact, if anything, it's better nowadays than it's ever been: thanks to UAC and other controls Microsoft has put in place, developers aren't so free to do whatever they like to the host machine. But I remember a time before UAC when it was common practice to reinstall the OS on a semi-regular basis (not something I personally engaged in, but a great many of my peers used to).
> And it’s same on all known OSes
It really isn't. On platforms with a proper package manager you can query what files get installed where. A great many package managers even let you query a file on the file system and see which package installed it.
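For example, on Debian-based systems it's a one-liner (a sketch; the Arch equivalent is `pacman -Qo` and the Fedora/RHEL one is `rpm -qf`):

```shell
# Ask the package manager which package owns a file on disk.
# Querying the dpkg binary itself, since its path is stable:
dpkg -S /usr/bin/dpkg
```

This prints the owning package followed by the path, e.g. `dpkg: /usr/bin/dpkg`.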
Of course you still have the problem of the software writing files during its operation but that should be limited to $HOME (on POSIX systems) or any path that is writable by the owner / group of the user that application runs as (which should be limited even if it’s a system service).
It really is. I'm not talking about app binaries only, but about all the files an app creates after install. Most of them reside in the home dir but stay there forever - various cache files, settings, and so on. And most of the time they are not confined to a single dir.
That has always been the case though. For as long as I've used Linux as a desktop, my $HOME directory has been littered with dot-files and folders. And as for Windows, things used to be so much worse. Since UAC, Windows applications have been limited in where they can write, lest they annoy their users with frequent escalation prompts. Before UAC, developers often used to write files all over the place - it was a complete nightmare! In fact one of the primary purposes of UAC - as I recall - was to rein developers in.
Even UAC aside, on Windows you now have the application data directory and permissions on the registry, which both reduce the reliance on random files dumped anywhere. Before then Windows was like the wild west. And we're not talking that long ago in terms of the history of Windows - Vista was released 11 years ago and it took a few years after that for developers to catch up.
Plus with the trend of moving everything to the web, you're getting fewer native applications which can write those random files in seemingly random locations (that's one of the few good things about the move to web applications in my personal opinion).
You'll always have problems with developers having their own opinions - that's inescapable. But things used to be so much worse.
> Most of them reside in the home dir but stay there forever - various cache files, settings, and so on. And most of the time they are not confined to a single dir.
I think you need to support that statement. I believe the vast majority of software on common Unix distros creates no files in $HOME[1], and of those that do, the majority use a single folder in $HOME[2], which *should* be used for configuration - and often you don't want it automatically uninstalled on software removal.
The few I can think of that write to multiple locations do so because the extra locations are shared folders. For example, I would not want my downloads directory removed on uninstallation of Firefox.
1: e.g. most things in /bin and /usr/bin.
2: Other than what I outlined above, I can't think of any that use multiple directories. If it's truly as common as you say, you should be able to provide some examples.
He's referring to the XDG standard [0], I think. It used to be that all persistent user-configuration resided in ~/.${appname}, but some people were unhappy with that so they recreated the etc|var|lib|tmp filesystem usage distinction inside users' home directories. This means that an application's user files are now spread across $XDG_DATA_HOME, $XDG_CONFIG_HOME, $XDG_CACHE_HOME and $XDG_RUNTIME_DIR.
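For reference, the spec's fallback paths when those variables are unset look like this (a minimal sketch):

```shell
# XDG Base Directory fallbacks, used when the corresponding
# environment variable is unset (per the spec):
unset XDG_CONFIG_HOME XDG_DATA_HOME XDG_CACHE_HOME
config="${XDG_CONFIG_HOME:-$HOME/.config}"
data="${XDG_DATA_HOME:-$HOME/.local/share}"
cache="${XDG_CACHE_HOME:-$HOME/.cache}"
echo "$config $data $cache"
```

(`$XDG_RUNTIME_DIR` deliberately has no fallback in the spec; it's normally provided by the login manager.)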
You're contradicting yourself in your first and second paragraph...
Those proper package managers still rely on the packager doing things correctly - just as they would when creating a Windows .msi.
There are plenty of Linux packages that create files during operation in their designated /var/log/xxx, /var/db/xxx, /etc/xxx, /home/xxx/ directories that you're not able to query using the package manager.
> You're contradicting yourself in your first and second paragraph...
Those two paragraphs are talking about different OSs. The first paragraph is about Windows; the second is about non-Windows systems with first-class package managers, such as Arch Linux, Debian, CentOS, FreeBSD, etc.
> Those proper package managers still rely on the packager doing things correctly
Sure, but the point is you can query what the package manager has done.
> There are plenty of Linux packages that create files during operation in their designated /var/log/xxx, /var/db/xxx, /etc/xxx, /home/xxx/ directories that you're not able to query using the package manager.
That's half true. You can query that /var/db/xxx and /var/log/xxx have been created by the package manager, and often the directories (and their contents) will be owned by the user the daemon runs as.
However I do agree with the point regarding your $HOME directory and actually made that point myself:
> Of course you still have the problem of the software writing files during its operation but that should be limited to $HOME (on POSIX systems) or any path that is writable by the owner / group of the user that application runs as (which should be limited even if it’s a system service).
As an aside, you can also query what files a particular application has open. In fact there are a few ways to do this from querying the /proc/$PID directory through to tools like `lsof`.
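Both approaches in one sketch (Linux-specific; `lsof` is only invoked if it happens to be installed):

```shell
# Open file descriptors of the current shell, straight from /proc
ls -l /proc/$$/fd

# The same information via lsof, when available
command -v lsof >/dev/null 2>&1 && lsof -p $$ || true
```

To inspect another process, substitute its PID for `$$` (you may need elevated privileges for processes you don't own).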
I have plenty of files in /var/lib/ that are not owned by any package, and the same in /var/log/, /var/cache/, /etc/sysconfig/ and other directories - their parent directory is owned by a different package than the ones creating these files.
I'm not arguing that a decent package manager is better than none - but they aren't solving all the issues you claim they do.
Pretty much all OSs, including Windows, have ways to view which processes have a file open.
> I have plenty of files in /var/lib/ that are not owned by any package, and the same in /var/log/, /var/cache/, /etc/sysconfig/ and other directories - their parent directory is owned by a different package than the ones creating these files.
Got any examples of that? You'd expect only docker to write to /var/lib/docker, mysql to write to /var/lib/mysql, etc. Not discounting that I've overlooked something, but a quick look in my /var/lib and it's easy to see what is managed by what. So I'm curious what instances you have of a package manager creating a directory and then a completely unrelated daemon writing to that directory.
> I'm not arguing that a decent package manager is better than none - but they aren't solving all the issues you claim they do.
I'm not claiming they solve all the problems - in fact I literally identified a few problems they don't solve! Plus, even those points aside, there will always be edge cases: things the package manager should have solved but failed to.
Perhaps we should turn this discussion on its head and discuss better ways to solve the problems people are describing? What would your solution be? Or are you ostensibly agreeing with my points but being contrary just for the sake of playing devil's advocate?
> Pretty much all OSs, including Windows, have ways to view which processes have a file open.
Isn't that literally what I just said? (plus I gave a few examples too).
Thank you. I've not used certbot, so excuse the dumb question, but is certbot doing that during install (i.e. via the package manager) or during program execution (i.e. when the certbot ELF is launched)?
I wouldn't expect much in /lib/systemd/system to be installed outside of package managers, but I agree it does happen, and at least it's generally quite easy to identify which service file does what.
crontab is definitely one of those nasty things that can often get forgotten about though (and I speak from unfortunate experience there hah!)
We're really drifting into the domain of Puppet and its ilk now though.
I'm not sure when those files get created, I just knew about that example off the top of my head because I had to spend some time figuring out why our post-renew hook wasn't working.
dpkg -L helps a lot when figuring out where all the files get spread.
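A quick illustration of that (assuming a Debian-based system; `pacman -Ql <pkg>` is the Arch equivalent):

```shell
# List every file the coreutils package installed, wherever it landed
dpkg -L coreutils | head -n 20
```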
Isn't the parallel to UAC a properly configured SELinux? I thought that was the component that lets processes read/write/execute from certain locations? I guess a full comparison would probably include AppLocker too.
Not too hot on Linux management options - I just install the thing over and over.
One trick I use when trying to see where in $HOME a program creates files is to create a new user with an empty $HOME, run the program and then see what files were created. If it's a GUI program, give it permission to run from your regular user with xhost so you don't need to login through the desktop manager.
Well, I usually do something similar, though just by changing the environment variable: HOME=$HOME/tmp myprogram. Symlinking the .Xauthority file (if using X) works quite well.
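The same trick as a self-contained sketch - here a trivial `sh -c` command stands in for the program you're inspecting, and `.myprogram-settings` is a hypothetical dot-file it creates:

```shell
# Run a program against a scratch HOME so any dot-files it creates
# land in one disposable directory
scratch="$(mktemp -d)"
HOME="$scratch" sh -c 'touch "$HOME/.myprogram-settings"'  # stand-in for: HOME="$scratch" myprogram
created="$(ls -A "$scratch")"
echo "$created"
rm -r "$scratch"
```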
I actually always run applications that don't fully adhere to the XDG base dir specification that way.
> Of course you still have the problem of the software writing files during its operation but that should be limited to $HOME (on POSIX systems) or any path that is writable by the owner / group of the user that application runs as (which should be limited even if it’s a system service).
The really tricky problem is when a package must modify an existing shared resource. Such as appending lines to an existing config for example.
> The really tricky problem is when a package must modify an existing shared resource. Such as appending lines to an existing config for example.
This is currently solved by having applications support both a config file and a config.d directory. The primary owner (package) of the resource modifies the conf file, while secondary packages drop their own config in conf.d/${package}. Numerous examples exist: logrotate, rsyslog, apache, nginx, systemd and apt come to mind.
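A sketch of the pattern, using a temp directory in place of a real conf.d and a hypothetical package called `myapp`:

```shell
# A secondary package drops its own fragment into conf.d/ instead of
# editing the primary package's main config file
confd="$(mktemp -d)"   # stands in for e.g. /etc/logrotate.d
cat > "$confd/myapp" <<'EOF'
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
}
EOF
# Uninstalling "myapp" just deletes this one file; the shared main
# config is never touched
cat "$confd/myapp"
```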
Yeah, that's exactly what's happening. But Arch is an intentionally hands-on distro (e.g. it doesn't even ship an installer - instead you're expected to do everything yourself via the command line).
Obviously this wouldn't be to everyone's taste, but it's good that that market is catered to, in my opinion (but then I would say that, as I'm very much a hands-on person).
I've had the installation of apt-get packages permanently hose an Ubuntu or Debian install. It's all up to packagers to author their packages right so they don't leave garbage on your machine that you have to manually clean up (or give up and reformat).
Your comment is very light on detail, so it's hard to understand your issue properly, but I've been running Linux as my primary desktop for more than 15 years and have managed literally hundreds of Linux servers too, and I've never had a package manager hose my platform (big caveat: aside from the notorious `filesystem` update on Arch Linux, but that one is an extreme edge case due to the rolling-release nature of Arch - and even then the package was well documented on Arch's site beforehand as requiring manual upgrade steps).
It's true that Linux package managers used to be buggy and problematic in the 90s, but those days have long since gone. And while I'm not discounting that a package upgrade could damage your system, the instances when they do are highly unusual rather than a typical problem users face with each and every upgrade. In fact, Windows sysadmins dread running Windows updates far more than Linux admins do, and yet Windows updates only cover Microsoft products rather than every piece of software on the system.
> It's all up to packagers to author their packages right so they don't leave garbage on your machine that you have to manually clean up (or give up and reformat).
Actually it's not. It's up to the application developers to do that. If you specify a package to install a file `x` to location `y`, then the package manager will uninstall that file automatically too; you don't specifically need to tell the package manager to do that (at least not with any of the packaging systems I've used). But if the application developer writes the application to spew out thousands of files into $HOME, that happens outside of the package manager. There isn't a whole lot you can do to stop that, aside from limiting the directories the application has permission to write to (via chroot, containerisation, user/group permissions, SELinux, or other forms of ACL - there are actually plenty of tools on Linux / UNIX to handle that problem).
Don't know about apt specifically, but using pacman (Arch Linux), you can list exactly what files on your filesystem were installed by what package and remove them. You can't do this on Windows, as far as I know.
Yes, firejail is awesome, but you can only block writes to directories. What I'm looking for is an option to redirect all writes to a single directory. This should be transparent: the app might still think it's writing willy-nilly, but in reality all writes would be redirected to, say, ~/app.
I'm pretty sure you actually can do this with firejail, see: --overlay and --overlay-named. For some reason it looks like these are hardcoded (yay, UNIX culture!) to point to `$HOME/.firejail/<progname or name>`.