Well, if you're self-hosting for yourself, friends, and family, it isn't likely to be something you'll want to spend time fixing when it eventually breaks in mysterious ways.
Better to use SQLite or just a blob of YAML in a file if self-hosting might be involved.
Sure, if you're hosting a few hundred or a few thousand videos, a blob of YAML can be good enough. I have no history with this project, but I suspect they started out like that. Something like Elasticsearch usually comes into play when you're ingesting a lot of data points very quickly, so I assume the authors wanted to archive fairly huge numbers of videos.
It can do full-text search (subtitles, comments) right in the UI. It seems to be designed for large-scale backups. I use it for a couple hundred videos that are important to me or likely to be removed, and it works excellently, too.
Would have preferred a less surprising db like Postgres or SQLite for my usage, but they also support data dumps, so if needed I can escape.
The functional programmers (in my circle) have latched onto forEach/map/reduce/filter etc. as the bible of functional programming. Writing a simple for..each loop will make them reject PRs.
forEach is just alternate syntax for a for..of loop: instead of taking a block, it takes a closure. It's not functional, since it's only used for side effects and doesn't even return a value.
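To make that concrete, in plain JS:

    const xs = [1, 2, 3];

    // forEach returns undefined, so the callback only matters
    // for its side effects; it's the same as the loop below it.
    xs.forEach(x => console.log(x));
    for (const x of xs) console.log(x);

    // map and filter are the functional ones: they return new values.
    const doubled = xs.map(x => x * 2);        // [2, 4, 6]
    const evens = xs.filter(x => x % 2 === 0); // [2]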
Full agreement; Immich has a similar problem. I don't know at what point basic systemd services stopped being enough, but Docker is usually a non-starter for me.
I dunno, I enjoy not having to install 500 libraries on my system just to test an app. Upgrading those libraries without butchering my system is also nice, not to mention that rebuilding a system is really fast. For me, the pros outweigh the cons.
90% of my services work fine without Docker: Mastodon, Lemmy, PeerTube, Caddy, Forgejo, Jellyfin, and Plex. Immich also has a way of installing from source without Docker, but it isn't documented. I don't mean to single them out, because the app is great otherwise.
A developer must have had a working config at some point in order to create the Dockerfile. Providing literally only the Dockerfile is usually just a sign of throwing your hands up and saying "it's too hard!" You should be able to package for at least one platform that isn't Docker. That's just app development, or at least it was until recently.
I disagree, as I have spent way too many hours fighting ancient packages in Ubuntu repos at work. Image processing is very difficult when you're limited to the versions available in Ubuntu or Fedora. I do wonder how Mastodon or Plex does it. Plex is a paid product, though.
Fair enough. Maybe image processing is just something I haven't dealt with much, and it's no coincidence that the two image-hosting services are the ones with Docker-oriented processes.
You can always build and tag your own Docker image using the Dockerfiles included in the source, or simply follow the Dockerfiles as install instructions.
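Building and tagging locally is one command (the tag name is whatever you like):

    docker build -t local/app:latest .    # run from the directory containing the Dockerfile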
I'm not sure if you perused the Dockerfiles for this repo, but IMHO there is nothing simple about them.
Edit:
I was curious, so I dug into them a bit and found that the Dockerfile references a develop Docker image. There is a second Dockerfile showing how that base image is built. In the steps to build that base image, we grab some .zip files from the web for the AI models; we install Go, Node.js, MariaDB, etc. for quite a few deps; and then there is also a substantial list of packages installed. One step also does:
    apt-get update && apt-get -qq dist-upgrade
which seems a bit iffy to me. Each step calls a script, which in turn has its own steps. Overall, I'd say the unfurled install script would be quite long and difficult to grok. I'm not saying any of this is "bad," but it is complex.
They're both kinda dumb, though. Updating will create a new layer, but the old binaries will still be part of the image, kept in its history.
The only correct way is either to rebuild the base image from scratch or to fetch a new base image.
My suggestion would be the latter: just run docker pull again for the base image and build on that, without running the upgrade.
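Concretely (image and tag names are placeholders):

    docker pull debian:bookworm           # refresh the base image
    docker build --pull -t app:latest .   # --pull makes the build grab the newest base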
Docker is the one thing that works on MANY flavors of Linux.
If I want to provide a tool, I want to spend my time building the tool, not building an rpm, a snap, a deb, ...
The Docker build process is significantly easier. For example, I can just pull in Node.js 20. I can't do that on Ubuntu; it's not available on packages.ubuntu.com.
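In a Dockerfile that's a single line, since node:20 is an official image:

    FROM node:20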
Building a deb/snap/rpm means learning a whole other language just to understand how dependencies are set up.
And then I'd need to test those. I've never even run CentOS.
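In an rpm spec, a dependency is a single line, something like this (a sketch; exact package names vary by distro):

    Requires: openjdk >= 11, opencv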
I'm assuming openjdk is versioned correctly and that the opencv package is actually named opencv; replace openjdk with whatever your JDK package is called.
You would put this line in your rpm .spec file. Do you really think that line is hard? Maybe the difficulty is that you have never touched rpm. Start here: http://ftp.rpm.org/max-rpm/index.html
Yes, that is already way too much extra work. And what about when those versions are not yet available to depend on? libheif, libaom, and libraw need their latest versions and have to be built together.
That is an implementation detail, not a problem with the rpm spec. When Comcast builds rpms, the entire chain of required software is built with it. I should add: it is trivial to write integration tests that catch exactly what you describe. If the software is not built internally, then the infra team ensures the required repos exist and that dependencies match up with official repo rpms.
None of that has to do with the difficulty of rpm spec; it's entirely about organizational planning. You do plan while building software, right?
Also, GitHub does the exact same thing, but with debs. It works in the real world. Quite well, too!
I generally don't either, but I wonder: are you more comfortable running a Docker image without internet access? You can firewall your host so the container can't reach the internet, and assign an internal network to the container.
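Something like this, where --internal creates a network with no route out (names are placeholders):

    docker network create --internal isolated
    docker run --network isolated some-image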
Genuine question: what do you consider "trusted" code/apps? What difference is there between compiling from source and using the prebuilt official Docker image?
Yup! Worked great for the few months I used it. I think it's kinda funny how much simpler the Nix package is compared to upstream's Dockerfiles, lol.
Yuck. Every time my system has some random binary that doesn't resolve to any expected location, it's because it was installed with Flatpak, Snap, or AppImage (why three?!).
What percentage of people install this on their own computers?
Docker is much less manageable locally than, say, systemd or supervisor.
The only thing (which is something) that Docker has going for it is that it's cross-platform. But the same argument can be made for writing apps with Electron. If you're going to do that, fine, but acknowledge the compromise. It's not better.
If portability is a project goal, maybe pick a language that is actually portable (which Python is not). This is almost exactly the argument for Electron: I want a portable app written in JS + HTML, therefore I need to ship a JS runtime (i.e. Chromium + Node) with it.
Your comment on complex deployment reminded me of https://git.cloudron.io/cloudron/box#cloudron - "Web applications like email, contacts, blog, chat are the backbone of the modern internet. Yet, we live in a world where hosting these essential applications is a complex task."
What you are saying is so true. I don't understand why no standardization work is being done to make server deployments simpler and more reliable. We have some attempts like Cloudron, Sandstorm, and YunoHost, but nothing is "mainstream".
No matter what OS underlies it, the problems are not simple. No matter how simple the easy case is made, the complex case will stay garbage. When faced with a choice, a sysadmin who might be woken up at 5 AM will pick something a bit harder to set up that is at least still recognizable and debuggable at 5 AM. So we have Linux, something infinitely more accessible and introspectable than Android.
Edit: thinking about it, running something as a Cloudflare Worker is basically this. You even get a K/V store. It's just that there isn't an "app store" for these for individuals, for essentially market reasons (in almost all cases, you're better off using someone else's service than spinning up your own).
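A whole stateful "app" fits in a few lines (a sketch; MY_KV is whatever binding name you configure in wrangler.toml):

    // module-syntax Worker with a KV namespace bound as MY_KV
    export default {
      async fetch(request, env) {
        const hits = Number(await env.MY_KV.get("hits") ?? 0) + 1;
        await env.MY_KV.put("hits", String(hits));
        return new Response(`hits: ${hits}`);
      },
    };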
OP is talking about having a mechanism / operating system for servers that makes deployment and management easy. At least, that's what I understood from the original post. How does k8s help me deploy, say, a blog? It's just too technical and geeky. If this were "Android", I'd just download some blog app, click, and install. I wouldn't have to worry about updates either.
Docker Compose wraps the app and db in a few lines of config: https://github.com/docker/awesome-compose/tree/master/offici... The extra config is for networking / db connections; you don't get those on Android because you don't run network services on it.
You can do exactly that with a WordPress Docker container: install and run the container, and it might even spin up a database for you if it's part of a docker-compose setup, or you just rely on SQLite. The technical, geeky part comes in making your blog available to the wider web: you need to configure DNS and serve requests.
I don't see how an Android app that you can "click and install" would help configure your DNS entries.
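For reference, a compose file for a blog like that is short (a sketch using the official wordpress and mariadb images; passwords and ports are placeholders):

    services:
      db:
        image: mariadb:11
        environment:
          MYSQL_DATABASE: wordpress
          MYSQL_USER: wordpress
          MYSQL_PASSWORD: changeme
          MYSQL_ROOT_PASSWORD: changeme
      wordpress:
        image: wordpress:6
        ports:
          - "8080:80"
        environment:
          WORDPRESS_DB_HOST: db
          WORDPRESS_DB_NAME: wordpress
          WORDPRESS_DB_USER: wordpress
          WORDPRESS_DB_PASSWORD: changeme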
I think in a few years at least some self-hosting will have moved to things like Cloudflare Tunnels and ngrok, and domains and certificates will be an "only if I'm getting paid to deal with it" kind of thing.
Maybe, to reduce latency, there will be hosting services that run the tunnel proxy in the same data center as your VPS and include the service and a subdomain with your account.
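The one-command version of that already exists (assuming a local service on port 8080):

    ngrok http 8080    # prints a public https URL that forwards to localhost:8080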