TheCapeGreek's Hacker News comments

Somewhat adjacent in how I look at using Docker at all in prod, here's what I always wonder:

Is using Docker/Compose "just" as the layer for installing & managing runtime environment and services correct? Especially for languages like PHP?

I.e. am I holding it wrong if I run my "build" processes (npm, composer, etc) on the server at deploy time, same as I would without containers? In that sense Docker Compose becomes more like Ansible for me - the tool I use to build the environment, not the entire app.

For the purpose of my question, let's assume I'm building normal CRUD services that can go a little tall or a little wide on servers without caring about hyper scale.


> if I run my "build" processes (npm, composer, etc) on the server at deploy time

It's perfectly fine, as long as you accept the risks and downsides. Your IP can get rate-limited by Docker Hub. The build process can exhaust resources on the host. Your server probably needs access to an internal dev dependencies repository, and thus needs credentials it would not need otherwise. Many small things like that. The advantage is simplicity, and it's often worth the risk.


> IP ratelimited for Docker Hub

How? What I'm describing is using Docker less.

> The build process can exhaust resources on the host

Maybe, but I've yet to have a host where that's the case for usual CRUD fare.

> The advantage is simplicity, and it's often worth the risk.

That's basically what I'm evaluating for here.

For bog standard LAMP or similar stack applications, I've not understood the advantage of going through the build-image-then-pull-on-host rigmarole. There's more layers involved there than something like provisioning with Ansible and just having a deploy script to run the usual suspects.
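As a sketch of what I mean by a deploy script - the paths, branch, and service names below are made up, and it assumes a typical PHP-FPM setup:

```shell
#!/usr/bin/env sh
# Hypothetical deploy script run on an Ansible-provisioned host.
# Paths, branch, and service names are illustrative only.
set -e

cd /var/www/myapp
git pull --ff-only origin main

# The "build" steps run directly on the server at deploy time
composer install --no-dev --optimize-autoloader
npm ci
npm run build

# Reload the runtime (assuming php-fpm behind nginx)
sudo systemctl reload php-fpm
```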

But I have seen that done fairly often, hence was wondering what the point was.


I would say it's bad practice because you end up having to copy all the build dependencies (source code) to the host and you're potentially putting a bunch of extra load on the host during the build process.

Also adds moving parts to your deploy which increases risk/introduces more failure modes.

Couple of things that come to mind:

- disk space exhaustion during build

- I/O exhaustion, especially with package managers that have lots of small files (npm)

However, on the small/hobby end I don't think it's a huge concern.


> you end up having to copy all the build dependencies (source code) to the host

> disk, i/o exhaustion

This is why I mentioned specifically for ecosystems like PHP, which are interpreted. I'm specifically asking for that use case.

I'm not building binaries, my "build" steps are actually deployment steps (npm build, composer install, etc) that I'd be running in exactly the same way on the host. The image I'm deploying by definition also contains my source code because I'm not deploying anything compiled.


>I'm specifically asking for that use case.

That's what I answered for.

>I'm not building binaries

If you were, I would have added CPU to the list.

>my "build" steps are actually deployment steps (npm build, composer install, etc)

No, those are build steps. If you weren't using Docker, you would run all of those and then either shove the result into a zip/tarball or package it into a deb/rpm, etc.

>The image I'm deploying by definition also contains my source code

It doesn't contain .git or need credentials to your git/SCM

>I'm not seeing the benefit of the whole "build image, pull on server" pipeline when I can just ditch the registry and added layers by doing those steps on the server as I would normally in other kinds of scenarios

You don't need a registry--you can use `docker save`/`docker load` to push images directly to the server. Images buy you a versioned artifact with all the code-level dependencies baked in. Some maintainer yanks their package from npm? Who cares--you have a copy in your Docker image. Your new app version doesn't work? Edit one line to point back to the old image tag and roll back.
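To sketch the rollback side - service, tag names, and ports here are hypothetical - a Compose file that pins the image tag:

```yaml
# docker-compose.yml - service and tag names are hypothetical
services:
  app:
    # `docker load` the image on the server, then point this line at it.
    # Rolling back = changing this one tag back to the previous one
    # and re-running `docker compose up -d`.
    image: myapp:2025-01-15
    restart: unless-stopped
    ports:
      - "8080:80"
```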

>> The build process can exhaust resources on the host

>Maybe, but I've yet to have a host where that's the case for usual CRUD fare.

When the build process completes, it tears down the overlayfs, which causes everything to sync, which leads to a big I/O spike. Depending on the server and the number of files, it might have no impact. However, I've seen build servers become completely unresponsive for 5+ minutes due to the I/O load when this happens. One place I worked, we had to switch our build servers to NVMe--the Docker container teardown caused spikes over 100k IOPS. Can't remember the exact details--it was either a React web front end or a React Native mobile app.

>There's more layers involved there than something like provisioning with Ansible and just having a deploy script to run the usual suspects.

`docker save myimage:tag | gzip | ssh user@server 'gunzip | docker load'`

Not saying creating distributable artifacts is the de-facto answer, but I'd strongly consider whether it's really that much more complicated.


> Images buy you a versioned artifact with all the code-level dependencies baked in.

Fair enough, that buys a little bit of time to not break deployments I suppose.

> When the build process completes, it tears down the overlayfs

Ah okay, I misunderstood you then - I was referring to Docker-less servers and my build steps running there, not building the images on the machine.

Thanks for the info!


Have a look at multi stage container builds. Your images should not need a build step at start, the result should be in the baked image. Else you become reliant on fetching packages during build etc.
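A minimal multi-stage sketch for a PHP + npm app, for reference - base images, stage names, and paths are illustrative, not prescriptive:

```dockerfile
# Stage 1: compile front-end assets
FROM node:22 AS assets
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: install PHP dependencies
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader

# Final stage: runtime only - source plus baked build artifacts
FROM php:8.3-fpm
WORKDIR /var/www/html
COPY . .
COPY --from=vendor /app/vendor ./vendor
COPY --from=assets /app/public/build ./public/build
```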

I guess what I'm asking for is what the point is of a "baked" image for interpreted language ecosystems. Already using multi stage builds.

"Builds" are the same as deploys, so when working with server(s) instead of larger scale deployments, I'm not seeing the benefit of the whole "build image, pull on server" pipeline when I can just ditch the registry and added layers by doing those steps on the server as I would normally in other kinds of scenarios.

But I have seen this in action, which is why I'm wondering if I'm missing something.

The clearer benefit to me seems to be in this scenario to use it as a fast environment provisioning tool.


Apple's RAM price bumps were already insane, now they'll get worse.


They’re literally not changing


If they can just absorb the current RAM price hike, you know they have insane margins.


I have a maxed-out M4 Max MacBook Pro. Doing the same for the latest machine is +2000 euros.


It did change. They bumped $200 on the entire line. So even the 16GB version is more expensive.

I'd love to have customers like Apple. Bumps $200: "it didn't change!!!"

And no power adapter included.


> And no power adapter included.

To be fair, ever since the advent of high-power USB-C PD that really, really is not needed any more; way too many power bricks are effectively e-waste.

People already have USB-C power bricks and docks everywhere and unlike pre-USB-C generations, you can use them not just across different generations of hardware, but across vendors as well.


I doubt that many people have high-power USB-C bricks unless they're upgrading from another USB-C laptop.


> unless they are upgrading from another USB-C laptop.

Which MacBooks have been for almost a decade - the 2016 MBP with Touch Bar was the first that went fully USB-C PD. Anyone who has had a MacBook in that time frame will have had at least one high power USB-C PD wall wart.

The Windows world, as usual, has been different, but even there, I'm not aware of any mainstream model being sold in the last two years without even a single PD capable port.


The point isn’t the port but that they don’t come with USB PD charger in the Windows world.

But this is really because of EU regulations anyway. It comes with a charger in the US.


You mean bumped $100. M4 MacBook Pro and M5 MacBook Pro started at $1599 with 512GB SSD.

Now it starts at $1699, a $100 bump, but comes with a 1TB SSD. Previously it would have cost $1799 with the 1TB SSD, so it's a $100 bump on the base price, but you're also getting the 1TB SSD for $100 less than before.


To me, this is kind of like Telecom providers giving you bandwidth headroom that realistically should have been there for a long time, but removing the option to get a cheaper plan whether you'd otherwise pay for the upgrade or not.

Like for my last upgrade, I bit the bullet and upgraded to 1TB for the first time ever instead of base storage at Apple's absurd prices, so it works out for me - but if I hadn't been willing to spend money on that at all, they've lifted the floor.

My cell phone plan has been increasing every year by small amounts, but my usage pattern hasn't changed. Meanwhile they've restricted HD streaming using deep packet inspection or whatever, so I theoretically have a 100GB full-speed cap but can't practically use more than 20GB anyway. They're pricing the bandwidth into the contract, but I can't save money by getting a lower ceiling.


> I'd love to have customers like Apple. Bumps $200: "it didn't change!!!"

Try making a good product that people love?


The base storage increased as well, and the upgrade prices for RAM are the same, which is where the real issue was.


> It did change. They bumped $200 on the entire line.

I wonder if that would happen regardless of RAM, e.g. for tariffs etc.


The EU forbids them from including power adapters. They're still included everywhere else.


The EU doesn't forbid including one. The new law requires there to be an option without the adapter; if the manufacturer chooses, they can offer versions both with and without the adapter.


I can buy a laptop right now close to home and it comes with a power adapter.


Except that it's literally not true, and people keep repeating it for some stupid reason - I assume you just never actually looked it up. Laptops are specifically excluded from that regulation, and in fact Apple does bundle a power adapter with their laptops, just not with the cheapest models.


> in fact Apple does bundle a power adapter with their laptops, just not on the cheapest models.

Here in the UK, they no longer include the power adapter even with the top models. I just specced out a fully-loaded M5 Max Macbook Pro, 128GB RAM, 8TB storage on the Apple Store, and it doesn't include a power adapter by default.

The 140W power adapter can be added as an option to the MacBook Pro for an additional £99 + VAT, or purchased separately. If you purchase separately you can of course choose a lower-power adapter for a lower price.

Now that a power adapter isn't included and you have to pay for it separately, it might make more sense to get one of the good brands of GaN power adapters instead, because they are smaller than the Apple ones for the same power, and have more ports.


>>Here in the UK, they no longer include the power adapter even with the top models

That's incredibly stupid (of Apple). I'm in the UK and literally got my M4 Max MacBook Pro delivered on Friday; it came with a power adapter.


Are you going to return it for an M5?


No, it's provided by my employer so I don't really have that choice. And it's the 16-core M4 Max, 64GB RAM and 4TB storage; it's not really lacking in any way, it's a beast of a machine.

(But yes if I bought this with my own money I would have swapped lol)


East-west travel in the US is a lot different from a one-hour shift. Hence the minor jet lag.


Nice. Love seeing simpler solutions like this pop up.


Firefox has lost the plot, Orion is close but still has the odd UI bug that makes it tough to recommend, Safari is just Safari.

There is no truly good, independent, feature complete browser out there right now if you want to avoid Chrome and have something that a) works and b) isn't hostile to the userbase.

Brave at the very least said they'd keep supporting Manifest v2 extensions, though not sure how you'd acquire them anymore unless Chrome web store has kept the listings up.


Haven't checked on it since about mid last year but Facebook settings didn't work with it for me.

LinkedIn has also been especially hard to find a good blocker for to remove the sponsored/suggested posts from the timeline (it's just full of garbage engagement bait hot takes).

I just vibecoded a tampermonkey script to block scrolling on Instagram and also block reels. I also had it redirect from the `/reels/` URL to `/reel` which is just the single video view (for when friends link me memes), but it seems they removed that.


IGPlus worked much better for me on Instagram. https://weblxapplications.com/en/products/igplus

For facebook I love Facebook Purity. Only problem is it's desktop only. https://www.fbpurity.com/

Edit: some things I've been able to use uBlock Origin for, like Facebook reels. Not 100% clean, but enough to stop them from working (no video, just a still thumbnail when you open them). In some ways that provides some reinforcement learning to the brain.


I use ScreenZen for mobile - it can block specifically short form content so you can stick to the actual social parts.


I use timelimit.io for managing screen time. Nice granular controls and schedule based rules.

I apply the 80/20 rule. 8 minutes allowed, followed by 2 minutes blocked. Then at night I reverse it. 2 minutes allowed, 8 minutes blocked. Just enough time to make one search query and then get off my phone. Or if I do get trapped by something, it's only for a short time.

https://timelimit.io/en/


Tons of non-Apple phones support NFC payments, unless you're specifically referring to Jolla.


Sure, but they're Android, and I don't necessarily consider Google better.


Wispr Flow just got an Android release, so everything except it talking back to you is now doable.


Doesn't have to be. Before OpenClaw was a thing, people were experimenting with setups to allow them to drive their agent remotely.

And of course, OpenClaw is built to be a very generalist agent with a chat interface - same effective outcome as remotely controlling an AI harness, but not exactly what everyone wants.

