Why not just run Linux locally? Mac OS X is BSD, not Linux, which means its build tools & environment are subtly incompatible. You're always going to be chafing if you develop anywhere other than in your deployment environment.
Linux is superlative for desktop & development use. It supports things like tiling window managers far better than does a Mac. It's free. The hardware is far more affordable. Although this is definitely a matter of taste, I find Linux far more usable than a Mac. I will grant that Mac laptops are lightweight & thin.
It just seems weird to me to go through all these contortions to develop software for Linux on a Mac or Windows box when it's far easier IMHO to just … run Linux.
I love Linux, but saying that desktop use is "superlative" is a bit (let me don my fire-retardant suit) wrong. It's a decent enough desktop system, but it falls markedly behind in terms of tooling.
There's simply a large number of high-quality tools which are not available for Linux, which means you have to resort to open-source alternatives which, while decent (and sometimes even fantastic), are still nowhere near the quality of many closed-source tools.
I wish this would change, and I do what I can to make it change, but the simple reality is that depending on the tools you want to (or need to) use, Linux is behind the game.
That said, I agree wholeheartedly with your very first statement: Why not just run Linux locally? Except, I would recommend a local VM. Best(ish) of both worlds.
Both XCode and VisualStudio are proprietary IDEs designed to support building apps for their respective operating systems. If you're writing a Windows native application, you will find Visual Studio on Windows to work the best, likewise if you're developing a Mac OS X native application then XCode on OS X is the smart choice.
In my opinion, neither one is well suited to building a native Linux application, or the sort of application that needs a Docker container to run in (web applications, etc.).
XCode has a fantastic interface for writing Swift, which is no longer specific to Apple hardware. It also has a decent (if not fantastic) IDE for basic C and C++ development. Visual Studio has great plugins not only for generic C and C++ development, but also great integrations for JavaScript and Python (which many developers prefer even over JetBrains' fantastic cross-platform offerings).
If you look at Go, Python, and Ruby web development, there is nothing special about the Linux environment that these languages don't provide built-in abstractions for. Sockets, file handles, threading, and process management are all abstracted to the point that you can run the same code on Windows, Mac, and Linux.
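As a minimal sketch of that portability: the Python snippet below opens a socket server on a background thread and talks to it from the main thread, using nothing but the standard library. The same code runs unchanged on Windows, macOS, and Linux, because `socket` and `threading` hide the OS-level differences.

```python
import socket
import threading

def serve_once(sock):
    # Accept one connection, send a greeting, and close.
    conn, _ = sock.accept()
    conn.sendall(b"hello")
    conn.close()

# Identical on Windows, macOS, and Linux: the stdlib abstracts the OS.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))        # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

t = threading.Thread(target=serve_once, args=(server,))
t.start()

client = socket.create_connection(("127.0.0.1", port))
data = client.recv(1024)
client.close()
t.join()
server.close()
print(data.decode())  # hello
```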
So, in _my_ opinion, it comes down to which OS you enjoy the most for your development work; you can make any of them work for whatever type of development you want to do (particularly something as high level as web development).
I build mostly web applications (Java, Ruby, etc.) and I have missed very little after moving from Mac OS X to Linux. OmniGraffle is the only thing that I really miss.
Unless you have a recent GPU, or want to listen to music without tuning configuration files after every update, or use input devices that aren't a mouse/keyboard, or change multi-screen configurations every now and then, or...
FWIW, I: have a recent medium-end graphics card; have listened to music on my desktop for almost twenty years; and change my screen configurations nearly daily. No problems here.
I only use mice & keyboards, so I can't say anything about graphics-tablet support.
Developers don't exist in a vacuum. A lot of people have to use stuff like Office and Skype; if you do that on Windows, "you're always going to be chafing", the pain just moves from one area to the next.
I've been using a Mac for 10 years and am thinking of switching back to Linux. I also want to get new hardware as 4GB RAM is really limiting. What would people recommend? I do like the Mac hardware, but considering you can get better for a lot less I don't think I'll stick with it.
I just took a new job in a company that uses Dell and Linux for the main OS, coming back from 4 years in another company where macs were the thing.
Let me tell you: it sucks.
- The lock screen disappears when my session is locked, so I have to type my password blind.
- Scrolling plainly sucks. The trackpad registers clicks when I don't want it to, so I end up moving text around when I just want to move the pointer. It's so unbearable that I brought my old mouse in to work.
- The fan is audible and always on. It also blows heat onto my left hand (I'm left-handed and the vents are on the left of the laptop), which sucks.
- Debian 8 packages don't include Intel acceleration, so you have to use experimental packages. But using those makes Skype impossible to install.
- Did I mention the miserable battery life? The laptop is brand new (Latitude E5550).
Indeed. Went from developing primarily on OS X to Ubuntu Linux for about 6 months before I couldn't take it any longer and switched back to a Mac.
Biggest complaint by far was trackpad support in Linux. It's a mess out of the box: weird acceleration curves, bad scrolling, twitchy, overly sensitive clicking, the works.
Luckily it's Linux, so it's Configurable™, and with enough tweaking I had a script that would feed a bunch of settings into the userspace synclient synaptics configuration tool, and get a trackpad experience that was mostly bearable. EXCEPT that the damn driver would somehow reset itself randomly every hour or two, and my trackpad would be back to the default mess until the script re-ran.
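For illustration, such a script boils down to a handful of synclient calls. The option names below are real synaptics-driver options, but the specific values are assumptions, not the actual settings described above:

```shell
#!/bin/sh
# Re-apply trackpad settings via the userspace synclient tool.
# Option names come from the synaptics X driver; values are per-taste.
synclient PalmDetect=1 PalmMinWidth=8 PalmMinZ=100    # ignore resting palms
synclient VertScrollDelta=-111 HorizScrollDelta=-111  # invert scroll direction
synclient MinSpeed=1 MaxSpeed=1.75 AccelFactor=0.05   # tame the accel curve
```

Running it from a cron job or a timer is one way to paper over the driver resetting itself.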
There were the other common problems too: an update broke my X config, font rendering was janky, battery life was a joke. The Alfred replacement I was using would forget its keybinding on every startup. Unity/compiz would stop respecting hot-corner triggers until I opened the config, cleared them, saved, re-enabled, and saved again. Etc, etc, et-fucking-cetera.
It's more or less the same experience I've had on-and-off with Linux on the desktop since 1999. It mostly "works", but it's inevitably death by a thousand papercuts, and after a few months I'm fed up with it and switch to something less grating.
Completely agree. Run Linux on the desktop which mostly "just works" with a few minor tweaks required now and then. Laptops though? Forget about it. Abysmal battery life, constant issues with waking from sleep, driver issues, trackpad support and the list goes on. It truly is death by 1000 papercuts.
Every now and again I forget about the mess and try again and usually give up after a few days when I start remembering it all. I'm sure some people thrive in that kind of environment but I just need to get work done and when the OS gets in the way of that, it doesn't matter that "you just need to configure ..."
You forgot to mention the distribution you're using.
I've been running Archlinux myself for the last 2 years and it's relatively painless, some issues with trackpad sometimes but I use a mouse most of the time anyway.
For the lock screen: most of the ones I've used don't have a box to type the password into, so you get no visual feedback, like in a terminal. There are some that do display it, like lightdm, which you can install.
What if you deploy to both Linux and Windows? Do you have to develop on both OSes?
Monocultures where you develop and deploy on the same OS are more efficient in the short term, but cause massive problems in the long term when you have to start supporting a different OS and you suddenly realise that you have finely tuned things for one environment at the expense of robustness.
It's just like in nature. Monocultures are better in the short term, until the environment changes. Then they struggle to adapt. (I suspect that most mass extinction events have been a failure to adapt to changing environments when overly adapted to a specific environment).
Docker is still useful in some cases, ie I'm using it to run separate postgres databases for each project. I could use a nix environment instead but Docker is just easier.
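That per-project Postgres pattern is just a couple of `docker run` invocations, one per project, each mapped to its own host port. Container names, ports, and the image tag below are made-up examples, not the commenter's actual setup:

```shell
# One disposable Postgres per project, each on its own host port.
docker run -d --name projecta-db -p 5433:5432 \
    -e POSTGRES_PASSWORD=dev postgres:13
docker run -d --name projectb-db -p 5434:5432 \
    -e POSTGRES_PASSWORD=dev postgres:13
# Connect with e.g.: psql -h localhost -p 5433 -U postgres
```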
Linux has never been very good for standard desktop use in the form presented by OS X and Windows. It's still a hobby OS, where your computer itself must become your hobby, rather than whatever it is you want to do on the computer.
We provide an alternate solution ([1]Wormhole) that covers this scenario (and others). The main differences with e.g. Weave are:
- Non Docker-specific
- Easier to deploy, in my opinion ofc :-)
- Based on Open Source (you can just deploy SoftEther's vanilla client as the agent)
- We don't use VXLAN, just SSL encapsulation.
- Actually, every client will only generate outbound SSL connections, so chances are you won't need to reconfigure any firewalls or network gear for Wormhole to work.
- By default, chances are you won't overlap with the provided IP addressing (non-public, non-RFC1918 IP space)
- Multiplatform: Windows / Linux / OS X
- API available to create and deploy networks and clients
- Our architecture requires traffic to go up to a central server and back down to the destination, so there's added latency. In my tests I've found this doesn't penalise performance for most applications any more than solutions based on extra encapsulation layers (i.e. VXLAN), like Weave.
Don't get me wrong, I think Weave is brilliant. We're just an alternative that aims for simplicity. There's some overlapping, but we probably have different markets.
Offtopic, but does anybody know how the diagrams in this article were generated? I'd love a simple piece of software to generate beautiful, straightforward pictures like this to explain architectural problems.
Why do you need to connect to remote services using Weave? Shouldn't the staging environment be separate from your development one?
Anyway nice article.
Thanks
We have around 18 different microservices, each with their own configuration and repos. Setting them all up, keeping them up to date, and managing them locally is a pain (and also turns my laptop into an inferno). Our staging environment is on Runnable, which spins up all 18 of these in an isolated stack, so I don't have to worry about configs or management.
I dream of a company that installs Linux PROPERLY on some of the most popular desktops, charges users $100 for the install, and helps you maintain your Linux desktop over the years.
Many have tried, no one has succeeded.
Canonical had an opportunity to do this... Instead, they had to do "Enterprise" stuff. Bah.
The per-packet processing overhead is a real and unresolved problem.
I'm using Rancher's networking now instead. It uses ipsec between hosts, so everything gets handled by code paths which have been optimized in the kernel, and performance is good (especially if you have a not-ancient CPU and have the AES-NI instruction - then wirespeed gigabit works with acceptable overhead).
I like Weave's decentralized architecture and wish it were realistic to use it.
If they are going to stick with a user-space solution they probably need to use DPDK or one of the other high-performance software defined networking toolkits, which tend to process packets using a SIMD approach.
If you check out the PDF that fons posted above, http://rp.delaat.net/2015-2016/p50/report.pdf, then you will see pretty extensive testing showing that Weave Net, flannel, and Docker Networking have similar VXLAN performance for unencrypted traffic. In all cases, it is good enough. Alas the testers were unable to get Calico working.
The question is: when do you want top performance for encrypted traffic? Most users want encryption for the wide area or public cloud, and when they can't use a VPC. Our solution is pretty good for these cases. Obviously at some point we'll enable IPsec too.
Widearea/public cloud & non-VPC use cases are my use cases.
I really wish this weren't true, but your solution is not pretty good yet. For now, if you need encryption, it's useless.
Machines spend their life handling packet overhead. Application performance suffers horrendously, and the scalability of the application goes from excellent to terrible.
Weave looks really good if you give it easy tests involving big packets. But if you give it a workload involving many small packets (which in today's microservices architectures is not exactly uncommon), it stops working.
IPSec is still pretty slow. While it might be fine for X DC stuff, without a real private network you're not going to find the low latency that makes microservices attractive.