I see this idea repeated so often, but it's unfortunate that we don't also have the _value_ of a supercomputer in our pocket. The sole purpose of a supercomputer is to advance the interests of its owner, who has exclusive control over it. Whether the purpose is prediction, or simulation, or to advance the state of the art, the benefit goes to the owner.
Yet we seem to have less and less control over our smart-phones. With so much information about each of us being siphoned off through the Internet, it's easy to wonder whose interest they serve.
With all of the advances in computing power, you'd think we'd put a bit more imagination into capturing more value for each individual smart-phone user, and less into centralized capture and analysis of our digital activity.
My parents both fairly recently bought iPads. Previously they used a Windows laptop and an Ubuntu desktop - both far more powerful and giving far more control. Or at least that's one way of looking at it.
Their iPads may have limits on their functionality, but as a result of their simplicity, they enable my parents to control a lot more of their computing experience.
They rent movies, read books, use RSS (!) via Flipboard, edit photos, install apps (previously they would never install software) and - most importantly - they experiment with new things. Unlike the old PCs, which they'd become more fearful of with every inevitable fault or virus, with their iPads they can just hit the home button.
For you and me, you're right - mobile can be limiting. But don't underestimate the power of mobile for those previously intimidated by computers.
This is an important point that often seems lost when abstracting out concepts like "power" and "control."
There is some theoretical sense in which a user of free software has a lot of control, including the power to change it. Software being free can have a lot of indirect benefits to the user, but ultimately practical "power" is a function of what you can practically do with it, and that is very different from what the thing can theoretically do.
Apple probably has motives for the App Store other than making user experiences simple. But they also really have the goal (and the effect) of making the user experience easier, which puts more real power into real hands. If the below-median user can install a photo-sharing app to see her niece's photo collage, that's real power in users' hands.
Also, I think being strict and abstract about "exclusively to advance the interests of its owner" is losing a lot of important texture. First, very few technological benefits go exclusively to anyone. They go to users, vendors, various other users and vendors. A lot of them get released into the ether as externality vapors, like productivity or ideas for new technology.
The reality of the current state of computing is interesting, but it's very hairy and hard to grasp and boil down to simple abstractions. When we use our phones, we are interacting with other people, generating data/content on other people's phones, in various databases to the benefit of a bunch of people/companies. When you are using your navigation app, generating data about traffic, you are benefiting the app and other users. Possibly future users. Possibly that data will be used in other ways. Maybe a construction company will buy it to plan new roads (and win a contract) which will benefit the city… Maybe someone's girlfriend will one day have it beamed to her implant to estimate when Brian will arrive.
Is this bad because it's not to the exclusive benefit of the user? The benefit to the user would not even be possible with this mesh of stuff.
Doing more and having more control are not interchangeable. Your parents reading more books on their computer means they do more. More control means they know what happens inside the software/machine and they can change something about that.
You are right that people don't care about control as much as an idea of it. And you are also right that giving control to someone else who can get value out of you might result in you being able to do more, but at some point having no control over one's life will bite people in the ass. And that's what the control topic is about.
As a simple example, Apple can decide to stop showing the books to your parents, and there is nothing they can do about it. Getting your Linux computer to work right takes time but then it's much less likely that software can delete your data, and if it does get deleted, you are more likely to have a backup because you put sweat into configuring your system that way.
> Getting your Linux computer to work right takes time but
It doesn't just take more time; once something is sufficiently difficult, it goes from incrementally time-consuming to insurmountable (for most people) -- it might as well not exist.
If my parents ask "How do I get control?" and someone answers "Getting Linux to work takes some time, but...", their response will be to not even try. My parents have a hard time with the concept that there are things called "files". Directories are completely beyond them.
I never intended to convince you or your parents to use Linux. It's just that you said they are able to do more and therefore have more control. If you understand that the opposite might be the case, then my job is done. In some scenarios it's okay to have less control in exchange for being able to do more. I make such exchanges on a regular basis myself. But one must be aware that one is giving away control, not gaining it.
Directories and files have a real world analog that your parents are likely to understand, but I think the doofy interface obscures this to the point of confusion.
Why not simply set up and maintain Linux for them? Especially because if the only thing amateurs need is a browser, then Ubuntu is exactly the same as everything else.
They still won't have ultimate control, but you will. This is certainly a tradeoff (a personal relationship has more motives for violating trust), but depending on your relationship it can be a great way to distribute control away from ApplNSAoogle.
Really? Could you not use actual paper files and a filing cabinet as physical-world analogues of computer files and directories? Pretty much everyone understands those metaphors, so why don't your parents?
Modern cars take a tremendous amount of control away from you. You lose the ability to easily service your own engine. You rely on electronics to do everything from braking to steering to regulating the temperature. If one thing fails, the car is undrivable and will likely need a lot of expensive repairs you can't do on your own.
On the other hand, it gives us things like AWD, so people don't need to be afraid of when they should use 4WD or when doing so might damage their car. It's automatic. It allows people with disabilities to operate a car, since they don't need to shift or use both feet on the pedals. It's automatic. You can hook up your iPhone and play music from it without touching the phone. It's automatic. And even slippery weather isn't as deadly anymore with automatic traction control.
So no, I can't rip my car down to the chassis and count every bolt. But I can drive a car in a far safer manner, a car with far more power and far more convenience. There are two types of "control" at play. One is a mechanical control where I could tear down my Toyota 4Runner SUV with 180hp (a small amount of power) and put it back together as a Tacoma pickup. The other is the knowledge that if I hit a slippery patch in my Nissan Murano with 260hp (a huge amount of power), I will continue on just fine even with more power under the hood, while the Tacoma driver is in the ditch wishing he had known to engage 4WD at that second. The news will report that the Tacoma driver "lost control" of his car, whereas the Murano driver "stayed in control".
Now that's a bleak picture, sure. I'm just using it to illustrate a point. There are two types of control. You'd never give root access to an average user. So does the Linux computer actually give them any more control? Or are you just giving them a Tacoma and locking it in 4WD for their own safety? If you give them a Corvette and then tell them to be careful or they will spin out and die, they will never let the RPMs go above 2k.
I like your example. Let's say it this way: With more electronics in your car you lose control over the car, but you gain control over the road, and the traffic around you.
Facebook also works that way, now that I think about it. It takes away control over your data, but it enables you to interact with more people, which may improve your control over your connections.
Exactly. Everyone has different needs, it's not necessarily wrong.
For the record, I use a Linux laptop at work. When I get home, I enjoy sitting down with an iPad and an Xbox for entertainment. I have a 16 year old truck for the weekends and a brand new Fiat for my daily driver. I understand both sides of the argument, and they're both valid.
I think it's fallacious that a deeper understanding of the system would guarantee data safety. I have no idea what most of the software and hardware on the n devices I use actually do. I know they are targeting large masses of end users, so I can trust the industrial process that created them. What is needed, I suppose, is education that all data that is not backed up should be considered volatile, and that digital goods such as Kindle books are for consuming, not for collecting.
That said, I don't mind consuming volatile digital goods - there are very few books I re-read, and I usually acquire a paper copy of those.
I think that is too limited a measure of control - if you can do a budget on the iPad, but couldn't manage MS Excel, then with the iPad you can get an extraordinary amount of control over your own life.
Depending on what kinds of books his parents read, those might result in more control too. Even if we account for a loss of control in going from the computer to the iPad (and I am not entirely convinced that a practical loss happens, given that they can barely use the computer), it isn't impossible that the total amount of control, for his parents, actually goes up.
My mother bought an iPad at the age of 67. She's not remotely technical - before she got a mac, I'd routinely get support calls of the quality "help! microsoft isn't working!", where 'microsoft' meant anything from Word to a browser to, well, anything. Anyway, the ipad was used for about a month, then plonked on a shelf to gather dust as she went back to her macbook.
As a naif and something of a technophobe, she definitely did benefit from moving from Windows to OSX (back in the late XP days), but iOS left her cold.
I think the argument should be (or is) whether I can have a computing experience other than the ones currently provided by Apple, Google et al.
I think historically the user has already lost once; seriously, Linux never delivered the promised land for mass users. I would be more interested in whether users can have a mobile experience that is not tainted by the interests of big corporations. Unless there is an alternative, I highly doubt there will be any change in the status quo.
If it comes, the big corporate interests will have been "the shoulders of giants." Apple's iPhone, the competition from Google, and all the financial benefits from that have generated a lot of progress fast.
I use OpenOffice, and it's a good example, I think. If we didn't have MS Office, would we have OpenOffice? Would it be as good?
If Firefox OS succeeds, would it ever have been made without the smartphone market having already been defined?
"2. Simplified browsing. There are a lot of cases where you'd trade some of the power of a web browser for greater simplicity. Grandparents and small children don't want the full web; they want to communicate and share pictures and look things up. What viable ideas lie undiscovered in the space between a digital photo frame and a computer running Firefox? If you built one now, who else would use it besides grandparents and small children? "
After 7 years, we now know clearly that tablets, phones, and mobile apps are the answer to that question, at least for the present.
I just got my wife a Surface Pro 3 and now she doesn't use her iPad anymore. She rents movies, reads books, uses RSS via Flipboard, edits photos, installs apps and experiments with things. She also takes notes with the stylus, rips songs from YouTube and has Adblock installed in Chrome (edit: and after all that, she can throw it in the docking station connected to a 2nd large touch screen, keyboard and mouse) - things she could never do on an iPad.
So yeah, mobile is limiting but it doesn't have to be. Once you venture outside of Apple's cold embrace, many things are possible.
The Surface is a strange one, seeing as it is both a PC and a mobile device in one. There are upsides - as you've mentioned - but downsides too. I suspect my parents might not be quite as comfortable using a Surface as they are with an iPad. I'm glad your wife is enjoying it, though.
However, I'm really not trying to get into a war between platforms. My point is that adding limitations, which many of us see as reducing how powerful a device is, can actually enable the user to go further and do more than they could previously.
What a bunch of mumbo-jumbo. This isn't some Marxist struggle between the owners of the means of production and the smartphoneletariat. The reason we don't realize the value is that we haven't concocted the applications. Supercomputers crunch data & perform math. That's what they are good at, and their value (as I know it) is primarily rooted in their ability to advance math & science (& defense, as it relates to math & science). But nobody has really figured out what sort of massive scientific computation we should be doing with our pocket computers.
Some of it is there - for example, I wrote an app that performs geodetic math that was supercomputer-only in the '60s - but in general, folding proteins and simulating physics in your pocket is not useful to Joe Smartphone.
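To give a flavor of what I mean by geodetic math (this is an illustrative toy, not the actual app): a great-circle distance via the haversine formula, the kind of calculation a phone now evaluates instantly.

```javascript
// Illustrative geodetic calculation (a toy, not the app mentioned above):
// great-circle distance between two lat/lon points via the haversine formula.
function haversineKm(lat1, lon1, lat2, lon2) {
  const R = 6371;                        // mean Earth radius, km
  const toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// London to New York, roughly 5,570 km.
console.log(haversineKm(51.5074, -0.1278, 40.7128, -74.0060).toFixed(0), 'km');
```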
It's the idea that the content available on our devices is shaped by the economic framework that supports it.
When the economic model is advertising and data tracking, the types of apps/services that are available are the ones that work best with advertising and data tracking. And it turns out that consumer-oriented social sharing services fit that model pretty well. So that's mostly what we have.
Those services do have value. But I think it's limited. When I walk into a cafe with a beautiful view and everyone is on their phones, or when people awkwardly scroll through Twitter at a group event instead of enjoying the world around them or interacting with strangers... I don't see people gaining value. Especially when it's children doing it. I see compulsive, addictive behaviour that erodes human value for the benefit of advertising companies.
The economic model follows consumer desires, not vice versa. It's a lot easier for a startup to change their business model to fit what people will actually pay them money for than for them to convince people who aren't willing to pay to ante up.
If people would pay for value, startups would spring up to capture that. That's actually what we see in the SaaS business. Consumer entertainment has always been a hard sell, which is why the radio and TV businesses were also ad-supported.
You don't have to take it as far as science. Previously, Google had a web search engine that searched discussion groups - a great tool, and an important one from the democratic point of view. Now it's gone. Why?
One guess would be that no company wants to slap ads next to a vibrant discussion - not a good place to sell lies. So in a sense this does fit the theme the OP raised.
BTW, with connectivity, the distinction between smartphone and server is small. Even weak phones have supercomputers at their disposal.
So information has no value? Why do we say that information is power? I suppose that idea could be just as wrong as saying we have a supercomputer in our pocket, but the idea that information is power is much, much older.
Information isn't being hoovered up at the cost of billions of dollars without a very good reason for doing so.
Still, it seems a matter of imagination. If our personal computers have tremendous (and accelerating) power, there are few actual limits to where we can take them. The deck even has a slide that talks about pulling components off the shelf and playing Lego with them.
I think it's prudent to consider whether the average person is, all things considered, empowered by our Internet-connected computers. If so, great. If not, or not enough, then how?
You're trying to tie together unrelated ideas. Beneficial utilization of a phone's incredible raw computational muscle is basically unrelated to the harvesting of your personal data, in which the phone is merely acting as a networked array of sensors. There is virtually nothing stopping you from harnessing the full computational power of your phone, supercomputer-style, to do whatever data crunching you might imagine. The challenge is the imagining (and implementing).
I'm speaking of value, which I view as the balance of risk vs reward. Your view may differ. On one side, it seems to me that the reward or benefit we're receiving isn't anywhere near proportional to the power increases we've seen. On the other side, what are the risks of exposing so much personal information?
Receiving information seems much more valuable than giving it. After all, information has a price. If giving is not so great, shouldn't we be a little concerned about the consequences of trading away so much of it? Information asymmetry strikes me as the root of many power imbalances.
So, if we want more _value_ out of our "pocket supercomputers", it seems we have to consider the overall value proposition. Then perhaps we can figure out how to do better.
Who's the "we" here? It's already possible to write your own smartphone apps that basically do what supercomputers do. If you work in the field that supercomputers were designed to help (scientists, basically), I'd imagine that they could be very handy, since you could plug your cell phone into a workbench and get results immediately rather than needing to do laborious data entry.
The problem with information is that it's only valuable in context. Scientific computing has given us wonderful breakthroughs, but it needs lab equipment, it needs data collection, it needs trained scientists, it needs theories and hypotheses to test. Computing was developed to break Nazi codes; the Enigma machine may have been the star of the show, but it also depended upon radio intercepts, field operatives, dispatch runners, generals, soldiers in the trenches, people who built fighter planes, etc. - and it obviously wouldn't have been useful had we not been at war. Similarly, most of the consumer data that's vacuumed up is only useful in the context of a task that millions of consumers want to do.
You're right that there's a power imbalance in the distribution of this information. The problem is that this power imbalance doesn't exist because of the information, it exists because of the context in which the information is collected and applied. Find a context where information can let you deliver a better product than competitors, and it's easily worth billions of dollars. But the information on its own - minus the product design, minus everybody else's information, minus the brand name and distribution channel and user adoption - is worth zero. It's already possible to siphon off and redirect all data that is being sent to Facebook & Google's servers on your behalf and save it to use on your own personal behalf. The problem is - it's useless. Without everyone else's data, there's nothing you can do with it.
If that information is useless to us, it's due to inadequate tools: software, hardware, algorithms, and philosophy. Facebook and Google are both able to extract value from an individual's information (even isolated) because they have both the software and incentive. Both can look at word frequencies and activity over time to develop behavioral prediction models that are then used to target ads at you.
I'd say the ability to make relevant predictions is a form of power.
Why is it a forgone impossibility for an individual to possess tools to capture similar value from their own information? Does self-knowledge hold no value? In my experience, changes in behavior (such as to better align with your own goals/interests) are often triggered by a clear and quantified message from a trusted source. Why not make this source your own information and your own computer?
Sure, what Facebook has is lower quality and harder to process than we might prefer, but there's nothing stopping us from collecting and structuring our own information in more appropriate ways, then layering on interfaces and algorithms, and then simply doing better than anyone else can. Because it's ours.
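As a toy sketch of the kind of "layering on algorithms" I mean (the file name is just a placeholder, and this is deliberately minimal): counting word frequencies in a local export of your own messages, entirely on your own machine, is the seed of the same behavioral models the big players build.

```javascript
// Toy personal analytics: word frequencies from your own local data export.
// "my-messages.txt" is a placeholder for any text export you control.
const fs = require('fs');

const text = fs.readFileSync('my-messages.txt', 'utf8');

// Split on anything that isn't a letter, digit, or apostrophe; drop empties.
const words = text.toLowerCase().split(/[^a-z0-9']+/).filter(Boolean);

// Count occurrences locally; nothing leaves the machine.
const counts = new Map();
for (const w of words) {
  counts.set(w, (counts.get(w) || 0) + 1);
}

// Print the 20 most frequent words.
const top = [...counts.entries()].sort((a, b) => b[1] - a[1]).slice(0, 20);
for (const [word, n] of top) {
  console.log(`${word}\t${n}`);
}
```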
So the We is We hackers, who can design and build new information-contexts with different value propositions and power dynamics. And it's also We thinkers who can put together the reasoning behind such efforts. And We as regular people, who want more joy and less pain, with less effort.
As used today, yes. And that's kind of the point of the original observation. Smartphones are not true supercomputers. But they still smoke a Cray-1, and yet we use them as dumb terminals. What can we use that power for, power that filled a room twenty years ago?
Their existing roles have value! But there is a sense of leaving something untapped, when you use them only as dumb terminals. A sense of, what am I missing?
It doesn't make sense to have a supercomputer as your phone. In your phone you value excellent long battery life. Better to offload the supercomputer part to central processing and pipe down a "viewport". Economics always wins.
"Phone" isn't really a term that makes sense for the future. It's just a somewhat anachronistic bundling of microphone/speaker/digital radio/battery (and the tiny bit of logic needed for that to work together).
It does make sense to carry the things that comprise a "smart phone" (as above, +powerful cpu that can scale down to conserve power, gpu, screen/hud/display, camera(s), low-power local-area digital radio (eg: bluetooth), touchscreen (or drawing surface/digitizer), text-input (keyboard), gps, interface for connecting 4k+ display, possibly direct-attached storage), along with a headset, preferably wireless.
I just don't see why this should all be forced into one unit. I'm not sure how much power Miracast would require -- but could it really be more than what the LED-lit touchscreen uses? Because surely it'd be easier to have the big battery + WLAN/Miracast/Bluetooth/CPU stuffed in your pocket, and a small display+touch+Bluetooth device on your wrist, with the option to pull out a phablet-sized hi-res touchscreen with its own battery, but only Miracast+Bluetooth (maybe a GPU)?
The "smart stick" (the smarphone less the camera/screen/mic) could probably be made to fit more nicely in a pocket if there was no need for a display.
Shape it like a rod, add a powerful vibrator and the thing would even be dual-use ;-) (seriously, though - if it's intended to be kept in a pocket, a vibrator might make sense, as well as a couple of status leds).
The screen is going to be a bit clunky anyway (unless it folds/rolls up) - so no real issue to add a dedicated battery for it.
But how come heads-up displays have never come to market in a serious way? I'm not counting Glass, by the way, given the focus on sharing rather than using.
Beats me. I also wonder why OLED screens only seemed to be used for military HUD/goggles -- and not for the civilian market.
I recall one pioneer from MIT bashing Glass, pointing to his experience of having a single image in a different focus area/off-center as being uncomfortable for prolonged use (or maybe even harmful).
So that might be one reason (similar to how everyone realized in the 90s that VR made people throw up -- no market).
As for a smart-rod -- I really liked Canonical's push towards a viable use-as-phone-use-as-computer OS. It's an obvious idea that I think is already "proven" by stick PCs and Chromebooks. The CPU of a previous-generation smartphone is enough for many tasks.
VR might actually be an area where going wireless breaks down, though -- Oculus has spent a lot of effort getting the latency down. Still, it seems it should be possible to do a lot of the 3D stuff along with head tracking in the headset, and do the more complicated physics modelling on a more powerful device.
Either way, I'm hopeful that we'll see more meaningful separation of devices as bluetooth (as I understand it) keeps getting support for lower power in one end, and higher bandwidth in the other.
I'd want the same stuff on it as I have on other PCs. Tools or access to tools. And yes, mobile-specific stuff too. Why work in/on wherever using a tablet, when you could instead have a full virtual work environment?
Noooooo. Mobile bandwidth is far more constrained than mobile CPU power. Furthermore, radios are the most power-hungry components after the screen, another reason to try to avoid them. Data connections are not truly ubiquitous either, so what do you do when you don't have a connection? Lastly, from a data-security point of view, if you don't want to hand all of your unencrypted data to a 3rd party, you would have to host the server yourself (in this model the server needs to be able to manipulate your data, which means that it must be able to decrypt any encryption you had on the data connection itself).
So no, I don't see that as ever being a desirable model for mobile computing. Maybe for some specialised tasks, but for general pocket computing, the power is better placed locally, not remotely.
> In your phone you value excellent long battery life.
No. You and I might want a long battery life, but 'the market' has decided that battery life is way down the list, below 'looks like the latest iPhone'.
The market has standardized on shiny glass brick, as thick as a pancake, and tiny batteries to match. Batteries that are less than 3,000 mAh and that barely last a day, two days, tops.
If the market thought an excellent battery life were important, there would be phones an inch thick, with a 10,000 mAh battery that lasted a week instead of the latest generation featuring 8-core, 64-bit, gigahertz+ processors.
Economics always wins. Unfortunately, economics is not always on your side.
For me, they're mostly missing privacy. That is, they're not securable from supposedly trusted providers. So for me, a precondition would be full control of the baseband, and running open-source software. Or maybe for now, using Android kernel to load Debian, and nuking the radio.
I'm sure that lots of cool distributed computing stuff is doable. But I'm not much interested until it's securable.
For those who think I'm crazy, consider what company would accept smartphone-like security for its servers. And OK, upon reflection, some apparently do. But my point stands.
For most people, computation isn't nearly as valuable as communication. Raw computation isn't useful to non-scientists; the smartphone computation power is put there to render games and web pages.
Unless I'm mistaken, a Cray supercomputer generally had a dedicated and consistent power supply, i.e. they're plugged in. Most phones aren't, and battery technology is still not at the point where you could achieve constant performance if you were using the phone all day.
> The reason we don't realize the value is because we haven't concocted applications.
Some of us realize the value. I just think it's impossible for most people to not take it for granted. If you're under 30, you inherently take it for granted because you know of nothing else, you simply cannot appreciate the technology as much as someone who lived a good chunk of their lives without it. You can "know" and imagine what life was like before our current internet culture, maybe even empathize, but there are still things that are unknowable and can't be experienced by just going off the grid. One such example is culture. The culture of society was different because of the technology (or lack of) at the time. Everything from how you interacted with your friends, to how you received and consumed media, to banking and your place of employment. Everything was different before our current technology.
The people who grew up with supercomputers in their pockets don't know of a world where 3-way calling was something magical and you'd get giddy when you used it (or horrified when it was used as a tool to trick you into admitting you liked someone and they were on the phone silently listening). Or a world where if you wanted to find an address or phone number, you had to go find a phone book. Wanted to go on a vacation and look up cool places to see while there? You were heading to the library. If you were on the road driving, and wanted to call someone, you had to stop off and find a payphone (and pay a fortune if you were out of the area). Good luck if you blew a tire. But back then, more people were inclined to stop off and help because of this very problem. Again, society was very different back then.
People take for granted the fact that you now have all the world's information and entertainment at your fingertips. I know because I take it for granted myself and I know what the world was like before our current technology. I'm in my late 30s. If I take it for granted from time to time, how can someone who never knew a life without internet & cell phones appreciate what they have?
I know this sounds a bit preachy, and I didn't really mean for it to come across that way, but I find myself echoing Louis CK's sentiment on technology and today's youth. His bit here really struck a chord with me: https://www.youtube.com/watch?v=uEY58fiSK8E
We don't have the value of the supercomputer, because our supercomputers don't ship with compilers onboard.
Removing compilers - and indeed, software tools in general - from the requirements for a base operating system is the reason we have these disproportionate power cycles in our computing world. The compiler is the single most important application an operating system can run - yet it is denied first-class membership in the system image these days.
Alas, this circumstance came about in order to create an artificial scarcity of people who can make computers do things. If the compiler is onboard - anyone can learn to use it and then use it, and they don't "need developers". Removing the most powerful tool in an Operating System in order to create this condition, has crippled forward progress.
If this ever changes, I expect to be seeing a lot better software out there. But as it stands right now, developers of platforms don't want broadly-accessible and easy to use developer tools to be available to all and sundry.
This omission is really just a class battle in disguise.
While I do agree that having a compiler accessible on a platform is a positive, I'm somewhat skeptical of the claim that anyone can learn to use it and then use it. It glosses over the enormous learning curve to develop software with the current mainstream tools (especially the ones that need compilers). If you look at e.g. Mac OS X, where the compiler can be downloaded and installed by anyone, very few do and even fewer develop software.
Our current systems are simply not designed to be extended and modified by non-experts. Even in a system like Smalltalk, where that was one of the explicit goals, it doesn't really seem to have worked out. Having the compiler and software tools accessible is only small part of the solution.
>Our current systems are simply not designed to be extended and modified by non-experts.
Yes, that is exactly the point - and it is by design, not nature.
>anyone can learn to use it and then use it
Well, this has been the case so far - or were those of us fortunate enough to have become software developers only able to attain such lofty states by chance, luck and/or good breeding? Because if that is your position: bollocks. Anyone can learn to read, and write, computer code. All it takes is the intention on the part of the learner to do so... however, when that option is denied them, because OS vendors do everything they can to profit from separate developer tools, then it is a matter of one intention being greater than the other.
Certainly, anyone who wants to learn to write software, can. But don't kid yourself that there haven't been huge stumbling blocks put in their way in order to make the whole subject more specialized. That is a matter of economic reality at this point.
>Anyone can learn to read, and write, computer code
I agree that most motivated learners could pick up some very basic software skills, and those would be beneficial to them. In a utopian world basic programming skills would be treated the same as basic literacy and numeracy. However, at this point in time, it seems to me that most users don't care (maybe because they don't know what they're missing). As an analogy, most people could learn basic knife skills in a couple of hours and be much more productive in the kitchen with a chef's knife than they are now with the huge array of inferior point tools (cheap dull knives, garlic slicer, chopper, egg slicer, etc.). However there is obviously a big market for the latter. Similarly, I'm not sure extensible and modifiable systems have enough attraction to the average user.
Another reason for closed systems, one which I think is more important than the profit motive, is that designing extensible and modifiable systems is really hard and comes with trade-offs. E.g. Squeak (a Smalltalk dialect) allows you to inspect and modify almost everything in the system while it runs, but that also means that it is really easy to screw up the system beyond repair (e.g. you can switch every boolean in the running system by evaluating 'true become: false') and that supporting users of such systems becomes much harder. I am not aware of any system that has a solution to this.
Actually, all platforms have a compiler: a JIT JavaScript compiler, ready to run whatever code you want by just typing it into a text file and opening it with your browser. (Incidentally, that's how I started back in 2000, on public computers.)
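For example (the file name is arbitrary and this is a deliberately trivial sketch), save something like the following as hello.html and open it in any browser - that's a complete, running program with zero installation:

```html
<!-- Save as hello.html (name is arbitrary) and open it in any browser. -->
<!DOCTYPE html>
<html>
  <body>
    <p id="out"></p>
    <script>
      // The browser's built-in JIT compiler runs this immediately.
      let sum = 0;
      for (let i = 1; i <= 100; i++) sum += i;
      document.getElementById('out').textContent = '1 + 2 + ... + 100 = ' + sum;
    </script>
  </body>
</html>
```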
There are also plenty of WebIDEs nowadays that people can use out of the box - we are not in the 90s anymore, when all software had to be "installed" to be readily available.
Yes, and you can't access the vast majority of services the phone offers without a hell of a lot of latency. If you can at all. JS is a joke compared to the capabilities of a C compiler.
If you're advanced enough you feel the need for more, you can get the compiler then - they're still available, just not bundled. JS is more than enough to learn programming and write some useful apps.
> If you're advanced enough you feel the need for more, you can get the compiler then - they're still available, just not bundled.
I don't believe you can acquire a C compiler for the iPad without buying an actual computer. At which point, why get the iPad again?
With just JS, it's a toy. A powerful toy, yes, but it's a small subset of what's possible with a computer, and one that leaves a very bitter taste in my mouth.
You're correct, and this may be why we see all of these people who say "I have a great app/website idea!" and then do nothing or look for developers to implement their idea. If the barrier to entry were lower, there would be more people trying to implement their own ideas.
It's not enough to have a compiler, we need a simpler scripting language as well so anyone can create throwaway web apps. We need a Tcl or Perl for Android and iOS.
These tools are out there. We need to make it unacceptable that they are not included. The only way to do that is to innovate on the open-source/free-operating-system front. For as long as "major producers" set the standards, we will be in their traps.
No, we see that because those people want to be rich but not actually have to work. Seriously, the barrier to entry is stupid low, and yet they're not clamoring to implement their own idea. Why?
Capturing data is cheap, local CPU is expensive. In terms of battery capacity, which is the real currency nowadays. It sounds impressive to talk about a supercomputer in your pocket, but its power comes from a battery that weighs only a few grams more than a lacy garter belt.
I have a feeling that the utility of the fast CPU is limited to providing responsive reactions to a moving finger. A bit disappointing really.
remoteStorage is basically an open spec for something similar to the Dropbox Datastore and Files APIs, with open-source implementations that anyone can host. The official client library has some fairly robust offline-first syncing capabilities, and also happens to be relatively backend-agnostic: it can act as an interface for the remoteStorage API itself, the actual Dropbox Datastore and Files APIs, or the Google Drive equivalents (although the latter two are under heavy development and probably not production-ready). This allows your users to choose the storage service they want to use for your app without you having to implement all three interfaces (and, if they choose remoteStorage, to host their own storage instance or use one hosted by someone they trust).
I'm prototyping a distributed app right now that uses remoteStorage to enable multi-client sync and simultaneous logins. I believe this approach strikes a fairly good balance, giving users the privacy and service mobility offered by distributed software, as well as the usability and convenience offered by centralized software.
The only thing I feel they're sorely missing is a transparent encryption layer on top of the Datastore API (and possibly for Files too, though I'm not sure how feasible this is). I ended up implementing this part myself for the app I'm working on, but I feel it's unreasonable to expect every developer to reimplement something as essential and error-prone as cryptography.
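As a rough sketch of what I mean - this uses only the standard browser Web Crypto API and is not remoteStorage code, just the shape of the wrapper - you put a seal/open pair between your app and whatever store/get calls you make, so only ciphertext ever reaches the storage backend:

```javascript
// Sketch of a transparent encryption layer (standard Web Crypto API, AES-GCM).
// Not part of any storage library; it wraps whatever store/get calls you use.

async function makeKey() {
  return crypto.subtle.generateKey({ name: 'AES-GCM', length: 256 },
                                   true, ['encrypt', 'decrypt']);
}

async function sealObject(key, obj) {
  const iv = crypto.getRandomValues(new Uint8Array(12));   // fresh nonce per write
  const plaintext = new TextEncoder().encode(JSON.stringify(obj));
  const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintext);
  // Store these two fields instead of the plaintext object.
  return { iv: Array.from(iv), data: Array.from(new Uint8Array(ciphertext)) };
}

async function openObject(key, sealed) {
  const iv = new Uint8Array(sealed.iv);
  const data = new Uint8Array(sealed.data);
  const plaintext = await crypto.subtle.decrypt({ name: 'AES-GCM', iv }, key, data);
  return JSON.parse(new TextDecoder().decode(plaintext));
}
```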
While I appreciate the attitude, so far I don't see too many applications for which redecentralization might be useful. Yes, it's more fault-tolerant. Yes, it would for the most part avoid mass surveillance.
However, decentralized systems are also much more complex to build. So far, in most use cases, decentralized approaches provide too little value for most users to really make sense economically. Privacy and fault-tolerance are certainly very important aspects. Unfortunately, in general most people couldn't care less. Despite some public outrage, the NSA, GCHQ and the BND are still very much in business and doing as well as ever.
Are there any applications particularly amenable to a decentralized approach? Quite likely. Is decentralization a generally applicable model for the way we do our networking? I'm not so sure about that. Having a decentralized social network, for instance, would be some achievement, but what benefit would it provide to the not-so-privacy-minded?
For server operators it makes sense to toss more processing onto the client-side as it lowers cost.
The Freenet model is just ahead of its time: fully encrypted and fully decentralized, using separate apps/plugins on the client side to make sure it's always decentralized.
One issue with client-side decentralization has been how to pass things around to other clients when your main machine/box/mobile device is off. So you need carriers/exchanges that pass them around. In SaaS, the client and data are stored on one server; with decentralization, the data is stored on the client and on an exchange. Instead of being a server provider, you'll be a client provider coupled with a data host, and you can just outsource that to S3 or some other data-host provider.
I hope you're right! I've wanted this for the last 5 years and it seems that more and more good solutions are popping up, sandstorm.io being the most prominent to my eyes.
There are hardly any alternative networks. GSM/3G/4G are illegal to run on an ad-hoc basis most everywhere. Which leaves cabled networks (which are generally not legal to set up as metro/internets -- or even to link freely between different buildings if you need to cross community-owned land) and wireless mesh networks.
I think both are interesting, but I don't see them forming a global interconnected network capable of delivering symmetric gigabit to 10 billion people (not that the Internet of today does that either -- yet).
I suppose local 802.11-whatever community networks might be connected in metro-/internets with high-bandwidth point-to-point links -- but it seems likely that they'd end up connected to the Internet -- perhaps on the ipv6 Internet.
Either way, hopefully internet access will continue to evolve like a utility, like electricity and water.
Interference with 'official' networks, besides that spectrum use requires a license in just about every country except for some very specific low power unlicensed segments of the spectrum.
Yeah, it does seem like more a problem of philosophy or perspective than a lack of advanced technology. We have all of the ingredients for a variety of dishes, but it seems we're settling into one basic recipe: centralized lockdown.
Let's say that's just how human social and cultural/natural order is: lots of people wanting shiny things to play with alongside others. Everything else is irrelevant as long as people aren't deeply aware of it (which is often when things break in their/our face).
>My guess is that it would cost too much, without the commodification of our private information.
Ask and ye shall receive. Uber, the darling of both the mobile world and the "sharing" economy is set to start tracking your every location so they can serve up fucking ads[1].
What if our information stayed private and we used our "supercomputers" to do all sorts of data-mining, or AI, or data-science on it? Same stuff the big guys are doing, but for ourselves. Smaller, higher quality datasets.
If our smartphones worked like this, and we'd benefit more from better information, then we'd also have a pretty good reason to develop advanced information-centric UIs and focus a bit more on security and trust.
That would be wonderful. But I doubt that providers would ever bundle them. So most people would consider them to be too expensive. And so they would die, like Blackberry has.
Smartphones and security don't really go hand-in-hand anyway.
However, I am encouraged by the fact that people paid many thousands of dollars in the '80s and '90s to purchase PCs. They were much harder to use, much less powerful, and yet because these PCs represented either a competitive advantage or a novelty, the price was paid. No Internet, but a heckuva lot more control and trust than now.
If a similar value proposition pops up again, somehow de-commodifying or replacing the standard PC design (smartphone or otherwise), I suspect we'll see a similar willingness to pay a premium.
Perhaps a more appropriate question is not so much whether anyone is building a smartphone (in the hardware sense) as whether anyone is building an operating system, running on commonly available smartphone devices, that does not employ these tactics.
> And if not, why?
I'm certain there are efforts in motion which seek to establish a viable choice for many smartphone owners in what their devices run.
As to "the commodification of our private information", well, I have to ask: why accept this in the first place?
You may be interested in the Neo900 [1] project, which is building a trustworthy mobile phone and will support a variety of free software operating systems.
It will be possible to run with completely FLOSS software (including drivers) and it even solves the problem of the baseband modem having full control over the main CPU/memory.
I'm not living with Android. I'm waiting for Linux.
Edit: I don't mean "own" literally, financially. I mean "own" in the sense of access and control. So yes, I can root "my" device, but I can't root the baseband radio.
The next X-prize should be for the best implementation of a smartphone OS/GUI where the "smartphone" has no I/O whatsoever except for the touchscreen and a charging/debug port.
If that couldn't crank up our creativity, nothing could.
For me, it's tough to say what value a smartphone would have if it could not leverage its hardware aside from being a form of a kiosk. Not harshing you or anything, I just can't see it.
I'd set it up as a behavioral mirror. Naturally I have a horse in this race :)
Inputs would be the actions you take in life (choices you've made) and how they relate to your goals (which depend on actions). Output would be an interactive visualization of past actions. The purpose would be to help improve the quality of your choices (actions you take) to better achieve your goals. Algorithms would work with the accumulated information to pull out correlations, patterns, and in general accelerate the process.
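A deliberately tiny illustration (the data is made up): even a plain correlation between a logged daily action and a goal metric is the kind of algorithm I have in mind.

```javascript
// Toy behavioral mirror: correlate a daily action with a goal metric.
// The arrays are made-up sample data (e.g. minutes exercised vs. hours slept).
function pearson(xs, ys) {
  const n = xs.length;
  const mean = a => a.reduce((s, v) => s + v, 0) / a.length;
  const mx = mean(xs), my = mean(ys);
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

const minutesExercised = [0, 30, 45, 0, 20, 60, 15];
const hoursSlept       = [6.0, 7.5, 8.0, 6.5, 7.0, 8.5, 7.0];
console.log('correlation:', pearson(minutesExercised, hoursSlept).toFixed(2));
```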
No Internet required, and so much greater security, privacy, and trust than usual. Which is freeing in many ways.
@s73v3r, self-knowledge can be tremendously rewarding too.
Smartphones aren't going anywhere, but their value proposition is not the only one available. If we were to jump-start our creativity through tight constraints like the X-prize idea, I expect that we'd end up combining the best of our current technology with the best of the new. Or maybe the worst of both...
One of the biggest values I get from a smartphone, and I suspect the vast majority of others are with me, is the fact that it connects me to the global repository of human knowledge. Cutting off the Internet would neuter it for me.
It's more like complaining about license plate readers, cabin voice/video recorders, hackable cars, GPS tracking, and being increasingly dependent on a centralized service for repairs and maintenance. The "good old days" were actually different.
Yes, there are some benefits to all of those "features", but it seems prudent to consider and expose their risks as fervently as we're promoting the rewards. Perhaps the current balance isn't ideal over the long term, and we'll only see improvement by actively exploring the whole picture (whatever it is).
I think the important part here is not to focus on "supercomputer" but rather on "pocket". IMO the big thing is that you essentially have what used to be a desktop, then a laptop, and can carry it around in your pocket and do similar stuff with it AND it's socially accepted. No one blinks an eye if you use your smartphone all the time. Rewind a bit and people would have blinked an eye if you had used your (bulkier) laptop all the time.
It's light and mobile yet sufficiently powerful. Locational context matters a lot more now and isn't tapped into fully.
+it's usually connected to the hive aka Internet so you essentially carry lots of information with you at all times. I'd argue that instant access to information (prices etc.) can be more enabling than pure access to the underlying computer.
+some other gizmos like an always-available camera/voice recorder/music player
[don't get me wrong, I'm fully aboard the more free systems NOW train]
Another side of this is that we don't really have supercomputers anymore, just piles of PC servers with fast network cards - which are called supers for PR reasons. Supercomputer vendors mostly went bust in the 90s because it got too expensive to make custom computer architectures much faster than COTS.
The supercomputer-in-your-pocket statement is meaningless. A calculator in my pocket in 1991 was a supercomputer compared to an abacus or the Apollo 11 computer. A GSM or StarTAC Motorola cell phone in my pocket in 1997 was a supercomputer compared to that calculator. So what?
It is not meaningless; it is describing parallel (or at least comparable) revolutionary technology. Think about each one you mentioned: how many industries were positively impacted by the advent of cheap pocket or desktop calculators? Would the Apollo missions have been possible (or as successful) if the astronauts had had to calculate every azimuth with a slide rule? The slide deck seems focused on the point that mobile is the next major revolutionary technology, comparable to the introduction of supercomputers and the other revolutionary tech that you cite.
Really? You're going to dismiss the ability to pull up maps and be guided to your destination? Voice recognition, browsing the web, a music player, and having a video game console in your pocket?
The value people often get from a locked down device [for popular definitions of "locked down device"] is of a kind with the value people get from a book or a television. The kind of potential value is below that of a general computing device, but there's still a lot of value in that space; e.g. books provide value despite being static.
I don't think there is any kind of example that would benefit me as an individual, not for my personal life anyway.
I don't need a render farm, I don't have complicated delivery schedules to calculate, I don't even want to use a spreadsheet solver very often away from my desk - and I study computational logistics.
Currently supercomputers are used to do these things, because you can't do them on a mobile phone or desktop.
The "supercomputer in your pocket" discussion is somewhat disingenuous - an iPhone6 from 2014 wouldn't even have made the top 10 in 1993 over 20 years earlier. A million-fold increase in transistor density and it still wouldn't rank as a supercomputer.
There are a lot of fascinating things you can do with more computing power, but it's not really clear that doing so on a power- and heat-constrained mobile device brings a competitive advantage in a highly-connected environment.
"Everyone gets a pocket supercomputer from past century."
One cannot compare a smartphone's power to a modern supercomputer!
And as you say, you cannot do all the useful things that are possible with supercomputers and PCs!
> Yet we seem to have less and less control over our smart-phones.
An objective analysis of the two predominant smartphone OS's would suggest that we have never had control over the devices we wield.
> With so much information about each of us being siphoned off through the Internet, it's easy to wonder whose interest they serve.
See statement above.
> With all of the advances in computing power, you'd think we'd put a bit more imagination into capturing more value for each individual smart-phone user ...
I'm pretty sure this is the case, with Apple having a market cap of $735,165,038,300[1] and Google having one of $367,770,688,769[2].
>An objective analysis of the two predominant smartphone OS's would suggest that we have never had control over the devices we wield.
Right, that's the state now. Back when BlackBerry and Nokia were rocking things, it wasn't so grim.
>I'm pretty sure this is the case,
No, your stock quotes prove that they are capturing the value of the user; they don't say how effective they are at capturing the value for the user. All they need to do is keep people interested in the phone to accomplish the former. That doesn't mean anything about actually providing the best for them.
> No, your stock quotes prove that they are capturing the value of the user; they don't say how effective they are at capturing the value for the user.
It's gratifying to know I can convey significant meaning in the form of prose ;-).
> All they need to do is keep people interested in the phone to accomplish the former.
Excellent observation, IMHO.
> That doesn't mean anything about actually providing the best for them.
Unfortunately, this is generally not a concern for a 10-K[1].
When BlackBerry and Nokia were leading things, smartphones were far less popular, and the market for smartphone software was smaller still. Thus, if you wanted something to run on your smartphone, you paid for it. But nowadays, people aren't willing to pay money for smartphone software. So they have decided to pay in another way.
We are capturing value for those users. Those users have decided beforehand, though, that they don't want to pay up front for their apps to realize this value. So they have to pay in another way.
Nearly all of the comments in this thread are extremely negative - ranging from hand-waving away how valuable mobile is to excitement that "[dumb] people will [again] be leaving the Internet." What an incredibly pessimistic and self-centered way of viewing the world.
The incredible thing about a supercomputer that fits in your hand isn't that we're putting them in the hand of hackers who went to MIT 40 years ago. It's not exciting for someone who was going to be sitting behind a Linux terminal anyway. This pretty much changes nothing for them.
But it changes everything for the kid in sub-Saharan Africa who has never had access to a computer. It changes everything for my friend's family in Iran - none of them ever had a computer, and now all of them have a smartphone.
It even changes everything for my father-in-law - a hay farmer who had little interest in using a computer, but inexplicably loves his iPad. He takes it out to the farm and performs what I would consider the most trivial of computing tasks, but it's something he never did before, even when PCs were cheap and ubiquitous and I spent hour after hour teaching him how to scroll and double-click.
Of course, hackers are right: that smartphone is not as good at editing photos as your 15-inch MacBook Pro. And it's a horror to write code on. But, to borrow an analogy from Peter Thiel, the difference between editing photos on your phone and editing them on a computer may be a move from 1 to n for hackers. Hackers, who have been at n for years, are rational in not caring. For this Iranian family, and for my father-in-law, however, this is a move from 0 to 1. And that's a big deal.
Are the operating systems more closed than hackers would like them to be? Yes. But my friend's family in Iran has neither the interest nor the ability to hack on the kernel of some mobile operating system, so they don't really care. Is it harder for them to type on a phone than it is on a keyboard? Of course, but now they're typing something. It works, and they're using it, and that's a big, big deal.
This isn't necessarily a revolution of what it's possible to do with computers, but a revolution of who can do things with computers. We'll soon be reaching economies of scale we could never imagine. Access, where there previously was none.
That you can call an Uber because it's in your pocket and happens to include GPS, in my mind, is just a side-benefit.
Have you considered that less control for the user is more control for someone else?
The more we do on computers that are opaque to us, and yet interconnected on the Wild West of the Internet with OS's with known vulnerabilities (and potentially backdoors) and privacy-hostile apps, the deeper and more detailed our activity and communications can be tracked. Are those good conditions for a healthy balance of power? Perhaps citizens of developing countries will face less-tolerant political climates than we're fortunate to have.
The long-term consequences of a lack of control over our smartphones - which, as per the presentation deck, we tend to have with us all the time and use very frequently, and which also have tremendous sensor capacity - seem to be worth some thought.
Spot on! Thank you for writing this.
Access is definitely the most important thing here. It actually reminds me of the important work that Internet.org is doing.
So it's not a revolution in computing, it's a revolution in the ability to sell those devices to other people. Big whoop. Let me know when they can root the devices they buy so they aren't crippled anymore.
One comment from experience
Slide 21 "The mobile supply chain dominates all tech"
/ Flood of smartphone components - Lego for technology
The shiny Lego is only available for the major players.
For the current key components - GPU/CPU/camera sensor - you can't order them, get support, or get docs unless you have scale or amazing connections. If you are a hardware startup, your Lego is 2-3 years behind the big players, and behind public perception.
This makes complete sense once you look at chip fab costs/profit models and the industry structure, but is not great for disruption from smaller players.
NB: this was from a European perspective of doing things officially - it's possible there might be more 'unofficial' components and support if this is done in China with strong local support.
This has been a long tradition in the hardware industry; it has nothing to do with mobile.
I guess the problem with mobile is more that you need much more expertise to properly design hardware on a mobile scale, and the scale makes every step so much harder. It's a huge barrier to entry.
After the context is set, the most important page appears to be 44. It suggests that the next blessing[1] of unicorns will tackle enormous markets by building products around mobile. Didn't this shift already happen? I couldn't think of many major industries that don't already have mobile-first contenders.
Maybe I'm missing the point of the "tech is outgrowing tech" sentiment?
The presentation touches on smartphone penetration and the communication behaviors of teens but doesn't really grapple with that phenomenon with any new rigor. The talk is aimed at making us believe in (i.e. want to invest in) tech. The argument is that the opportunity is so big, even fools who just throw money in the pot stand to make money.
The trouble is that we're mostly aware of how awesome mobile penetration is and how vital social networks are. I'd much rather see the preso that brings new evidence and rigor to the table than a last-ditch effort to pick up conservatives who have been ignoring tech for the past 7 years.
If a16z wants to chart progress, would love to see some of these graphs posted online and live-updated daily/monthly.
There was a time, when the big investors were saying things like "even fools who just throw money in the pot stand to make money". I think it was around the year 2000. It worked out well for a few of those big investors, not so much for the fools.
Mobile puts users back in their proper role as consumers, where they belong. The personal computer, and the Internet, were originally seen as subversive tools of empowerment. Remember those "manifestos of cyberspace" from the 1990s? Remember cyberpunk? Well, that didn't happen. Most Internet traffic today goes to the top 10 sites. None of those sites are even run by companies with a broad shareholder base. The billionaires are firmly in charge.
"If you want a vision of the future, imagine a boot stamping on a human face - forever." - Orwell.
There is always a small market for a slow lumberjack, but there is zero reason to use anything other than the best software.
The best lumberjacks might live in a different corner of the world, and they are most likely busy. So you hire the slow one. But with software, things are different.
When software is eating the world, it means creating monopolies in all markets. All text messaging is being eaten by one company, there is only one social network, there is only Twitter, soon there will be only one taxi company, and there has pretty much always been only one professional photo-editing program. And the list just goes on forever.
How on earth do we collect taxes and finance social services and all that stuff when there is only a handful of companies turning a profit, they need fewer and fewer hands to build the service, and they are not tied to a geographical location?
Strange definition of "best software", if you're implying that a) "best software" wins, b) the top 10 sites are examples of "best software".
Google is still best at search for most sane metrics. But to claim that they're "best" at email, blogging, image services? Or even good? For the end users? Popular does not imply best (or even good). It implies marketing and mind-share.
I don't claim to be smarter than the whole market. So yes, Gmail is the best email application (assuming it has the biggest market share; I'm not sure about that) because that's what the market tells us. It might not be good, but it is the best.
But a better term would have been "best software service" or something like that, since at least early versions of Facebook were bad software, but it was already the best social media service because all your friends were using it. Also, Netflix might be really crappy software (I doubt it, but it could be), but because it has the best movie library combined with the best prices, everybody uses it.
There are areas in the world where Coca-Cola is cheaper and more readily available than clean water. Are you trying to say that the only reason for that is that most people prefer it that way?
I have at least 10 different apps I regularly chat on. This seems pretty typical. There are a bunch of better alternatives to Photoshop. Facebook has to keep buying up-and-coming networks at exorbitant rates.
The markets say that you are wrong. Pretty much everyone who edits photos professionally uses Photoshop. And as the presentation told us, WhatsApp is eating the whole text-messaging market. I don't use it and I don't like this trend, but that's what we've got. And the fact that Facebook had to buy Instagram and WhatsApp just reinforces my point: if they had not bought WhatsApp, WhatsApp might have replaced Facebook. So all markets are heading towards monopolies because there is no reason to use the second-best software.
You do realize that this is an US-centric view, right? From what I can tell China and South Korea have completely different sets of popular social networks.
Another issue with your "everything will be a monopoly" is the fact that new networks keep popping up - see Slack. The King creates those who want to dethrone him.
No, it's not a country-centric view. People do not use Weibo and Facebook; they use Weibo or Facebook depending on where they live. Facebook has a monopoly in most Western countries, Weibo in China. The reason Weibo dominates in China is political. That will change. It might not be Facebook that overtakes Weibo, but something will do it once China opens its Internet. Slack and its competitors are just forming a new market that will be taken over by one company.
Because software services are not tied to a geographical area, and because they are immaterial and will not run out of stock, there is no reason to use the second-best software. While hiring the slowest lumberjack to help the fastest one makes the project finish faster, the same is not true with software. You just use the fastest one.
Until quite recently, many different countries had different popular social networks, including Vkontakte (Russia), Orkut (Brazil), Hi5 (Peru) and so on.
http://readwrite.com/2009/06/07/post_2
Adoption is driven by network effects, so it depends on who got traction first. However, the global trend is obviously towards Facebook. It has the biggest network and ultimately the strongest network effect.
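As a toy illustration of that compounding (made-up numbers, not anything from the thread): if each batch of new users simply joins whichever network already offers them more people to talk to, the early leader keeps pulling further ahead.

    # Toy illustration of a network effect with made-up numbers: each period a batch
    # of new users joins whichever network already offers them more connections,
    # so the early leader keeps widening its lead.
    def simulate(periods: int = 5, new_users_per_period: int = 1000) -> None:
        big, small = 5000, 4000                    # hypothetical starting sizes
        for p in range(periods):
            if big >= small:                       # new users pick the more attractive network
                big += new_users_per_period
            else:
                small += new_users_per_period
            print(f"period {p}: big={big}, small={small}")

    if __name__ == "__main__":
        simulate()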
Mobiles and personal computers are different, but the difference will eventually blur.
What worries me, though, is that currently mobiles are not as great as PCs when it comes to learning and creating. And that will slow technology growth, as today's youth are consumers rather than hackers/creators.
Just because mobile is more prevalent does not make it more valuable; in fact, quite the opposite: the fact that it reaches more classes dilutes the spending power of the average user. What we have ended up with is a segment with extreme competition AND low app prices (the average PC app sells for at least 10x more than a mobile app).
But this is only the tip of the iceberg. Although we have tried to app-ify everything, I still prefer doing 99 percent of tasks on a device with a real keyboard and enough horsepower to prevent lag (in spite of rapid improvements, I still find the lag on my mobile device (1 year old now - HTC One M8) to be frustrating).
Pretty informative presentation, but the last slides are misleading.
It makes sense that the frequency of a word representing a certain technology across books over time is modeled by a normal distribution. However, the reasons such words get included in a text create such a huge bias/mental trap that word frequency shouldn't be treated as a relevant indicator of an industry's "sex appeal".
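For what it's worth, the bell-curve model being questioned is easy to picture. The sketch below just evaluates a Gaussian with a made-up peak year and width, standing in for a technology term's share of mentions over time; nothing about the shape explains why writers used the term, which is the bias being pointed out.

    # Made-up illustration of the bell-curve model discussed above: a technology
    # term's share of mentions rising, peaking, and falling over the years.
    # The peak year and width are arbitrary, for shape only.
    import math

    def mention_share(year: float, peak_year: float = 1995.0, width: float = 10.0) -> float:
        return math.exp(-((year - peak_year) ** 2) / (2 * width ** 2))

    if __name__ == "__main__":
        for year in range(1975, 2016, 5):
            share = mention_share(year)
            print(f"{year}: {'#' * int(40 * share)}  ({share:.2f})")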
The $30 fully featured smartphone is already here. I've been using a Microsoft Lumia 635 ($30 with no contract, $50 unlocked in the US) as my main device for a month. Sure, it doesn't take epic photos or play the latest mobile games, but everything you'd expect works surprisingly well. Compare that to a couple of years ago, when even a $100 Android phone felt crippled.
Great presentation. This focuses on new business opportunities enabled by technology... but what kinds of political changes would you expect as half the world's population gets access to cheaper information and communication? Which institutions would you expect to gain or lose?
There is a lot of meditation on the meaning of "Everyone gets a pocket supercomputer" here.
The thing is, mobile devices are not pocket personal computers. You might wish they were, and maybe someday the few million of you, out of the 1.5-2 billion annual mobile device customers worldwide, who wish it will have mobile devices you really can take complete control of.
Heck, out of the 300 million annual PC customers, how many of them buy PCs thinking "This is my personal computer?"
And if we really want secure, controllable, personal computers, we'll need to re-invent them because PCs long ago sold out to IT and monitoring and compliance and all that.
I think it is very "dangerous" to say mobile = smartphone = iOS + Android, at least that is what I hear people say.
What about all the billions of devices we'll get in clothes, toys, tracking, etc.?
You can make insanely fast and small hardware today, and it will be used for awesome stuff. That is not just because you have a smart-phone in your nasty little pocketses. :-)
"Mobile" -- an astoundingly popular
collection of new products? Yes.
"Changes everything"? No.
Mobile is new and popular? Yes, and at
one time in the US so were tacos.
New and popular are not nearly the same as
changing everything.
=== Use a Smartphone?
Could I use a smartphone to buy from
Amazon? Yes. Would I? Very definitely,
no!
Why not?
(1) If the user interface (UI) is a mobile
app instead of a Web page, no thanks.
Why? Because with a Web page and my Web
browser and PC, I get to keep a copy of
the relevant Web pages I used in the
shopping and buying. And I very much want
to keep that data for the future.
(2) Want to keep those copies of Web pages
on a mobile device? Not a chance.
Why? Because for such data, I want my PC
with its hardware and software. I want
the Windows file system (NTFS), my text
editor and its many macros, and my means
of finding things in the file system.
My PC also gives me a large screen, a good
keyboard, a good printer, a mouse (I don't
want to keep touching the screen -- in
fact, my PC screen is not quite close
enough for me to touch), ability to
read/write CDs and DVDs, backup to a USB
device, etc.
Do I want to backup to the cloud? Not a
chance. I backup to local devices.
Why? Because for cloud backup, money, a
cloud bureaucracy, the Internet, spooks,
and lawyers could get involved.
=== Business
My business is a Web site. I'm developing that on my PC, and will go live on a PC -- in both cases, a PC, not a mobile device.
Mobile users of my Web site? Sure: My Web pages should look and work fine on any mobile device with a Web browser up to date as of, say, 10 years ago.
=== New Business for A16Z
It sounds like A16Z likes mobile because for 2+ billion poor people smartphones are their first computer and are new and popular.
Okay, then, A16Z, here's another business you should like -- bar soap. Also, of course, just from the OP, toothbrushes. No way should we forget -- salt. Okay, of course -- sugar. Sure, one more -- toilet paper. Naw, got to have one more: plastic knives, forks, spoons, and drinking cups. Not to forget -- sell them batteries for their smartphones. Maybe even solar-panel recharging for their smartphones!
Especially for A16Z, got to have one more -- sure, Kool-Aid.
=== Summary
A computer is the most important tool in my life. Currently my PC is my computer. A smartphone most definitely does not replace my computer.
Actually, at present I have no use for a smartphone, a cell phone, or a mobile device and, so, have none.
Actually, some years ago a friend gave me a cell phone. Once I turned it on, some complicated dialog came up about my reading some contract and sending money. I turned the thing off and haven't turned it back on since.
Or, my PC has a network effect: It has all my data and my means of entering, storing, processing, communicating, and viewing data, all in one place. A mobile device cannot be that one place, and, due to the network effect, I don't want to split off some of my data into a mobile silo.
=== Denouement
This post was written, spell-checked, etc. with my favorite text editor, using my favorite spell checker, on my PC, and no way would I have wanted to do this post on a smartphone.
The claim of the OP is that "mobile changes everything". The OP frequently compares with PCs.
My point is that mobile does not change everything, and in particular does not replace PCs.
I illustrated with examples I know: my own usage. E.g., I am a very heavy user of computing -- no one with only 24 hours a day can expect to be a heavier user. Still, personally I have no use for mobile devices at all. None. Zip, zilch, zero. For me personally, mobile changes nothing. Can't use it. Don't want it. No sale.
"Mobile changes everything"? Not for me! Mobile doesn't replace food, clothing, shelter, cars, medical care -- or PCs.
More generally, for a user interface, there are a lot of advantages to just highly universal, device-independent HTTP, HTML, CSS, and JavaScript. As a user interface, apps are a really big step down -- not universal, device-dependent, can't save, access, or reuse the data, more security problems, etc.
Of course it's not about me -- I just used my examples and thinking.
You are welcome to give up your PC if you want -- I'm keeping my PC, and at least for now don't want a smartphone.
I intend to buy at least two new PCs -- I don't want a smartphone around even for free.
In particular, while A16Z is all excited that mobile changes everything, for my own usage I care less than 0.00 about anything mobile. For my business, my mobile strategy is just to have really simple Web pages.
The A16Z data presentation is from okay up to quite nice, but their conclusions from their data are junk.
A16Z is just looking for attention and is passing out nonsense.
Maybe with such attention they hope to get deal flow. So, maybe the hint is: "Entrepreneurs, send us your mobile business plans -- we're eager to write early-stage equity funding checks for such." Likely nonsense: No doubt among what else hasn't changed are VCs' criteria for writing early-stage checks -- traction significant and growing rapidly, for a market and a product for that market that might quickly be worth $1 billion, and where the entrepreneurs are desperate for cash and willing to sign a bad business deal.
Further, do you really want people thinking such nonsense on your BoD?
I want to debunk their nonsense. Just why they pass out such nonsense I don't know, but a guess is that they believe it will help them with their LPs. We're talking some really gullible LPs.
> My point is that mobile does not change everything, and in particular does not replace PCs.
The point is, for a vast majority of users, it does replace PCs. They were never writing software or using Photoshop. They were surfing the web, buying stuff, and playing solitaire. Those people can do all that, and so much more, on their mobile phones.
> Still, personally I have no use for mobile devices at all.
You are in the minority and you should just accept that.
But the data and sources, as good as they were, and interesting, even astounding, didn't rationally support their claim that "mobile changes everything".
Moreover, their frequent comparisons with PCs flop: So far mobile just will not do much to replace PCs. The world has a lot of tacos, too, and they won't replace PCs either.
Broadly, mobile is mostly just a new product (a collection of new products, a new product category). It does replace PCs for some relatively light work formerly done on PCs, and it has some new functionality and uses from being mobile, having GPS and a camera, etc., but, still, it just does not yet replace PCs, and the data and sources do not establish that it does.
For any good theory, you need some examples, and I offered mine.
Basically, for any really heavy PC user, mobile is not a replacement.
But mobile is changing everything. I don't have a fixed phone in my house; I use my mobile phone. I don't use either of the two expensive DSLRs in my house; I use my phone. In two months recently in the US, I never hailed a cab, but used an app to hail and track a car. I read books on my phone, I edit photos on my phone, I track site analytics and app sales on my phone, I play games on my phone, I navigate using maps on my phone, I record notes on my phone.
I switched from a desktop to a laptop three machines and 10+ years ago and have never considered buying a desktop since. Modern mobile devices (Surface-style tablet/laptop hybrids) are very capable these days and not far from the MBP I use. Once phone-form-factor devices are more powerful and we have virtual displays and a keyboard replacement, the needle will shift further. That level of capability will cover almost everyone who uses a computer for work.
Don't get me wrong, I don't work on my phone. I feel cramped with fewer than three monitors. I like to spread out. But I also like to scrawl notes in pencil on paper, and that doesn't mean that computers haven't taken over the offices of decades past.
And there's no denying that the mobile form factor - a computer that you carry around with you easily - is dominating and will go further. Someone who doesn't use one will be a blip.
Maybe all those things are possible on mobile, but they're just not very good. Edit a photo? For real? With one finger? Books one paragraph at a time? That's gotta be slow. Games - not immersive 3D ones, maybe dumb puzzle games or side-scrollers with a tiny part of the screen visible at one time. And navigation is very hard on a phone - turn-by-turn is the norm, which totally kills any global awareness and turns you into a cog. I can't see enough of the map on a phone to even begin to plan a route.
I'm heavily biased - I can't exist without two large screens, a mouse, and a keyboard for what I use a computer for. But from my point of view, every attempt to use a mobile device ends in frustration and despair - they are so slow, such a tiny bandwidth for interaction; information comes in droplets. I'm unwilling to dumb myself down to their level.
> Edit a photo? For real? With one finger?
Do you use 2 mice to edit your photos? Most computer graphics people I know edit photos with a pen (also a singular digit). The main thing I miss when editing photos on my laptop is the ability to reach out and touch it, and normally with 1, 2 or more fingers.
The main problem is the damn iPad and its 'fat finger' syndrome, but I have a pen-based 8-inch tablet that I wish had more capable photo-editing software, because it turns editing photos into a dream. (I am not a professional, and could never get the hang of digitizing pads; I like to see what I am editing directly under the pen.)
This isn't about productive work for a narrow grouping of people but about life. For 99% of people (the huge growth in mobile the slides talked about), editing a photo is applying filters and tweaking some sliders. It's not Aperture and Lightroom or Photoshop.
My Garmin has about the same screen size as my phone. I've used both successfully for navigation many, many times.
Took a DSLR on a recent two-month trip and outside of Yellowstone (zoom lens...), I barely touched it. My phone takes excellent photos, takes slo-mo videos, auto-stitches panoramic photos instantly, has timelapse options, HDR apps. And it has more storage, built-in sharing/social options and it's far, far smaller.
> In two months recently in the US, I never hailed a cab, but used an app to hail and track a car.
I haven't hailed a cab in years -- I rarely need a cab. If I needed a cab a lot, sure, maybe I'd get a cell phone. If some Uber app was a lot better still, and I wanted to use Uber, then I'd get a smartphone.
I live out in the 'burbs -- deer, groundhogs, etc. in the back yard -- where the cab service sucks.
> I track site analytics and app sales on my phone
The times I try to read graphs of data, usually I copy the file of the image to disk, read it with Microsoft's Picture and Fax Viewer, and zoom in a lot: Commonly people who draw graphs like the visual aspect but come close to just ignoring the axes and commonly have the text way, way, way too small -- I can't read the text at all without about 3X zoom. It'd be worse on a smaller screen.
So, you are using a smartphone to read data and less to write it. To draw a graph, I use Excel and scream bloody murder until I get out my notes on the hidden, magic left/right single/double clicks needed to get the standard things done, e.g., get the font sizes up about 4X, make as much as possible black and bold, etc.
It turns out that, then, for the people I tend to show graphs to, I usually print the graphs on paper.
For photos, it turns out that I bought for about $2 a digital camera -- cute little thing, sold shrink-wrapped on a card. Apparently it has the camera guts of a smartphone. It also has a USB socket. And, yes, Microsoft PhotoDraw will read an image from a USB port. So, I can take and print some images.
But, soon, I saw that my sack of Nikon camera equipment, especially with my Honeywell smart flash, takes much better pictures. To develop the pictures, I just take the film to Sam's Club or Wal-Mart. Sadly their scanning resolution is much lower than my pictures deserve, but otherwise I get good pictures. Sure, someday I will cough up $10,000 for a new bag of Nikon equipment, but with CCDs instead of film. I'll get a copy of Photoshop, a color printer, etc.
No joke, smartphone cameras (with some interesting principles of optics on the advantages of such a small lens, short focal length, and number of pixels per inch in the detector) are no doubt now taking each year, maybe each month, more photos than were taken in all the history of film cameras. But the quality is like old snapshots.
E-mail on a smartphone? I won't do it: To me e-mail is important; I have some good ways to handle it; and those ways make good use of my PC and its software and won't carry over to a smartphone. I won't put my e-mail in a smartphone silo but want to keep it on the same computer as all my other important data -- data to/from e-mail needs a big fraction of my collection of all my data.
So, right, some of the easier, read-only work on a PC can move to a smartphone. And a smartphone has some new hardware that permits some new uses.
But, replace a PC? Not for any very serious PC user. But the A16Z piece kept comparing with PCs -- and that's about as relevant as comparing with lawn mowers.
It was just such a really good story -- a PC is a computer, a smartphone is a computer, a lot more smartphones are being sold than PCs, some relatively light work done on PCs can be done on smartphones, a smartphone can also be used for some new work that needs the new hardware, ergo, now the newsies and A16Z have a story -- smartphones are changing everything, including PCs. Nonsense. It's just a tricky, deceptive story.
And that's a lot of what the OP had to say: A lot of that data was interesting, but, net, smartphones are no more replacing PCs than, say, an electric bicycle can replace a car. A lot of tacos are being sold, more than PCs, but that doesn't mean that tacos are replacing PCs.
My simplest point: Mobile is not replacing PCs. Some people don't like me to say this, but it's just dirt simple that I'm correct. That you like the big monitors on your PC is part of the best evidence -- until there are some suitably good special glasses, those big monitors will remain a big reason mobile devices won't replace PCs. Then there's the mouse, the keyboard, the printers, the old software -- dirt simple argument.
You're arguing a straw man. Nobody is saying that no PCs will be sold any more, or that all work on PCs can or will be replaced by mobile applications. The fact is (and all the numbers prove it, from the sales numbers of phones and PCs to the surveys and tracking data on use of mobile and desktop software) that many users can do on phones what they used to need a PC for (surfing, email, Facebook), and that they have so little use for a PC that, when facing the choice between phone and PC for cost reasons, they choose the phone. I'm not sure how you're even arguing this is not true; a blind man can see it - you're arguing the equivalent of creationism.
Likewise, you boast that you are such a heavy PC user, which is exactly what puts you three or four standard deviations away from the modal user. So your diatribe about your own usage habits is statistically insignificant, and thus a worthless sample.
I understand that phones and tablets have replaced typical desktop usage for the most casual section of the user population, but I also find it quite sad.
These people (if studies and surveys are to be believed) basically 'surf' the internet through 4 or 5 particular apps on their phones, and those apps form the filter bubble that is their entire experience of the internet.
They don't even use browsers for the most part. Maybe these people never used a desktop to 'surf' the internet and discover new things in the first place, but now they never will.
>Actually some years ago a friend gave me a cell phone. Once I turned it on, some complicated dialog came up about my reading some contract and sending money. I turned the thing off and haven't turned it back on since.
"Computers" -- an astoundingly popular collection of new products? Yes.
"Changes everything"? No.
Computers are new and popular? Yes, and at one time in the US so were tacos.
New and popular is not nearly the same as changing everything.
=== Use a Computer?
Could I use a computer to write a novel? Yes.
Would I? Very definitely, no!
Why not?
(1) If the medium is a floppy disk instead of a piece of paper, no thanks.
Why? Because with a piece of paper and a sheet of carbon paper, I get to keep a copy of the relevant pages I type. And I very much want to keep that data for the future.
(2) Want to keep those copies of pages on a computing device? Not a chance.
Why? Because for such data, I want my typewriter with its hardware and paper. I want the special correction ribbons, my editor with his many years of experience, and my means of finding things in the file cabinet.
My typewriter also gives me a good keyboard, a good (and built in) printer, a mechanical freedom from power (I don't want to keep close to power outlets -- in fact, my writing desk is not even close to any outlets), ability for anyone with a pair of eyes to read/write on my work without needing a whole computer of their own, backup via Xerox and offsite backup via fax, etc.
Do I want to backup to tape? Not a chance. I backup to local acid free archival quality paper.
Why? Because for tape backup, money, a new hardware, power concerns, spooks, and technicians could get involved.
=== Business
My business is writing books. I'm developing them on my typewriter, and each page goes live as soon as I finish the page.
PC readers of my books? Sure: My pages will look and work exactly the same on any pair of eyes, even with a pair of glasses hundreds of years old.
=== New Business for Microsoft et al.
It sounds like Microsoft likes computers because for millions of businesses MS-DOS will run their first computer and it's new and popular.
Okay, then, Microsoft, here's another business you should like -- coffee. Also, of course, just from the OP, paperclips. No way should we forget -- salt. Okay, of course -- sugar. Sure, one more -- toilet paper. Naw, got to have one more, plastic knives, forks, spoons, and drinking cups.
Not to forget -- sell them service level agreements for their computers. Maybe even backup word processing software for their computers!
Especially for Microsoft, got to have one more -- sure, Kool Aid.
=== Summary
A means to write is the most important tool in my life. Currently my typewriter is my means of writing.
A computer most definitely does not replace my typewriter.
Actually, at present I have no use for a computer, a radio, or a television and, so, have none.
Actually some years ago a friend gave me a computer. Once I turned it on, and some complicated dialog came up about my reading some End User License Agreement and sending my first born child to someone named Steve. I turned the thing off and haven't turned it back on since.
Or, my typewriter has a network effect: It has made all my writing and means of entering, storing, processing, communicating, and viewing data universal to any literate sighted person. A computer cannot be that one place, and, due to the network effect, I don't want to split off some of my data into an immobile silo, tethered to a power socket and only able to communicate with people who have also sent their first born to Steve (if they sent their first born to Bill instead - no deal).
=== Denouement
This post was written, spell-checked, etc. with my favorite mechanical typewriter, using my favorite spell checker (the OED, hardback) on my writing desk, and no way would I have wanted to do this post on a computer and then try to figure out how to plug it into the power and the phone socket.
Instead the page went live on the notice board of my local library where people from all walks of life can easily view it.
But I was writing my Ph.D. dissertation just as daisy wheel printers were becoming popular. Then there was no doubt: I typed my dissertation into a text editor on a computer (from my non-academic work I was already really good with those two tools) and printed it out on a daisy wheel printer.
That approach was much better than a typewriter because of the accuracy (I have horrible aptitude as a typist and desperately need the power of a text editor and computer to make corrections), the speed (I could get a new copy at 30 characters per second), and the ability to make revisions quickly and easily -- and I made a lot of revisions before I handed in a copy. Then for a prof I had to add a few words of clarification to a paragraph -- did that, printed the whole thing again, and returned the result to him on paper in 24 hours. Yes, the only place I could do that word processing was at my office, but that was still better than my typewriter at home.
More generally, one of the most important uses of computers has been document preparation, and that is still a big need, if only for a post at HN, where I'd much rather have my computer than a smartphone.
I might have become a tenured prof, but the main bottleneck was just getting my academic mathematical word whacking done. Net, I couldn't do it. Flatly. No way. The word processing group of the university I was in couldn't do it. I couldn't do it with just a typewriter. Halt. Full stop.
Doing the academic research? Fun and easy. Writing up the math? The same. Getting the typing done? Just impossible.
Now I can do mathematical academic word processing with my good setup of D. Knuth's TeX, etc. on my PC. For the last paper I published, after I gave up on being a prof, I used TeX -- it worked great.
But, sure, a smartphone wouldn't help with TeX. TeX on a smartphone? F'get about it.
Net, a PC is a great tool for many things, e.g., nearly all of document preparation, much better than a typewriter, even though a typewriter has some advantages, as you point out. For nearly all of document preparation, the extra cost, complexity, etc. of a PC are very much worth it. Really, net, PCs replaced typewriters.
So far I have no use for a smartphone. One reason is that I stay at my PC working on my startup. My phone is right here, with its signal running through my PC's fax modem card so that I can use my text editor to find and dial phone numbers -- and also save phone numbers.
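As an aside, that find-and-dial trick is easy to picture. Here is a minimal sketch of the kind of thing such an editor macro might hand off to, assuming a Hayes-compatible modem and the pyserial library; the port name and phone number are made-up placeholders, not details from the comment above.

    # Minimal sketch: tone-dialing a number through a Hayes-compatible fax modem,
    # roughly what a "find and dial" editor macro might call. Assumes pyserial is
    # installed and the modem sits on COM1 -- both are assumptions for illustration.
    import time
    import serial

    def dial(number: str, port: str = "COM1") -> None:
        with serial.Serial(port, 9600, timeout=1) as modem:
            modem.write(b"ATZ\r")                           # reset the modem
            time.sleep(1)
            modem.write(f"ATDT{number};\r".encode())        # dial; ';' keeps it a voice call
            time.sleep(1)
            print(modem.read(128).decode(errors="ignore"))  # show the modem's reply, e.g. OK

    if __name__ == "__main__":
        dial("5551234")  # hypothetical number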
In time a mobile device may replace a PC. But it would need:
(1) A good replacement for a keyboard. Maybe voice or brain waves would work if they can reduce the detailed complexity of much of current keyboard input, e.g., HTML and CSS markup.
(2) A good replacement for a mouse -- and a finger on a screen is not good enough.
(3) A way to view output much better than a small screen. Sure, maybe we'll be able to use some special glasses. Sure, if 3D is to play a big role, then special glasses may be the way.
(4) Appropriate versions of, or replacements for, a lot of crucial PC software, e.g., my favorite text editor, Knuth's TeX, PDF writers, lots of old programming languages, libraries, source code, etc. I have a lot of such software I very much do not want to be without.
(5) Backup, say, to the cloud, that I can trust enough to replace tape, writable DVDs, external USB hard drives, etc. Lawyers are a biggie threat.
(6) Enough trust in the Internet finally to depend on it nearly totally.
(7) Solid solutions for the many, severe security threats of mobile devices.
(8) A way to get data to replace what I now get just by saving Web pages. E.g., when I shop for or buy things on the Internet, I want to keep the associated Web pages (a rough sketch of that kind of archiving follows this list).
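To make item (8) concrete, here is a minimal sketch of saving a local, timestamped copy of a page you just used; the URL and folder name are made-up examples, and a real setup would save the full page with its assets rather than just the HTML.

    # Minimal sketch of item (8): keep a local, timestamped copy of a Web page
    # you just used, so the record stays on your own disk. The URL and archive
    # folder below are hypothetical examples.
    import datetime
    import pathlib
    import urllib.request

    def archive_page(url: str, archive_dir: str = "web_archive") -> pathlib.Path:
        folder = pathlib.Path(archive_dir)
        folder.mkdir(exist_ok=True)
        stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
        target = folder / f"{stamp}.html"
        with urllib.request.urlopen(url) as response:    # fetch the page
            target.write_bytes(response.read())          # keep a local copy
        return target

    if __name__ == "__main__":
        print(archive_page("https://example.com/receipt"))  # hypothetical receipt page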
Sure, maybe (1)-(8) will come. E.g., for storage, maybe some of nanowires, what HP is doing, etc. will mean that all my storage can be in little cubes, about the size of a sugar cube, that I can write and store in my side desk drawer, and also on the Internet in case of fire. Also stick them under the insulation on the floor of the attic where there's no way lawyers can find them.
But, just for now, the biggies are some comparatively simple things -- keyboard, mouse, screen, laser printer, DVD R/W, and a few more. E.g., I still use my laser printer for some crucial things. Indeed, I still keep my daisy wheel printer as the best way to address envelopes for USPS -- they are still in business, not yet totally replaced by the Internet.
Here's another thought: PCs are here, and so far there is nothing to replace them. But as it slowly becomes possible to replace PCs, they will be replaced not by mobile devices but slowly by a sequence of incrementally better PCs, some of which might have the form factor and power requirements to permit being mobile.
Over time, maybe it will be possible to replace PCs. Sure, Windows XP replaced Windows 3.1 and PC/DOS -- that was incremental, as I believe the replacement for a PC will be. In strong contrast, smartphones and other mobile devices won't replace PCs, at least not for a while.
Look, A16Z, there are some uses for smartphones: Teenage girls are genetically compelled to gossip, 24 x 7 if they can, so they need at least cell phones. Nearly everyone who works from a panel truck wants at least a cell phone. Cell phones are a good replacement for the radios that taxicabs used to use, and now smartphones are crucial to Uber, etc.
But for the billions of smartphone users, the reason was simpler: It was much cheaper just to put up cell towers than to lay copper cable. Ergo, smartphones instead of land lines. But don't expect that several billion people in mud huts will be using their smartphones to order four-foot-wide TVs from Amazon.
With some irony, as the Internet improves, the need to move around will lessen and, then, so will the desirability of being mobile. Or, having two locations, one at home and one more at an office, and driving between the two, is a bummer -- a waste of time, money, and energy. Being mobile is often a bummer.
I come at the mobile "changes everything" claim just from my own usage of computers and mobile devices.
I still use my laptop as a creation device, mainly for your point (1) above; I do like a good keyboard and mouse. But I also like touch interactions for certain things, and I think my next 'main' computer will have a touch screen.
I use my tablets mainly as consumption devices. I love reading, and I find the iPad has a great form factor for reading certain types of books - tech books, graphic novels, etc. But I also have an 8" tablet with a pen, which I love, because the one thing I still love to do by hand is brainstorm, mind-map and just plain sketch out ideas. It's the main thing I miss being able to do on my main PC. And I love the mobility and battery life of tablets for when I am consuming around the house or in the garden.
My smartphone is mainly just a phone, but it is also my tap of flowing information. Be it a map when I am in a new city, or a quick lookup on Wikipedia to settle a debate over beers, I find it invaluable for its niche.
So maybe I don't take the view you do that they have to replace the PC; I just find my life is much enriched when I use all of them together, each for what it is best at. For example, if I am working through a coding book, I find it much easier to have the book open on the iPad next to the keyboard than to have the PDF open on the second monitor. A quick tap of the tablet and the page turns while I still have my terminal focused and ready to be typed into.
And as for the billions of people coming online with cheap smartphones - I don't expect them to be shopping on Amazon, but I am working on (and I hope others will be too) a way for them to get the information they need, when and where they need it, to improve their lives. It could be something as simple as an African farmer checking his crops and seeing weird spots on them, taking a photo and sending it to a forum of farmers, and maybe within minutes receiving advice on what he should do next to protect his crop.
Not everyone has the option (or desire, maybe) to be tied to their home and never have to leave. Being mobile should be a choice; then it may not be quite the bummer you think it is.
We essentially agree. But in places you are misinterpreting my position. I have nothing against smartphones and will get one when I have a good use for it -- which so far I do not.
> So maybe I don't take the view you do that they have to replace the PC
I don't think that smartphones have to replace PCs; and I believe that for now they can't replace PCs for all but a small fraction of heavy PC users.
My beef with the OP was the frequent comparisons with PCs, thus suggesting that smartphones are about to replace PCs: There were a lot of nice graphs and nice data, but those didn't establish the suggested replacement.
Beyond the hints of replacement was the claim "mobile changes everything". Well, it won't change PCs for a long time. Proof: Big screens, mice, good, full keyboards, printers, CD/DVD R/W devices, old software. QED. Dirt simple.
I do differ with you on reading computer documentation. I really, really want that documentation on the same computer I am using to write the software: For my software project, I have about 6000 Web pages, PDF files, etc. of documentation, in four collections: Windows; SQL Server and ADO.NET; Visual Basic .NET; and TCP/IP and ASP.NET.
In each collection I have a simple flat ASCII file with, for each such document in that collection, the title of the document, an abstract of the document, often some notes of my own, often relevant short code samples from the document, the original URL of the document (except for the relatively few documents I wrote myself), and the file name, date, size, etc. of the document on my disk. My favorite editor is terrific for searching, reading, and revising those four files. Terrific.
Then, in my code, when I make important use of such a document, I include in the file of my code a comment with the tree name and title of the document. Then, when reading such code, in my editor one keystroke displays the document for me.
So, maybe I have a homegrown version of an IDE with Intellisense or some such. But it works for me. I document the heck out of my code. Any code that is not just trivial has such comments. Currently the source code for my project has about 80,000 lines with only about 18,000 programming language statements -- we're talking a lot of documentation, and a lot of that documentation is references back to my collection of 6000 documents.
So, net, I want the four collections, the 6000 documents, my favorite text editor with my 200 or so macros, and the code I'm developing all on the same computer -- call it a network effect.
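That index-plus-keystroke workflow is easy to sketch in a few lines. Below is a rough, hypothetical version: the index file name, its field layout, and the helper names are inventions for illustration, not the actual setup described above.

    # Rough sketch of the documentation index described above (not the actual setup).
    # Assumes a hypothetical flat ASCII index file, doc_index.txt, with records
    # separated by blank lines and fields of the form:
    #   path: <tree name of the saved document on disk>
    #   title: <document title>
    #   url: <original URL>
    #   abstract: <short abstract>
    import os
    import subprocess
    import sys

    def load_index(index_file: str = "doc_index.txt") -> dict:
        """Map each document's tree name to its record of fields."""
        index, record = {}, {}
        with open(index_file, encoding="ascii") as f:
            for line in list(f) + [""]:              # the trailing "" flushes the last record
                line = line.strip()
                if not line:
                    if "path" in record:
                        index[record["path"]] = record
                    record = {}
                elif ":" in line:
                    key, value = line.split(":", 1)
                    record[key.strip()] = value.strip()
        return index

    def open_document(tree_name: str) -> None:
        """The 'one keystroke': given a tree name from a code comment, show the document."""
        record = load_index().get(tree_name)
        if record is None:
            print(f"No index entry for {tree_name}")
            return
        print(record.get("title", ""), "--", record.get("url", ""))
        if os.name == "nt":
            os.startfile(tree_name)                          # Windows: open with the default viewer
        else:
            subprocess.run(["xdg-open", tree_name], check=False)

    if __name__ == "__main__":
        open_document(sys.argv[1])   # e.g. python open_doc.py docs/adonet/connection_pooling.html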
For screen area, I'm short on that, but I have a little code I wrote that does a useful screen rearrangement -- it moves the windows, preserving the Z-order, so that the UL corners of the visible windows are equally spaced on a line from roughly the top center of the screen to the left center of the screen. That way I can make good use of about 20 windows open at once.
So, a lot of those 20 windows are for documentation. Sure, usually soon I close such windows and then rearrange the remaining ones. For knowing which window has what, I can see the UL corners of each of the windows, and I also keep in mind the Z-order and position new windows at the lower left (LL) of the screen.
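For what it's worth, the geometry of that rearrangement is simple. Here is a rough sketch of just the placement math, with a made-up screen size; the call that actually moves each window would depend on the OS and is left out.

    # Rough sketch of the window-cascade geometry described above (not the original code).
    # Given N visible windows, compute N upper-left (UL) corners equally spaced on the
    # line from the top center of the screen to the left center, in Z-order.
    def cascade_positions(n_windows: int, screen_w: int = 1920, screen_h: int = 1080):
        start = (screen_w // 2, 0)             # top center of the screen
        end = (0, screen_h // 2)               # left center of the screen
        positions = []
        for i in range(n_windows):
            t = i / max(n_windows - 1, 1)      # 0.0 for the first window, 1.0 for the last
            x = round(start[0] + t * (end[0] - start[0]))
            y = round(start[1] + t * (end[1] - start[1]))
            positions.append((x, y))           # where window i's UL corner goes
        return positions

    if __name__ == "__main__":
        for i, (x, y) in enumerate(cascade_positions(5)):   # e.g. 5 windows on a 1920x1080 screen
            print(f"window {i}: UL corner at ({x}, {y})")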
E.g., having such windows of documentation open on a tablet would be a bummer since as I write code I add documents to the collection and commonly cut and paste documentation or code from the documents into code comments, notes of my own, etc.
E.g., when reading code I really like that one keystroke to show me the relevant documentation -- couldn't easily do that if the documents were on a tablet.
Mobile device? Don't need one. Can't see how it would help. I'll get one when I need one. It won't replace my PC -- for a long time.
1. WhatsApp is dependent on cellular infrastructure, which you count into your 10k engineers.
2. "Erlang was designed with the aim of improving the development of telephony applications."[1] So it powers/powered a large part of the cellular infrastructure.