Windows 10 on ARM (msdn.com)
450 points by vlangber on May 11, 2017 | 277 comments


Microsoft are seriously killing it in emulation these days.

First I was amazed with them getting Xbox 360 (PPC) games running at full speed or better on the Xbox One (x86) and now we have x86 on ARM.

They have some wizards working on this stuff.

Edit: I wonder if Dave Cutler is involved in this x86-on-ARM stuff? I think he was involved with the Xbox 360/Xbox One work.


As far as I know, they don't emulate Xbox 360 games on the Xbox One; they port and recompile them for the new console, creating entirely new binaries. Not much emulation going on there. Correct me if I'm wrong.

EDIT: https://en.wikipedia.org/wiki/List_of_Xbox_360_games_compati... "Unlike the emulation of original Xbox games on the Xbox 360, the Xbox One does not require game modification, since it emulates an exact replica of its predecessor's environment – both hardware and software operating systems."

I stand corrected.


It uses emulation. While it is true that the user needs to download a 'game image' rather than play the game directly from disc, that is because the image is a virtual hard disk containing the game and a customised Xbox 360 virtual machine with tweaks for that game. It is similar to PC console emulators that have specific configs for certain games.


I thought they do a static recompilation too, from the original binary, but I can't look it up now.


That's actually probably the case. The Xbox 360 banned making arbitrary pages executable (not even the kernel could do it; the hypervisor call to mark a page as executable had to come with a signature made with Microsoft's private key).

So they probably can get away with static recompilation.


http://support.xbox.com/en-US/games/game-setup/play-xbox-360...

> Xbox 360 games run within an emulator on Xbox One, which means you'll be able to use most features from both systems while playing games, such as screenshots, broadcasting, and Game DVR.


AFAIK you're right; developers have to upload a separate build for Xbox One-compatible games.


Which is why they're slowly trickling them out a few at a time (though there are some gems in there!).


From your quote, it's probably neither.

It looks like API translation, like Wine does... which is rather easy if you wrote the original API.


The Xbox 360 used the PowerPC CPU architecture, and the Xbone is x86, so there has to be emulation or recompilation involved.


They actually did namedrop Dave Cutler in the main keynote earlier, although in the context of talking about the Linux Subsystem.


That must kill him inside.


I would think not. It finally showcases the flexibility he put in to the core of NT.


Not that "finally": Windows NT was designed specifically to support POSIX and OS/2 alongside Win32 apps, and the first few versions actually shipped those subsystems.


Though WSL required more work than SFU or OS/2: they needed a new lightweight process model, for example. That's more serious internal work.


Eh, the WSL stuff avoids most of the NT subsystem style flexibility.


I interpreted that as sarcasm, actually.


He isn't known for being a fan of Unix.


Yeah I'd hate to be tied with extreme scale emulation efforts that serve millions of users built by multi billion dollar corps. Shame! /s


Maybe it's punishment for screwing up? I just picture him grinding his teeth and cursing under his breath the whole time....


They make a very good Android emulator as well.


Which is, sadly, discontinued. It's still available for download, but the latest OS image is 6.0, and many versions are missing.


They pressured Google. That was the point. Google does not give a flying fuck about developer experience. I am not saying that is wrong or right; I am saying this was the case for Google for many years. They just didn't give a fuck about the experience.

But look at the timing. Everyone had been nagging Google for years about the slow emulator. They didn't do shit for years. Microsoft released an emulator, which was superior in almost every aspect. Just after that, Google released a new emulator, and now it makes sense to discontinue that project.


> But look at the timing.

Coincidentally, that was around the same time that Microsoft filed an Amicus brief supporting Oracle (in Oracle v. Google) arguing that APIs should be copyrightable[1].

Did that Project Astoria implement Android APIs? Would Google go after Microsoft in court had they officially released this? My guess is "yes" to both these questions. Nothing says wilful infringement louder than submitting a legal document to the courts spelling out why you think what you're doing (Astoria) is wrong.

1. http://www.groklaw.net/articlebasic.php?story=20130221153759...


I think you are wrong on the Google front. Google did everything to please devs. Take Apple, for example: they won't think twice about refusing your app. Google, OTOH, was so lax in its app store since the early days that they let all sorts of trash inside. They just wanted the devs more than quality.


Wait till you have someone file a wrongful copyright complaint against your Google Play app. Google took ours down two weeks ago and won't respond to our appeals, despite our clearly owning the copyright for all work done on the app.


Just like Chrome extensions that get removed without warning and you can't even get an answer of why and how you can fix the situation (submitting it for manual review fixes nothing).


Are you the developer of Frozen Elsa Anna Lego Movie Pregnant Tooth Extraction and Foot Doctor 7 by any chance?


I'm guessing you never had to use the old emulator.


How many platforms does that Microsoft emulator work on? One? Unlike Microsoft, Google supports 3 development platforms and so their new emulator also had to support 3 platforms. Also, the new emulator was written from scratch and that takes time.


> Unlike Microsoft, Google supports 3 development platforms and so their new emulator also had to support 3 platforms

No it didn't. If platform specific solutions make sense then we should use platform specific solutions. Not everything has to be cross platform and 100% identical across all platforms.


Except when you develop a cross platform IDE and emulator that requires feature parity across them all.


Feature parity does not mean the exact same tools on every platform. Trying to be identical across 3 platforms always means adopting the lowest common denominator and always results in being equally shit on every platform.

It's astounding that people still try to ignore the last 30 years of evidence of this.


So Linux users should get the shaft because they're the minority? Android Studio and the emulator work very well on all 3 platforms.

>Trying to be identical across 3 platforms always means adopting the lowest common denominator and always results in being equally shit on every platform.

In that case we're fortunate that Xcode isn't cross platform.


> So Linux users should get the shaft because they're the minority?

Linux users should get the best possible experience on the system, so should the windows users. If a better emulator is available on windows why should they get shafted? Linux tools even have the advantage that they may not need full emulation in the first place, but instead we end up with the worst of both worlds.

> Android Studio and the emulator work very well on all 3 platforms.

In my experience nothing about android development works very well on any platform. I'm not a huge fan of MS, but windows phone development and tools are an absolute dream in comparison.


>Linux users should get the best possible experience on the system, so should the windows users. If a better emulator is available on windows why should they get shafted? Linux tools even have the advantage that they may not need full emulation in the first place, but instead we end up with the worst of both worlds.

You seem to be under the impression that if you develop a cross platform solution that it's going to be inherently inferior. I disagree with that and the new Android emulator is proof of that. It's superior to the MS Android emulator in every way and it's also available on all 3 platforms.

>In my experience nothing about android development works very well on any platform.

In that case you probably don't have much experience developing for Android.

>I'm not a huge fan of MS, but windows phone development and tools are an absolute dream in comparison.

If developing for windows platform is a dream then why is their app store a cesspool filled with horrible looking apps?


I've previously done iOS development (on the side) and tried Android development in the last few weeks, and it feels like a massive, horribly hacked-together pipeline. Why go through it, recompiling and modifying .class files, when you could instead build a compiler that does it directly?


So you could spend three times as much money and design completely different products for each platform, or you could have one codebase with a limited number of special cases for the different platforms and invest the saved money into making a better product...


How did you calculate 3 times as much? No one said nothing can be shared across platforms.

And again, we've got 3 decades of evidence that show precious few good cross platform solutions, so investing the saved money into making a better product doesn't seem to work.


That is no excuse for letting the emulator stagnate for several years without any visible improvement in performance.

They hire PhDs capable of acing those incredible whiteboard and phone CS exercises for something.

Also, apparently all those PhDs weren't able to do what the GenyMotion guys and girls were happily developing.


How many platforms is GenyMotion available on?


Well, Intel introduced HAXM a long time ago. It made it run quite a bit faster.


It'd make sense if Hyper-V weren't monopolistic. You can't run both at the same time; or rather, no other virtualization host is allowed while Hyper-V is on. Which sucks.


The problem there, if I'm not mistaken, is that the CPU virtualization extensions can't handle multiple hypervisors using them at the same time.


That's true but it sure is convenient for Microsoft, and they are exploiting it as far as they can with the Credential Guard feature.


Sure is convenient for Red Hat that you can't run KVM and Xen at the same time.

Sure is convenient for VMware that you can't run VirtualBox and VMware Workstation at the same time.

Absolutely a business decision by Microsoft, and not a limitation of the x86 chip.


Also other virtualization hosts on the desktop don't have this problem at all. You can run all of the others at the same time.


where do you see it discontinued?


I suggested an improvement and got back an email.

https://blog.rthand.com/post/2017/05/02/good-bye-visual-stud...


From the documentation: Make sure “x86” is selected as the architecture. [...] Remember to make sure your APKs have code built for x86!


Where can I download it? Latency on Bluestacks is awful.


https://www.visualstudio.com/vs/msft-android-emulator/

If not using Visual Studio, search for 'Acquire on its own' on the page. There is a link in that section.


https://www.visualstudio.com/vs/msft-android-emulator/

It uses Hyper-V so you will need a Pro version of Windows.


I've always been kind of bummed out that Hyper-V is disabled on the OS layer for basic Windows versions, even if the hardware supports it.


They have to justify the extra price somehow... I don't blame them too much; odds are you are a "pro" user if you want/need Hyper-V. I really like Hyper-V (except the network weirdness on occasion).


If only they'd add GPU passthrough on non-server Windows editions...

At the very least make it a Pro feature.


Because it breaks other VMs - Vmware, Virtualbox, Intel Haxm. They don't work if Hyper-V is enabled.


A better bluestacks alternative would be anbox.


I believe Anbox employs Linux kernel modules, so WSL would need to load those to work under Windows.


... or implement the interfaces provided by Anbox.


It would be, if it worked on anything except Ubuntu, that is. When I tried it maybe a month ago, no amount of installing and tweaking would get it to work on Arch.


Don't forget they also did Xbox (x86) --> Xbox 360 (PPC) emulation. It worked pretty decently, too.


Makes me wonder why they don't do original Xbox emulation on Xbox One -- seeing that both are x86.


Likely that the games would look rather poor when upscaled to 1080P from 240P/480i

Also probably licensing issues, for games with tech that old


My hope is Microsoft will use these emulation skills for security, too. I'd like to be able to instantly run an app of my choice in a secure virtual environment. Everything should be wiped out when I close the app by default, but I'd also want an option to keep the app's contents persistent.

I think they're kind of doing this already with the Windows Defender App Guard [1] thing, which, for one, is a mouthful and, second, should have nothing to do with the exploitable Windows Defender code [2].

It should be part of Windows' lower-level systems, so that not much can interact with it, and it should be available for all apps, not just Microsoft's own apps. Windows on ARM, Project Centennial, App Guard, and more, prove that Microsoft has the ability to do that.

If they don't do it, it will only be for political reasons like not wanting Chrome users to benefit from the same technology, or wanting to sell it as "value-add" to enterprise customers, or whatever (basically the same BS they're pulling with Bitlocker).

[1] https://blogs.windows.com/msedgedev/2016/09/27/application-g...

[2] https://arstechnica.com/information-technology/2017/05/windo...


Yeah. Well. When can I get a PC version of Fable 2, then? :-(


This excites me. I work with a lot of geospatial industry tools; many mobile tools are still based on old Windows CE / Pocket PC / Windows Mobile platforms. I hope to see these Snapdragon devices overtake this area.

Mobile data collection tools on iOS and android devices are rather poor (at least open source ones), hopefully this will help solve this problem—we'll just be able to use windows applications that work and are better developed. The windows ecosystem still seems easier to me than iOS / Android, perhaps that's just my bias or that I see more development options.

This could be a huge turning point for Microsoft's mobile divisions.

x86 drivers are still something I'm curious about here...

[edit: I should also mention, the reason geospatial tools come into play here is that Snapdragon CPUs have integrated GSM and GNSS (GPS) on the die. Currently, x86-based tablets have to use a separate component, and many don't come with it or even offer GNSS as an option]


It's not your bias. The Windows ecosystem has always been pretty good for developers. If it weren't for the ten-year Ballmer purgatory, MS would have been all over this space, where little iOS and Android toys still haven't hit puberty. Reinventing the wheel has been an overall giant waste of time for the industry.


I am seriously impressed with Microsoft. I haven't used windows in years but they are releasing tons of useful things. VSCode is great on Ubuntu and every version of OS X I have run it on. Plus tons of other cool things like bash on win; etc.

Also, I have a bizspark Azure subscription and it's not a perfect UX; but man does it beat AWS


> Azure subscription and it's not a perfect UX; but man does it beat AWS

I actually disagree with this pretty strongly. The Azure UI seems flashy for the sake of being flashy. The AWS UI is much easier to navigate and use for me.

Azure's UI is more unified though. There are some random AWS services (e.g. sqs) that have a totally different UI theme going on.

Edit: the Azure UI is only more unified if you ignore the fact that there are two totally separate versions (classic + the new portal), and you end up having to hop between the two in some instances if you have some older infrastructure because feature parity is not perfect.


The way I read parent's comment is: "Azure's UI is not as good as AWS, but the instance is better". I may be wrong but I think parent meant that you get more resources for the money.


But does AWS have anything like bizspark that's open to everyone? If not, it is not a fair fight.


looks like you are right. I think I just misread the parent comment.


I read it similarly to you


I have to +1 this. Not only is it easier to navigate, AWS always manages to run pretty smoothly, while Azure's website makes my 2014 Macbook Pro slow to a crawl. It often takes up to a second for a CSS :hover effect to appear!


I agree, what Microsoft shows us today is a real evolution of a software-engineering empire done (mostly) right. The sad fact is that this open-source love + tons of goodies for developers comes after decades of pain and suffering, which were not really "killing" Microsoft, but they were risking a lot. Now they're flying, leaving Apple far behind (Apple completely lacks a multi-tier engineering vision), but if they get a monopoly again... pain and suffering will be back. That's the free market.


What is a "multi-tier" engineering vision?


Outside US, in regards to development tooling and selling applications to business, Apple never meant much.


> Also, I have a bizspark Azure subscription and it's not a perfect UX; but man does it beat AWS

How so?


I am not a super user of AWS and have light demands as far as the console goes. I find the documentation and dashboard pretty decent; the burn-rate monitor and basic services (VM, IP, storage) are easy enough. The naming and requirements -- and also the split between "resources" and Classic -- are pretty annoying.

On AWS I waded through a sea of services I don't need, with poor docs, clunky configs, and a lot of security tiers. Their pricing is baffling, and I really just think it looks terrible. Azure has similar issues, but I can navigate it better and feel more comfortable with it.


Thanks for your input on this. I'm considering getting my AWS Developer certificate, so I'm glad to have some warning going in.


Again, this is my own opinion, and I am not on a competitive dev team. AWS is the de facto standard, and I think learning the API keeps you out of the UI.

Azure has an API too; both are clunky and more than I need. Someone who uses AWS will hopefully chime in, but my guess is most people use the API almost exclusively. I don't have enough familiarity to judge it; I've only used it a bit, and a while ago.


Whenever I tried to use Azure, it was soooo slow. Is it better now?


The dashboard is still very slow.

But all in all, the technical offering is pretty good and easy to be productive in.


For any Microsoft insiders with knowledge about this, a few questions:

1. In the talk they claim that they get "near native" speed from the translation. Can we have some real numbers on what can be expected? What about warm-up time?

2. Is the x86 translation layer only available in user space, or can x86 drivers be loaded for hardware that doesn't have arm drivers yet (or the manufacturer doesn't care about making them)

3. Are there any plans in the future for supporting x64 code as well in the translation layer?

Edit: One more:

4. Are there any plans on supporting Arm v7 in the future, e.g. with Surface/Surface 2 support?


5. How do they solve the problem that x86 has a much stronger memory model than ARM, which makes emulation of multithreaded x86 code pretty hard without introducing a huge performance disadvantage?


Since there are no publicly available details, I would guess this is solved in hardware. The Qualcomm processor can be made to support a stronger memory model for translated instructions, similar to how the Power processors support strong-access ordering for efficient x86 emulation [1].

[1] https://www.google.com/patents/WO2012101538A1?cl=en


Previous ARM generations had a "strongly ordered" memory type. ARMv8 doesn't have this for cacheable (normal) memory, but device memory has an R (reorder) attribute to control access ordering. Of course, device memory isn't cached either. Beyond that, the non-reordering attribute only applies within an implementation-defined block size, and its behavior with respect to normal accesses (or outside the block) isn't defined either, making it mostly useless except in the most restrictive cases.


https://news.ycombinator.com/item?id=14319420

Not sure where MikusR gets this info from, but it could be right given that Snapdragon 820 is Qualcomm's custom core.


While I consider this explanation interesting and plausible, it is a patent by IBM, not Qualcomm (though license agreements could still have been negotiated, etc.).


That's an excellent point.

I hope the answer is not "by restricting all emulated threads to one CPU core".


My off-the-cuff guess is they just force a memory/instruction fence wherever x86 makes guarantees, which is a bit better but still incurs some overhead.

Visual Studio used to do (and probably still does) this around volatile on x86, which leads to fun bugs when you port to ARM, etc.


That'd basically be putting a barrier on every memory access though.


Nah, because ideally most applications are using library/system services to fence their data structures. For the remaining cases, the apps could be marked with a new application-compatibility flag (this feature already exists in Windows for older apps) which adds additional checking, flagging pages that are being accessed from multiple contexts as requiring additional emulator-level sequencing. There are a ton of strategies here, which could be as simple as disabling multithreading in the emulator, or a page-ownership ping/pong, etc.


What do you mean, could you elaborate on that a bit? How is x86 "memory model stronger" and why would application code be affected?


For a short overview read:

> http://preshing.com/20120930/weak-vs-strong-memory-models/

If you want more details of memory models of some CPU architectures, read

> http://www.rdrop.com/users/paulmck/scalability/paper/whymb.2...

Note that "x86 OOStore" is not a model that you should be concerned about (according to https://groups.google.com/forum/#!topic/linux.kernel/2dBrSeI... it was only used on IDT WinChip)

For a perspective on memory barriers for different memory models with a focus on the Linux kernel look at

> https://www.kernel.org/doc/Documentation/memory-barriers.txt

If you are specifically interested in some subtle details of the x86 memory model, have a look at

> http://www.cl.cam.ac.uk/~pes20/weakmemory/index3.html

Best begin with

> http://www.cl.cam.ac.uk/~pes20/weakmemory/cacm.pdf

---

I hope this should give you enough information to start reading up on the subject.


I've found Herb Sutter's talk on std::atomic also useful for understanding this.

The 2nd part of his talk goes into some of the differences between x86 and other architectures (about 31 min in):

https://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-20...


Strength of memory model refers to concurrency guarantees. It also gets deep into caching architectures, OoO execution, and inter-CPU communication.

This is a deep rabbit hole because modern processors are speculatively executing THOUSANDS of instructions ahead. So what you have/haven't written to memory is slightly existential.

Here is a good blog post about it http://preshing.com/20120930/weak-vs-strong-memory-models/

The TLDR: x64 tries REALLY HARD to ensure your pointers always have the newest data. ARMv6, very much. ARMv7 kind of. ARMv8 fairly.


Thousands? I thought it was more like a dozen or two at most. Any link on that?


Skylake's OoO window is 224 instructions.

With up to 96 in the scheduler: 72 loads and 56 stores.

Link (a summary; you'll find the hard Intel references inside): https://en.wikichip.org/wiki/intel/microarchitectures/skylak...

That's 224 per core. We're talking concurrency, so a 10-core, 20-thread server-class chip can have 2240 instructions in flight.


Admittedly more than I expected, and thanks for sharing, but "20 CPUs executing a peak of 224 instructions each" is a far cry from "modern processors execute THOUSANDS of instructions AHEAD".

It's neither "thousands" of instructions in flight for a single processor, nor is it thousands "ahead" for multiple processors, let alone both. It's like taking 4000 basic single-stage CPUs and claiming they execute thousands of instructions "ahead", which makes no sense.


Agreed, saying thousands was hyperbole. You do have a point: when talking about instruction windows, one should talk only about one core at a time.

Although the idea of "lots of instructions in flight" was right. From the programmer's point of view there's no difference between out-of-order windows of tens, hundreds, or thousands. One just needs to be prepared that the CPU can and will reorder things within the limits of the defined memory model, which does not define limits on out-of-order window size.


>one should talk only about one core at a time.

Nope.

Because we're talking about cross-core guarantees of concurrency. So you actually do care about what another, or all cores are doing.

What core is loading what, and what core is storing what.. and what is pending/holding up those loads is actually extremely important from the perspective of atomic guarantees.

Weaker CPUs (like, say, POWER7) that do batching writes (you accumulate 256 bits of data, then write it all at once) don't communicate what is in their write-out buffer. So you may write through a pointer, but until that CPU does a batched write, the other cores aren't aware. You have to do a fence to flush this buffer (to the other cores).

There are some scenarios where the same situation can arise on x64, but it's rarer. The Intel cache architecture attempts to negotiate and detect when you are/aren't sharing data between cores. So for _most_ writes it uses the same approach as POWER7, but if it can predict you're sharing data it'll use a different bus and alert the other CPU directly.

This is why x64 uses a MESIF-esque cache protocol [1][2]. It can tell when data is Owned/Forwarded/Shared between cores.

[1] https://en.wikipedia.org/wiki/MESIF_protocol

[2] Intel hasn't updated their white papers in 5+ years; they're likely using a more advanced protocol.


> Because we're talking about cross-core guarantees of concurrency. So you actually do care about what another, or all cores are doing.

This is mostly about the load/store order of an individual core: how an individual core decides to order its reads and writes to memory.

> Weaker CPU's (Like say POWER7) that do batching writes (you accumulate 256bits of data, then write it all at once) don't communicate what is in their write-out buffer. So you may write to a pointer, but until that CPU does a batched write the other cores aren't aware. You have to do a fence to flush this buffer (in the other cores).

You can actually do the same on x86 by using non-temporal stores. Although you're not talking about store ordering but about visibility to other cores: a store won't ever be visible to other cores until it at least hits the L1 cache controller.

> There are some scenarios where the same situation can arise on x64 but its rarer.

Yup, that's right. That's why x86 (and x64) got mfence and sfence instructions.

> This is why x64 uses a MESIF-esque cache protocol [1][2]. It can tell when data is Owned/Forwarded/Shared between cores.

Reordering happens before cache controller. When cache controller is involved, the store is already in progress.


He means x86 doesn't reorder memory store order with other stores or loads with other loads, etc.

See: https://en.wikipedia.org/wiki/Memory_ordering#In_symmetric_m...


Why is that? We are talking user-space applications rather than drivers, etc. Properly coded applications will be using some form of sync primitive to guard shared data structures. So, when the emulator hits a LOCK xxx instruction it simply replaces it with an LDAXR/STLXR pair (load-acquire exclusive, store-release exclusive, which have implied barriers), or drops a barrier in. Generally, though, you would expect that most well-behaved applications are using EnterCriticalSection()/InterlockedXXXX() or other library calls that could be calling ARM64-optimized code (likely one of those hybrid libraries), which would also ensure proper barriers.

So, an emulator like this isn't going to be perfect, and applications which have been getting lucky thanks to x86's ordering model while avoiding proper sync primitives will likely break. There really isn't much that can be done in those cases but fixing the application, although it wouldn't surprise me if there is a compatibility flag which falls back to some slow-path emulation mode that orders individual loads/stores and runs at 1/10 normal speed.


You're talking about atomic operations and synchronization. This is not about that.

This is about memory order: how the CPU is allowed to reorder normal unsynchronized loads and stores. x86 has strong ordering, but ARM CPUs can reorder loads and stores significantly more. Thus you very often need memory barriers on ARM where x86 needs none.

If memory order is not handled according to specifications, programs executing simultaneously on more than one CPU core, relying on CPU memory model/ordering will not work correctly.

You don't want to make every ordinary load and store atomic, because that would incur a rather high performance cost.


Yes it is (about atomics/sync). ARM like every other major core is "sequentially consistent" with respect to the core executing the code. The memory order only matters for external visibility (aka other threads or devices). This means that the load/store order from a single thread doesn't matter to the other threads except at synchronization points (aka locks/etc). Those sync primitives have implied barriers.

If someone has a program which is depending on load/store order in userspace then they likely have bugs on x86 as well since threads can be migrated between cores and the compilers are fully allowed to reorder load/stores as well as long as visible side effects are maintained.. I could go into the finer points of how compilers have to create internal barriers (this has nothing to do with DMB/SFENCE/etc) across external function calls as well (which plays into why you should be using library locking calls rather than creating your own) but that is another whole subject.

The latter case is something I don't think most people understand... Particularly as GCC and friends get more aggressive about determining side effects and tossing code. Also, volatile doesn't do what most people think it does and trying to create sync primitives simply by forcing load/store does nothing when the compiler is still free to reorder the operations.

An emulator is also going to maintain this contract as well. That is why things like qemu work just fine to run x86 binaries on random ARMs today without having to modify the hardware memory model.


> If someone has a program which is depending on load/store order in userspace then they likely have bugs on x86 as well

Incorrect. On x86, if you write to memory n times, other cores are guaranteed to see the writes in the same order. The second write is never going to be visible to other cores before the first. It's correct to rely on the x86 memory model in x86 software.

On ARM, those stores can become visible to other cores in any order.

> since threads can be migrated between cores

Irrelevant. This is about code executing concurrently on multiple cores. Operating system and threads are irrelevant. This is about hardware behavior, CPU core load/store system and instruction reordering, not software.

> ... and the compilers are fully allowed to reorder load/stores...

This has nothing to do with compilers. This has everything to do how CPU cores reorder reads and writes.

> Particularly as GCC and friends get more aggressive about determining side effects and tossing code

If GCC has bugs, please report them. Undefined behavior can give that impression, but again, this topic has nothing to do with compilers.

> An emulator is also going to maintain this contract as well. That is why things like qemu work just fine to run x86 binaries on random ARMs today without having to modify the hardware memory model.

This is not true. See: http://wiki.qemu.org/Features/tcg-multithread#Memory_consist...

Remaining Case: strong on weak, ex. emulating x86 memory model on ARM systems

I recommend you read this: https://en.wikipedia.org/wiki/Memory_ordering.


> It's correct to rely on the x86 memory model in x86 software.

If you have complete control over the whole stack, sure. But we are talking about Windows user-space applications. You continue to ignore my original point: the edge cases you are describing may cause problems, but they are just that, edge cases, which can be solved with slow-path code (put a DSB after every store if you like), and which likely depend on behaviors higher in the stack that aren't guaranteed and are therefore "broken". If your code is in assembly and never makes library calls, etc., then you might consider it "correctly written"; otherwise you're probably fooling yourself for the couple of percent you gain over simply calling EnterCriticalSection().


> If you have complete control over the whole stack, sure. But we are talking about Windows user-space applications.

Well, hardware feature is a hardware feature. Load/store ordering is a hardware feature.

Operating system, user space or kernel space, compiler, etc. are not relevant when discussing CPU core hardware operation.

You don't need complete control of the stack. You just need to have CPU cores executing instructions on multiple cores.


> On ARM, those stores can become visible to other cores in any order.

And I will repeat this again: for "correctly" written code this doesn't matter, because the data areas being stored to should be protected by _LOCKs_, which will enforce visibility. If you think you're being clever and writing "lock free" code by depending on the memory model, you're likely fooling yourself.


Lock-free code does indeed need to rely on the memory model. Relying on the underlying machine model is not fooling oneself.

Please don't conflate visibility with load/store order. It's a different matter.

See how C++ memory model operations map to different processors, especially how many cases are simple loads or stores on x86.

https://www.cl.cam.ac.uk/~pes20/cpp/cpp0xmappings.html (link from another comment in this discussion)


> Properly coded applications will be using some form of sync primitives to shared data structures. So, when the emulator hits a LOCK xxx instruction

This is not always true. See [1] for an example. Because of the semantics of the x86 memory model a sequentially consistent load does not generate a lock'd instruction. When such loads are translated to ARM64, you need to introduce barriers or use a ldacq instruction.

[1] https://www.cl.cam.ac.uk/~pes20/cpp/cpp0xmappings.html


That doesn't matter, because a properly consistent lock will need a sequentially consistent store somewhere in the sequence, and that store will generate the barrier. Something like a ticket lock works in this case because the store will eventually become visible, and when it does, the ordering of operations preceding it will have completed.


Not just the memory model; unaligned word/dword accesses would be tricky too, I think. Doesn't ARM usually silently give you garbage when you load or store a multibyte item into a register unaligned?



In linux, you get a nice SIGBUS for unaligned memory access.


It looks like that's not the default, though? https://www.kernel.org/doc/Documentation/arm/mem_alignment


I remember when Apple supported PPC emulation of applications when they made the switch to Intel x86 for a major OS release cycle. I was impressed that the speed there was, in fact, near native enough that the transition was essentially completely seamless.

Although this is somewhat of the reverse in architectures, going from CISCy to RISCy.

The only thing that concerns me about recent Microsoft developments is that they seem very keen to move their OSes into more of a walled garden, like iOS and the Mac App Store, and extract 30% from every third-party application transaction. They could use one of these sorts of transitions as an opportunity to advance that business strategy further.


They're definitely going in the direction of the walled garden strategy, but it's not necessarily a bad thing. For the vast majority of users, computers are safer with walled gardens. For the more advanced users, developers can always offer their apps through a different distribution channel.

In the case of Windows in particular, it's a chance to leave the installation wizards, the registry and the DLL hell behind and use the packaged model that macOS adopted from the get go.


World-changing technologies like p2p filesharing, bitcoin, app stores, and even the web itself wouldn't have taken off if most personal computers were walled gardens. What future innovations will never even be given a chance at life? Will we have to depend on the big corporate players to move computing forward?


Centennial (read, non-UWP) apps can be published to the Windows Store and don't get any sandboxing beyond registry virtualization by default. Microsoft lives and dies by backwards compatibility, they're not going to kill off Win32 - they may very well push for the Windows Store to be the primary source for applications with traditional installers being blocked by default though.


Unless those non-UWP apps are browsers that use non-Microsoft rendering or javascript engines.

I don't think that's a dealbreaker, especially when they'll let people trade from 10S to 10Pro for free. But it's a pretty big caveat that got added very quietly.


> Although this is somewhat of the reverse in architectures, going from CISCy to RISCy.

Apple did the same when they moved from 68K to PPC. The PPC would run the 68K code in emulation or, if it was a "fat binary" with both 68K and PPC code, it would just load the functionally equivalent PPC binary and run it.

There were utilities to compress executables that deleted the versions not for the current CPU - very handy on low-powered 68Ks.


In that case, the PPC probably had a 5-10x perf advantage over the 68Ks people were upgrading from, so it wasn't noticeable: even if the emulated code only ran at the speed of the 68K it was replacing, it was a net win every time it entered the kernel and ran native code.

ARM doesn't have this advantage over x86; at best it's probably somewhere around 1/2 the absolute performance. Of course, Intel has been selling a _LOT_ of really slow CPUs down the product line, so for someone upgrading from an Atom-class machine the ARM will probably appear pretty reasonable. Not so much if you're running a fairly high-end laptop.


I don't think any ARM emulating x86 code will be able to reach Xeon E7 performance anytime soon. It may get close to low-end x86s with JIT and jumping to native mode on every OS call (with some clever adjustment to data structures, if they differ/misalign).

And yes. At that time, PPCs ran rings around x86's and 68K's.


> extract 30% from every third party application transaction

This seems tragically inevitable, if slow. We can only hope that it accelerates the transition of app stores to state-regulated markets rather than private arbitrary fiefdoms.


Apple previously did emulation of the 68000 on PowerPC, so CISC to RISC has been done "successfully" before.

I put "successfully" in quotes because MacOS in that era was notoriously crashy-buggy.


> I was impressed that the speed there was, in fact, near native enough that the transition was essentially completely seamless.

Not to diminish Apple's accomplishment, which was indeed impressive, but a lot of this was because by that point, PowerPC was so, so far behind Intel's chips that, even after the emulation performance penalty, the Core 2 chips were fast enough to make it no big deal.

There's not a similar Intel/ARM gap this time around so Microsoft will have to rely on their chops alone.


I think you misremember.

Apple started with the Core Solo and Core Duo chips (Yonah), which were fairly fast but weren't remarkably faster than the PowerPC G5. A single core G5 iMac held up quite well versus the two-core Core Duo and thrashed the Core Solo. Aside from memory reads the G5 did quite well.

The strength of the Core Duo and Solo were in performance per watt and that's what Jobs was looking for.

https://en.wikipedia.org/wiki/Yonah_(microprocessor)


It is easy to have rose-tinted glasses. I remember my MacBook with its 80(?) GB hard disk, and cursing PowerPC every time I had to install a universal DMG that included both Rosetta(?) and Intel apps.


As to #1: they say you _can_ get near-native speed, not that you _will_ get it. Applications that spend lots of time inside Windows APIs will be faster than those that don't.

I think that Office version is already native, and they gamble/expect that the likes of Photoshop and Mathematica will soon ship real native versions (rightfully so, as that is what happened when Apple jumped PC architectures, too, with the exception of QuarkXPress, which consequently got eaten by InDesign).


  4. Are there any plans on supporting Arm v7 in the future, e.g. with Surface/Surface 2 support?
Those are Nvidia Tegra based. So no.


Man, that League of Legends desktop icon on the test machine is such a tease. I would love to know what the status of x86 gaming on ARM devices would be, especially for something competitive that runs on most anything like LoL


They already showed LoL and Photoshop running on ARM a few months ago in a demo video. Google it :)


I have seen the photoshop video, but not League of Legends - got a link?


I could not find it for some reason. Could you provide the link please?


But wasn't it with hardware emulation?


Spotting things in the details: Haru is propping his phone up with bricks from a Jungian personality test. Apparently Haru brings people together and is results-oriented.

http://blog.mcchristie.com/wp-content/uploads/insights-brick...


I'm sure that tease was no accident


Does this emulation work for x86-64 executables too?

EDIT: no [1].

[1] https://www.theverge.com/2016/12/7/13866936/microsoft-window...


Odd thought: the existence of an ARM Windows, makes it much simpler for Apple to ship ARM PCs.

The macOS development stack has been re-tooled to output LLVM bitcode within its "fat" binaries for a good while now. It'd be very simple for Apple to throw the switch on a compile farm à la the one Google has for Android APKs, and suddenly have ARM downloads for everything on the Mac App Store (without requiring any re-submissions). Which means it wouldn't be hard at all for Apple to ship a "functional" ARM macOS computer... just, until now, such a machine wouldn't have had a very good Windows story. No Boot Camp, no cheap virtualization, etc.

Suddenly, that story is a solved problem.


This idea keeps popping up. LLVM bitcode is way too architecture specific to allow that to happen. Chris Lattner has stated it himself.


"x86 bitcode" can be transpiled into "ARM bitcode."

Yes, you could do the same thing with a plain x86-ISA binary, but such a binary might have sections that are very hard to transpile efficiently/effectively—because e.g. they exist as the result of compiling hand-optimized x86 ASM source modules. Targeting bitcode constrains the inputs a bit more, such that the input to the transpiler won't be using extremely-microarchitecture-specific instructions.

And yes, the results wouldn't be 100% efficient. But it would be efficient enough—like Rosetta—to serve as an effective stop-gap to allow ecosystem consumers to continue to consume their purchased apps on new devices, until ecosystem developers upload explicitly re-optimized apps. (And it would—like Rosetta—at least let apps from "dead" development studios run, rather than killing them off entirely.)


For the LLVM bitcode as produced by the tools available at http://llvm.org/ , yes it is correct.

However you are forgetting three very important points:

1 - Apple is free to use their own fork of LLVM bitcode

2 - Apple controls what those bitcodes actually mean on their compiler stack

3 - LLVM guys already had a few talks at LLVM conferences about creating an actual platform independent form of bitcode


I'm surprised that nobody has mentioned FX!32. That was the binary translation layer that Windows NT on DEC Alpha used back in 1996 or so to run x86 Windows binaries. There is an old Digital Technical Journal article about it here: http://www.hpl.hp.com/hpjournal/dtj/vol9num1/vol9num1art1.pd...

Everything old is new again!


Ah, DEC Alpha! I had one that ran NT 4.0 and pre-release copies of Windows 2000 (amongst other things) and the day Microsoft decided not to release Windows 2000 for it was one of the saddest of my life (until then).


AFAIR they had a kind of profile-based optimization. It was amazing.


HN is probably not the right target for this type of device. Either way we need Windows 10 on ARM to succeed for Intel to have more competition. I also do think it will be successful if it works at least as well as Intel's Atom-based Celeron and Pentium laptop chips, especially in emerging markets.

Intel shot itself in the foot by replacing the Core architecture in mobile Celerons and Pentiums with Atom, and also by starting to rename lower-performance Core M chips to Core i3 and Core i5. This will make it easier for ARM and AMD to "catch up" and even beat Intel at these levels, because Intel got greedy and tried to trick the market with lower-performing chips at the same price points as previous (and more powerful) generations.


> Intel shot itself in the foot by replacing the Core architecture in mobile Celerons and Pentiums with Atom

Core-based mobile Celerons and Pentiums are still available, for example the Celeron 3865U [1]; it's just that Intel got tired of having names with useful information in them. That processor should probably have a name like Core v7 2C-2T 1.8GHz-15W, which conveys most of the useful information, but 3865U is what we got, so you have to filter out the Atom chips yourself.

[1] https://ark.intel.com/products/96507/Intel-Celeron-Processor...


Depends on what the results end up being, but there could be a lot of HN users happy to have this as a second/third device, especially those of us who still cling to desktops as our daily drivers.


Given the recent unpleasantness with the Intel ME, I imagine there'd be a ready audience on HN for anything that makes alternate architectures more practical.


There have been rumors of a 'surface phone' that runs Win 10 (not mobile) for a couple years now. Most expected some kind of future Intel chip to enable that, but ARM support certainly opens up the possibility. Just what Windows phones need - another OS change!


I'd be very interested in a Windows 10 phone if it has automatic updates that do not depend on the carrier, to avoid the Android situation where only certain phones get updates on a timely basis.


Windows 10 Mobile gets OS updates directly from Microsoft. Verizon does not support Windows 10 Mobile at all, but my four-year-old Verizon Lumia Icon got Windows 10 version 14393.1198, which was released yesterday, yesterday.

Microsoft has chosen to drop a lot of older models from the Creator's Update, but is still providing cumulative/security updates to the Anniversary Update right now, which is supported on all phones which run Windows 10 Mobile.

At least, at present, if you have a Windows 10 Mobile device, you have recent security updates, regardless of make or model.


I have a Nokia Icon in a drawer (relented and went back to iPhone last year) which I booted up the other day and had tons of updates waiting. It does seem the Win10 Mobile platform is still alive and well, though there are few signs of it in the broader market. Good to see they're still at it, I've always loved Win Phones.


One thing Microsoft does well is Long term support and backwards compatibility. I guess this applies to their mobile OS as well.


Unless your mobile OS is Windows Phone 8.1, or Windows Phone 7, or Windows Mobile 6, etc., which got very little in the way of updates once a new major release was made; and many phones are not upgradable from one major release to the next (or the upgrade wasn't pushed to users without intervention, which is effectively the same thing).

I would say, Microsoft has a reputation for long term support and backwards compatibility, but it seems like it's something they're trying to get away from.


I would characterize it more as failures in their mobile space. Microsoft has rebooted its mobile product line a bunch of times. But, as noted, my Lumia Icon was a Windows Phone 8.x device, and it runs yesterday's release of Windows 10 Mobile. A pretty large number of 8.x devices are capable of running 10.

The manual intervention to upgrade to 10 is because, as noted, the big change is that Windows 10 Mobile upgrades are handled by Microsoft, so the update servers had to be changed. Prior versions to 10 had the same carrier-based update issues Android had.

I'd argue that change alone points to the fact that Microsoft is getting better at long term support, not worse.


> (or the upgrade wasn't pushed to users without intervention, which is effectively the same thing)

No, that's a good thing. Pushing major updates and new features instead of just patches is what makes users (and IT departments) disable automatic updates.


This was the case for the Win7/8 Nokia phones, so the signs are good.

(Although I'm in the UK, where the carriers are less abusive)


Weren't there a bunch of windows 7 phones that never got updated to windows 8?


That was MS, not the carriers.


Imagine how shitty it would be to have your phone show the Windows "restarting in 4 hours" popup and then be without a phone for 10-30 minutes.

Worse yet if the update fails...


For me it's way shittier to have an insecure phone. I would even install a Clippy launcher on Android if I could have the latest security and feature updates the same day that they are released (without changing phones every year or using custom ROMs).


Those are not the only two alternatives. For instance, iOS has neither of these problems.


Other than the price.

Any phone above 300€ is not worth my money, I am not buying a gaming desktop.


You're right, unfortunately for my use case and taste iOS just doesn't work.


Fear not, Windows Phone is just about dead.


My phones still make phone calls, do SMS, have Internet access and execute the apps I care about.

Seems pretty alive to me.


I'm a 20+ year Linux/open source neckbeard and I prefer WP as the ideal balance between cost and a set of ideals it doesn't seem possible to get outside of an iPhone.

My current Lumia 650 cost £60 refurbished, which is a tenth what I'd pay for the next best option. It's fairly trivial to stop a Windows Phone reporting your entire life history upstream, and unlike pretty much any Android handset I ever owned, this firmware receives constant security updates, as did its predecessor for the 3 years I owned it (old phone cost £120).


Good for you, but I was responding to the previous poster and his fear of the Windows Phone platform potentially changing again.


Interesting - something they should have done rather than the orphaned Windows RT, in my opinion.

I wonder if this ends up being the inheritor of Windows CE for embedded ARM-flavoured devices. I also wonder if this means that the Win10 ARM kernel has the full Windows API - so if you built an ARM PE executable that could run natively. I suspect 95% of the pieces are in place for that but it's not yet been productised.


I feel like this is in response to the pushback against Windows RT honestly. They put the cart before the horse on that one, trying to get everyone to use an app store with no apps, and quickly realizing that the base Windows experience wasn't enough for the vast majority of their users.

Killing Windows RT was the right choice, because it was never designed to do something like this. It had a weak translation layer for Win32's API built right in, and they would have needed to do some wizardry to have the two live side by side. No, I think RT was a useful but failed experiment, and I'm excited to see them take this approach with Win32 emulation on ARM. If it works as well as these demos suggest it does, it could finally provide a good path for ARM into consumer laptops and even desktops, to spark some great competition in the space. I'm excited to see where this goes.


A lot of people who were burned when they dumped Windows RT will be very cautious of this.


Please correct me if I am wrong, but I think with WSL and this, Microsoft has proved they are far superior to Google and Apple when it comes to serious system software development (they have to be; they have developed one of the most complex kernels of all time and maintained and improved it for decades. Yes, I am a Linux guy too, but the Windows kernel is an extremely complex and well-architected piece of software). Yes, Google has very good applications, and Apple does too.

But when it comes to hardcore system software development, they are unbeatable. Emulating the entire x86 on ARM? This is mind-blowing.

They have pulled off good emulation before too (one was inside the Xbox, I think).

But this is going to be serious. I would say very serious.

Extremely good battery life would give Microsoft a very good edge over Apple.

1) Qualcomm will be the winner, and so will other ARM CPU manufacturers; Intel is the biggest loser here.

2) Microsoft will hit the market with this. Afterwards they will be in an extremely good position to release a good phone, and I see a successful future for Windows Phone.

3) From a computer architecture perspective, the bottleneck is not memory or CPU speed; it is I/O and energy. I don't know how this will impact I/O, but I am almost sure this is unbeatable on energy efficiency, and when I (and almost everyone I know) buy a laptop, battery life is nearly the most important aspect.


I voted you up, as it's rare that a Linux man would show any kind of respect for the (actually very, very good) NT kernel. Good post also.


[dead]


How is it terrible? This is an honest question.


I agree this is harder and more impressive, but I thought Google's idea to run the entire Android framework at near-native speeds on Chrome OS inside a container was quite a smart technical solution as well.


Which they haven't fully delivered yet. Do you have any experience with it? It is terrible and laggy, and sometimes doesn't work. Even Google does not advertise it like they did before.


They are doing this because they want the benefits of great ARM battery life. But they are emulating x86 on ARM. So they believe that emulated x86 on ARM will have a better battery life than native x86. The only way I think this could be possible is if they measure battery life of an idle device. Maybe someone can disabuse me of this belief.


Keep in mind, if you're just using a web browser, Skype and office, these are likely to all be ARM native. It's only when you boot up an x86 program it even comes into play - for a lot of users that will be very rare.


Because they aren't running everything in the emulator: the kernel, drivers, many of their own apps, and in future possibly 3rd-party apps as well run natively. They're emulating x86 only for apps and some system DLLs. (There's a schematic at about 8 minutes.)


Emulating x86 might be costly, negating the power-saving benefits of ARM; however, as it'll be using an SoC, it would have a smaller physical footprint.

Imagine the motherboard on your phone, but in your laptop. The rest of the space would be used for batteries.


If you imagine a hybrid PC/phone device you could very well restrict x86 emulation to when the device is in PC mode and thus has a larger battery/AC power available in which case battery life isn't as big an issue.


Most computers are idle most of the time.


Really interesting.

I can see a potential milestone: Windows 10 S on an ARMv8-based laptop with all-day battery life and more. Sort of a Surface RT, but without any excuses.

If Apple follows suit and creates a light weight, network centric laptop experience around iOS, then you'll have three contenders for the 'appliance' environment, ChromeOS, Windows 10 S, and iOS. Each with their own 'laptop' design ethic, Pixel, Surface, MacBook.


I think I might take this line of reasoning a bit further and have Apple actually come out with an ARM (A11?) powered computer. Given that you could put multiple ARM chips in a MBP I could see a whole lot of power for very little money. Another thought would be switching the Mac Mini to ARM.

Consequently, all this ARM talk has got to be making Intel nervous. I don't think now would be the best time to buy any INTC stock. :/


If that runs great on phones, and has virtualization support for e.g. running ARM Linux guests, I would change both my Android phone and my Ubuntu laptop for one phone (e.g. Snapdragon 835 or better + 8-12GB RAM + 256GB flash + USB 3.0 OTG), using a dummy screen + keyboard as laptop replacement.


A desktop/mobile convergence phone is my dream. I'd buy one in a heartbeat.



I think I get it; but could you expand a bit on what you mean?


A phone form factor device that can run a full desktop OS, or something close to it. Canonical was getting close with Ubuntu Touch but that's essentially dead now. I don't want a gimped version of Windows like I'd get with Continuum, I don't want a "desktop-like" version of Android like Samsung is doing, I want a full version of Linux or Windows that can run desktop applications.


Have you seen this? http://pgslab.com/

It's billed as a handheld gaming system but you don't have to play games on it.

Runs Windows 10 on a 5.9-inch 1920 x 1080 screen. Atom processor.

Whether it will ever get made is another matter (it's an ex-Kickstarter project now taking pre-orders), but the approach is interesting.


Dual booting isn't a perfect solution, but I'm sure it's a lot easier than trying to design a UI that can adapt to screen sizes as disparate as a phone screen and a monitor. I'm a little leery of hardware pre-orders, as it's easy to get burned, but I'll definitely keep an eye on it.


Same here. But the price does look good....


You may be interested in the Gemini PDA (which I've funded on IndieGoGo) which provides an Android/Linux PDA.

https://www.indiegogo.com/projects/gemini-pda-android-linux-...


That looks really interesting, I'm going to keep an eye on it.


Ah ok; I actually want something similar: a tablet that can act like a phone, but only on WiFi so I don't have a phone bill. Basically, an iPad but w/ real integration & iMessages etc. via a free or cheap VoIP number, w/o data.


You might want to look into Google Voice. Free number for texting, calls over data. I receive and send texts and calls from a browser too.


How small do you want it? The Linx tablets do this but the smallest is 7".


That's pretty close but a bit too big. It also doesn't have a cell radio so it wouldn't be very useful as a phone.


you could do small android phones, install trinux or linux deploy on it, access X through VNC or X on Android - output or cast to display through wifi or cable


Commercially it would seem unviable.

Community ROMS? Keep an eye on Halium, unifying a common base for porting sailfish/luneos/plasma mobile/ubuntu; with anbox for Android apps.


Maybe you are describing the rumored Surface Phone?


Samsung has a dock like this for the S8 apparently.

https://www.cnet.com/products/samsung-dex-dock/preview/


I am not sure there is a huge gap from dummy screen + keyboard + battery to a laptop, and you're not going to be able to power a laptop screen from a cellphone battery.


I would use the "dummy screen + keyboard" only when plugged, obviously.


Microsoft is one of the few companies that really grok Systems Software right now.


They always did, but FOSS people always downplay the work that Microsoft Systems Research does, as if having a clone of an existing OS and UIs was some kind of innovation.


[dead]


My account managers are quite happy with what I get doing enterprise consulting.


Java noob?


[dead]


Red Hat?


Does the x86/x64 cpu emulation work for JIT code or other self-modifying code? (Copy protected games, for example?) In the linked presentation they just briefly mention doing a load time(?) transpile of x86 to arm64 and caching the result on disk. What about exe packers that map memory rw then rx after unpacking, such as UPX?


They say that it's a dynamic translation layer, so presumably it doesn't make a distinction about where the code came from (except maybe you'll lose some performance optimization if they are caching translated code).


Most dynamic emulators don't do instruction-by-instruction emulation: they take in a whole block of instructions and translate that block, caching chunks that can be reused, with a block typically ending at a branch. Once you hit a previously translated instruction block you don't have to do any translation; you just execute from the cache. This is why you have a load time for reading from an x86 image.


It's actually only x86 emulation. x64 is not supported according to the video.


Windows 10 on ARM would allow Apple Bootcamp on ARM-based MacBooks. There have been rumors about Apple moving to ARM and potentially bringing processors in-house. I wonder if that influenced Microsoft's thinking.


You think that this actual shipping product that Microsoft developed might have been influenced by these "rumors" that are never actually sourced to anyone in Apple?


This is how Windows RT should have been in the first place.


Does this mean that Windows 10 (not Windows 10 IoT Core) will also come to Raspberry Pi 3?


Good question. How much memory on RPi3 and how much required to run Windows10?


RPi3 has 1 GiB of RAM. The system requirements of Windows 10 (cf. https://www.microsoft.com/en-us/windows/windows-10-specifica...) are 1 GiB for 32 bit and 2 GiB for 64 bit (both on x86). So I think if Microsoft is willing to, it should work, in particular if they slim down the fictional RPi3 version of Windows 10 (which I would strongly recommend).

I personally see a much larger performance problem for Windows 10 on the RPi3 in the fact that, at least on Windows 10 IoT Core, there was no hardware acceleration of graphics on the RPi3 when I last tested it (about a year ago), which caused it to feel rather sluggish.


I guess not. It requires hardware features from Snapdragon 820/835.


Do you know particular details what these hardware features are?


I'm guessing it's both a performance issue and a "let's not support too many different ARM chips from the beginning" issue.

Snapdragon 835 will probably be the minimum level of ARM performance Microsoft will accept. We may start seeing Samsung's Exynos and a high-end MediaTek processor being supported in a year or two.


As I understand it, x86 hardware virtualization.


Strategically interesting because, up to now, the Dalvik bytecode runtime and the subsequent pre-compiling ART runtimes were the best way to do ISA-independent application environments. Apple's fat binaries got in early enough in their (smaller) developer community for Apple to avoid being tied to an ISA. Microsoft was handcuffed to an x86 legacy boat anchor. But now they have escaped, and those other approaches are less of an advantage.


Wow, this is brilliant. Microsoft is improving by leaps and bounds to keep its consumer base happy. Almost ten years after switching to Linux, I think I might gladly use a Windows machine as a second PC.


I hope that recompiled desktop apps will be allowed in addition to emulation.


Can someone explain how they are achieving high execution speed? I would have expected the speeds (or at least the start-up times, if we have dynamic recompilation) for x86 on ARM to be outright abysmal.


There's a reason their demo apps were pre-installed. The initial recompilation is cached so that running it the second time will be much quicker than the first.
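The caching described above can be sketched as a translation cache: translate each x86 code block once, keep the result keyed by block address, and reuse it on every later execution. All names here are illustrative stand-ins, not Microsoft's actual emulator internals:

```python
class TranslationCache:
    """Toy dynamic-recompilation cache: pay the translation cost for each
    source block once, then serve the cached translation ever after."""

    def __init__(self):
        self._cache = {}        # block address -> translated code
        self.translations = 0   # how many times we paid the translation cost

    def _translate(self, block_addr, x86_bytes):
        # Stand-in for the expensive x86 -> ARM recompilation step.
        self.translations += 1
        return f"arm-code-for-{block_addr:#x}"

    def run_block(self, block_addr, x86_bytes):
        if block_addr not in self._cache:
            self._cache[block_addr] = self._translate(block_addr, x86_bytes)
        return self._cache[block_addr]

cache = TranslationCache()
# First run: every block is translated (slow start-up).
for addr in (0x1000, 0x1040, 0x1080):
    cache.run_block(addr, b"...")
# Second run: all cache hits, no new translation work.
for addr in (0x1000, 0x1040, 0x1080):
    cache.run_block(addr, b"...")
assert cache.translations == 3
```

Persisting such a cache to disk is what would make a pre-installed demo app start quickly: the expensive first-run translation has already happened.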


Anybody know when we can expect a phone out that can emulate x86 Windows 10? End of 2017 is the first estimate I heard but I'm not sure how realistic that is.


I don't know much about where to find these chips or what I'm looking for, but...

http://i.imgur.com/mRdk6ZD.jpg

This is an ancient chip; however, I also found MediaTek ones for ~$10, which had 4 cores and were 1.6 GHz SoCs.

Sounds to me like we could seriously see $100 laptops that actually function soon.


That company could have so much impact if only it had laser focus and execution in distribution. There's still so much talent there.


That's a Microsoft Intellimouse Explorer Mouse and a Belkin USB 2.0 hub. They're like 10 years old.


Fitting when doing a video on backwards compatibility.


If they're supporting x86 emulation to run desktop apps, will they support compiling native desktop ARM apps?

They're essentially supporting the end result; I would hope they don't force developers to go through emulation just to nudge them toward writing a Universal Windows app.


Props to D.M. for the pivotal work. You know who you are :-)

-- ST boot sector guy


Not the first time I've seen a reference to D.M. and pivotal work.[0] Is giving a shout out to D.M. some sort of Microsoft meme?

[0] https://github.com/netmf/llilum


Great potential for a Win10 phablet I can develop on that also lets me make phone calls. Not for big projects, but as an anywhere device. Does Win10 support non-VoIP calls?


I think it does, since MS did a big overhaul of their Windows architecture: phone, Xbox, and desktop all run the same base services, just with different GUIs. Technically an Xbox could make phone calls, but it has no GUI or hardware for it.


(1) Windows 10 smartphones currently make cellular voice calls so that feature is already in the mobile version of Windows 10.

(2) There's no reason why Samsung or one of the Chinese manufacturers couldn't make a Windows 10 phablet. Microsoft made Windows free on small-screen devices, so I assume it will continue to be free, or at worst, "zero paid".

[Microsoft had a Windows offering where OEMs paid $10 for the OS but got a $10 rebate for setting Bing as the default search engine. So technically, Windows wasn't free, but OEMs didn't pay anything to use it.]


I doubt Microsoft would be going the phablet direction, even if Windows on ARM now supports "desktop apps". There's still the issue of having a horrendous desktop UI on a phone. Heck, it still pisses me off that legacy Windows apps don't even properly support 1080p screens. It would be way worse to squeeze them onto a small screen as well.


Emulation should be a last resort. They should ship an SDK with all the proper import libs and a cl.exe etc. that can target proper Win32 on ARM.


Microsoft appears to have apps that are native to ARM, but the real value of the Windows ecosystem is all the Win32 apps that have been built over the last 2 decades. The emulation layer addresses that so that "it just works" with old apps.

I assume they will be releasing a build configuration for native ARM software deployment on Windows.


I would not assume such. They actively seem to hate this scenario.


Microsoft also has this project, and UWP apps should work on ARM as well:

https://developer.microsoft.com/en-us/windows/bridges/deskto...


Neither of those are what I am talking about.


Agree. But that would mean people would make new Win32 apps which can theoretically be sold in places other than the store. (I realize Win32 apps + bridge make it possible to sell store apps based on Win32 too, but it's not exactly the way they want it to happen.)

It feels like Microsoft these days thinks that people who make desktop apps and don't sell them in the store are robbing them.

Microsoft just needs the library of legacy Win32 apps to get their Windows-on-ARM to take off, but they want as much as possible of the new apps to go through the store. This is evident not least from the new "The app you are installing is not from the windows store" messages in Creators Update.


It would mean they would have to admit that Windows is not an iPhone or an iPad; that they shouldn't blindly copy what that other fruit-themed vendor has been doing; that maybe people own their computers (not the platform vendor) and want to use them. Failure to acknowledge this is why 8/RT failed, and they are trying to repeat it, edging just a bit closer to sane, but it's pretty disappointing that they won't go the whole way.


I think they are trying to make the legacy desktop a professional and enterprise one, because there is no money selling operating systems to consumers.

If they can get the huge chunk of people who are using their computers without an enterprise domain, and without heavy workstation apps (basically just doing internet/docs/games), to use a cheap and safe iOS-style Windows, then they can at least start getting a 30% cut of the app sales from all those people.

I'm not too worried about that as long as there is a proper windows for professionals. In the above scenario they could start charging more of a premium than they do today for the legacy windows desktop because there would be no consumer version of it.


The import libs for Windows IoT seem to contain the full API surface, so we already have those; it's just that a lot of the functions are stubbed out in the actual DLLs on the system.


Didn't think about this when I wrote the comment above, but the IoT import libs are of course 32 bit, and the new Windows for ARM here is 64 bit, so I guess it's not actually very useful. Still, you can easily make your own import libs, but that of course does not give prospective developers the best experience. I guess we will have to see what Microsoft is going to do about this.


I hope they drop this on the surface 2. It would save it from being a paper weight.


Was the Surface RT an ARM device? I have one collecting dust; it would be nice if it got an update.


Yes. The RT devices ran ARM, which was one of the reasons for the locked down software policy (nothing else would run anyway).


Except that you were also denied much of the existing API and could only install software if you had a developer key from MS. I use my RT device for media playback in the car, but the Windows RT version of VLC hasn't been updated in years, and is somewhat buggy.


It also took VLC more than 2 years to develop and release the RT version. By the time it was released the market had already declared RT dead.


You might find that there are a lot of updates in the queue :-) . I dusted mine off a few months ago, and there were quite a number of them.


A qualified guess would be that you can just install it, because the bootloader is Microsoft signed so it would pass the secure boot check.

Edit: Apparently Surface 2 isn't AArch64, so probably not.


That assumes that:

(1) Microsoft makes generic install media available (rather than just device-specific restore media, for devices that ship with Windows 10 ARM), and

(2) Windows 10 ARM will run on ARMv7 devices, rather than being restricted to ARMv8 (AArch64) devices.


As far as I understand it, this needs a 64-bit ARM SoC?


I'm out of the loop on ARM. Can somebody explain the significance of it?


It means Intel won't have just AMD Ryzen's great performance/dollar to worry about now, but also ARM chips' even better performance/dollar. If you don't think this is a real problem for Intel, look back at why Intel failed in the mobile market.

Yes, Qualcomm was "established", but few would've thought Intel couldn't at least beat lesser-known competitors such as MediaTek. It couldn't, because the cost structures of Atom chips, and of Intel in general, are much higher than those of ARM chip makers, to the point where it prohibits Intel from competing head-on with them. Intel invested at least $10-$15 billion in the mobile market, and it only had 2% market share to show for it at the end. You can't even blame LTE modem technology for it, because Intel actually had better LTE tech than MediaTek, second only to Qualcomm. If anything, that should've helped Intel become Qualcomm's main competitor.

For now, we'll only see competition between ARM chips and Intel chips in budget laptops (which happen to be the most popular type of laptop), but with Moore's Law dying and Intel hitting a ceiling on performance/core, there's no reason why ARM chips can't catch up to Intel at the higher end of the performance range, too.

And don't forget that right now ARM chip makers are still building first for smartphones. That means they try to have a 4-5W TDP for the entire SoC at most. They haven't even tried to build a chip that has 15W, 30W or 45W TDP for a laptop. But if things go well enough for them in budget laptops, I could see them start building those types of chips, too.

This is also MediaTek's story in the mobile market. It started out competing with Qualcomm only in $100 smartphones with low-cost chips, and now it has just announced a 10nm high-end chip (Helio X30) that will compete against Snapdragon 835. It may not be quite as good still, but it should be close enough (the closest MediaTek has gotten, in fact). I imagine Microsoft will begin supporting that chip (or its successors) in Windows 10 soon enough, too.


It's the processor architecture that in the past was used primarily on servers. Now it's almost universally used in mobile devices, too. Its ubiquity has created a much larger demand for cross-platform development, which is one of the reasons why technologies like JavaScript and Rust have become so popular. Additionally, it's lower power and lower heat compared to its x86 cousins.


> in the past was used primarily on servers

Embedded devices, yes, but ARM on servers is more like ARM's perpetual future.


Any word on actual devices for sale that can run this? Configurations?


Ha-ha, Microsoft will do it again. For Windows on all devices!


Not really. This is more about allowing ARM to enter the PC market. ARM couldn't break into the PC market without Microsoft's help and without being fully supported by Windows. The only other alternative was Chromebooks, and even Google didn't bother to make a big push for ARM in Chromebooks.


What about .Net on ARM?


I believe there was a mention in the Day 1 BUILD keynote somewhere that Samsung's work on .NET on ARM (built for Tizen specifically, but I would imagine generically useful) was going to be open sourced to the .NET Core project at some point.


So, as of today, Windows 10 on ARM supports x86 Win32 or UWP, but not .NET?


Oh, UWP .NET has always supported ARM (since Windows 8). I assumed the question was about .NET needs outside of the UWP.


Also, I would imagine the Win32 x86 emulation would support most/all .NET Win32 applications, to whatever extent the Win32 .NET Framework gets added to the emulation image.


.NET always supported ARM for embedded deployments, and nowadays UWP as well, so they have enough code around to update RyuJIT to also support ARM.



