Speech recognition can happen either locally or on Google's servers. Since about 2012 [1], Android has been able to do some types of speech recognition, such as dictation, on the device itself. Google Research has recently expanded this functionality, and it appears that much more of the speech recognition pipeline will run locally [2].
Since Android is open source, does that mean the voice recognition software (and/or the trained model weights) could, in principle, be ported to Linux?
In the latest versions, nothing is open source anymore.
Calendar, Contacts, the home screen, the Phone app, and Search are all closed source now.
(By the way, all of them, including the Google app, were open source back in Gingerbread.)
You can't do TLS without going through Google's apps (or bundling Spongy Castle); you can't use OpenGL ES 3.2; you can't use location services anymore, nor use Wi-Fi for your own location implementation.
Since Marshmallow, you are also forced to use Google Cloud Messaging; otherwise, the device will simply prevent your app from receiving notifications.
To "save battery power" and "improve usability", Google has monopolized all of Android.
To whoever downvoted the above post: please explain why you think it isn't relevant to the discussion, or provide counterarguments. All the points I made can be easily sourced (I can post the sources here if you wish), and all of them are verifiable.
It wasn't meant aggressively, just as a question. It's quite possible that the author of the comment I replied to hadn't used Android in a few years, had never cared, or had simply missed the announcements that the official apps were no longer supported.
Oh, wait: there were no announcements. They were dropped silently.