“Winston turned a switch and the voice sank somewhat, though the words were still distinguishable. The instrument (the telescreen, it was called) could be dimmed, but there was no way of shutting it off completely. [...] Any sound that Winston made, above the level of a very low whisper, would be picked up by it.”
- George Orwell, 1984
IIRC it was creating its own WiFi access point that your phone scanned for and connected to in order to configure it. I haven't set one up in a while though, so maybe it changed.
Explore and evaluate meshnet projects, find one that you like, and educate others about them.
Set up the hardware, link up with existing mesh if you're in a dense enough area.
Push federated services like Matrix and ActivityPub-based services like Mastodon, so when it's time to go mesh, people don't laugh when you say "It doesn't reach Facebook/Insta".
The important part is the configs; other than that, it's just another distro. A very small niche group maintained this distro, and it seems to be derelict. The useful part of BZ mesh is USB bootup, so any machine can become (and unbecome) a node.
Sometimes something is finished and nothing further is done until a new way pops up. The advantage of Porteus is that it's pared down for rapid boot-up, to reestablish a node on the fly while walking about.
I'll put something up in Show HN.
For now, don't let any WiFi stuff get trashed and wasted;
even "broken things" have components vital to home brewing your own infrastructure.
Max Headroom was an extremely subversive program and I continue to be amazed it was actually greenlit, produced and broadcast on mainstream television - in the United States for primetime broadcast, even!
I will provide some speculation, but first some back story, as I recall it: Max Headroom was, to start with, a collection of special effects to make a British talk-show host seemingly generated by computer, but still able to interact live with guests, etc. (A little bit of the same niche which the early muppets occupied, actually. See the early appearances of Rowlf the Dog on American talk show TV, for instance.) Anyway, the effect proved so captivating and convincing that the show became hugely popular, and a TV movie was made (still in England) which provided a sci-fi style origin story for the (ostensibly computer generated) Max Headroom character. The character’s popularity was still rising, also in the USA (Coca-Cola famously used Max Headroom in their campaign for the now infamous New Coke), so an American TV series was proposed and made, with most of the same cast reused in the same roles, and the story of the TV movie was used (IIRC) as the first two episodes, except with a slightly happier ending to set it up as an ongoing TV series.
The problem, I guess, was that probably none of the U.S. people greenlighting the TV series had actually watched the original British TV movie; they probably only thought about getting such a popular character. They probably did not know that things like 1984, Judge Dredd and V for Vendetta are all very distinctly British things. The Max Headroom TV movie, as well as the TV series, are both essentially very dark cyberpunk; acerbically critical of prevailing trends in technology and also scarily prescient for 1987. It was also very critical of television itself as a phenomenon, which was probably what ultimately killed it, and by that time the huge fad for the character had passed.
It was canceled rather abruptly – the last recorded episode was never aired, and there was still one whole fully scripted episode which was never produced.
There was a great article some time back about the history behind it. A lot of the episodes satirized the show's own network and specific network executives - some of which flew under the radar and some of which probably hastened the show's demise, etc.
Interesting story in many ways, particularly the wrestling for ownership of the show, and how they recreated it as a carbon copy for the American market. I remember hearing years ago that he wasn’t actually CG, which made a lot of sense — but as a kid I definitely thought he was.
The subversiveness of the show is great, but it’s particularly amazing how accurately they satirized the future.
Am I the only one who has far more sensitive content visible on the screen and filesystem of my computer than through its camera? I feel like, at least for the threat models that I consider likely, if someone manages to hack my laptop and get (for example) microphone access, the thing I am most worried about is that they will use acoustic analysis of keystrokes to recover my banking password, not that they will hear me... singing in the shower, I guess? Based on my understanding of current laptop OS's, just directly making a key-logger the straightforward way is easier anyway, so I'm not that concerned.
I fear that the sense of security people feel from putting tape over their webcam might be hiding much more realistic threats.
> far more sensitive content visible on the screen and filesystem of my computer than through its camera
I mean, yeah, being hacked (and thereby compromising your screen/HDD's contents) is an issue in the first place and keyloggers are only a few lines of code, but I don't think there is anything wrong with controlling the parts that you can control.
You can't make a physical "no keyloggers for this piece of text, please" switch, but at least they can't hear you talk to a close friend or watch you walk by as you get into the shower (though the latter is a bigger risk for women than for the more common HN visitor, I guess). Yours is not the first comment I've seen wondering why we even want this, and it's really weird to see. Why would we not want this?! I don't think that, just because we have webcam stickers, we suddenly stop caring about malware altogether. Even (metaphorically speaking) my mom understands that if you tape your webcam, a virus can still steal the bank login happening on the computer. Even if she doesn't really get the concept of a process, malware, or a web browser in the first place, she knows that taping a cam does not make everything on a computer virus-proof.
Obtaining compromising video and/or audio of you or someone you care deeply about will often be more than enough to obtain things like banking information and file access.
Yes. Ultimately the main purpose of any surveillance system is extortion. Nudies are not relevant to the topic. Evidence of crime or other compromising behavior is.
It doesn't even need to be about you. It could be about your parent, child, or sibling. What judge wants to risk exposing a relative to prosecution, even if the relative would be acquitted?
If you wonder about inexplicable judgments, consider the possibilities.
Getting your password isn't as compromising as, say, getting a photo of you nude, in the security theatre-sense.
Think of the OPM hack[0] and the potential for a foreign entity to compromise someone (that they know has a security clearance) with blackmail. This has a much larger and potentially far longer ROI than a simple compromise of a password; which probably has to be changed every 'x' days, anyways.
You have to think of things like this in terms of long-term yield and not immediate results; unless we're talking crackers, then they're almost always after immediate results.
> Getting your password isn't as compromising as, say, getting a photo of you nude, in the security theatre-sense.
That... really depends on the society you live in and your own insecurities.
Fear and prohibition surrounding the naked body is a cultural/religious thing.
The most prudish US regions are probably somewhere between central Europe and Islamic countries like Saudi Arabia.
I won't generalize to the whole of the US because I know there's a wide spectrum.
However US-wide media serve the lowest common denominator, spreading the unfounded phobia and taboo of nude people.
Personally I couldn't give a damn whether someone has pictures of me nude. People probably do since I was bathing in the ocean naked on some occasions. I don't really want any pictures of myself on the internet (with my name), but nude or not doesn't make much of a difference to me.
I'm more afraid of ending up in some facial recognition database and private companies starting to track my every move.
The tracking doesn't bother me as much as the individualization of data-mining does. The use of collected data to run unexplainable AI systems which can do things from determine how the courts treat me, to what is recommended to me, is the real threat.
The problem with blackmail is that it almost never happens; the only consistently profitable digital blackmail is the cryptolocker. The real negative to your life is the unaccountable compiling and production of data products which can be used to describe your life to others: credit ratings being mapped to internet activity or insurance rates, digital dossiers being sold to companies to screen new hires in combination with some black-box "risk" AI system, or surveillance cameras running people in real time through a profiler which then redeploys police units on the go.
The risk is... not dramatic? I think that's what I want to say. It's mundane; so mundane that it's been creeping up on us for decades and is very gradually harming people, because individually each piece sounds reasonable.
I started doing it because I _never_ use the camera and it took about 30 seconds. It's a _very_ minor piece of security, but it costs nothing. Incidentally, it was due to a (very obviously fake) spam email:
> You probably noticed your device is acting strangely lately. That's because you downloaded a nasty software I created while you were browsing the Ƿornographic website...
> The software automatically:
> 1) Started your Ƈamera and begun recoding you, uploading the footage to my server...[...]
> If you do not do what I ask you now, I will upload this ugly video file with you ... and the stuff you were watching to several video upload sites and I will send the links to all your friends, family members and associates.[...]
> I think 2,000 USD is a fair price for my silence. I know you can handle to send me this money - and it is enough for me to get lost. So how do you send the cash?? Bitcoin.
I wasn't worried about it being real, but being able to dismiss it out of hand is useful.
If the keystroke analyzer was an automated script, being used by some script kiddie who found it on the internet in the near future, the path of least resistance ends up with a really really large number of lanes.
The worry is more about stalkers and abusive partners: an abusive partner using it to further control you, or to hear that you plan to leave, or that you're talking with someone he disapproves of (e.g. the victim breaking out of the isolation and distorted reality the victim is in).
Those situations are physically dangerous when they happen.
Sure but you're attacking the problem from the wrong vector: the partner has physical access to the device and control to install malware, and is abusive. Turning off the microphone or camera is going to trigger retaliation anyway - it doesn't help in this scenario because the problem is the abuser.
Helping the victim out of that relationship is what needs to happen - no amount of technical cleverness is going to make it better. Improving our laws certainly would (i.e. taking that sort of device compromise as a serious crime would be extremely useful in this context).
The ability to communicate privately is necessary for that "helping the victim out". The easier it is for an abuser to control the devices of the abused, the harder it is to get out. And the harder it is to stay hidden after getting out.
The default being "easy to spy on and control someone else's devices" makes abuse easy and escape (both mental and physical) hard.
That helping, escaping, and staying away does not happen without communication and privacy.
Why do I trust my WiFi cards disable pin, but not the "soft button" on my laptop that triggers it via the OS? I get that there is more software when it goes through the OS, but I trust that a whole lot more than the firmware on the WiFi card.
This is from the same group that tries to explain how they don't use proprietary firmware blobs by using the Redpine chips, just because the blob is already flashed onto them rather than loaded into RAM at boot.
I wish they would just say it how it is rather than overselling.
> Why do I trust my WiFi cards disable pin, but not the "soft button" on my laptop that triggers it via the OS? I get that there is more software when it goes through the OS, but I trust that a whole lot more than the firmware on the WiFi card.
When using a physical kill switch you have to trust the hardware. You don't have to trust the firmware or operating system, as a physical kill switch usually disables the power lines to the device. That's something a remote attacker can't circumvent.
They are just controlling the W_DISABLE signal, which the firmware uses to control the actual radio hardware. It is _not_ a hardware switch in the sense that the power is shut off.
At least in the case of Purism’s Librem 5 (unreleased), it’s not a disable pin, it’s removing power entirely from the wifi peripheral. Which is a step up for sure.
That may not work for low power electronics: they can gather enough power to operate from I/O lines that have pull-up resistors on them.
When this goes really wrong, the chip can burn out.
Some satellite provider in the past tried to exploit differences between genuine cards and fake cards based on Atmel 8515 simulator boards. The fix was to lift the VCC pin of the 8515.
You can turn that off, though. Not that I trust it to work, but at least legally you can deny that.
Do you happen to know about GPS? Because my phone sometimes picks up GPS faster than should be physically possible (knowing a thing or two about how GPS works), and sometimes it takes a normal amount of time. I have GPS turned off almost all the time, don't have a Google account logged in, everything I can deny to Google apps I have denied, nothing weird seems to be running in process lists... any idea where to even start looking?
Most phones use Assisted GPS. It's a combination of cell tower triangulation and GPS. You'll get a potentially inaccurate "GPS" location based on the cell towers and signals you can see, and this will eventually turn into a real GPS lock.
As I understand it, this cell tower location actually helps achieve a real GPS lock faster than would otherwise be possible - but I'm unsure how that works... I'd be guessing at best ;)
For non-assisted GPS, the GPS device needs to download orbital information from the satellite directly when first powered on - with only ~50 bits/s of bandwidth, it can take a while - minutes to tens of minutes. With Assisted GPS, the GPS device can download the satellite orbital information and almanac directly from AGPS server (e.g. cell phone tower), usually at much higher bandwidth - as those local, always-running servers can constantly download and cache all satellite information.
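The "minutes to tens of minutes" claim follows directly from the published navigation-message frame structure; a back-of-the-envelope sketch:

```python
# Back-of-the-envelope timing for a cold GPS fix, using the L1 C/A
# broadcast rate of 50 bits/s and the published frame layout:
# one frame = 5 subframes of 300 bits; ephemeris lives in subframes
# 1-3; the full almanac cycles over 25 consecutive frames.
NAV_RATE_BPS = 50          # navigation message bit rate
FRAME_BITS = 1500          # 5 subframes * 300 bits
EPHEMERIS_BITS = 3 * 300   # subframes 1-3
ALMANAC_FRAMES = 25        # frames needed for a complete almanac

frame_s = FRAME_BITS / NAV_RATE_BPS          # seconds per frame
ephemeris_s = EPHEMERIS_BITS / NAV_RATE_BPS  # best-case ephemeris wait
almanac_s = ALMANAC_FRAMES * frame_s         # full almanac download

print(f"one frame:       {frame_s:.0f} s")     # 30 s
print(f"ephemeris (min): {ephemeris_s:.0f} s") # 18 s, if you catch the start
print(f"full almanac:    {almanac_s / 60:.1f} min")  # 12.5 min
```

So even under perfect reception, a truly cold receiver waits tens of seconds for ephemeris and over twelve minutes for the complete almanac, which is exactly the gap AGPS closes by delivering the same bits over the cell network.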
The sats don't change orbit every few hours though; also, without AGPS you can usually reuse the ephemerides from the last time you used it.
This is fun to observe when you buy a new phone and haven't connected it to a network yet: it takes a good while to find any satellites and finally get a fix. Then, after cycling GPS, it finds them almost instantly.
As I understand it, roughly knowing your position lets it get a lock on enough satellites faster, because, given position and time, it knows where they're supposed to be.
Also, I believe that part of the reason why Google wants that SSID info is because they can later use it for location purposes as well, making that initial position determination more accurate than with just the cell towers alone. In general, the more environmental data they can correlate with accurate GPS coordinates, the better they can predict location when GPS is unavailable.
No, that involves sending data to Google (one of the reasons I mentioned turning all that crap off), or a relatively complex config to swap out the location provider for Mozilla's, which I haven't done yet on this phone.
To be fair, the WiFi connection is off; it's just the radio that's not. It's not so much a lie as an implementation that covers the average person's expectations 100%, but keeps something running they might not want, though they weren't thinking about it when they pressed that button.
To be clear, I don't like the behaviour... but it's not misleading, it just doesn't cater to everyone's desires.
It's about "who" you distrust. If you distrust the hardware vendor, hope is lost. If you distrust someone with physical access to the device it's about how hard it is, and firmware/hardware is harder to hack (generally) than software. The likelihood of finding a remote exploit into the firmware is a lot lower than finding a remote exploit into a software disable.
Note that a physical switch can be overridden just as easily as a soft switch: the vendor can just put a soft switch in parallel with it and you would never know. The hard switch could turn off the display, speaker, etc., while the processor and radio stay on.
Hopefully this would easily be detected, and the brand damage from the resultant public shaming should be enough of a deterrent. But maybe it's really well hidden and eludes detection, or people just don't care and there is no brand damage, or maybe even there's no "real" brand to damage (OEM crapware).
I’d imagine a vendor bothering to put an off switch will know that a simple teardown will show if it actually disconnects the device or just signals software to do it. The effort and expense would be for nothing. Most users would never bother to care about the off switch anyway, and the ones that do will know the truth.
Eh, as someone else mentioned the soft switch, you could very easily make this teardown resistant by putting the covert power rail on an inner layer of the pcb with the switching transistor out of the way connected to vias. It’s not teardown-proof, but wouldn’t be obvious with a trivial inspection.
Perhaps but this would also remove any pretense of deniability from the manufacturer. If discovered it would just show that they went to great lengths (extra engineering effort to implement and hide) which could only be explained by the conscious decision to make it a back door and bypass it later in software.
So the issue would no longer be one of ignorance (no button) but of premeditation (button with hidden bypass).
Nest Guard is a monopoly? Or were you referring to something else? I also think a lot of this is completely non-essential (e.g. Nest Guard). I do not have any such "smart devices" at home, and encourage others to do the same. They provide, in my opinion, very little benefit for a great sacrifice. And all that aside, they're just too dang expensive. I don't see the point of spending $600 on a machine-learning toaster.
I think we're still finding that out. Have you noticed the attitude towards large tech companies among the general public lately, especially those whose business models are entangled with privacy issues?
That was my first thought when seeing Sanger's initial tweets about this. Someone else replied to the tweet pointing it out, and his response was along the lines of "that's not an off switch, it just blocks it". I'm not really sure why blocking the camera isn't strictly _better_ than a physical off switch, at the very least from a trust perspective
It's a good idea, but I think a separate switch might be better. I might want to talk on a non-video normal call (most of what I do) or record something but not use the camera, and would want to keep it shut. I think many other people might use the microphone separately from the camera as well.
I thought lying about such functionality to consumers would be illegal. I feel that if you sell me a device with the explicit promise that “off means off”, then bypassing that would be.. false advertising?
Is that true? Assuming they’d market it that way, originally?
Most people can barely use a webcam correctly even without a mechanical switch that adds a 50% chance they'll deactivate it and think it's broken. A physical hand-operated integrated cover that slides in front of the camera is a great option for software engineers and HN-level thinkers, but it will only cause grief for the majority of users. Having hidden redundancy would allow Apple's tech support to turn the camera back on when Grandma calls in unable to use FaceTime. How many failed attempts will a typical user make before permanently giving up on a feature, maybe 2?
I think people have been sufficiently trained in using off buttons that if you give it the right affordances (visibility, an On/Off label, an LED to show it's on), they'll be able to use it without too much friction.
After all, they had to turn on the computer first.
Yes but you can't protect yourself against brands being hostile to consumer by asking them for features to protect you. The only protection against that is boycott.
No, the goal of the switches is to fend off 3rd party hackers.
The printer/scanner/copier at my workplace needs about 30 seconds to actually switch off after you press the physical switch; it's more of a command than an actual switch.
Of course, the devices most people carry around intimately all day have batteries sealed inside them. Even if the batteries were removable, and there were no secondary battery, a device could still (and perhaps increasingly will) be able to harvest power, or be actively powered wirelessly and even at some distance.
TVs and other gadgets that have no microphones or video cameras embedded in them should have a certification like "organic". "NoSpy Certified" or a similar trademark would be appropriate right next to the UL and CE marks.
Really? Growing up on a farm, and having friends/acquaintances who were either organic or interested in going organic, I don't recall that pesticides and herbicides were allowed. The so-called "organic pesticides" and "organic herbicides" I've seen are mostly just all natural snake oil...
There are some organic and inorganic ingredients allowed in organic farming, depending on whether they are (1) allowed, (2) banned, or (3) restricted. Before questioning organic, everyone should understand the concepts and standards of organic farming.
> And UL was kind of a joke from having dealt with them personally.
UL may be a 'joke', but it does keep a lot of sketchy electronics off the market (at least in the US). In my personal experience dealing with them, bad hardware design is more often to blame for a bad experience with UL than the UL process itself (which is brutally long, but perhaps for a reason...).
That's exactly what I would expect from a standards-based certification. But it's also why it's useful - "it is safer" is much easier to fudge (see also: Boeing) than "it is in compliance with this 40-item check list". Either way, it sets the baseline, so there's a net benefit so long as most products would be below that baseline without market pressure to certify.
Most standards have an escape hatch, like PCI's compensating controls. And we weren't blowing smoke, later one of our engineers broke his arm because of the changes on design they wanted.
To suggest an alternative, all devices capable of internet communication must allow their traffic to be decrypted by their owner (how that password gets set is up to the individual device). This would allow owners who care to set up a man in the middle and confirm that all outgoing (and maybe even incoming) traffic to the device is what they expect. Any outgoing message that is not decryptable or not expected would be a red flag (which the vendor could try to explain if it is a non-spy message like an unusual error code).
Cryptographically I believe it is possible to encrypt a message such that either the user's key or the vendor's key can decrypt, but I'm not 100% sure.
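It is possible, and it's standard practice: encrypt the message once under a random content key, then wrap that key separately for each party who should be able to read it. A stdlib-only toy sketch of the scheme (the SHA-256-based stream cipher and HMAC tag here are purely illustrative, NOT a real cipher; a production system would use AES-GCM plus RSA or ECIES key wrapping):

```python
# Toy multi-recipient encryption: one content key encrypts the payload,
# and a wrapped copy of that key is included for each recipient, so
# either the owner's key or the vendor's key can decrypt.
import hashlib
import hmac
import secrets

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Illustrative keystream built from SHA-256 in counter mode."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(message: bytes, recipient_keys: list) -> dict:
    content_key = secrets.token_bytes(32)
    nonce = secrets.token_bytes(16)
    ciphertext = xor(message, keystream(content_key, nonce, len(message)))
    # integrity tag lets a recipient verify it unwrapped the right key
    tag = hmac.new(content_key, message, "sha256").digest()
    # wrap the content key once per recipient
    wraps = [xor(content_key, keystream(rk, nonce, 32)) for rk in recipient_keys]
    return {"nonce": nonce, "ciphertext": ciphertext, "tag": tag,
            "wrapped_keys": wraps}

def decrypt(blob: dict, my_key: bytes) -> bytes:
    for wrapped in blob["wrapped_keys"]:
        ck = xor(wrapped, keystream(my_key, blob["nonce"], 32))
        pt = xor(blob["ciphertext"],
                 keystream(ck, blob["nonce"], len(blob["ciphertext"])))
        if hmac.compare_digest(hmac.new(ck, pt, "sha256").digest(), blob["tag"]):
            return pt
    raise ValueError("no wrapped key matches this recipient key")

owner_key = secrets.token_bytes(32)
vendor_key = secrets.token_bytes(32)
blob = encrypt(b"toaster telemetry: 2 slices", [owner_key, vendor_key])
```

Either key now recovers the same plaintext, while any third key fails the tag check.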
What you’re describing is possible and is done currently in corporate environments by forcing devices to accept a self signed cert that allows companies to spy on their employees traffic.
Haven’t seen anything for the home market yet, and I’m not sure how you’d get a consumer IOT device to accept your cert.
Whenever the topic of MITM middleboxes comes up, there is usually a vehement opposition to them from much of the security community... while they bring up some valid points, I can't help but wonder if there is some deeper agenda behind that opposition, since these also seem to be the same people who are pushing the user-hostile walled gardens.
(Personally have been using a MITM proxy on my network for over a decade. Besides the filtering, it also has a useful side-effect of upgrading all connections to TLS 1.2, and when 1.3 becomes more common or mandated, I only need to upgrade the OpenSSL the proxy uses to start using it for all TLS coming from the network. Even older devices that don't support it will still use it when communicating outside the network.)
One of the ways to deal with IoT spying is with enforced standards, so end users don't rely on untrusted black boxes in the 'cloud' for the services provided.
You should be able to firewall your smart toaster so it can only communicate with a service under your control. In particular a service the manufacturer has no control over.
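As a sketch of that firewalling, with a hypothetical toaster at 192.168.1.50 and your own service at 192.168.1.10 (both addresses are made up for illustration), an nftables fragment on the router could look like:

```
# /etc/nftables.conf fragment (hypothetical addresses): let the smart
# toaster reach only a locally controlled service, drop everything
# else it tries to send out.
table inet iot_jail {
    chain forward {
        type filter hook forward priority 0; policy accept;
        ip saddr 192.168.1.50 ip daddr 192.168.1.10 accept
        ip saddr 192.168.1.50 drop
    }
}
```

The same effect is achievable with iptables or most consumer-router firewall UIs; the point is that the deny rule lives on hardware the manufacturer doesn't control.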
The solution to avoiding spying devices isn't MITM, it's making the devices run software that you can trust. If you buy an user-hostile walled garden then you frankly deserve it being hard to MITM.
Stick all your IoT stuff in a private VLAN and use a MITM proxy to decrypt / recrypt everything. I have yet to find a consumer device of any sort that lets you easily swap out SSL certs. Even the devices where it’s possible to set up LetsEncrypt will usually get overwritten by firmware updates.
Right, but how do you decrypt the vendor's encryption? That's why I think you'd need to be able to provide a second key, because if vendors give out their decrypt key they may as well send it all in plaintext.
It's interesting that no laptop vendor seems to have included a physical webcam cover by default; to me that seems like a no-brainer that most customers would benefit from.
Edit: Thanks for all your comments. It seems that there are a few vendors that do offer webcam covers by default now. Definitely will have to check those out.
I bought an aftermarket webcam cover that slides open or closed. After a few days, I realized I was always checking to see that it was closed. I went back to using a piece of black electrical tape because that way I KNOW it's blocked without having to check. Besides which it's cheaper. Anything to lower the cognitive load.
I put electric or duct tape the laptop webcams and have done so ever since my first laptop with a webcam. Only way to be sure you aren't being recorded other than opening up the laptop and physically removing the webcam.
My Lenovo P1 has a fn button to disable the mic and a light showing if it's enabled or not. Not as good as a switch, but makes me confident some random application isn't listening at least.
To everyone doing confirmation bias replies: yes, I'm sure we can find lots of instances where a cover came with the laptop. My old EEE PC netbook also had one (and it kept falling open if I put it in my backpack on its left side). Adding them, however, either fell out of style or was never common to begin with, and the more interesting question to answer is why that is when clearly so many consumers care for it.
I saw a laptop recently (within the year, I think) that had the camera situated under one of the buttons IIRC, so that it was naturally covered when not in use
This doesn't even mention location data from phones which is so persistently on and hard to disable, yet any usage of maps requires it. I want it off after that...
Even with data off, WiFi off, and GPS off, mobile phones interact with towers, and carriers are often required to keep logs of your location for a while in case they are requested: https://ssd.eff.org/en/module/problem-mobile-phones#1
>Another “clever” solution is to use a software off switch, like this (for Windows). But it simply turns your webcam’s driver on and off.
The OpenBSD people have recently made the microphone off by default at the kernel level. You need root access to turn it on again, and you need to take an explicit action to have it come on at boot. That is actually sort of secure, as people don't run as root in *nix environments when doing things that can get them cracked (i.e. running web browsers). Turning the microphone on should really be an admin-level thing...
Perhaps what we want isn't a switch but a button. You can activate it, but it is off by default. There really needs to be an obvious light as well...
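For reference, the OpenBSD knobs in question are sysctls (kern.video.record is the camera analogue of the audio one):

```
# OpenBSD: recording is disabled by default; enabling it needs root.
sysctl kern.audio.record                      # kern.audio.record=0
doas sysctl kern.audio.record=1               # enable until next reboot
echo kern.audio.record=1 >> /etc/sysctl.conf  # explicit opt-in at boot
# kern.video.record works the same way for the camera
```

Note that even with recording disabled, applications can still open the device; they just read silence/black frames, which keeps software from breaking outright.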
The USB spec has a part where USB devices can be turned on and off by software on a USB hub. Unfortunately, almost no USB hub implementation/device supports this part of the spec. I have been looking into this for a while, with the goal of watchdogging and power-cycling unreliable USB devices. The only hub that seemed to support it was unfortunately no longer on sale, even back then.
Maybe this could be a nice kickstarter, a smart USB hub that can be integrated into the smart home/IoT world. I'd buy it.
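For what it's worth, the open-source uhubctl tool now maintains a list of hubs that actually implement per-port power switching and can drive them from the command line. A usage sketch (the bus location 1-1 and port 2 are placeholders for whatever `uhubctl` reports on your machine):

```
uhubctl                      # list attached hubs that support per-port power
uhubctl -l 1-1 -p 2 -a off   # power off port 2 of the hub at location 1-1
uhubctl -l 1-1 -p 2 -a cycle # power-cycle the port (watchdog-style reset)
```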
Since it is separate software, not the webcam software, that is switching the webcam off, we would not have to trust the webcam software maker to fulfill the promise.
It's too bad there's not a company with enough funding and incentive to make equivalents that don't need to phone home. Ones that are competitive in price, functionality, etc.
While that would be ideal, I don't personally mind the phoning home if it only happens after the wake word and if the cloud service greatly improves the response capabilities it might otherwise have. Yes, the word can be heard by mistake, but that's just the risk trade-off I make.
I think a physical off switch would be good. At the moment I just unplug my Alexa if I'm particularly concerned it might hear something sensitive.
I know many HN readers are far more privacy-conscious than I am, but that's just how I think about it. I personally consider cybercriminals and people who dislike me far greater privacy and security risks to me than tech giants or even the US government.
What about having a home router which had a visual alert to all outbound traffic from connected devices to their locations.
Super freaking simple to implement.
And if it were a page that you could just toggle the ability of the stream flow by clicking on it... to create the FW rule instantaneously and stop that flow.
You pull up a dashboard and see all your streams. If you see a stream from [phone]-->[Facebook], you can just disable it. (Where [Facebook] is a list item covering all the known FB addresses, etc.)
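A minimal sketch of that dashboard logic: label each observed destination against known prefixes and emit a firewall rule for any flow the user clicks to block. The Facebook prefixes and the nftables-style rule string here are illustrative assumptions; a real router would pull prefixes from published BGP/AS data:

```python
# Label outbound flows by destination owner and generate a block rule
# on demand, as the hypothetical router dashboard would.
import ipaddress

# assumption: in practice this list would come from AS32934 route data
FACEBOOK_PREFIXES = [ipaddress.ip_network(p)
                     for p in ("157.240.0.0/16", "31.13.64.0/18")]

def label(dst: str) -> str:
    """Return a human-readable owner for a destination address."""
    addr = ipaddress.ip_address(dst)
    if any(addr in net for net in FACEBOOK_PREFIXES):
        return "Facebook"
    return "unknown"

def block_rule(src: str, dst: str) -> str:
    """Rule string a router-side script could feed to its firewall."""
    return f"add rule inet filter forward ip saddr {src} ip daddr {dst} drop"

flow = ("192.168.1.23", "157.240.1.35")   # [phone] --> [Facebook]
print(label(flow[1]))
print(block_rule(*flow))
```

The hard part in practice isn't this matching, it's observing the flows in the first place (conntrack, flow export, or packet capture on the router) and keeping the prefix lists current.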
Well, the thing is, these are absurdly fine electronics. It is trivial to reroute so that the switch applies to the LED and not the camera itself (plus some other trickery based on reading its state while keeping it covertly operational). Said lie could still be detected in other ways, of course, but it wouldn't be a simple and complete fix.
To be fair, if I wanted to build covert surveillance I could also trivially make the switch appear to turn off the camera to commercial camera software but not actually disconnect from my covert software that I provide to TPTB.
“Must” according to who? Should, in many cases, sure.
I’d rather physically removable sensors rather than off switches, though.
Cameras on laptops are far from my greatest concern. Microphone on cellphone is a much bigger concern, as well as perpetual passive metadata plus sensitive data on third party servers required for normal functioning of most services.
So how would you do that with routers, which in the UK conveniently pass a copy of all communication to the NSA, with phone baseband chips (Qualcomm), or with PC CPUs, which all have mandatory "lawful" backdoors?
The webcam and microphone are only half of the story.
I would totally buy into this. Recently I found two really disturbing things that made me uninstall Instagram:
- First, while it's open it listens to what you say and then shows you ads based on that (no, I didn't search for the same thing, nor was it an expected interest of mine).
- Second, I found my location was briefly turning on and back off by itself; during one of those on/off cycles an Instagram notification arrived. After uninstalling, that never happened again.
We need to take back control of our devices. A friend went so far as to add a physical on/off switch to his MacBook. I went out of my way to get a phone I can root easily. There is a market for this.
Can anyone else comment or corroborate this?
I don't disbelieve you, but it's a gigantic claim if substantiated and would definitely get me to leave the platform.
It's just as easy to make a faux off switch that actually just triggers it to disable itself in software. The switch needs to actually cut ALL power going to the camera hardware.
I remember some webcams having LEDs that were controlled in software so hackers were able to turn the LED off in code while still recording with the camera the entire time.
For now, I will still have my physical webcam cover on because hackers can't stop it from working without physical access to my machine.
That said, there isn't much interesting to watch if you really wanted a live feed of my webcam (other than frustrated looks on my face while I'm debugging the work issue du jour).
Meh. Houses still have windows, and people still have binoculars, but we seem to get by fine with blinds. Tape over your webcam, unplug Alexa, or turn off your phone if you want more privacy.
More importantly, there is a social norm that you don't look through people's windows with binoculars. Of course police, spies, or creeps might do it, but that's incredibly rare. Unfortunately, the social norm (and business model) of the modern web is that companies build ever-more-powerful binoculars to constantly stare. That's the real problem we need to fix.
> unplug Alexa, or turn off your phone if you want more privacy.
Fully turning off either means having to wait for the device to boot up whenever you want to actually use it. That time adds up. A mic kill switch would allow the device to be used immediately at the flick of that switch.
I agree with the author's aims, but for the average person the difference between a hardware and software switch is likely immaterial. To the uninitiated, there's not a great reason to trust that Alexa's "mute" button does what it says.
To gain trust, I think we need tactile physical interventions: a built-in webcam cover and a slider over the microphone. If it doesn't exist already, I'd imagine there's a market for a nicely designed phone case that includes both.
How do we enforce this? We want the federal government to start investigating factories & banning imports of electronic devices with X sensor and Y connectivity capabilities? Do we sue Best Buy unless it changes all its suppliers immediately? The precedents set by categorizing and condemning could create more harm than good.
Like so many other privacy articles I read, this one has its heart in the right place, but it treats reality and implementation as an afterthought.
It's about time this idea caught some traction; I've been taping over cameras, stabbing microphones with a safety pin, and pointing out the absence of lens covers for years.
I would dearly love to have this. If it's controlled by software, then it is subvertible in a way that pure hardware solutions should be invulnerable to.
The 1st-generation Echo has a hardware microphone cut-off, but I've never seen confirmation that the Dot (or any later variants of Echo) continue to have a hardware cut-off.
I'd hazard a guess that once the public seemed relatively unconcerned about Echo snooping on them, the hardware cut-off would have been removed for cost reductions (alongside the twist-to-adjust volume control).
A vendor who doesn't like it could just make the on/off switch very inconvenient to use, for example by making the device take a very long time to become available again after it has been switched off.
Thus users would be strongly discouraged from using the switch.
This entire branch of the discussion is being buried in downvotes, but that is actually a very good analogy. Except in the case of Alexa, it's like you are holding the onion and someone else is slicing it. With a very sharp knife.
Don't think this analogy really applies. I mean you have complete control of the knife - it's a physical tool you hold in your hand. Can't compare this with any electronic device at all, which always does lots of stuff behind your back.
In theory. Plenty of reported instances of audio being recorded in situations where the wake word wasn't used, was misheard, etc. So the wake word concept doesn't seem reliable. :(
This sounds similar to the "do not track" setting in browsers ... which was respected by almost no websites. Why would it work this time? (Physical switch or not)
"Do not track" is kind of fundamentally misguided because it's physically impossible to verify and amounts to just another bit of tracking information.
Physical switches, by contrast, are auditable: if the switch has been verified to actually cut power, it works.
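For context on why "do not track" was unverifiable: the entire mechanism was a single advisory HTTP request header, which a server could read and then ignore. A minimal sketch (the URL is a placeholder):

```python
# "Do Not Track" in its entirety: one advisory request header.
# Nothing on the wire verifies what the server does with it.
from urllib.request import Request

req = Request("https://example.com", headers={"DNT": "1"})
# urllib normalizes header names, so the stored key becomes "Dnt".
print(req.get_header("Dnt"))  # prints "1" -- an honor-system bit
```

And because only some users set that bit, its mere presence became one more distinguishing signal for fingerprinting, which is the irony the parent comment points at.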
Metrocop: “She'll get years for that. Off switches are illegal!”
—Max Headroom, season 1, episode 6, “The Blanks” https://www.maxheadroom.com/index.php?title=Episode_ABC.1.6:...