
How can the kernel trust your offline verification? At best, what you're arguing for sounds like signed binary blobs.

How do you dynamically instrument things? How do you write programs which decide, at run time, to move compute closer to the hardware?



> How can the kernel trust your offline verification?

My thinking was that I would trust the offline verification, and this would be enough for me (as superuser) to load the precompiled module into the kernel. I believe LLVM does something vaguely comparable, where it can verify that bitcode modules are well-formed, to protect against certain classes of compiler bugs. (Java of course does its class-verification at runtime.)

I don't think this idea is all that different though. If the JIT implements caching of its generated native code (assuming it can do this securely), then we'd get the best of both worlds: I don't need to be a superuser, and we avoid needless recompilation.

> How do you write programs which decide, at run time, to move compute closer to the hardware?

When would this make sense? If you've got a working kernel implementation, which is robust and trusted, why would you not use it?


If you're writing a router, or a high-speed trading system, where you want to respond to packets on the wire with lower latency.

The point of a safely programmable kernel is that the user gets to inject third party code into their kernel without needing to know if it's safe, because the kernel will take care of it.
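The flow being described could be sketched, very loosely, as a toy model: untrusted bytecode is rejected unless the "kernel" can prove it safe before loading. The instruction set and checks below are invented for illustration; this is nothing like the real eBPF verifier, which reasons about jumps, register types, and much more.

```python
# Toy model of a kernel-style bytecode verifier (illustrative only;
# the instruction set and checks are invented, not real eBPF).
MEM_SIZE = 64

def verify(prog):
    """Reject programs the 'kernel' can't prove safe."""
    initialized = set()
    for op, *args in prog:
        if op == "mov":          # mov dst, imm
            initialized.add(args[0])
        elif op == "add":        # add dst, src
            if args[0] not in initialized or args[1] not in initialized:
                return False     # read of an uninitialized register
        elif op == "load":       # load dst, addr (constant address)
            if not (0 <= args[1] < MEM_SIZE):
                return False     # out-of-bounds memory access
            initialized.add(args[0])
        elif op == "exit":
            return True
        else:
            return False         # unknown instruction
    return False                 # fell off the end without exiting

ok  = [("mov", "r0", 1), ("mov", "r1", 2), ("add", "r0", "r1"), ("exit",)]
bad = [("add", "r0", "r1"), ("exit",)]   # reads uninitialized registers
oob = [("load", "r0", 9999), ("exit",)]  # out-of-bounds load
```

The user never has to know whether `bad` or `oob` is safe; the check happens on the kernel side before anything runs.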


I think you misread my question. I asked why you wouldn't move the code to run in the kernel, if you have that ability.

Anyway, if I'm understanding things correctly, the point of using JIT is to handle the compilation in a trusted context rather than having it run as the user.


You wouldn't write the code in the kernel because writing safe code is almost impossible for humans without a lot of tooling help, and that tooling looks a lot like a sandbox.

I think you're fundamentally not understanding why we have virtual machines in language implementations, or process boundaries in operating systems, and why these things are good and useful. Because if you did, you'd see that a sandbox in the kernel is a hybrid of the two ideas.


> I think you're fundamentally not understanding why we have virtual machines in language implementations, or process boundaries in operating systems, and why these things are good and useful.

No. My understanding is fine.

> You wouldn't write the code in the kernel because writing safe code is almost impossible for humans without a lot of tooling help, and that tooling looks a lot like a sandbox.

Right, but with eBPF, you have exactly that.

My question was in response to your "How do you write programs which decide, at run time, to move compute closer to the hardware?"

To rephrase, my question was this: If you have the ability to move code into the kernel without concerns of stability or security, why would you wait until runtime to decide whether to do it? Why wouldn't you just do it unconditionally?

> You wouldn't write the code in the kernel because writing safe code is almost impossible for humans without a lot of tooling help, and that tooling looks a lot like a sandbox.

Of course. My question there was about the use of JIT rather than ahead-of-time compilation. As we've now both said, the answer is that eBPF is able to move the compilation out of the hands of the user, avoiding having to trust the user. It may also be helpful that the input to the JIT can be built up at runtime, as with the routing example you mentioned, but this could be done even if we trusted the user to handle the compilation.

This doesn't mean my suggestion is unworkable. You could entrust the user with the compilation process, and you'd still get the robustness guarantees, but, well, you'd have to trust the user. Better to have the kernel handle the compilation (and ideally caching).


> How can the kernel trust your offline verification?

You can use proof-carrying code. There is a residual "online" verification of course, but it ought to be quick and efficient.
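A deliberately trivial sketch of the idea, in Python: the producer ships a certificate (here, a claimed value interval after each instruction) alongside the program, and the consumer's residual online check only has to confirm each step locally, rather than discover the safety argument itself. The instruction set, certificate format, and `SAFE_MAX` bound are all invented for illustration.

```python
# Toy proof-carrying code: the producer supplies a certificate (claimed
# interval for the register after each step); the consumer checks each
# step mechanically and confirms the final bound. All names invented.
SAFE_MAX = 1000  # hypothetical safety bound

def check(prog, cert):
    if len(prog) != len(cert):
        return False
    lo, hi = 0, 0  # the single register starts at 0
    for (op, n), (clo, chi) in zip(prog, cert):
        if op == "add":
            lo, hi = lo + n, hi + n
        elif op == "mul":
            lo, hi = min(lo * n, hi * n), max(lo * n, hi * n)
        else:
            return False          # unknown instruction
        if (lo, hi) != (clo, chi):
            return False          # certificate step doesn't check out
    return hi <= SAFE_MAX         # final interval within the bound

prog = [("add", 3), ("mul", 10), ("add", 5)]
cert = [(3, 3), (30, 30), (35, 35)]
```

In this straight-line toy the check is as cheap as running the program, but the shape is the point: each certificate step is verified by a constant-time local rule, which is what keeps the online part quick.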


You're right, but you're way ahead of me. I'd misunderstood the emphasis of the project, and was thinking I'd be a superuser, trusted by the kernel.


Well, you would still need "superuser" privileges for things like adding new capabilities to the proof verifier. Of course this might open you up to security problems if you're relying on incorrect assumptions while doing that. But then, this project also has trusted components of its own, such as the JIT. A proof verifier can be a lot simpler than a JIT.


> you would still need "superuser" privileges for things like adding new capabilities to the proof verifier.

You mean to upgrade eBPF itself? Well, of course. Same as any kernel upgrade.

> Of course this might open you up to security problems if you're relying on incorrect assumptions while doing that.

I don't follow. It's giving the system a full proof of safety. What assumptions are there? It seems very similar to Java's class verification, which doesn't suffer from issues with ungrounded assumptions.

> this project also has trusted components of its own, such as the JIT. A proof verifier can be a lot simpler than a JIT.

Interesting point. Approaching things that way might reduce the total amount of highly trusted kernel code.



