I am not sure whether the self-driving software weights scenarios by casualties. It should simply follow what it has been trained on: mimicking human driving that has been labelled as correct.
To return to your first scenario: as a human, seeing a family jump in front of my car, in a split second I would probably have tried to avoid them, even if doing so would have killed me. Someone reviewing my driving afterwards could decide, in hindsight, that it was not the right decision, and exclude my sacrificial driving from the training data.
But in the end, it would be just one brain (the self-driving AI) making those split-second decisions, for better or worse, and hopefully statistically better than humans in general.
If cars talk to each other (again, assuming trust, and that cars do not lie to each other), those messages could serve as one extra input to each car's self-driving brain. Indirectly, you could consider that a way of controlling another car's brain, if you know for certain how it will react to a given message. Each car would still make its own decisions and adjust according to the consequences.
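To make the idea concrete, here is a minimal sketch of what "one extra input" could look like: a vehicle-to-vehicle message that nudges, but never overrides, the car's own decision. All names, the message format, and the decision logic are hypothetical assumptions for illustration, not any real autonomous-driving API.

```python
# Hypothetical sketch: a V2V (vehicle-to-vehicle) message treated as one
# extra input to a car's decision function, alongside its own sensors.
from dataclasses import dataclass

@dataclass
class V2VMessage:
    sender_id: str
    intent: str        # e.g. "braking", "lane_change_left" (assumed vocabulary)
    trusted: bool      # whether the message passed some authentication check

def choose_action(own_sensors: dict, messages: list) -> str:
    """Pick an action from local sensing, nudged by trusted V2V input."""
    # Local sensing always dominates: brake for a directly detected obstacle.
    if own_sensors.get("obstacle_ahead"):
        return "brake"
    # A trusted peer's intent acts only as an extra input, not a command:
    # the car still decides for itself, here by easing off pre-emptively.
    for msg in messages:
        if msg.trusted and msg.intent == "braking":
            return "slow_down"
    return "maintain_speed"

print(choose_action({"obstacle_ahead": False},
                    [V2VMessage("car42", "braking", trusted=True)]))
```

The point of the structure is the one in the paragraph above: the message influences the outcome only because we know how the receiving car weighs it, and an untrusted or lying sender is simply ignored.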