1) What do you mean? It depends on the field, but in most mathematical papers I read, the ideas are explained with words, not with symbols -- the symbols only serve as helpful shortcuts. Anyway, I believe that properly used symbolism usually clarifies the issue at hand, especially when the verbal explanation is very long. For instance, which is faster to read and comprehend:
"Let V1 and V2 be a subspaces of W, such that their intersection is zero. Let f be a mapping from a direct sum of V1 and V2, such that it takes a vector, whose first component is x and second y to a difference of x and y multiplied by two."
And this was an easy example; I can think of _a lot_ harder ones.
There is no regulating body for mathematical notation -- notation is usually created by some mathematician who invents it, regards it as useful, and introduces it in a paper or book. Frequently, more than one notation is introduced for the same thing, but usually only one survives -- hopefully the best one. The only ways to encounter several different notations in use at once are to read either very old works, which is not a good idea anyway, or the most recent ones, and I presume that people who are able to read those are also able to get over such a minor problem.
Seriously, I believe that mathematical notation is a lot clearer, more intuitive and easier to understand than the syntactic rules of many programming languages, for instance C++. Symbol overloading almost never poses a problem, since the intended meaning is usually obvious from the context. If one frequently misunderstands the intended meaning, it is a sign that he does not really get the concepts involved, and the fact that he is confused by the notation is the least of his problems.
Besides, symbol overloading most often takes place only when the sign represents the same idea in all contexts. For instance, one usually uses the '+' sign to represent a binary commutative operation whatever structure we are talking about, because, well, it represents a similar idea. One can go even further and say that symbols like \oplus and \times, in most of the contexts they are used in (Cartesian product of sets, direct product/sum of rings/groups/modules/vector spaces/mappings), actually represent exactly the same idea -- namely, the notion of a product/coproduct in some category.
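The same thing happens in programming, of course. A minimal sketch in Python (an illustration, not something from the thread): '+' is reused across unrelated types precisely because in each of them it stands for the natural commutative operation the structure comes with.

# '+' reused for two different structures, because in both cases it denotes
# the commutative binary operation that the structure comes with.

class Mod5:
    """Integers modulo 5 under addition."""
    def __init__(self, n):
        self.n = n % 5
    def __add__(self, other):
        return Mod5(self.n + other.n)
    def __repr__(self):
        return "Mod5(%d)" % self.n

class Vec2:
    """Vectors in R^2 under componentwise addition."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def __add__(self, other):
        return Vec2(self.x + other.x, self.y + other.y)
    def __repr__(self):
        return "Vec2(%g, %g)" % (self.x, self.y)

print(Mod5(3) + Mod5(4))        # Mod5(2)
print(Vec2(1, 2) + Vec2(3, 4))  # Vec2(4, 6)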
There are a lot of different symbols in use in math. If we abandoned symbol overloading, we would need to introduce many, many new symbols, and this would create real confusion.
Symbol overloading almost never poses a problem, since the intended meaning is usually obvious from the context.
Heh. Heh. Heh.
So I would have believed, until I tried to learn differential geometry. The default is to eliminate all parts of the notation that are unambiguous. Proving that they are unambiguous is left as an exercise to the reader, and the exercise is often non-trivial. Furthermore, widely used constants vary by factors of 2 pi depending on who is using them.
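A concrete instance of the 2 pi problem, for anyone who hasn't run into it (a standard example, not the parent's): the Fourier transform is variously defined as \hat{f}(\omega) = \int f(t) e^{-i\omega t} dt, as \frac{1}{\sqrt{2\pi}} \int f(t) e^{-i\omega t} dt, or as \int f(x) e^{-2\pi i \xi x} dx, depending on the author -- the same object up to factors of 2\pi, and you have to check which convention a given book uses before comparing formulas.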
About the relation between the "computer science" approach to symbolic expression and the "mathematics" approach: remember that the lambda calculus was originally introduced to make function substitution, composition, etc., completely rigorous -- part of the early/mid-20th-century project to turn mathematics into a completely formal system.
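For instance (the standard textbook definitions, nothing specific to this thread): composition itself is just a lambda term, compose = \lambda f. \lambda g. \lambda x. f(g x), and substitution is governed entirely by the beta-reduction rule (\lambda x. M) N \to M[x := N], so there is nothing left for the reader to infer from context.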
Of course, I agree that any idea of a "symbol standardization committee" for mathematics is both crazy and stupid.
Maybe, but I don't think that's where the OP was going. My understanding of the OP was: instead of, say, using the large S symbol to mean an integral, why not use this type of notation:
integrate(start, end, function)
Of course, I can think of a few problems with this. That the large 'S' symbol is understood by everybody, regardless of their language, is one advantage of symbols that comes immediately to mind.
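For concreteness, here is a minimal sketch of what that notation could look like as running code (Python; the name and signature are just the hypothetical integrate(start, end, function) from above, and the midpoint rule with its step count is an arbitrary choice):

# Minimal sketch of the hypothetical integrate(start, end, function) notation;
# the midpoint rule and the default step count are arbitrary choices.
def integrate(start, end, function, steps=10000):
    """Approximate the integral of `function` from `start` to `end`."""
    width = (end - start) / steps
    return width * sum(function(start + (i + 0.5) * width) for i in range(steps))

# The integral of x^2 from 0 to 1 is 1/3.
print(integrate(0.0, 1.0, lambda x: x * x))  # prints roughly 0.33333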