jhaywood's comments | Hacker News

Don't they know that if you want to open source something but don't want any contributors you're supposed to use GPL v3 with a CLA?


Except for experimental, which isn't even a coherent system, all of Debian's branches are relatively stable. For personal desktop use I find unstable (Sid) to be just fine. I know it sounds bad, but my rule of thumb for avoiding major problems is this: if Synaptic wants to delete a bunch of important-looking packages during an update, hold off a few days for things to settle down.


This is a canonical example for Erlang: the game servers for Call of Duty rely on it. http://www.slideshare.net/eonblast/an-erlang-game-stack


Unless they actually are a VB programmer, and `And` doesn't short-circuit.


`AndAlso` short-circuits.
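The distinction can be sketched in Python, where `and` short-circuits like VB.NET's `AndAlso` while the bitwise `&` evaluates both operands like classic VB's `And` (the `check` helper is made up for illustration):

```python
evaluated = []

def check(label, value):
    """Record that this operand was evaluated, then return its value."""
    evaluated.append(label)
    return value

# Short-circuiting (like AndAlso): the right side is never evaluated.
evaluated = []
result = check("left", False) and check("right", True)
assert evaluated == ["left"]

# Non-short-circuiting (like classic VB's And): both sides run.
evaluated = []
result = check("left", False) & check("right", True)
assert evaluated == ["left", "right"]
```

This is exactly the trap: with a non-short-circuiting `And`, a guard like `x IsNot Nothing And x.Length > 0` still evaluates the right-hand side.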


Fantastic! If only I could write Excel macros in VB.NET.


That's not true. At least in SML, every function takes exactly one argument. If a function appears to have an arity higher than 1, it is because it takes a single tuple as its argument. But you can't use the sugar for curried and tuple arguments interchangeably.
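The two shapes can be mimicked in Python (the function names here are purely illustrative), which makes clear why they are not interchangeable:

```python
# Tupled form: one argument that happens to be a pair,
# like SML's `fun add (x, y) = x + y`.
def add_tupled(pair):
    x, y = pair
    return x + y

# Curried form: a function returning a function,
# like SML's `fun add x y = x + y`.
def add_curried(x):
    def inner(y):
        return x + y
    return inner

assert add_tupled((2, 3)) == 5
assert add_curried(2)(3) == 5
# Not interchangeable: add_curried((2, 3)) yields a partial application
# (a function still waiting for y), not 5.
```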


But the semantics of CHAR are not what most people expect, and almost never what you actually want. If you want an actual fixed length of non-blank data, you need an additional check constraint to enforce the minimum length.

CHAR semantically represents the fixed-length text fields of old data file formats, not "this data always has n (non-blank) characters."

If you do have varying lengths, then a VARCHAR is more appropriate. Also, a lot of application frameworks that interact with the database only deal well with VARCHAR, so as soon as you use a CHAR you have to start worrying about trimming your text data, because one of the biggest database text-type gotchas is accidentally comparing a VARCHAR and a CHAR improperly.
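To make the trimming gotcha concrete, here is a small Python sketch; `char_store` is a made-up helper that mimics the blank-padding a CHAR(n) column applies on storage:

```python
def char_store(value, n):
    """Mimic CHAR(n) storage: right-pad the value with blanks to length n."""
    return value.ljust(n)

stored = char_store("US", 4)       # what a CHAR(4) column hands back
assert stored == "US  "
assert stored != "US"              # naive comparison against a VARCHAR value fails
assert stored.rstrip() == "US"     # so application code ends up trimming everywhere
```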

While I can see good reasons to include length checks, there is never a good reason to use a CHAR unless you're trying to interoperate with COBOL programs written in the '80s.


> But the semantics of CHAR are not what most people expect

How do you know that? Did you take a survey? I've been working with DBAs for years; in my world, everyone knows that's how CHAR works. The padding behavior is nothing new, and it is intuitive: the value must be N characters in length, so if you stored less, you get back a right-padded string. This is exactly what I'd expect.

> CHAR semantically represents fixed length text fields from old data file formats

and two- or three-letter character codes like country codes, state codes, etc. are what we use CHAR for; these are fixed-length text fields, and they are still in modern use today. Plus, lots of us still have to write apps that actually read old files, and CHAR is appropriate there as well, assuming you are storing fields that aren't right-padded in the source datafile (such as social security numbers, etc.).

Your app will of course work with a VARCHAR instead, but the point of CHAR is that it's self-documenting as to the type of data to be stored in the field: fixed length, as opposed to variable length.

> If you do have different length then a VARCHAR is more appropriate.

If you are storing variable-length data, then you should absolutely use VARCHAR. That's why it's called "VAR": it means "variable".

> Also a lot of application frameworks that interact with the database only deal with VARCHAR so then as soon as you use a CHAR you have to start worrying about trimming your text data

If you are using CHAR correctly, you don't have to trim anything, because you are storing a string that is exactly the length of the CHAR type. I'm not familiar with how an application framework would only know how to deal with VARCHAR and not CHAR; database adapters return strings for both types. And if there were such a framework, I'd not be using it.

> one of the biggest database text type gotchas is accidentally trying to compare a VARCHAR and a CHAR improperly.

All of which stems from the same, singular mistake: don't store variable-length data in a CHAR. Plus, if you are comparing a VARCHAR to a CHAR, that is also usually doing it wrong, as an adequately normalized database wouldn't be repurposing some fixed-length datatype as a VARCHAR of some kind elsewhere.

The aforementioned CHECK constraint is a good way to enforce that if the developers/frameworks in question tend to be error-prone about this kind of thing (it's not an error I've had much issue with, since I know how CHAR behaves).
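One way such a CHECK constraint might look, sketched here with Python's built-in sqlite3 (note SQLite treats CHAR as TEXT and does not blank-pad, but the constraint idea carries over to other databases):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# CHAR(2) documents the intent; the CHECK constraint actually enforces it.
conn.execute("""
    CREATE TABLE country (
        code CHAR(2) NOT NULL CHECK (length(code) = 2)
    )
""")
conn.execute("INSERT INTO country VALUES ('US')")      # exactly 2 chars: accepted
try:
    conn.execute("INSERT INTO country VALUES ('U')")   # too short: rejected
except sqlite3.IntegrityError as exc:
    print("rejected:", exc)
```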

> While I can see good reasons to include length checks there is never a good reason to use a CHAR unless you're trying to interoperate with COBOL programs written in the 80's

As it turns out, a vast portion of the world economy is supported by mainframes and old software that often spits out fixed-length datafiles. There is even Python code in my current work project, written in just the past six months, that parses such files (they are actually quite simple to parse, since you just pull out each field based on an exact position). Not to mention that boring things like state codes, country codes, and the like are often fixed-length fields.


I say it's not what people expect because everyone emphasizes the "fixed length" rather than the "blank padded" nature of CHAR. A CHAR is only actually fixed-length if you ensure that it is so yourself. That's possible, but then you're just using CHAR as a placeholder for those semantics, not as something that naturally enforces them.

If you really do have fixed-length fields, then yes, CHARs could be appropriate. But for many things, even though you intend for them to be static-length codes, things change when you start having to interoperate with systems designed under different constraints (for example, after a merger or after acquiring a competitor). And I know that mainframes still exist, but they aren't the use case in mind when many say "USE CHAR".

Also, the database adapter that handles CHAR poorly is none other than JDBC on Oracle: http://stackoverflow.com/questions/5332845/oracle-jdbc-and-o... (Yes, that is hilariously bad.) And MySQL's behavior of always ignoring trailing whitespace is not standard across databases.


Science is never required to be "provable". Scientific theories should be "falsifiable". Those are not the same thing.


That is the difficulty. When your conclusions are based on models which seem to be infinitely malleable or at least have many "estimated" parameters, the models become unfalsifiable.

If you've got a conclusion which is invariant under all possible evidence, it stops being science.


The same criticisms are leveled at the Standard Model of physics. You're right that it's hard for a theory to be falsified if it can be molded to fit any data, but it still needs to be consistent with past and future measurements, and with what we know of the physical principles that drive the climate.

It also isn't that difficult to demonstrate the underlying principles at work. The greenhouse effect is fairly easy to measure experimentally (it was first measured in 1859).

Modelling the climate accurately is a very hard problem, since we don't have an untouched, completely controllable alternate climate on which to perform experiments. But modelling is still better than throwing up our hands and declaring understanding impossible.


Climate change models must predict future climate-related stats to be falsifiable, and those that don't make accurate predictions must be rejected; that's science. The future continues to arrive, so climate change models continue to be falsified (or not).

It's certainly difficult to do; that is not the same as saying the models produce no valid and valuable information.


Microsoft had trouble getting people to run Windows in their living room. They had no trouble getting millions to put PC hardware in their living room, as evidenced by the Xbox.


Calling the Xbox "PC hardware" is really picking nits and missing the point, in my opinion. The problem with PC hardware in the living room is that no matter how you dress up an HTPC to resemble a piece of consumer electronics, it's all cosmetic: it's still a PC in that it's not a fixed hardware configuration like a console. This makes the software more complicated, adds driver issues and support costs, leaves hardware reliability unknown, etc. The result is a tradeoff in how polished the product can ever really be. Geeks will put up with it. Regular folks probably won't. Again, my opinion.


I guess nobody ever thinks of Excel in these cases? I've heard it's used sometimes for calculations. It doesn't need to do destructive updates to intermediate values.


I think the argument would be that, though Excel is awesome for small data sets, and so validates that this is indeed a great way to think about things at some level, it does not scale well. Indeed, one of the first things people suggest is to run away from large-scale solutions that were done in a spreadsheet.


Excel's strengths come from the fact that its main mode of operation is generally declarative.

Excel's weaknesses for use in large-scale systems largely come from the fact that it isn't easily maintainable: in that same main mode of operation, the actual code is hidden in cells that need to be accessed individually. In terms of usability for someone trying to maintain it, it's almost like having one line of code per source file, since you have to open each cell to bring its code into the "editor".

The most common way to circumvent the problem posed by the second feature in Excel is to resort to coding in an imperative language, which negates the benefit of the first.

FP languages share the first feature, but not the second.
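As a rough illustration of that first, declarative feature, here is a toy Python model of a spreadsheet (the `cells` dictionary and `get` helper are invented for this sketch): each cell is a formula over other cells, evaluated on demand, with no destructive updates; and, as in Excel, each formula is buried in its own entry.

```python
def get(name):
    """Evaluate the named cell's formula on demand."""
    return cells[name]()

# Each cell holds a formula, not a mutable value.
cells = {
    "A1": lambda: 10,
    "A2": lambda: 32,
    "B1": lambda: get("A1") + get("A2"),   # like =A1+A2
}

assert get("B1") == 42
```

You declare what each cell is, not the order in which to compute it; the dependency structure drives evaluation, which is the declarative quality FP languages share.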


I would argue that Excel's sole strength is that it puts the data directly in front of the user. It is not the declarativeness that is nice; it is that the data is all you see. Because that is all a lot of folks care about.

Indeed, the way in which many people populate spreadsheets is far from the functional way. It is typically much more imperative. They literally have a list of commands from someone for how to make the chart or tabulation they are interested in seeing appear.


The fact that regular spreadsheet users are "programmed" by spreadsheet power users (the ones building charts, formulas, etc.) in an imperative way does not mean that the power user's creation of the spreadsheet is not declarative, and it is really irrelevant to what is being discussed.


I think you're hitting up against something very subtle that speaks to the fundamental nature of what it means to compute something. You can gain a bit of insight into what you're trying to get at if you expand the discussion to the interplay between mathematics and physics.

Let's look at your example of baking. Indeed, it seems like an imperative (physical) process: when we mix eggs and flour to form a batter, we can't get the eggs and flour back! Even cracking the eggs seems like a destructive update to me. Think about why that is. We've performed an irreversible transformation on our ingredients AND we can't go back in time! In this case, time is implicit in our understanding.

Now consider trying to model baking mathematically. We want to build an equation that will tell us the state of our cake. This will be a function that depends on time; time is now explicit in our model. Now we can say that at t0 our eggs are whole, and at some future time t1 our eggs are cracked. This doesn't negate the fact that at t0 the eggs were whole. We can't go back in time and update the state of the eggs then; that makes no sense! So we see that when time is an explicit parameter, we have time symmetry: the equations don't care whether time goes backwards or forwards.

And we can always do this. If you have some process whose state changes through time, you can model that process with a static model that includes time as an explicit parameter. This means that our model is a higher-dimensional view of the real thing, and our actual computation (or experiment, or baking, etc.) is a degenerate case of the more general model, where the time parameter is actual time.
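The shift from implicit to explicit time can be sketched in a few lines of Python (the names here are purely illustrative):

```python
# Implicit time, imperative view: state is mutated in place,
# and the earlier state is simply gone.
eggs = "whole"
eggs = "cracked"            # t0's value has been destroyed

# Explicit time, functional view: state is a pure function of t,
# so nothing is ever overwritten and every moment stays queryable.
def egg_state(t):
    return "whole" if t < 1 else "cracked"

assert egg_state(0) == "whole"      # t0: still recoverable
assert egg_state(2) == "cracked"    # t1
```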

How does this relate to functional programming versus imperative programming? Well, it is the difference between theory and experiment. But in this case we also have to deal with the fact that our program is computed (a physical process happening in time), so we're using a physical computation, which follows the rules of mathematics, to model mathematical rules using a degenerate case of those rules. So if it seems like there's an abstraction leaking, it's because there kinda is.

I like to think about it in this way because it highlights the very tight knit nature of physics and computation. Indeed a general theory of what it means to compute something could lead to a unified theory of physics and vice-versa.


That's an insightful comment and an important point. Because the underlying computation is physical, there's no such thing as side-effect-free computation. Any attempt to treat programming as purely mathematical, i.e. pure FP, is going to run up against this physical substrate. You can't abstract it away completely, and it gets complicated if you try. So I don't think pure FP is likely to be an ultimate solution to the core problems of minimizing complexity and building scalable abstractions. It would be, if programming were math, but math is not executable.

I believe one can even see this in the OP. Some of those examples seem more complicated than the side-effecty, loopy code that they're replacing. Maybe it is all habit and my mind has been warped by years of exposure to side effects and loops. But what if those things are more "native" to programming than pure FP can easily allow, and we need a more physical model of computation?

I've often thought it would be interesting to try teaching programming as a kind of mechanics in which values are physical commodities that get moved around and modified, to see whether it would be more accessible to beginners that way. I also wonder what sort of programming languages one might come up with from such a model. It would be an interesting experiment. One would use mathematical abstractions to reason about such a system, but would not think of the system itself as a mathematical one.


> That's an insightful comment and an important point. Because the underlying computation is physical, there's no such thing as side-effect-free computation. Any attempt to treat programming as purely mathematical, i.e. pure FP, is going to run up against this physical substrate. You can't abstract it away completely, and it gets complicated if you try. So I don't think pure FP is likely to be an ultimate solution to the core problems of minimizing complexity and building scalable abstractions. It would be, if programming were math, but math is not executable.

Oh, come on. Is the fact that your processor emits slightly more heat from computing your pure functions, and that once in a while a cosmic ray may flip a bit or two, really such a big hindrance? You sound like a die-hard purist, ironically. We might as well ditch the whole idea of applied mathematics, since there is always going to be a mismatch between the real world and mathematics. Triangulate? You can't do that, because the angles you measure aren't going to be measured to an "infinitely" precise decimal point. The whole theory behind geometry is based on practically unattainable ideals; surely Plato would be turning in his grave if he knew that cartographers had been using geometry to make those (fairly inaccurate) maps. *shudder*


I think that is an uncharitable interpretation of gruseom's comment. I assumed he was referring to the fact that some programming tasks are inherently stateful, such as writing OS kernels and device drivers.

However, I think that we should, in principle, be able to provide abstractions on top of inherently stateful things that hide their nature. In the same vein, we should be able to architect programs so that the inherently stateful stuff is separated from the functionally pure stuff. For some applications this will help; for others, not so much.


> I think that is an uncharitable interpretation of gruseom's comment. I assumed he was referring to the fact that some programming tasks are inherently stateful, such as writing OS kernels and device drivers.

No. "The underlying computation is physical" is just as true for any computer program, OS kernel or not. What was referred to was clearly the fact that it's silicon at the bottom of any computation, just as his parent was saying ("leaky abstraction" and all that).


I agree, mathematics is still the best tool we have for actually understanding these things.

