> It looks as though cryptkeeper makes assumptions about encfs' command-line interface that are no longer valid.
This looks like a developer mistaking a command line interface for an API.
Unless an (interactive) command-line interface is explicitly marked as an API, documented as such, and covered by regression tests that keep it backwards-compatible - you should never program against it. Ask for a proper API instead.
If you offer a cli with output reasonable to parse as a text stream, assume someone is going to script against it and don't change the args without detecting and warning on the old usage.
I agree, but if a script is going to use the CLI then it's the script's responsibility to parse the prompts and not to blindly send strings to the program. `expect` has existed since 1990; use it or reinvent it.
You simply use it to wait for the appropriate prompt before sending anything. If the expected prompt never arrives, something is wrong (CLI changed), so abort. The snippet for this particular case could be something like
set timeout 1
expect {
    "enter \"p\" for pre-configured paranoia mode" { send "p\r" }
    timeout { exit 1 }
    eof     { exit 1 }
}
That's true in theory, but in practice most Ruby developers don't know C or the FFI well enough to build bindings for a C or C++ library.
It's reasonable to shell out sometimes. It's what Github did when they were first starting. It's what 500px did when they were first starting. Trying to do everything the right way early on is just going to slow you down.
I think with security-related software in particular, the proper answer there is "tough". If you can't figure out how to call a C function, you should probably just not write critical software.
It's very, very difficult to get product managers to internalize that there are real security restrictions on how they can enable users to make things pretty.
Almost everything is security-related. Almost nobody is willing to work with this.
It's not up to the developer to guess all the wrong ways[1] users are using the software. Such changes will be described in the changelog; I expect you to read that if you are updating the software.
It's worth noting that what is arguably the largest and most successful software project in history, Microsoft Windows, has succeeded in large part because of an obsession with maintaining backwards compatibility, even with the "wrong" ways of using the software.
Thanks for posting this. I was working on a chrome native messaging extension to talk to GPG... and I was just using the CLI. You're right, that's not the right solution, even if learning their API is more work.
I don't see the headline as misleading at all. There are two type systems in Java: the JVM's run-time type system, and Java's compile time type system. It's the latter that has been shown to be unsound. As the paper notes, it's fortunate that generics have been implemented with type erasure, because that is what saves the JVM from being unsound, too.
As for the problem itself: it really has nothing to do with inner classes (they are static in the example!). The problem is with this line:
Constrain<U,? super T> constrain = null;
which leads the type system to assume that there is a type that is a superclass of T, which also is a subclass of B (from the 'Constrain' class) - even though no such type could possibly exist! The evil trick is in the 'null' - even though a type with the stated constraints is impossible, and so no actual 'real' value of this generic type could exist, 'null' is part of all types, so the compiler can't see the problem.
It really is a problem in the type system as defined in the standard, not just a compiler bug.
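For reference, the full program is short. This is the example essentially as it appears in the paper, reproduced here from memory, so treat the exact code as a sketch rather than a verbatim quote:

```java
// The unsoundness example: this compiles without even an unchecked-cast
// warning, yet "coerces" an Integer to a String, failing only at runtime
// with a ClassCastException at the erased checkcast.
class Unsound {
    static class Constrain<A, B extends A> {}

    static class Bind<A> {
        // The upcast from B to A is justified by the bound B extends A...
        <B extends A> A upcast(Constrain<A, B> constrain, B b) {
            return b;
        }
    }

    // ...but the constraint witness is just null: its type would be some X
    // with T <: X <: U, which need not exist for arbitrary T and U.
    static <T, U> U coerce(T t) {
        Constrain<U, ? super T> constrain = null;
        Bind<U> bind = new Bind<U>();
        return bind.upcast(constrain, t);
    }

    public static void main(String[] args) {
        try {
            String zero = coerce(0);   // type-checks statically
            System.out.println("unreachable: " + zero);
        } catch (ClassCastException e) {
            System.out.println("ClassCastException, as the paper predicts");
        }
    }
}
```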
Yes, it's all about nulls.
I was curious to break it down from Curry-Howard correspondence (Types as Propositions) point of view. The line
static class Constrain<A, B extends A> {}
reads as
for all propositions A and B, such that B implies A, proposition Constrain<A, B> holds.
While, the line:
Constrain<U,? super T> constrain = null;
kind of says:
Admit that there exists type X, such that T implies X, for which Constrain<U, X> holds.
Even though no proof (read constructed instance) that X implies U is provided.
If we look at concrete execution instance with String and Integer:
String implies Object, Serializable, Comparable<String>, CharSequence. None of those imply/extend Integer, which is required for truthful Constrain<U, ? super T> proposition.
Do you have a spec reference for that (not trying to be a dick - i just want to understand what the type system believes it should be doing here as the value being assigned to a reference should not influence the type system behaviour)
This is explained in the article in a bit more detail. The point is not that the type system does something with the actual value - the point is that it would be impossible to create any code that results in such a value (other than null), since then you would have to provide an actual type that is somewhere between Integer and String - and that would obviously not pass the type checking. Using null avoids that.
Section 4.5 in the language spec deals with parameterized types; 4.5.1 deals with wildcards. Of course, there is no example in the specs that clearly points out what happens in this example - otherwise this wouldn't have remained undiscovered for so many years :)
If you can show, using the rules in the spec, that the sample program shouldn't pass type checking, then you'll have proved the article wrong :-)
After interviewing many more and less experienced programmers (and asking most of them if they can explain a hash table - just for statistics!), I concur that most cannot.
There were quite a few people who thought trees were faster than hash tables though - mostly the ones that had some inkling of what a hash table does, but didn't know the entire picture. At least in all those cases I had the pleasure of educating them a little bit :-)
> After interviewing many more and less experienced programmers (and asking most of them if they can explain a hash table - just for statistics!), I concur that most cannot.
I found this realisation pretty shocking myself. Several experienced Java developers told me they hadn't even heard the phrase "linked list" before, even though Java has a class called LinkedList. :(
I see threads on here all the time about how interviews are broken and you shouldn't be expected to be quizzed on data structures if you're an experienced programmer but I don't agree with that. If you claim to be an experienced programmer and can't explain roughly how a hash table or a linked list works you've obviously got big holes in your knowledge in the areas of optimisation and scalability.
I see threads on here all the time about how interviews are broken and you shouldn't be expected to be quizzed on data structures if you're an experienced programmer
Anyone who thinks that an experienced programmer shouldn't know data structures just isn't programmer material: someone who truly appreciates this profession will have had formal education in algorithms and data structures, and if they do not, they will make sure they learn it on their own. Otherwise - this is how we get bloated crapware.
I just cannot up-vote your comment enough, if I could, I would up-vote it with my both hands and feet!
I always think back to The Art of UNIX programming:
Rule of Representation: Fold knowledge into data, so program logic can be stupid and robust.
> If you claim to be an experienced programmer and can't explain roughly how a hash table or a linked list works you've obviously got big holes in your knowledge in the areas of optimisation and scalability.
What is your definition of "roughly" and how do you think lack of that knowledge affects knowledge of optimization and scalability? I ask not because I necessarily disagree, but because your statement is very subjective. Also because I used to write large-scale real-time signal processing code that didn't use hash tables at all, so lack of knowledge there wouldn't have this effect on our code.
> how do you think lack of that knowledge affects knowledge of optimization and scalability?
I once wrote a log analysis python script. It took 2.5 hours to execute to completion. I then used Cython to compile it to C. It then took about 2 hours to run to completion.
I then stepped back and thought about the data structures I was using. I replaced some lists that were being searched with dictionaries keyed by what I was searching those lists for. Replaced a few other simple data structures.
Now the script took, IIRC, 20 seconds to run to completion.
Now, I don't think you necessarily need to know all of the internal details of how a data structure is implemented to know its performance characteristics or when to use (or not use) them. However, the point is that by understanding their performance characteristics, code can be optimized or made more scalable.
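The change described above - lists searched linearly swapped for dictionaries keyed by the search term - boils down to this (a toy Java analogue; the class and method names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class LookupDemo {
    // Build the same n keys twice: once as a list, once as a hash set.
    static List<String> asList(int n) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < n; i++) keys.add("user-" + i);
        return keys;
    }

    static Set<String> asSet(int n) {
        return new HashSet<>(asList(n));
    }

    public static void main(String[] args) {
        int n = 100_000;
        List<String> list = asList(n);
        Set<String> set = asSet(n);

        // Same answer either way...
        System.out.println(list.contains("user-99999")); // O(n) linear scan
        System.out.println(set.contains("user-99999"));  // O(1) expected hash lookup

        // ...but doing n such lookups costs O(n^2) against the list versus
        // O(n) against the set - the difference between hours and seconds
        // once the input gets large, exactly as in the log-analysis story.
    }
}
```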
Also, while something like a hash table might not be faster in actual real life terms (ie worse O() complexity can still be better wall clock; eg searching an array linearly can be faster than accessing a hash table since the array can be in cache and prefetched) when run on a single machine, big-O complexity can start mattering much more when you have a distributed workload or TB worth of data. (Although if you have a distributed workload, you have distributed systems problems to deal with too).
Having said all of this, without going into specific implementation details, hash tables and linked lists are incredibly simple and I see no reason why anyone calling themselves a programmer would have trouble with these. I mean, I wouldn't expect you to be able to write a good hash function or anything, but explaining the basic concept shouldn't be too hard for most programmers. If someone is self-taught, then OK, maybe they haven't ever needed to learn this. It's definitely possible, but I would at least hope they would know basic performance characteristics or at least some rules of thumb as to when to use what data structure. Hopefully based on logic and not just hearsay ;-)
> Having said all of this, without going into specific implementation details, hash tables and linked lists are incredibly simple and I see no reason why anyone calling themselves a programmer would have trouble with these. I mean, I wouldn't expect you to be able to write a good hash function or anything, but explaining the basic concept shouldn't be too hard for most programmers. If someone is self-taught, then ok maybe they haven't ever needed to learn this.
Even if you're self-taught, if you've been programming for a while you should be able to read up on what linked lists and hash tables are in 15 minutes and virtually all interview preparation guides mention data structures. If you've not had the curiosity to do that then that's a bad sign.
> If you've not had the curiosity to do that then that's a bad sign.
Or you're just working from the backs of giants using built-in types. It's easy to do, when you haven't had to dig into low level code before.
I won't disagree with your premise, but it took some pretty extraordinary circumstances for me to find value in taking the time to learn about the ins and outs of linked lists and hash tables, because my every day work never required it.
It was reading an article about cuckoo hashes which finally prompted me to go down that road, and I can't regret it. That said, I've still never had to implement my own hash table for work.
> It was reading an article about cuckoo hashes which finally prompted me to go down that road, and I can't regret it. That said, I've still never had to implement my own hash table for work
That's what I'm getting at, people that have the curiosity to find out more. Nobody should be implementing their own hash table but if you don't know about basic data structure and algorithm concepts you're not going to be sharp at coming up with new efficient solutions.
> Having said all of this, without going into specific implementation details, hash tables and linked lists are incredibly simple and I see no reason why anyone calling themselves a programmer would have trouble with these. I mean, I wouldn't expect you to be able to write a good hash function or anything, but explaining the basic concept shouldn't be too hard for most programmers. If someone is self-taught, then OK, maybe they haven't ever needed to learn this. It's definitely possible, but I would at least hope they would know basic performance characteristics or at least some rules of thumb as to when to use what data structure. Hopefully based on logic and not just hearsay ;-)
That's why I was trying to establish what "roughly" means. This sounds like a reasonable definition.
>how do you think lack of that knowledge affects knowledge of optimization and scalability?
In any program that has to handle just tens of thousands of items occasionally, knowing the difference between arrays, linked lists and pointer lists, or between various forms of hash tables, various forms of trees and sorted lists makes a big difference. This kind of knowledge affects how any program scales with input size and should be one of the first things checked when optimizing.
And I'd argue that just memorizing big-O properties only gets you half the way: heapsort is theoretically superior to quicksort (better worst case, more space efficient, same average case), yet quicksort is used far more often because it makes better use of CPU caches and has a worst case that hardly ever happens.
> Also because I used to write large-scale real-time signal processing code that didn't use hash tables at all, so lack of knowledge there wouldn't have this effect on our code.
Is the fact that it doesn't use hash tables a coincidence or is it because the people designing and writing the system know their data structures and realized that hash tables don't fit the problem?
> In any program that has to handle just tens of thousands of items occasionally, knowing the difference between arrays, linked lists and pointer lists, or between various forms of hash tables, various forms of trees and sorted lists makes a big difference.
This is why I asked for clarification. To me "knowing the difference between" is different from knowing how they "work". Maybe it is just semantics in my head.
Edit because I forgot to address the second part:
> Is the fact that it doesn't use hash tables a coincidence or is it because the people designing and writing the system know their data structures and realized that hash tables don't fit the problem?
I suppose it is technically the latter, but it was glaringly obvious that hash tables weren't appropriate anywhere because none of our algorithms required associative lookups. Every algorithm had to touch every point of data it stored on every iteration anyway, so it was pointless to do anything but loop over it. Even if there had been a case where hash tables would have made sense, I'm not sure implementing a fixed-size hash table in C would have been worth the effort.
> In any program that has to handle just tens of thousands of items occasionally, knowing the difference between arrays, linked lists and pointer lists, or between various forms of hash tables, various forms of trees and sorted lists makes a big difference. This kind of knowledge affects how any program scales with input size and should be one of the first things checked when optimizing.
Exactly. If a developer is picking data structures without any consideration of their growth complexity they're going to cause serious issues.
But not until it's gone to production, and everyone's gotten an "attaboy" for completing their project and moved on to greener pastures. Those sudden performance problems in production are now the operations team's problem, to over-provision around or to put a maintenance team on.
Yeah, I know it's not that way in a company with good management and technical leadership... but I've been burnt enough to have a "get off my lawn" hat for this occasion.
> What is your definition of "roughly" and how do you think lack of that knowledge affects knowledge of optimization and scalability?
Just that you hash a value you want to store and transform that hash into an array index for fast lookup. If someone didn't know that I find it hard to believe they'd have much knowledge or interest about how to pick appropriate data structures for large collections as it's such a fundamental concept used in lots of places (e.g. indices, caching). I'm not even expecting knowledge of what happens when two keys hash to the same bucket.
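That "hash the value, turn the hash into an array index" description really is only a few lines of code. A deliberately minimal sketch (the class name and fixed bucket count are mine; separate chaining on collision, no resizing, for brevity):

```java
import java.util.LinkedList;

// Minimal hash table: hash the key, mod the hash into a bucket index,
// and chain colliding entries in a per-bucket list. No resizing.
class TinyMap<K, V> {
    private static final int BUCKETS = 64;

    private static class Entry<K, V> {
        final K key;
        V value;
        Entry(K key, V value) { this.key = key; this.value = value; }
    }

    @SuppressWarnings("unchecked")
    private final LinkedList<Entry<K, V>>[] table = new LinkedList[BUCKETS];

    // floorMod keeps the index non-negative even for negative hash codes.
    private int index(K key) {
        return Math.floorMod(key.hashCode(), BUCKETS);
    }

    public void put(K key, V value) {
        int i = index(key);
        if (table[i] == null) table[i] = new LinkedList<>();
        for (Entry<K, V> e : table[i]) {
            if (e.key.equals(key)) { e.value = value; return; }  // overwrite
        }
        table[i].add(new Entry<>(key, value));
    }

    public V get(K key) {
        int i = index(key);
        if (table[i] == null) return null;
        for (Entry<K, V> e : table[i]) {
            if (e.key.equals(key)) return e.value;
        }
        return null;
    }
}
```

With a decent hash function and enough buckets the chains stay short, which is where the expected O(1) lookup comes from.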
> Also because I used to write large-scale real-time signal processing code that didn't use hash tables at all, so lack of knowledge there wouldn't have this effect on our code.
Is there a similar data structure question you would ask a candidate who wants to work in this domain? You don't use hashes at all?
> Just that you hash a value you want to store and transform that hash into an array index for fast lookup. If someone didn't know that I find it hard to believe they'd have much knowledge or interest about how to pick appropriate data structures for large collections as it's such a fundamental concept used in lots of places (e.g. indices, caching). I'm not even expecting knowledge of what happens when two keys hash to the same bucket.
Makes sense to me. I just wouldn't call that knowing how a hash table "works", which is why I asked for clarification.
> Is there a similar data structure question you would ask a candidate who wants to work in this domain?
I would focus on fixed-size data structures, since those were critical to our code. I would find out if they could go lower than Big-O (which was, quite frankly, useless to us) when thinking about runtime complexity and whether they could relate what the various operations were doing to what the processor was doing.
> You don't use hashes at all?
That code had no hashes whatsoever. We had no need to look things up in that manner.
> To be fair, LinkedList is of very limited use. I can't think of a scenario I'd use it in beyond an LRU cache.
It's more the point that not knowing they exist is a very strong indicator you haven't read even introductory text on algorithms and data structures. I barely use them either but when I'm thinking of solutions to problems they're a useful concept to know.
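For what it's worth, the LRU-cache use mentioned above is directly supported by the standard library: `java.util.LinkedHashMap` threads a doubly-linked list through its entries, and its `removeEldestEntry` hook turns it into an LRU cache in a few lines (the `LruCache` name here is mine):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache: an access-ordered LinkedHashMap plus an eviction hook.
class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);  // true = iterate in access order, not insertion order
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;  // evict the least-recently-used entry
    }
}
```

E.g. with capacity 2: put "a", put "b", touch "a", then putting "c" evicts "b" - the least recently used key.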
A smartly designed doubly-linked list can be used for extremely fast deletion or insertion, as well as handling dynamic memory allocation for arbitrarily large data sets.
Linked lists are the foundation of operating systems, especially UNIX and AmigaOS.
Yeah, OK, that is another good example. But nevertheless, there are a lot of programs you could be asked to write where using a linked list doesn't make any sense.
If you're not familiar with intrusively linked lists, it's worth having a gander at. Learning and implementing a few of those gave me a lot more usecases for linked lists.
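The "extremely fast deletion" mentioned above comes from the fact that a node in a doubly-linked list can unlink itself with no search at all. A minimal sketch (intrusive variants embed these prev/next links directly in the payload object instead of a separate node):

```java
// Doubly-linked node that can remove itself in O(1) - no traversal needed.
class Node<T> {
    T value;
    Node<T> prev, next;

    Node(T value) { this.value = value; }

    // Splice `node` in immediately after this one: O(1).
    void insertAfter(Node<T> node) {
        node.prev = this;
        node.next = this.next;
        if (this.next != null) this.next.prev = node;
        this.next = node;
    }

    // Unlink this node from whatever list it is in: O(1).
    void remove() {
        if (prev != null) prev.next = next;
        if (next != null) next.prev = prev;
        prev = next = null;
    }
}
```

Compare an array, where deleting from the middle means shifting everything after the hole.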
>In software, it is definitely the idea that is protected, not the specific implementation
If that were the case, Apple couldn't circumvent the patent in question by changing the implementation from peer-to-peer to server relayed. In other words, its not the idea of a Facetime/video chat that is patented but the implementation of how the video chat is done.
In the non-software world, it might be like saying denim pants have a problem because the stitched pockets always rip under the weight of tools, so I patent the idea of pants with pockets that don't rip. The idea alone can't be patented, but Levi Strauss' jeans with rivets definitely were eligible for a patent. And if Levi hadn't had the capital to begin manufacturing his own denim pants with rivets and had instead tried to license his patent to an existing pants manufacturer, and instead of paying they just stole his idea, it wouldn't make sense to label Levi a troll, pretend his patent isn't valid, and accuse him of stifling innovation. Moreover, the idea should foster innovation, because if I can't do pants with rivets, fine - I try pants with staples and we let the market decide what they like better.
Here is another software example: I filed a patent for the automatic calculation of legal fees based on information contained in a charging document (e.g. a speeding ticket or criminal complaint). Now anyone can have the idea that it would be great if law firm staff weren't required and legal fees could be automated, but ask them how they could automatically calculate legal fees and all of a sudden the idea is not so obvious.
I'm sure in your patent you've covered the five or ten possibilities you came up with - moving the goalposts from "sudden[ly] the idea is not so obvious" to "and suddenly coming up with an n+1th solution on the spot wasn't so easy." Well sure, after you've specifically ruled out all the obvious, direct, ways to do it, that came to you in minutes after the basic idea...
The proof of it is that you say " I filed a patent for the automatic calculation of legal fees based on information contained in a charging document". That's not a method, it's a result. You abused the system to own a result, as opposed to specific way to achieve that result.
>The proof of it is that you say " I filed a patent for the automatic calculation of legal fees based on information contained in a charging document". That's not a method, it's a result. You abused the system to own a result, as opposed to specific way to achieve that result.
You are correct, it is not a method... it is a description for the purposes of the thread. Do you think a patent application can be granted without specific claims, or as you call it, the specific way to achieve that result? That is not how the law works. I don't own the result, and I didn't file a one-sentence patent application seeking to protect a result so I could abuse the system and sue someone if they figure out how to accomplish it.
My application is ~10-20 pages and fully details the claims/methods used to produce the result, and if granted it is those claims/methods that will be protected; that is how the law works. Nevertheless, if the methods used to achieve my result are so obvious (or I just added an n+1th step to some obvious ways), I welcome you to explain how I achieve the result, or even some of those obvious/direct ways to do it. No offense, but you won't. Why do I know that? When I tell seasoned lawyers (people familiar with the industry/prior art) about my invention, they don't believe it is possible to achieve my result, much less that it is just some added step on an obvious way. Even when I showed my invention in action, the partner and associate who filed my application (obviously both attorneys, but also with EE and CS backgrounds) couldn't figure out my methods (how it is done) - I had to explain it.
I find the people who typically rail against the current patent system have never: a) filed a patent, b) been sued by a patent troll, or c) had an original idea/invention stolen by a big company. I am not suggesting reform isn't needed or that bad outcomes don't happen - throughout the thread I highlight Apple's rounded-rectangle patent, which I was against (I even have old HN posts where I rail against that patent from before the USPTO overturned it) - but what do you honestly believe has happened more often: patent troll lawsuits, or big companies stealing inventions?
> You are correct it is not a method...it is a description for purposes of the thread. Do you think a patent application can be granted without specific claims or as you call it the specific way to achieve that result?
Right, but the way you refer to it shows that you think you have a good claim on the entire result, not just one or two methods.
> That is not how the law works, I don't own the result
No, you just own all the ways you came up with to achieve that result. If you named all the obvious ones, then you effectively do.
Otherwise you'd have said "I have a patent on using NLP techniques to do XYZ" or something.
> I welcome you to explain how I achieve the result or even some of those obvious/direct ways to do it. No offense, you won't, why do I know that?
That's what you would say. If it's true, your patent is one in a million.
If there is a non-obvious element in any given patent it's almost always the result. Once your boss tells you to automatically calculate pricing from charging docs you have the same tools as everyone else.
> I find the people who typically rail against the current patent system have never: a) filed a patent, [...]
Well, my company's lawyer filed my patents... But I still think it's an entirely counterproductive system.
I've always been ordered not to read any patents, except ours. Any theoretical gain to society from information sharing is obviously not happening in practice.
> Apple's rectangle with rounded edges
That's more akin to a trademark though. That is supposed to cover a result, not the technique of achieving it.
> what do you honestly believe has happened more often: patent troll lawsuits or big companies stealing inventions?
A better question than mere frequency would be, which do I think has been more harmful - All patent trolls, or all "stolen" inventions.
And an even better question would be, which one do I think we could fix, and with how much incidental damage.
Of course, another question is: will the AIs consider us humans to have the equivalent of their consciousness? Or will they think "These humans don't really possess consciousness, they just exhibit a crude biological simulation of it"?
(BTW: This is not just meant as a joke, nor just as instigating fear of the singularity - turning a question on its head can sometimes lead to interesting insights).
True - which is why they did the cream as well. At the very least, this study tells us that the mindfulness meditation is a 'more effective' placebo than the cream.
It's very hard to really do in a 'double blind' way - I would not be able to think of something that is to all appearances equal to real mindfulness meditation, but is actually only a fake. There's no substance to it.
Use an activity-based sham therapy as the control, and have it administered by someone who believes it works. Waving magnets over the subject's body, for example.
> At the very least, this study tells us that the mindfulness meditation is a 'more effective' placebo than the cream.
Which leads one to consider - what is a placebo anyway? It's a mental state one puts oneself in that improves physically measurable symptoms. That seems indistinguishable to me from meditation. If you think about it, it's kind of funny that skeptics like to dismiss impressive sounding treatments as being placebos. A placebo is a pretty impressive phenomenon!
If one can reliably create a powerful treatment that actually works (fixes what you're targeting rather than just the symptoms, unless symptoms are what you're targeting, as in this case) by putting oneself into a mental state, that sounds like a win, even if it does meet the definition of a placebo.
I always did wonder about that. Current research implies that placebos work as long as you believe they will work, even if you know they are a placebo. But then why would you need the placebo in the first place?
There seems to be no need to do anything really, if you can just stop thinking you're feeling bad then you'll actually feel better.
In principle there's no need for any specific instructions on meditation either, but the effect does seem to be related to meditation, where you also aim to stop thinking about certain things, so those instructions are likely to be useful, even if they only turn out to be yet another placebo. Once you gain more confidence that you can do it, you should have less need for the instructions, though.
>it's kind of funny that skeptics like to dismiss impressive sounding treatments as being placebos. A placebo is a pretty impressive phenomenon!
It's not about whether a treatment works at all - it's about whether it works better than any old random thing. If it doesn't, you can hardly call it a treatment. Sure you could go "hallelujah, a new medicine!" when you find that a sugar pill provides a slight improvement, just like sand pills and empty pills and homeopathy. But you haven't actually discovered anything new.
This is why trial designs other than the RCT can be appropriate. For example, one group could be randomised to receive meditation therapy now, and another group would receive it later. Ethically this is more comfortable too, since neither group is denied a potentially effective treatment.
I wonder if there is a well tested, robust placebo for meditation. Time to trawl the literature for protocols and designs.
Those behind the study appear to be of the view that it can't be done:
"This study could not be designed in a double-blind fashion due to the nature of meditation training and placebo conditioning. Specifically, placebo cream could not be applied to all groups because this manipulation could potentially elicit analgesia even though the subjects were informed that the cream was inert (Kaptchuk et al., 2010). Furthermore, whereas subjects in the placebo-induced analgesia and book-listening control groups were clearly aware of their group assignment, subjects in the mindfulness and sham-mindfulness groups were blinded to their intervention assignment. For the mindfulness and sham-mindfulness groups, responses related to psychosocial influences (i.e., demand characteristics) presumably were minimal given that no significant differences in “perceived meditative effectiveness” were observed (p = 0.88; Table 2)."
Some meditation teachers concentrate on Buddhist teachings that are not part of mindfulness. They believe those teachings to be more important than mindfulness. If the experimenters ask one of those teachers to teach the patients to meditate, the meditation will be "fake".
Can that really substitute as a placebo, though? I'd imagine you'd have to first verify that those teachings are no more effective than a placebo, which runs into the same problem as the original purpose of finding a proper placebo comparison.
Edit: Sorry, I misunderstood your comment. However, if they're testing the effectiveness of the meditation technique, then the teacher's belief in the method shouldn't affect the outcome of the result, assuming the technique is the same in both cases.
If you take an African witch doctor with hours of trance dancing and plenty of chicken blood, it is probably all placebo.
But for someone of the same tribe, who has been brought up in the same belief system and been taught to look at such doctors in awe, a dramatic treatment by a witch doctor probably is tremendously powerful.
My guess is that it will be much more powerful than mindfulness meditation. (And this is regardless of what one attempts to treat - (self-reported) pain from a hot prick, or the common cold.)
In other words: the self-healing/self-suggesting effects that we call placebo have something to do with belief; and probably belief in a broad sense - whether the subject believes, whether the doctor believes, and whether the surroundings in general believe.
In a way, things like mindfulness meditation are part of the "folk religion" of the modern man. Read any women's magazine / self-help book and it will tell you that mindfulness, positive thinking, biodynamic food and exercise have strong effects on your physical, psychological, sociological, even financial well-being.
Most of this is not science. But some science supports some of the claims.
I think a good question for science to ask is "have these things always been true?" Are they deep physiological facts, or do they change over time and across cultures?
The study summary mentions that the effect of the cream was 10%. The cream, I suppose, was made to look like traditional Western medicine, applied by some kind of nurse or doctor.
Suppose this experiment had been done in the '50s, when skepticism towards Western medicine was much lower and a doctor's authority was much stronger. Would the effect be the same? My guess is it would be stronger.
Likewise maybe saying a prayer worked better for the believing Christian Western man of 200 years ago than the modern "scientific" techniques of positive thinking, mindfulness etc. would have.
How would you crack this in a scientific experiment?
It is probably very hard, because the experimenter's beliefs also come into play. Double-blinding is one way to go about this, but for what is examined here it is probably not possible to do effectively. And secondly, the subjects would be affected by what they believe works and already know about.
Maybe a simple place to start would be to ask the subject (and the experimenter) whether they believe the treatment will work - and maybe survey general attitudes towards traditional medicine, alternative medicine, meditation, health, etc.
How far down the rabbit hole does it go, though? Even if you show a physical mechanism, you can always claim it was your belief system and mental state which allowed the physical mechanism to take place. You're liable to end up in a Chopra-esque place where people subconsciously or consciously create their own realities.
But some of these things are well understood. So if you test a pill and it is green, tiny, and makes your tummy hurt for 10 minutes - then make sure that your placebo has the same properties in your double-blind experiment.
Secondly you can break things down and find out why they work. Why does mindfulness meditation stop the pain of being burnt?
Psychology as a science has many problems; in a way I think psychologists believe too much - have too many strong ideas. I guess that is why the treatments have changed so much through the years - and the diagnoses too; people have completely different (psychological) diseases today than a few decades ago.
Also their experiments are often underpowered or have methodological errors.
Maybe they should devote themselves to studying placebo rather than perceived actual treatments.
It would be useful to know if the pill should be red or green, the nurse brunette or blonde.
Yeah, the original Eternity puzzle (https://en.m.wikipedia.org/wiki/Eternity_puzzle) was a lot better in that regard - it sparked a whole online community focused on finding clever ways to find a solution, and was in fact solved within a year.
Unlikely; Wolfram Alpha is based on Mathematica, which has arbitrary-precision integers, and I don't think one wolfram can be more than about 10^12 nano-dijkstras.
(If I put "1 wolfram" into Alpha then it tells me about Stephen Wolfram. If I put "2 wolfram" into Alpha it tells me about tungsten. I guess that when you ask it for "1 whatever", it first simplifies it to "whatever", and then you can have variable quantities of tungsten but not of Stephen Wolfram.)
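That guessed parsing behavior can be sketched in a few lines. This is purely a hypothetical illustration - the lookup table and function names are invented, and this is not how Wolfram Alpha actually works - but it shows the idea: a quantity of 1 is simplified away before lookup, so "1 wolfram" falls through to the person entry, while "2 wolfram" keeps its quantity and only quantifiable meanings (the element) remain candidates.

```python
# Hypothetical sketch of the parsing behavior guessed above. The meanings
# table and priority order are invented for illustration only.
MEANINGS = [
    {"name": "Stephen Wolfram", "kind": "person", "quantifiable": False},
    {"name": "tungsten (element)", "kind": "substance", "quantifiable": True},
]

def interpret(query: str) -> str:
    parts = query.split()
    # Split an optional leading number off the token.
    if len(parts) == 2 and parts[0].isdigit():
        qty, token = int(parts[0]), parts[1]
    else:
        qty, token = None, parts[-1]
    if qty == 1:
        qty = None  # "1 whatever" simplifies to just "whatever"
    candidates = MEANINGS if token == "wolfram" else []
    if qty is not None:
        # A quantity is present, so only quantifiable meanings survive.
        candidates = [m for m in candidates if m["quantifiable"]]
    return candidates[0]["name"] if candidates else "unknown"

print(interpret("1 wolfram"))  # Stephen Wolfram
print(interpret("2 wolfram"))  # tungsten (element)
```

Under this model, dropping the quantity 1 before disambiguation is exactly what makes "1 wolfram" resolve to the person while "2 wolfram" resolves to the metal.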
Wolfram is also another name for the chemical element "tungsten" https://en.wikipedia.org/wiki/Wolfram_(element) ("2 wolfram" probably means "give me the second meaning of 'wolfram'"). Mathematica/WolframAlpha is probably not smart enough to figure out unambiguously how much mass or energy could be in one nano-dijkstra, which it would need for such a conversion.
On the other hand, referring to "1 wolfram", some would argue that one wolfram could greatly exceed 10^12 nano-dijkstras... Dijkstra was actually quite smart and from reading his essays, I'd consider him quite humble, not arrogant at all.
He had that "old-noble-European cold-joking-seriousness" and most Americans are known to be incapable of understanding this tone of communication.
Having played on (and heard people play on) a pBone, I'd have to say that even though it's quite impressive to see what they've been able to do with just plastic, it's still quite some way off from being a 'quality' instrument.
And they do not prove at all that the same can be done for any kind of instrument: trumpets and trombones are a lot cheaper to make than things like bassoons. You can find brass trumpets on ebay for very little money, some of which are surprisingly acceptable, and a rather good trumpet will set you back quite a lot less than a bassoon only fit for a beginner.
The pTrumpet is at least as good as typical student horns I've played. I can't speak to the pBone but reviews seem positive.
Quite right: trumpets are already made in volume and thus are already sold at their marginal cost of production; bassoons aren't, which is the reason for their high cost. I claim the manufacturing process used for pBones and pTrumpets could be used to produce a comparably priced bassoon that would be adequate for students (it would be harder, though maybe still doable, with wood).