Hacker News | conbandit's comments


Here's a cooler (imo) visualization: https://willymaps.github.io/nyctrees/


Minor correction, but you probably mean Blackrock-type firms. Blackwater was a private military company.


This is such a common confusion that someone made a handy diagram for telling apart the firms in the Cartesian product {"Black", "Bridge"} x {"rock", "water", "stone"}.

https://i.redd.it/x0jeiofl7j471.jpg


Kinda the point, no?


Correction to the minor correction: BlackRock, the asset manager/ETF provider, does not engage in institutional purchasing of single-family homes. For a little while Blackstone, the private equity firm, had an ownership interest in a company that did this, but I’m not sure it still does.

https://www.blackrock.com/us/individual/insights/buying-hous...

Edited to change “ownership” to “purchasing”


Not to be confused with other investment firms Blackstone, Bridgerock, and Bridgewater. Or tire manufacturer Bridgestone, for that matter.


Funny picture: imagining Blackwater kicking down some doors and becoming a landlord.


I wonder how coaching can be applied to software engineering. As a developer, I undergo code reviews and participate in design discussions, but there isn't a lot of oversight or feedback given on the actual development process.


You may enjoy reading the book "The Coaching Habit" which, while not directly talking about coding and code reviews, has a lot of good guidance on how to bring a coaching mindset to them.


Great book


For a broader view on this topic, I recommend "A Prehistory of the Cloud"


Thanks for that suggestion! Can be downloaded free from here, which is great: https://ieeexplore.ieee.org/book/7288688


What is the long term growth strategy for worker cooperatives? How do you scale them?


There doesn't have to be one, I don't think. Like a local farm co-op, it's meant to be sustainable, so growth should be slow and natural, and should realistically find a maximum.

That means that certain products/services aren't a good fit. So I think if you are part of one and you have lofty aspirations for what you are working on, you should plan for the day you outgrow the co-op and need to hire a bunch of people.


A lot of people are saying that you don’t have to grow worker co-ops and that they can (with the implication that they should) remain small. A counterexample is the Mondragon Corporation, a worker co-op/federation of worker co-ops which employs 75,000 people.


Also here in Argentina we have Banco Credicoop, which is a credit union with more than 670,000 members.

https://en.wikipedia.org/wiki/Banco_Credicoop


Thanks for the tip. That's really neat.

https://en.wikipedia.org/wiki/Mondragon_Corporation


I like the idea of a co-op for many reasons; one is that it’s not tied to the “must have a huge possibility for growth” mindset any VC-backed venture would have.


I think the question was just how to handle more people joining. Or perhaps how to scale the practice across society. Both interesting questions IMO, even if not what GP meant.


Depends on your definition of "scale" but mostly the answer is - as others have pointed out - that they are not designed to "scale" in the way many people use that word.

However, there are some co-ops that make platform software, and questions of "scale" might apply there, so maybe there's some interesting reading? Check out https://www.loomio.org/ - their company handbook is online somewhere.


You don't. They are there to provide a human-friendly system that supports its members from birth to death.


I’d investigate https://en.wikipedia.org/wiki/Mondragon_Corporation for answers to these questions. It’s interesting for an answer precisely because (as far as I know) it’s an outlier in this space.


While Mondragon is a great example of a "scaled" cooperative, there are other (somewhat smaller, but still meaningful) examples. Take Cooperative Home Care Associates (http://www.chcany.org), with around 1,500 home care workers in NYC, or the Arizmendi bakeries (https://www.arizmendi.coop/), which have 5 bakeries and other businesses all growing and providing living-wage jobs in the Bay Area.

We chose to use a holding company model for our cooperative https://www.staffing.coop so that we could have many startups and conversions underneath one holding company worker cooperative.


Imho, the long-term goals may be similar to those of "normal" companies. Cooperatives are just a means to an end, and generally one may achieve lofty ambitions in any type of organization. The document sums up very nicely the pros and cons of a cooperative targeting freelancers. The only thing one has to decide is whether one will reach sustainability in a normal company or in a cooperative - the journeys are two different wild adventures.

In terms of scaling: you may either form a network or join one. I'm part of a Europe-based digital cooperative and we're starting to do just that https://medium.com/camplight/accelerating-a-global-movement-... :)


Local cooperatives can form networks which operate similarly to a conglomerate business, with the network helping to bring in business and direct it to participating co-ops, or even forming supply chains where local co-ops provide goods and services to each other.

This captures the complex efficiencies of a capitalist corporation while letting the people doing the work retain direct control over their local workplaces rather than being beholden to investors.

Mondragon in Spain is the most famous example of this.

As such networks develop, starting a new cooperative venture may become more attractive to some entrepreneurs than a traditional startup, which in turn drives growth of the network.


Cooperatives are a self-organizing, network-effect scaling pattern. They self-scale because members are intrinsically motivated to join; humans are pack animals and we like to work together in a village. Very few people want to be entirely alone.

Capitalism is actually very unhealthy from a mental health perspective, so the cooperative model creates the safety net opportunity to take risks and try new things.


You don’t. If you want to grow, you either convert into a partnership with the normal up-or-out mechanism where the only people with a vote are the partners, or else into a for-profit company. Worker cooperatives are by and for the ideologically committed.


If you've never found anything with it, why do you keep doing it (see: the definition of insanity)?


Not the parent, but probably because

1. It's well known that hard drives have a higher failure rate at the beginning of their lives than in the middle (see the bathtub curve [0]). So it's not absurd to test them hard early on, before writing any useful data, and to do an early RMA.

2. The failure rate on drives is low enough that his methodology may be right even if he never sees a failure in his life. That doesn't make it insane.

[0] https://en.wikipedia.org/wiki/Bathtub_curve
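A toy illustration of that bathtub shape (the functional forms and constants below are made up for illustration, not fitted to real drive data):

```python
import math

def hazard(t):
    """Illustrative failure rate at drive age t (years); arbitrary units."""
    infant_mortality = 2.0 * math.exp(-5.0 * t)  # high early, decays fast
    random_failures = 0.05                        # roughly flat mid-life floor
    wear_out = 0.01 * t ** 3                      # grows late in life
    return infant_mortality + random_failures + wear_out

# Early-life rate exceeds the mid-life floor, which is why burn-in testing
# front-loads failures you were likely to hit anyway.
print(hazard(0.05) > hazard(2.0))  # True
```

The point of a burn-in regime is to spend time in that steep left-hand region before the drive holds anything you care about.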


I once bought a new 1 TB drive when they were fairly new. Moved about 500 GB of data to the new drive. Checked it. Seemed fine. Turned the computer off and went to bed.

Next day the HDD didn’t turn on. Completely dead. :( I’ve never had a failure since but I backup everything now.


It would depend on the effort to do the methodology vs. the expected return (savings of finding a failed drive times the probability).


This is a very strange invocation of the definition of insanity.

Apparently, quality assurance should never be a thing.


I have. Last time I bought a bunch of drives for a RAID array, I used the WD utility to test each of them. 1 of the 6 failed the test.


The first time I learned about burn-in, it was because the drives for our RAID array showed up three days before we were supposed to start experimenting. Two of the ten were DOA. FedEx got us replacements in two days. One of those was dead.

So then I started doing math on MTBF and lots of drives, and things looked bleak. Just a couple of years later MTBF had gone way up, and it continued to climb for a while after, but at that particular moment it was something like eight months between failures if you had any more drives than we had, and it just accelerated from there.


"Failed" as in "with an IO error", or as in "got the wrong data back"?

Because if it's the latter, it could've been caused by some other part of IO stack and not necessarily the drive itself.


I believe the WD utility sends a command to the drive so it does a self test. That would eliminate any upper layers issues.


   insanity | inˈsanədē |
   noun
   the state of being seriously mentally ill; madness
I'm not sure when the definition of obsessive-compulsive started being used to describe insanity (it's been going on for at least a decade), but I don't like it (and cringe every time I hear someone repeat it).


Me too; one could argue that practicing anything is "doing the same thing over and over expecting different (getting better) results".

I too have switched to HGST thanks to the Backblaze stats.


Why do you back up drives if you've never had a failure?


For the same reason you buy fire insurance even though your home has never burned down.


While there is some amount of speculation, it represents a small portion of the entire financial space. If you sincerely hold this view, you should take a closer look at the purpose of financial instruments and markets.


I do not believe this counters the OP's point. While the core purpose of financial markets may be beneficial vehicles like hedging and price discovery, the fact that the pragmatic outcome of the law is to leave Wall Street as one of the only outlets for "legalized betting" cannot be ignored.

There is a sister post that, in trying to argue against this, only seems to support the point: "a house that always wins". As has been echoed on HN ad infinitum, _don't stock pick_ (edit: the child is correct, this is more about day trading, but the broader advice probably still holds to some extent), because if you do, you're the dumb money handing it over to the HFT firms. So even if the intent is not for Wall Street to be a gambling house, if it looks like a duck and quacks like a duck and benefits from regulatory capture like a duck...


Gambling in the prediction market for the time-value-adjusted expected profits of a company is allowed due to the historical fact that stock markets used to be the way to raise capital for companies. I have only seen Tesla do this in the last 30 years (I'm sure there are others, but it is rare for a listed company to actually sell stock to raise capital for building things). We should allow gambling in other prediction markets for things we would like to know about - say, the average surface temperature on Earth in 2100.


For those who believe in efficient markets, or markets that are rigged by hyper-intelligent people against the average investor, I'd like to call their attention to a bit of history recalled by Matt Levine in his column today:

"Google announced that it was buying a private company called Nest, for instance, and the entirely unrelated stock of Nestor Inc. (ticker: NEST) was up 1,900 percent"


To be precise. I do not in any way believe in perfectly intelligent or omniscient markets. I do believe that large professional financial entities have mastered short-term trading to force things such as margin calls and other behaviors that an individual such as myself has no real tools against. This is what I mean when I say "the house wins" not that the market is in any way always correct. I assert that in any situation where an individual could make a bet, they are inherently competing in an uphill information-and-tool-asymmetry battle against far better equipped entities.

This explanation should be very familiar to those who have frequented groups like bogleheads that drink the indexing koolaid. (I admittedly do, as one could probably tell from the above)


I wonder if someone could use this phenomenon to get away with insider trading...


> _don't stock pick_, because if you do, you're the dumb money handing it over to the HFT firms.

I think you mean _don't day trade_. HFT is a tax on changing your bets. If you pick a stock and stick with it for years, it's negligible.

It still might be good advice not to stock pick, but I don't think HFT is the reason.


Agreed. Like saying financial institutions gamble on lending money to customers.

I think the correct contrast would be legalized state lotteries.


What is the average number of miles per accident for humans? Is it less than 15000? If yes, this is still good.


The average driver drives like 13,500 miles a year [1].

The average driver files a claim for collision once every 17.9 years [2].

That makes human drivers 16 times safer than Uber self-driving cars. This is concerning for people who think self-driving cars are right around the corner! Improved algorithms would need to avoid 94% of the collisions that self-driving cars currently get into in order to have the same failure rate as humans.

[1] https://www.fhwa.dot.gov/ohim/onh00/bar8.htm

[2] https://www.forbes.com/sites/moneybuilder/2011/07/27/how-man...
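The arithmetic above can be sketched directly (the constants are the rounded figures quoted from the linked sources, and the 15,000 miles-per-incident figure for Uber comes from the thread):

```python
# Rough arithmetic behind the "16 times safer" claim.
miles_per_year = 13_500      # average annual miles per driver (FHWA figure)
years_per_claim = 17.9       # average years between collision claims

human_miles_per_claim = miles_per_year * years_per_claim  # ~241,650 miles
uber_miles_per_incident = 15_000                          # figure from the thread

ratio = human_miles_per_claim / uber_miles_per_incident
print(round(ratio, 1))  # 16.1

# Share of current collisions that would need to be avoided to match humans.
improvement = 1 - uber_miles_per_incident / human_miles_per_claim
print(round(improvement * 100))  # 94
```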


This is the thing about most of what we're doing with machine learning: it gets exponentially harder the closer you get to 100%. For most of the things we use machine learning for where it works pretty well, 99% is really pretty great. 99% won't cut it for self-driving cars.

Which isn't to say that self-driving cars won't get there, just that the fact that it looks like we are almost there doesn't mean as much as you'd think; we still have the really hard bits in front of us.


> That makes human drivers 16 times safer than Uber self-driving cars.

While that's a difference large enough to make wide-scale deployment of those cars irresponsible, it is not large enough to make one pessimistic about self-driving cars existing soon. It's the kind of difference that often enough goes away with a few years of engineering.

And I expect those to be the worst stats of all the self-driving car companies around, since the others seem to be doing things in a less "move fast and break things" way.


Many accidents have no claim filed


"The average driver" isn't our cutoff for allowing people to drive. Once the self-driving cars are safer than the most dangerous drivers that we allow, there's theoretically an advantage moving those people to self-driving cars.


"The average driver files a claim for collision once every 17.9 years"

That leaves open the question of how much damage that driver is at fault for on average.


Looks like there's still a lot of work left on that front. I was trying to get some context for the 15000 miles number.


AI should be safer than humans... not vice versa.


Citation desperately needed.

I would theorize the opposite.


I think the idea about skipping 3 is that the applicant is choosing to apply to a particular company. That's implicitly saying that they're willing to work on that company's code. I don't know how valid this is nowadays with the common practice of throwing applications at the wall and seeing what sticks.


Well, 3 is the job of the applicant to figure out. That's about having the right questions on your end.


I believe it means something like:

Given the definition of the metre and the definition of the second, the kilogram is whatever value makes the Planck constant h precisely 6.62607015×10^(-34) kg m^2 s^(-1).


But isn't that a bit of a recursive definition? What might one use as the value of kg in that example?


There is only one value a kg could be in that example; everything else is constant. The question I think you're trying to ask is "why 6.62607015×10^(-34) instead of some other, simpler value?", and the answer is a bit longer.

The original 1 kg was the mass of a cubic decimeter of water at 4 °C at 1 atm. Why a cubic decimeter at 4 °C? Water is densest at 4 °C, and a cubic decimeter of it is a weight that people can work with on a day-to-day scale. Unfortunately this was a bit hard to measure, so they made the IPK (International Prototype Kilogram), which was a lump of metal.

Fast forward 100 years, and the lump of metal proved too unreliable for modern standards, as it kept losing very small amounts of mass; also, Earth's gravity isn't uniform, so you have to average measurements out, calculate the local offset, and deal with a whole bunch of other weird things that can mean a microgram or two. This is inconvenient, but we still needed a way to say "1 useful measurement of mass", and unfortunately one Planck constant's worth of mass is far too small to ever use daily. Thankfully it has become easier to accurately measure the Planck constant against the IPK, so now we have solidified 1 kg to be exactly what we measured.

The nice thing is that 1 kg will forever be the same thing now and is easy to measure to extraordinary accuracy. The downside I think you're asking about is that 1 kg by itself isn't some physically significant quantity; it's just a useful-in-daily-life multiple of mass as defined by the Planck constant.
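A toy check of how the definition bottoms out (the fixed numerical value of h is from the 2019 SI redefinition; the rest is just the definitional arithmetic):

```python
# In the 2019 SI, the numerical value of the Planck constant is fixed exactly.
h_exact = 6.62607015e-34  # kg·m^2·s^-1 (i.e. J·s), exact by definition

# Given the metre (fixed via c) and the second (fixed via the Cs-133
# transition frequency), the kilogram is *defined* as whatever mass makes
# h come out to this value:
#   1 kg = h / (6.62607015e-34 m^2·s^-1)
# So the apparent recursion resolves: there is exactly one consistent value.
one_kg = h_exact / 6.62607015e-34
print(one_kg)  # 1.0
```

In other words, the definition isn't circular: the metre and second are pinned down independently, and the kilogram is then the unique mass that makes h's numerical value come out exact.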


1

