Margaret Mead, technocracy, and the origins of AI's ideological divide (resobscura.substack.com)
149 points by benbreen on Nov 21, 2023 | 81 comments


We failed when we killed Cybernetics and AI became part of Computer Science instead of the multidisciplinary field it actually is

Wiener was an outspoken socialist and was persecuted for it, to such an extent that cybernetics was needled to death and AI became "simply a matter of computing."

Hopefully we all see now how wrong that was


Author here. Thank you, I thought this was an interesting point. Researching the book this post is based on, I was really struck by who was actually attending the Macy cybernetic conferences — it was incredibly eclectic. Anthropologists, physiologists, psychiatrists... and, of course, also Claude Shannon and von Neumann.

Carving off computer science as a separate realm with less interdisciplinary input was definitely a fork in the road for the history of science.

By the way, if anyone is interested, you can find the list of cybernetics conference attendees here: https://www.asc-cybernetics.org/foundations/history/MacyPeop...

And if anyone is really interested, here's my book, which is coming out in January: https://www.hachettebookgroup.com/titles/benjamin-breen/trip...


Awesome. Glad to see there are other people out there who care and understand that "AI," fully understood, is only marginally about the computer and primarily about society.


I’m just waiting to see who starts hiring English majors first. I’ll join or invest.


I've always assumed the various three-letter agencies would need those sorts of people, but I've never looked into it... I definitely agree with you though.


This article about the Ratio Club cybernetics meetings in London is interesting http://users.sussex.ac.uk/~philh/pubs/Ratio2.pdf


Isn't 'Cognitive Science' (at least as I remember it) about bringing together again all those domains?


Yes, but the focus is more on human cognition, where cybernetics has more focus on (business, complex, control) systems, (automated, biological) processes, and human collectives.

It is interesting how cognitive science sees AI as a sub-discipline, where AI sees cognitive science as its sub-discipline.

McCarthy was not too enamoured with the bombastic nature of Wiener's persona and research, and may have birthed the field of AI, in part, to carve out his own field of study, and move away from an existing and defined field. As a result, control theory and reinforcement learning have a big overlap, yet use different words, approaches, and concepts.


Interesting. I always thought of cybernetics as robot hands and such, but your description of it is exactly the thing I am most interested in right now.


I agree and feel sad for the vanishing of Cybernetics.

Transdisciplinary fields are usually more delicate; they're often the momentary confluences of scientifically inclined humanists and prominent scientific fields of their time. Before cybernetics there were various ambitious, systematic philosophy/evolution crossovers, like Herbert Spencer's.

From what I've researched, and from working with individuals on the edge of the field, the contemporary debates seem to be going toward "second-order cybernetics."

Possible resources of interest are "IEEE Transactions on Cybernetics" (https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?reload=true&...), as well as the work of Hugh Dubberly (Apple II, DNS) from Dubberly Design Office.


BTW the only CS class I took in college (in the mid-90s) was called Cybernetics, taught by Huffman (a Wiener disciple) and I can't say it had anything to do with AI at all; it was all control theory and information theory (compression, message integrity, boolean circuit design). I see it as complementary to ML in CS. Personally I took the class to learn how to make robots and was very disappointed we didn't cover that.


Sounds like you learned a lot about AI actually :)

Especially for the time


In the 90s, we taught neural networks and statistical machine learning in AI. Cybernetics (as a term) had been out of fashion for decades by then, if only because it was taken to be what Wikipedia describes it as: the study of "circular causal and feedback mechanisms in biological and social systems." It was the domain of applied mathematics and PDEs, whereas neural networks weren't even recurrent.


No, I learned about ML in another way: working with a CS professor who worked on ML at the time. This was back in the days when autodifferentiation was not well known, so we had to come up with analytic derivatives of our loss function to do gradient descent.
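For anyone who hasn't done it by hand: here's a toy sketch (invented data, one-parameter model) of what gradient descent with an analytic derivative looks like, as opposed to letting an autodiff library compute the gradient for you.

```python
# Minimizing a least-squares loss with a hand-derived (analytic) gradient,
# the way it was done before autodifferentiation was commonplace.
# Illustrative only; the data and learning rate are invented.

def loss(w, xs, ys):
    # Mean squared error for a one-parameter linear model y = w * x.
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    # Analytic derivative of the loss: d/dw (w*x - y)^2 = 2*x*(w*x - y).
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

def gradient_descent(xs, ys, w=0.0, lr=0.05, steps=200):
    for _ in range(steps):
        w -= lr * grad(w, xs, ys)
    return w

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x
w = gradient_descent(xs, ys)  # converges near w = 2
```

The point of the anecdote is that `grad` had to be worked out on paper; get the algebra wrong and the descent silently optimizes the wrong thing.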

What I learned in Cybernetics was really: how prefix-free codes work (unsurprising, given that Huffman was the instructor), how Karnaugh maps work, and something about sphere packing being easy in 8 dimensions. Oh, and that Huffman was an arrogant jerk but also an excellent teacher.
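For the curious, here's a minimal, illustrative sketch of building a prefix-free (Huffman) code; the merge-two-lightest-subtrees construction is the standard one, though carrying the code dictionaries bottom-up is just one way to write it.

```python
# Build a Huffman (prefix-free) code: repeatedly merge the two
# lowest-weight subtrees, prefixing one side's codes with '0' and
# the other's with '1'. Illustrative sketch.
import heapq
from collections import Counter

def huffman_codes(text):
    """Return a {symbol: bitstring} prefix-free code for `text`."""
    freq = Counter(text)
    # Heap entries: (weight, unique tiebreaker, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, c1 = heapq.heappop(heap)
        w2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        count += 1
        heapq.heappush(heap, (w1 + w2, count, merged))
    return heap[0][2]

codes = huffman_codes("abracadabra")
# 'a' (5 of 11 symbols) ends up with the shortest code, and no code
# is a prefix of another, so the bitstream decodes unambiguously.
```

(The degenerate single-symbol input gets the empty code; a real encoder would special-case it.)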


Wait... unless I'm missing a lot here, the concrete parts of cybernetics became known as control theory (and the more wishy-washy parts are known as "systems thinking"), and in both cases, the primary idea behind it all is exactly to turn dynamic problems into "simply a matter of computing".

That's literally what the whole thing is about - you model a problem as a bunch of cybernetics/control theory equations (with or without helper diagrams of boxes and arrows), do some computing on it, and implement the result in the physical world. In many cases, the computational model is partially included in the result itself - the physical control system includes a computer (whether it's an analog or digital one) that keeps the physical system stable by means of computation, i.e. "simply a matter of computing".
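As a toy illustration of that sense-compute-act loop (the plant model and gain here are invented for the example):

```python
# A toy proportional (P) controller: the "computing" at the heart of a
# feedback control system. The plant is deliberately trivial (x += u).

def simulate(setpoint=1.0, kp=0.5, steps=50):
    """Drive a first-order plant toward `setpoint` via feedback."""
    state = 0.0
    for _ in range(steps):
        error = setpoint - state   # sense: measure deviation
        control = kp * error       # compute: proportional response
        state += control           # act: apply control to the plant
    return state
```

Each pass shrinks the error by a constant factor (here 1 - kp), so the state converges to the setpoint: stability achieved "simply" by repeated computation.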

(In general, I feel many people didn't get the memo: "computing" isn't something "techbros" do; computing is a fundamental part of how we understand the world works - up there with energy conservation, thermodynamics, etc.)


Yeah, it feels like this discussion takes the "computer" part of computer science too seriously. Computing isn't necessarily something done with a computer; CompSci is about solving problems in general. I'd argue we just went the normal way a science does: you start by having everyone involved, then you narrow down and focus. In CompSci, though, focusing isn't really what happened, as the field got broader and broader. The core principle (solving problems) still remains. We'll probably see a split down the line as well.


Penrose disagrees!


Penrose is obviously wrong and has been repeatedly criticized for his shoddy thinking when it comes to cognition.



For what it's worth Cybernetics did try to have a second life here in Eastern Europe going into the '70s and even past that, but the US/Silicon Valley pull was too powerful and by the '90s all of that had been left in the past (the communist governments here falling at the end of the '80s didn't help, of course).

I'm from Bucharest myself, and I live just a couple of blocks away from an institution called the Faculty of Cybernetics, meaning the computer-focused faculty of the local school/university of economics.

If I'm not mistaken, and I hope that I'm not, it had been set up sometime in the early '70s when many of the technocracy higher-ups in here were really into Cybernetics and into Wiener's works (some of which had been translated before Ceausescu fell, so when the communists were still in power). Most, if not all, of those technocrats were not engineers and also not "classical" computer-science people, afaik the majority of them were economists.

But, as I said, when the '90s came the "classical" computer-science paradigm (and especially its engineering-focused part) took off for good, so that whole Cybernetics "side-quest" was left in the past (apart from some entities still being attached to it in name only).

The same can be said about linear programming as applied to economics and economic forecasting: a quite cool concept that had been almost completely forgotten starting with the '90s but which is now having a small resurgence. (I've "re-discovered" it myself by first reading about it in Schumpeter's History of Economic Analysis and then by going through a Soviet dictionary of economics published sometime in the '70s.)
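For anyone who hasn't seen linear programming in this planning context, here's a toy, self-contained sketch: invented production-planning numbers, solved by brute-force vertex enumeration (using the fact that an LP optimum always lies at a vertex of the feasible region) rather than a real solver.

```python
# Toy planned-economy LP: choose outputs (x, y) to maximize value 3x + 2y
# under resource constraints. All numbers are invented for illustration.
from itertools import combinations

constraints = [   # each row (a, b, c) means a*x + b*y <= c
    (1, 1, 10),   # labor:    x + y  <= 10
    (2, 1, 16),   # material: 2x + y <= 16
    (-1, 0, 0),   # x >= 0
    (0, -1, 0),   # y >= 0
]

def vertices(cons):
    # Intersect every pair of constraint boundaries, keep feasible points.
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:
            continue  # parallel boundaries never intersect
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            yield x, y

best = max(vertices(constraints), key=lambda p: 3 * p[0] + 2 * p[1])
# Optimum at (6, 4): both labor and material fully used, value 26.
```

Real planning problems have thousands of variables, which is exactly why the simplex method (and Soviet work like Kantorovich's) mattered.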


Indeed I studied computational economics because that’s the closest we get to Cybernetics today IMO


Ironically enough, the people who followed Wiener's work were persecuted in the USSR.


I've left a related comment somewhere else in the thread, but it's interesting that here in neighbouring Romania the practice/science of Cybernetics was quite well regarded almost until the communist government fell.

Its heyday had certainly been back in the '70s; in the '80s the focus shifted to other things, as energy inputs got too expensive and bankrupted us (someone should write a history of the alternative energy sources the communist government tried back then, some of which are now being tested by the West, too). But people certainly weren't getting sent to prison for it.

Here's [1] a list of Cybernetics-related books published before 1989 that I can find at an old-books store here in Bucharest. That search reminded me that one of the main proponents of Cybernetics around these parts was Manea Manescu [2], a guy who had been prime minister of communist Romania in the '70s (just before that he had been in charge of the State Planning Committee) and who stood by Ceausescu's side until the very end (the list of books I linked to includes one with his name on it).

[1] https://www.targulcartii.ro/cauta/cibernetica?filter_name=ci...

[2] https://en.wikipedia.org/wiki/Manea_M%C4%83nescu


If you say you can use fancy maths to make tanks roll faster off the assembly lines, or guide an ICBM towards Moscow/Washington, then the regime will happily support you. But if you get in your head that you can apply the same fancy math to run the economy, the regime will object - obviously - and label you as too communist/not communist enough, and persecute you/ship you off to a gulag.


I don't understand why you got downvoted. Economics is ultimately about physical power, and any new emerging model is a threat to the dominant power structure, a.k.a. the current model.

The conflicting models were at the origin of the Cold War (capitalism vs. communism), and today of the conflict between China and the West.


Isn’t China essentially capitalist now? The main difference in my mind is authoritarianism vs liberalism.


The book How not to Network a Nation is good on this history


Yep. If we don't get paperclipped then we're getting collectively BORG'ed. Free will? More like a tool of the system that controls the information reality around you.


Wiener


The article references Hugo Gernsback - and one of the best literary criticisms of the techno-optimist utopia ideal can be found in William Gibson's short story, "The Gernsback Continuum".

As far as the 'ideological divide' at OpenAI, the fact that they're all silent about the military contracting arm of their partner Microsoft as well as potential applications of LLMs to the military arena should be proof enough that their 'do-gooder' PR operation is nothing more than a branding and marketing game.

I suppose you could argue that military dominance of the planet in the name of do-gooder agendas is just what we need, but that's always been little more than a justification for robbery of others, ever since the dawn of recorded history.


What’re some of the military applications of LLMs?


AIUI, a Javelin missile ( https://en.wikipedia.org/wiki/FGM-148_Javelin ) works by pointing its camera at your target and holding it in-frame there for a few seconds while the missile learns what your target looks like well enough to home in on it once launched.

I imagine the Pentagon would pay billions for an LLM that could shave a couple of seconds off that target-acquisition time.


One high priority for the military has been using AIs to analyze massive volumes of data. A common example is analyzing satellite photos for interesting objects.

They say they want to output a shortlist to aid human decision-making, but of course they might decide to skip that step.


Propaganda, now in any language on Earth. Coming soon: personalized with the voices of your loved ones, which voice applications are currently happily harvesting.


I strongly feel that this is where the really destructive potential of military LLM applications lies. We've already seen examples of faked images, voice clips and videos of prominent political figures being disseminated. It's not difficult to imagine that a well-planned disinfo attack could paralyse or misdirect the political apparatus and public sentiment of a state during some critical moment in time (e.g., during a coup attempt or invasion).

Everyone might be able to work out which snippets of media were true and which were not in a few days, but if it happens during a crisis point the loss of time and momentum could be catastrophic.


Influence ops, propaganda, etc


Another thread of intellectual history: the EA people who tend to speak most loudly about AI safety (two of them are on OpenAI's board) owe much to the writings of Peter Singer and Derek Parfit, leading quickly back to GE Moore; the engineers owe a lot to Turing, and the computational linguists to Chomsky. You can quickly go through Turing and Chomsky to GE Moore's classmate and friend Bertrand Russell. It's like six degrees of Russell and Moore. Or six degrees of Sidgwick and Frege if you want to go back a little further. And so on...


Most computational linguists I know of hold disdainful views of Chomsky's work.


EA is basically Christian charity for people who live in a city that views Christianity as a dirty thing.

It is employed to resolve the cognitive dissonance that highly talented people struggle with, when they realize they could do anything they set their minds to (including making the world a better place), but still want to work as a quant or optimize for ad clicks, because this pays well.

Like Goedel stated, "most religions are bad, but religion is not": most people vocally identifying with EA are bad, but EA is not. To judge EA by the character flaws of prominent people like SBF is like judging Christianity by Jim Jones's massacre. EA is, in essence, about effectively allocating charity. Noble and good-hearted.

Surely, grifters and frauds will abuse EA to virtue-signal or to trick venture capitalists into thinking their investment also builds wells in Africa. That should reflect badly on them. Elizabeth Holmes got as far as she did, in part, because venture capitalists were attracted to her being a young female. Merely Goodhart's Law in progress, not young female entrepreneurs being bad or without merit.


> EA is basically Christian charity for people who live in a city that views Christianity as a dirty thing.

I don't see it. They're pretty much opposite approaches.

Christianity is deontological and focused on God. Christianity says that what is important is following the rules, that the rules exist to make God happy, and that outcomes are irrelevant.

EA is a utilitarian framework, focused on the real world. Utilitarianism says what is important is obtaining utility, and that outcomes are the ultimate measure of goodness.

The main difference is that, from a utilitarian standpoint, Christian charity only ever works by accident. From Christianity's point of view, what's important is that you do it; the how and why, and what happens as a result, are unimportant. So giving huge amounts of money to a megachurch for the pastor's Ferrari while the poor starve is perfectly fine, because you're not doing it for the poor people, you're doing it for God, and you did what was asked of you.


I think you missed the point of the Godel quote, though.


No, I just ignored it because it seemed irrelevant to the point I wanted to make.

My intent was to disagree and say that no, EA isn't some sort of rebranding of a Christian concept for people who dislike religion, but a fundamentally different thing altogether, with different mechanics and motivations.

For that matter, atheists in general don't believe Christianity has any claim on charity, marriage or even Christmas.


Isn't it equivalent insofar as Christian conceptions of charity aren't prescriptive? Besides tithing, "love thy neighbour" and other Christian ideas can be interpreted in infinitely many ways, similar to EA.

I think the morality of Christianity is the Old Testament part, and the charity/universal love is the New Testament part and thus more the focus of Christianity (obviously this depends on your particular sect's interpretation of the scriptures).


Religion is whatever its adherents tend to believe.

Without a consistent formal system of inference, every moral proposition and its negation are consequences of the religion, so it is now capable of providing moral justification of any behavior. There is a powerful evolutionary incentive for religions to provide simple "justifications" for behaving selfishly, while disguising the inconsistency of the systems they put forth.

Effective altruism is particularly guilty, I think.


EA is basically a Rotary Club for rich and cool millennials.


> [Sam] Altman’s dismissal by OpenAI’s board on Friday was the culmination of a power struggle between the company’s two ideological extremes—one group born from Silicon Valley techno optimism… the other steeped in fears that AI represents an existential risk to humanity and must be controlled with extreme caution.

Wait, I thought Altman got canned because he was starting businesses behind the board’s back making conflicts of interest.


The exact reason was (is?) obscured by the board saying nothing publicly beyond "lack of candour", we've all been trying to guess in the meantime.


The real reason is that Agents of Eliezer stabbed him in the back (metaphorically), and they did it so smoothly that I can't even credit them with this - nobody will believe me.


I find it interesting that the real problem we have in our world is technocrats (original definition: skill with power), yet we keep worrying about the made up redefinition of the term, rule made effective through technology (skill with knowledge). The people that are experts with power are far more dangerous, especially as we arm them with more and more technology.

Wikipedia ‘The term technocracy is derived from the Greek words τέχνη, tekhne meaning skill and κράτος, kratos meaning power, as in governance, or rule. William Henry Smyth, a California engineer, is usually credited with inventing the word technocracy in 1919 to describe "the rule of the people made effective through the agency of their servants, the scientists and engineers", although the word had been used before on several occasions.’


Mead was famous, but only one of the many voices in a very long debate. Whatever people were thinking 90 years ago, it had already lost its influence by the time of AI's societal debut in the 1980s and is totally irrelevant to AI's impact today. While both may share a mindset, linking Mead to today's techno-bros is a bit of a stretch.


> Mead was famous, but only one of the many voices in a very long debate.

Yes, this is how voices work.

> Whatever people were thinking 90 years ago, it had already lost its influence by the time of AI's societal debut in the 1980s and is totally irrelevant to AI's impact today.

Are you making this argument simply based on the passage of time? I can think of a lot of ideas whose longevity makes 90 years seem a sneeze.

> While both may share a mindset, linking Mead to today's techno-bros is a bit of a stretch.

If they share a mindset, would that not be an obvious link? Additionally, this article is drawing a connection not just between Mead and techno-optimists (not the same as tech-bros), but between Mead and both techno-optimism and existential risk. That's sort of what the article's preamble describes as the interesting thing.


Why are you so intent on linking this woman to modern tech workers?


> Mead was famous, but only one of the many voices in a very long debate. Whatever people were thinking 90 years ago, it had already lost its influence by the time of AI's societal debut in the 1980s and is totally irrelevant to AI's impact today. While both may share a mindset, linking Mead to today's techno-bros is a bit of a stretch.

Plato was famous, but only one of the many voices in a very long debate. Whatever people were thinking 2000+ years ago, it had already lost its influence by the time of AI's societal debut in the 1980s and is totally irrelevant to AI's impact today. While both may share a mindset, linking Plato to today's techno-bros is a bit of a stretch.

The allegory of the cave still has resonance today. Abstract ideas are timeless. That's the point of abstraction.


But Plato gets taught, and these writings by Mead don't. Linking Plato to tech-bros is indeed a stretch.


This is kinda how the history of ideas works. People just kind of think they know things independently, from within themselves or something.

Historians who study the origins of ideas trace them back and make connections to the thoughts and ideas of the past. Often, the inheritors of those ideas know nothing of how they got them, and think themselves clever for coming up with them all by themselves.


This is unfalsifiable speculation though. I could make that claim about anyone (e.g., I could say it was actually H. G. Wells who planted the seeds of techno-optimism in the present-day AI community) and you would have no way to prove me wrong.

Remove Margaret Mead from the timeline and the AI camps today would probably be the exact same. Those are basically the two options with anything in life that's both useful and dangerous: optimism or pessimism, hope or fear.


True, but that trace here is missing. There is no link apart from "Mead said something vaguely utopian", and that doesn't even seem specific for technology. In 1939, technology was not the most threatening thing.


You're joking, right? By WW1 already, technology had changed war to the point that it became extremely economically destructive:

https://www.wearethemighty.com/popular/world-war-i-battlefie...


Laughs in Cultural Studies. Seriously, Mead is an integral part of the curriculum in the social sciences.


As someone with a degree in a social science whose entire contact with Mead was outside of that study, I think you are at least overgeneralizing.


> AI's societal debut in the 1980s

The debut would seem to be at least 1968, with HAL9000's starring role in 2001: A Space Odyssey.

(If you watch it now, you will see that Arthur C. Clarke and Stanley Kubrick made a film that is right on-target today. Even the most prominent feature of their 1968 fictional AI is its polite, conversational, very human output and input - and some very bad analysis.)


I'm a tech bro.


Very interesting article on a figure I’ve never really read about. Timely too with all the EA drama. I look forward to reading the book


I think the best summary of the whole divide that I came across is this one from today:

> "The OpenAI tussle is between the faction who think Skynet will kill them if they build it, and the faction who think Roko's Basilisk('s monster) will torture them if they don't build it hard enough."

Source: https://mastodon.social/@jef/111443214445962022


Except the people who worry about Roko's Basilisk are the same faction as those who believe paperclip maximizers will kill everyone. The "other side" isn't part of this semi-cult, and nor do they believe in zero-safety.


Some of us do wish for zero-safety.


While we’re on Mead.

She's famously credited with saying: "Never doubt that a small group of thoughtful, committed citizens can change the world. Indeed, it is the only thing that ever has."

But it has never been sourced, and it can't be found in any of her writings. Her family, however, continues to claim it as something she said.

Anyone here that can shed more light on if she said this or not?


I think the ambitions and political views of Adriano Olivetti deserve to be included in this discourse.

https://dicastri.club/2020/07/16/the-legacy-of-adriano-olive...


Technologies that may cause existential risk or risk of systemic collapse are one category, including nuclear weapons, engineered pathogens, global warming, etc.

Robotics disrupting the workforce and devaluing labor is another category, continuing the process of mechanization and automation that started with small electric motors and the electrification of factories.

Artificial intelligence is in a third category because it threatens the disruption and impending devaluation of broad categories of intellectual work.


You're listing sources of systemic collapse, but perhaps the greater danger is the reverse: Ilya Sutskever's fear that it has "the potential to create infinitely stable dictatorships."

We flourish in a middle ground between collapse and stability, and a cheap source of intelligence threatens to shift the ground one way or the other.


> We flourish in a middle ground between collapse and stability

Citation needed? If that were true I would expect a long history of alternating societal growth and collapse. That doesn’t seem to be generally true?


See Stuart Kauffman, At Home in the Universe, for a discussion in depth. It's a property of all life. I think of it as a zone between inanimate and immolated. We need energy flow to live, but too much kills.


I guess it’s true if you look far enough back to the Bronze Age collapse or the growth and collapse of various empires.


I can think of just two examples in western history: the Late Bronze Age collapse and the decline of Rome. Current scholarly belief is that both are the result of not-well-documented (being prehistorical) immigration pressures due to conquest at the fringe of empire, and/or a climate-induced famine double-whammy. The idea of "internal societal collapse" has fallen out of favor.


Has it?

The decline of Rome does seem to be mostly about civil wars:

https://acoup.blog/2022/01/14/collections-rome-decline-and-f...

(Immigration pressures do play a role in the subsequent fall.)

I'm also not aware of climate being behind the civil wars (though that doesn't seem impossible); any sources?


I mean the "civil wars" in the decline of the western Roman Empire were brought on by factions at the frontier of the empire being drawn in by external pressures. Basically, the Huns forced the Goths and other north-east European peoples to flee west and south. For about a generation or so there was a refugee crisis not unlike what we see with Syria today. Then things came to a head and the Visigoths sacked Rome.

It's not wrong to say that it was a civil war. Alaric, the king of the Visigoths, was also a Roman legion commander. So I guess you could say that, as it was a Roman commander who sacked Rome, it was therefore a civil war? But the actual soldiers of the army that took Rome, plundered it, and as a result ended the western empire formed an ethnic army composed of people who didn't really see themselves as Roman... so "it's complicated" is probably best.

But as far as this thread is concerned, the Visigoths would never have been pushed out of their territory, and the refugee crisis wouldn't have existed, if the Huns hadn't been expanding their empire into Eastern Europe. It wasn't a cycle of Rome getting fat and lazy and asleep at the wheel; it was, like the late Bronze Age, a series of famines in Central Asia that kicked off a bunch of wars, which created a refugee crisis that the existing political system was ill-equipped to handle.


After looking further into it, it looks like we were both wrong?

I was talking about the Crisis of the Third Century, when you can see the debasement of currency getting really bad. The timeline seems to be something like

plague + climate change + weak institutions => civil wars => civil wars + plague + famine + barbarian invasions => almost miraculous stop to the collapse

Alaric and the Huns were a couple of centuries later, though it does seem like the Sarmatians played a similar role at that point? But Roman legions were still far from recruiting non-citizens at that point, and even when barbarians invaded, they did not stay for long?

My confusion probably came from that debasement: it was a big drop, but the debasement had started significantly earlier!

So if we look at indicators like denarius silver content, lead pollution, shipwrecks... they all seem to point to the beginning of the decline around 125 AD, about 125 years before that big penultimate crisis? I will have to look into what happened then some other day though...


Depends, then, on your definition of "collapse." The Roman Empire was in decline for two centuries prior to the sacking of Rome by Alaric, but I wouldn't go so far as to call that systemic collapse. And the point I was making was very specifically about that apocalyptic scenario. User progne claimed that we're in some sort of middle ground between cycles of collapse and stability, but I just don't see that being true historically. Business cycles, yes, but systemic collapse? No, collapse is still an exceptional, aperiodic event.


Decline precedes collapse a bit like old age precedes death.

That would be three centuries prior.

The 3rd-century crisis, two centuries prior to the (second) sack of Rome (the first one was 8 centuries before that), already looks to me like a systemic collapse that was stopped and partially recovered from.

(Also, even collapse isn't instant: Rome got sacked 3 times in a span of 62 years!)

I'm not progne, so I would rather say that the cycles are about growth and decline, with "collapse" being notable because things can decline much faster than they can grow (again, consider gestation - birth - early years vs death, especially accidental death). (And yes, business cycles are also like this, on a shorter timescale.)

As for guessing where WE are at, consider how the combination of growth and then decline also results in a brief semistable peak: we seem to have had ours in the 90s. Unless of course you also count the (ex-)USSR, which had a first collapse at that point, in which case the peak was probably somewhere between 1945-1970-1991. But then we were previously talking exclusively about the Western Roman Empire, while the Eastern Roman Empire survived for another millennium! (Including recapturing Rome, which was then besieged/sacked two more times.) I guess you might want to look into that for counter-examples?


Robotisation predates electric motors: see the Luddites vs. automatic looms: https://en.m.wikipedia.org/wiki/Power_loom


I came to these comments for a summary, but stayed for the crazy...



