> Like their human counterparts, Aaru’s bots are fallible; for instance, they wrongly forecast a Kamala Harris victory in the 2024 election. Fink said the firm has significantly improved its models since then.
Anyone else do a mental double-take after reading this bit? What other big things has it gotten wrong, then?