The conservative-radical split over AI

This was written less than 24 hours after Sam Altman was fired as CEO of OpenAI, while no meaningful public information is available (not that this has stopped the speculation), and just after I'd reached the point in the Cambridge Union video of Sam Altman accepting the 2023 Hawking Fellowship on behalf of OpenAI where the protestors drop signs saying "OpenAI’s race threatens democracy and humanity" and "Say No".

If you ask ten people for their views on AI, you'll probably get twenty different answers.

Of such patterns are witty remarks often made, but unfortunately in this instance it also feels accurate, and given the strength of feeling on all sides, that's bad.

On one end, arguments from e/acc (effective accelerationism) resemble the fervour of fundamentalist Christians trying to bring about the apocalypse in Revelation, both believing it leads to a 'Good Ending'. At the other end? I can't tell how serious anyone was about Roko's basilisk (same idea, but assuming Satan wins the battles in Revelation and asking whom Satan will punish least), but we definitely have Yudkowsky worried about paperclip maximisers (basically the reductio ad absurdum of how capitalism can go wrong, except there's no recognisable human mind behind it, so it loses the "ad absurdum" part). Even at lesser extremes, the same axis also has those who fear that other countries (or companies) are racing to make their own AI, vs. those who fear the short-term dangers from the AI itself.

But what of those short-term benefits and dangers? On employment issues, critics of the AI that already exists place it in a Schrödinger's-cat state of being simultaneously "not a real artist" and "taking artists' jobs" (mirroring a common argument used against immigrants), while proponents want to automate as much as possible, boosting the productivity of our economies to stave off the negative impacts of things like the demographic shift in developed nations, where birth rates have collapsed at the same time as lifespans have increased.

Even without future AI, we have people arguing about the ability level of state-of-the-art AI, ranging from those who dismiss the best models as a "stochastic parrot" or "autocomplete on steroids" (including Yann LeCun, who said this would be an insult to parrots; he is a Turing Award winner, so his opinions absolutely shouldn't be dismissed), to those who praise them as already being a weak form of AGI that passes all the IQ tests we throw at it (the GPT-4 family of models do in fact pass those tests, but writing as someone who scored "148" on an (online) IQ test and "off the charts" on a Cognitive Abilities Test when I was at school, trust me when I say that IQ tests are only loosely correlated with the thing people care about when they talk about intelligence, even in humans, and that AIs are weird even by human standards).

[Screenshot, 2023-11-18. Caption: Try explaining this with a flow chart]

Discussions about fairness in AI trace back to the era when "AI" meant simple hand-written flow-chart-style algorithms: people could follow how they worked, and wanted to be empowered to dispute decisions they disagreed with. Now that the AI is a magical mystery box of barely comprehensible matrix multiplication that can somehow translate poorly written requests in English into mostly correct JavaScript with German comments, we generally can't explain how it reached a conclusion. We can ask, but if we ask after the event instead of beforehand, it's going to confabulate a reason (much like real humans), giving the illusion of reasonableness without actually being reasonable.

But it gets worse: if the people collecting the training set miss out an important category of examples (which happens all the time even with humans; it's what led to the development of the practice of tokenism, after which the South Park character formerly known as "Token" was initially named), then we get automatic soap dispensers that don't work if you have too much melanin in your skin. Likewise when the training set includes unnecessary categories that confuse the AI, which is what happened in 2015 when Google's automated image labelling misclassified black people as "gorillas". And it gets worse still: the default training set is broadly the public internet, so if you can't filter out the toxic content ahead of time, your AI will learn by example to mimic the toxicity of the internet, repeating all the slurs (and worse) it has picked up with exactly the same attention it pays to all the things you actually wanted it to know.
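(A purely illustrative aside, not from the original argument: here's a minimal toy sketch of the missing-category problem, assuming numpy and scikit-learn are available, with a made-up "reflectance" feature standing in for skin tone. The point is only that a model trained on data that never sampled a whole group can look nearly perfect in testing and still fail silently on that group.)

```python
# Hypothetical toy example: a "soap dispenser" hand detector whose training
# data only ever included high-reflectance (pale) skin.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_samples(n, reflectance):
    """Simulated sensor data: the raw signal is scaled by skin reflectance."""
    hand = rng.normal(loc=1.0, scale=0.3, size=n)      # hand under the sensor
    no_hand = rng.normal(loc=0.0, scale=0.3, size=n)   # nothing under the sensor
    raw = np.concatenate([hand, no_hand])
    r = reflectance + rng.normal(0.0, 0.05, size=2 * n)
    X = np.column_stack([raw * r, r])                   # what the sensor actually sees
    y = np.concatenate([np.ones(n), np.zeros(n)])       # 1 = dispense soap
    return X, y

# Everyone in the training data happens to have high reflectance...
X_train, y_train = make_samples(500, reflectance=0.9)
model = LogisticRegression().fit(X_train, y_train)

# ...so the model looks fine on people like those it was trained on,
print("seen group accuracy:  ", model.score(*make_samples(500, reflectance=0.9)))
# but collapses (to roughly coin-flip level) on the group never sampled.
print("unseen group accuracy:", model.score(*make_samples(500, reflectance=0.2)))
```

No malice required anywhere in that sketch; the failure comes entirely from who happened to be in the room when the data was collected.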

I have no idea what the truth is. I do find it weird that I'm on the "go slow" side of things, which is the opposite of my usual excitement about what tomorrow's technology will bring.


Original post: https://kitsunesoftware.wordpress.com/2023/11/19/the-conservative-radical-split-over-ai/

Original post timestamp: Sun, 19 Nov 2023 19:53:55 +0000

Tags: AI, Politics, Technological Singularity

Categories: Futurology, Technology, Transhumanism


© Ben Wheatley — Licence: Attribution-NonCommercial-NoDerivs 4.0 International