10 Comments

Not just that, the technical terms themselves add to the confusion. "Deep learning", "neural networks", etc. hint at something so human or brain-like that they prime you for anthropomorphising.

I have a friend who keeps talking about how the human brain is just neurons in a state of 0 or 1, and how it's therefore just a matter of time before AI starts learning and encoding "everything" in its 0-and-1 brain.

I don't know if this is how it actually is, but this simplistic 0-and-1 thinking seems off to me. Alas, my friend and I are just two dunderheads arguing about things we know little about. And the eerily neurosciencey terminology does not help at all.
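
For what it's worth, even the "neurons" in artificial neural networks aren't 0-or-1 switches. Here's a toy sketch (plain Python, made-up numbers, purely illustrative) of what a single artificial neuron actually computes:

```python
import math

def neuron(inputs, weights, bias):
    # A weighted sum of the inputs, squashed through a sigmoid activation.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))  # a continuous value in (0, 1)

# Made-up inputs and weights: the output is ~0.46, not a binary state.
print(neuron([0.2, 0.7], [1.5, -0.8], 0.1))
```

So even the machine version of a "neuron" is a fuzzy, continuous thing, never mind the biological one.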

The brain is not a computer; it's a transceiver. Consciousness exists apart from the brain, and is probably non-local.

These twaddle-brained transhumanists who think they can upload their consciousness into a computer are so high on their own farts that they never stopped to consider what, exactly, they are trying to store.

If you assume that the brain is a transceiver, what's to say a computer version of the transceiver couldn't be built to tune into the same "frequency" the brain is tuning into?

Thank you so much for sharing this article.

Please check out https://betterimagesofai.org on the role of stock images of AI in adding to this problem!

Is there a marketing/commercial aspect to the anthropomorphizing? Car commercials come to mind.

I'm working on https://syntheticusers.com/ and I believe this is exactly one of the use cases in which anthropomorphizing is a benefit, not a hindrance.

Very interesting idea! Good luck to you.

Synthetic users might be one way to pass the Mom Test. Do factor this in as well, if you haven't already.

https://fourminutebooks.com/the-mom-test-summary/

Hard to say, given we understand so little about consciousness and the brain.

But I'd hazard a guess that if it's possible, it's quite a ways off.

Notwithstanding that I'm not a programmer or an expert in comp sci or linguistics, I cosign much of this. Particularly (b.2): I've also suspected that LLM behaviors that apparently stump their researchers could be revealing emergent properties of language heretofore unknown or not properly understood (that might yield some good insights!), properties the models are free-riding on to pass the test.

A much dumber and clumsier analogy for laypeople like me: imagine you went to a conference for a profession completely out of your field. If you stuck around long enough, listened well enough, and were good enough at inferring things from context clues to feel out the professionals' language, you could potentially pass yourself off as one of them to someone else who knows less than you—maybe even long enough to scam them. But an actual pro could sniff you out pretty fast, as soon as you were prompted for a response that tested your actual trained understanding and reasoning abilities in the field. So in that way, LLMs are the ultimate dilettantes!

Another good reason IMO to be skeptical about the prophesied imminent mad rush to replace skilled workers with generative AI tools or unskilled button-pushers using them. For reliable and trustworthy performance on complex specialized tasks, you kinda still need to know how to do the thing you're asking the tool to do for you. The desired way in which the tools are "hacks" only extends so far, beyond which they are a very different kind of hack.
