All the videos are up for Causal Islands, a “future of computing” conference my company put on in Toronto. There are a ton of amazing talks, I highly recommend you check them out.
I would like to take a moment to focus on Maggie Appleton’s excellent “The Expanding Dark Forest and Generative AI”, exploring the problems and possible futures of flooding the web with generative AI content. She posted a transcript here and you can watch this excellent talk on YouTube:
Maggie paints a convincing picture of the absolute deluge of LLM-generated text we are about to face on the web. Near the end of the talk she proposes a possible scenario where we may have to create a sort of “reverse” Turing Test to determine whether a given text is human-generated. This seems prescient, and it makes me sad on two fronts. Let me explain.
A while back I heard someone describe Tools For Thought as “a prosthetic for thinking.” This is evocative of Steve Jobs’ quote that a computer is like “a bicycle for the mind.” This line of thought goes way back; see Licklider’s “symbiosis” paper from 1960.
When ChatGPT came out, my wife was amazed. You see, as an immigrant woman doing all sorts of nonprofit work in her second language, having a “prosthesis” to help her compose English-language emails, fill out complex forms, and so on is a godsend. She immediately recognized its utility, and how it could level the playing field when dealing with English-primary government and staid foundation bureaucracies that often don’t take minority communities into consideration when designing their “support” programs and grant application processes. Now she can type in Japanese and then prompt “explain it in English.” Her English is not strong enough to start from a blank piece of paper, but, like many people’s, it is certainly good enough to read LLM output and fix any errors. What an amazing prosthetic for those with limited language ability!
But lo! The Dark Forest looms. With so much raw content being shovelled onto the web by generative LLMs, forcing us to set up spam filters, you can imagine that prosthetically-assisted text written by people who struggle with their second language might be relegated to the junk box, effectively not considered “human.” Lo, the field hath only been level for such a short time.
Le sad. 😢
And of course, of the groups building the “reverse Turing test” spam filters of the future, how many do you think are consulting immigrants, second-language speakers, and other minority populations while developing their algos? Based on digital technology’s historical pattern of American Ivy League white men in hoodies… I am just not super confident.
Sad, part deux. 😢
The future is not here yet, so please let’s do better.
<chad points at the sign... again>