Reading list week 42

Published: at 06:03 PM

A small number of samples can poison LLMs of any size

Interesting, and worrying, how easy it is to poison an LLM. As few as 250 documents are enough to plant a backdoor in an LLM, regardless of how big the model or its training data is.

In Praise of RSS and Controlled Feeds of Information

Another post about using RSS to avoid algorithmic rabbit holes. Better than the post I have on my site, although this one is longer!

The Majority AI View

I miss nuanced opinions on AI. People are either 100% drinking the Kool-Aid or complete Luddites. Where are the lukewarm takes? This post tries to find them.