How Private Are Your Chats With AI?
Please evaluate and comment on the following: “Consider the paper "Language Models Are Injective and Hence Invertible" (https://www.arxiv.org/pdf/2510.15511); it has deep consequences that we need to chat about!

What this paper shows is that for every input prompt, the last token's hidden representation vector inside a language model is unique. In other words, the mapping from text to that final hidden state, the vector the model uses to predict the next word, is effectively lossless. The model doesn't discard information; it re-encodes the entire prompt into a high-dimensional numerical form.

That has an important implication: if you use this last-token representation as an embedding, you're not storing an abstract summary of the input. You're storing an almost exact numerical fingerprint of the original text. The paper even demonstrates that the prompt can be reconstructed from this vector alone! So while we often treat embeddings as compressed, privacy-safe ...
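To make the two claims concrete, here is a minimal toy sketch, not the paper's actual method or a real language model: a tiny deterministic recurrent encoder stands in for the transformer, mapping each prompt to a final "hidden state." We then run an empirical collision check (no two distinct prompts share a final state) and a brute-force inversion (recover a secret prompt from its vector alone). All names, dimensions, and the update rule are invented for illustration.

```python
import itertools
import math

DIM = 8  # toy hidden-state dimensionality


def encode(prompt: str) -> list[float]:
    """Fold the prompt, one character at a time, into a DIM-dim state.

    A stand-in for a causal language model's last-token hidden state:
    deterministic, and in practice distinct prompts land on distinct states.
    """
    state = [0.0] * DIM
    for pos, ch in enumerate(prompt):
        for i in range(DIM):
            # Nonlinear recurrent update mixing character, position, and slot.
            state[i] = math.tanh(
                state[i] + math.sin((ord(ch) + 1) * (pos + 1) * (i + 1) * 0.01)
            )
    return state


def min_pairwise_distance(prompts: list[str]) -> float:
    """Smallest distance between final states of any two distinct prompts."""
    states = [encode(p) for p in prompts]
    best = float("inf")
    for a in range(len(states)):
        for b in range(a + 1, len(states)):
            best = min(best, math.dist(states[a], states[b]))
    return best


# 1) Collision check: even near-identical prompts get distinct fingerprints.
prompts = ["the cat sat", "the cat sat.", "The cat sat", "a dog ran"]
print("no collisions:", min_pairwise_distance(prompts) > 0.0)

# 2) Inversion: given ONLY the final state of a secret prompt, enumerate
# candidate strings over a tiny alphabet until the states match exactly.
secret = "cab"
target = encode(secret)
alphabet = "abc"
recovered = None
for length in range(1, 5):
    for cand in itertools.product(alphabet, repeat=length):
        s = "".join(cand)
        if encode(s) == target:
            recovered = s
            break
    if recovered is not None:
        break
print("recovered secret:", recovered == secret)
```

The brute-force search is only feasible here because the toy alphabet is tiny; the paper's point is that for real models a far more efficient reconstruction from the hidden state is possible, which is exactly why treating these vectors as privacy-safe summaries is risky.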