Researchers at Multiverse Computing have developed DeepSeek R1 Slim, a quantum-inspired AI model that is 55% smaller than the original DeepSeek R1 while retaining nearly the same performance. Notably, the compression process also strips out much of the censorship built into the original model, controls that stem from stringent Chinese regulations on AI outputs. When tested against politically sensitive questions, the slimmed-down model produced factual responses that align more closely with Western standards than those of the original DeepSeek R1.
To achieve this, the researchers employed tensor networks, a mathematical framework from quantum physics that represents a model's parameters compactly and allows precise manipulation of the information they encode. Their work contributes to the ongoing discourse on AI performance and efficiency, especially as traditional models strain under growing computational resource demands. Multiverse plans to extend these compression techniques to all open-source models, which could reshape AI development but also raises ethical questions about removing censorship. Experts caution that claims of completely eliminating censorship may be overstated, given how deeply such controls are embedded in AI training.
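To give a rough sense of the idea, here is a minimal sketch of the simplest relative of tensor-network compression: replacing a weight matrix with a truncated low-rank factorization. This is an illustrative toy, not Multiverse's actual method; the matrix, rank, and sizes are all made up for the example.

```python
import numpy as np

def compress(W, rank):
    """Truncated SVD: keep only the `rank` largest singular values,
    so W (m x n) is replaced by two thin factors A (m x rank) and
    B (rank x n) whose product approximates W."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
# A synthetic 512x512 "weight matrix" that is exactly rank 64,
# so a rank-64 approximation captures essentially all of it.
W = rng.normal(size=(512, 64)) @ rng.normal(size=(64, 512))
A, B = compress(W, rank=64)

original = W.size
compressed = A.size + B.size
print(f"params: {original} -> {compressed} ({compressed / original:.0%})")
error = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(f"relative reconstruction error: {error:.2e}")
```

Real tensor-network methods generalize this: instead of one matrix factorization, the weights are decomposed into a chain of small interconnected tensors, and truncating the connections between them trims redundant information while preserving most of the model's behavior.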
👉 Read the original: MIT Technology Review