Welcome to Tensors & Quarks
Exploring the cosmos of Physics & the depths of Machine Learning.
Latest Posts
The Random Illusion: Why Adversarial Defenses Aren’t as Robust as They Seem
The field of adversarial machine learning is built on a paradox: models that perform impressively on natural data can be shockingly vulnerable to small, human-imperceptible perturbations. These adversarial examples expose a fragility in deep networks that could have serious consequences in security-critical domains like autonomous driving, medical imaging, or biometric authentication. Naturally, defenses against these attacks have been the subject of intense research. Among them, a seemingly simple strategy has gained popularity: random transformations. By applying random, often non-differentiable perturbations to input images—such as resizing, padding, cropping, JPEG compression, or color quantization—these methods hope to break the adversary’s control over the gradients that guide attacks. At first glance, it seems effective. Robust accuracy increases. Attacks fail. But is this robustness genuine?
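To give a flavor of the defenses under scrutiny, here is a minimal sketch of a random resize-and-pad transformation (assuming a PyTorch pipeline; the function name and sizes are illustrative, not taken from any specific paper the post discusses):

```python
# Minimal sketch of a random resize-and-pad input transformation,
# in the spirit of the randomized defenses the post examines.
# Assumes PyTorch; names and sizes here are illustrative assumptions.
import torch
import torch.nn.functional as F

def random_resize_pad(x: torch.Tensor, out_size: int = 331) -> torch.Tensor:
    """Randomly resize a batch of images, then randomly zero-pad to out_size."""
    b, c, h, w = x.shape
    new_size = int(torch.randint(low=h, high=out_size, size=(1,)))
    x = F.interpolate(x, size=(new_size, new_size), mode="bilinear",
                      align_corners=False)
    pad_total = out_size - new_size
    pad_left = int(torch.randint(0, pad_total + 1, (1,)))
    pad_top = int(torch.randint(0, pad_total + 1, (1,)))
    # F.pad takes (left, right, top, bottom) for the last two dims.
    return F.pad(x, (pad_left, pad_total - pad_left,
                     pad_top, pad_total - pad_top))

x = torch.rand(2, 3, 299, 299)   # dummy batch
y = random_resize_pad(x)          # shape: (2, 3, 331, 331)
```

Because every forward pass draws a fresh transformation, a gradient computed on one draw need not transfer to the next, which is exactly the effect these defenses count on.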
Read more →
Block Geometry & Everything-Bagel Neurons: Decoding Polysemanticity
When Neurons Speak in Tongues: Why Polysemanticity Demands a Theory of Capacity
Crack open a modern vision or language model and you’ll run into a curious spectacle: the same unit flares for “cat ears,” “striped shirts,” and “the Eiffel Tower.” This phenomenon—polysemanticity—is more than a party trick. It frustrates attribution, muddies interpretability dashboards, and complicates any safety guarantee that relies on isolating the “terrorism neuron” or “privacy-violation neuron.”
Read more →
Geometry vs Quantum Damping: Two Roads to a Smooth Big Bang
Imagine rewinding the Universe until every galaxy, atom, and photon collapses into a single blinding flash. Is that primal flash a howling chaos or an eerie stillness? In 1979, Roger Penrose wagered on stillness, proposing that the Weyl tensor—the slice of curvature that stores tidal distortions and gravitational waves—was precisely zero at the Big Bang. Four decades later, two very different papers revisit his bet. One rewrites Einstein’s equations so the zero-Weyl state drops out of geometry itself; the other unleashes quantum back-reaction that actively damps any distortion away. Which path makes a smooth dawn more believable?
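To fix notation (mine, not the papers’): in four dimensions the Riemann curvature splits into the trace-free Weyl tensor plus pieces built from the Ricci tensor and scalar, and Penrose’s hypothesis constrains only the first term:

```latex
% 4D decomposition of Riemann curvature: Weyl (trace-free) + Ricci pieces.
R_{abcd} \;=\; C_{abcd}
  \;+\; \big( g_{a[c} R_{d]b} - g_{b[c} R_{d]a} \big)
  \;-\; \tfrac{1}{3}\, R \, g_{a[c}\, g_{d]b}

% Weyl curvature hypothesis (Penrose, 1979):
C_{abcd} \;\to\; 0 \quad \text{at the initial singularity.}
```

The Ricci pieces are pinned to the matter content by Einstein’s equations; the Weyl piece is the “free” gravitational field, which is why demanding it vanish at the Big Bang amounts to a smooth, tide-free start.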
Read more →
From Heads to Factors: A Deep Dive into Tensor Product Attention and the T6 Transformer
At inference time, a Transformer must preserve every key–value pair for every head, layer, and past token—a memory bill that rises linearly with context length.
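For a sense of scale, here is a back-of-the-envelope sketch (the model shape below is a rough Llama-2-7B-style assumption of mine, not a figure from the post):

```python
# Back-of-the-envelope size of a vanilla multi-head attention KV cache.
# Model shape is an assumption (roughly Llama-2-7B-like), not from the post.
n_layers, n_heads, head_dim = 32, 32, 128
context_len = 8192
bytes_per_elem = 2  # fp16

# Keys AND values (factor of 2), at every layer, for every past token.
kv_bytes = 2 * n_layers * n_heads * head_dim * context_len * bytes_per_elem
print(f"KV cache: {kv_bytes / 2**30:.1f} GiB per sequence")  # -> 4.0 GiB
```

Double the context and the cache doubles with it; shrinking that bill by factoring keys and values into shared low-rank components is the move the post unpacks.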
Read more →
The Hidden Danger of AI Oversight: Why Model Similarity Might Undermine Reliability
Artificial Intelligence, particularly Large Language Models (LLMs) like ChatGPT, Llama, and Gemini, has seen extraordinary progress. These powerful models can effortlessly handle tasks ranging from writing articles to solving complex reasoning problems. Yet as these models become smarter, ensuring that they behave as intended is becoming too difficult for humans to manage alone.
Read more →