Physics • Machine Learning • Curiosity
Welcome to Tensors & Quarks
Exploring the cosmos of physics and the depths of machine learning with hands-on experiments, notes, and essays.
Latest Posts
How AlphaEvolve Found What Mathematicians Missed for 56 Years
Abstract / Overview
AlphaEvolve, developed by Google DeepMind, is an evolutionary coding agent designed to autonomously discover and optimize algorithms. Unlike single-model LLM setups, it orchestrates a pipeline of large language models that iteratively modify, test, and refine code. Each iteration is evaluated automatically, forming a feedback loop that resembles biological evolution: mutation, evaluation, and selection. This self-improving process has yielded breakthroughs across mathematics, engineering, and AI infrastructure. Notably, AlphaEvolve discovered the first improvement in matrix-multiplication algorithms in 56 years, reducing the number of scalar multiplications for 4×4 complex matrices from 49 to 48.
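The mutate-evaluate-select loop described above can be sketched in a few lines. This is a toy illustration, not AlphaEvolve's actual pipeline: `mutate` and `evaluate` are hypothetical placeholders standing in for the LLM-driven code edits and the automated evaluator, and the demo "evolves" a number toward a target rather than a program.

```python
import random

def evolve(seed, mutate, evaluate, population_size=8, generations=20):
    """Minimal evolutionary loop: mutate candidates, score them, keep the best.

    `mutate` and `evaluate` are stand-ins for the LLM-proposed code edits
    and the automated evaluator described in the post.
    """
    population = [seed]
    for _ in range(generations):
        # Mutation: produce variants of randomly chosen parents.
        children = [mutate(random.choice(population)) for _ in range(population_size)]
        # Evaluation + selection: keep only the highest-scoring candidates,
        # so the best score in the population never decreases.
        population = sorted(population + children, key=evaluate, reverse=True)
        population = population[:population_size]
    return max(population, key=evaluate)

# Toy demo: candidates are integers, fitness is closeness to a target.
target = 42
best = evolve(
    seed=0,
    mutate=lambda x: x + random.choice([-3, -1, 1, 3]),
    evaluate=lambda x: -abs(x - target),
)
```

Because selection only ever retains the top scorers, fitness is monotonically non-decreasing across generations, which is the property that lets the real system accumulate improvements over many iterations.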
Read more →
From Intuition to Axiom: The Story of an AI That Learned to Prove
Mathematics has always been the ultimate test of structured reasoning. While large language models (LLMs) like GPT-4 or DeepSeek can reason through complex text, they often stumble where human mathematicians shine — constructing airtight, verifiable proofs.
Read more →
Hidden Whispers: How AI Models Secretly Pass On Their Traits
What if your AI model could inherit its parent’s quirks — even through meaningless data? Anthropic’s 2025 paper “Subliminal Learning” reveals how that happens — and why it changes everything about AI safety.
Read more →
From Prompts to Proofs: Can ChatGPT Pass the Gödel Test?
ChatGPT has become a part of our daily lives in ways we could not have imagined just a few years ago. From writing emails and polishing presentations to generating working code for side projects, it has become a universal assistant. But beyond these everyday tasks, what are the true capabilities of models like ChatGPT and its successors? Can they go beyond imitating human output and actually contribute to fields that demand creativity and rigor, like mathematics? This question is at the heart of a recent research paper, "Gödel Test: Can Large Language Models Solve Easy Conjectures?", published just a week ago. The paper does not ask whether large language models can memorize or recall results, but whether they can engage in something far more ambitious: creating new mathematics. In this blog, I walk through what the paper does, why it matters, and what it means for the future of artificial intelligence and mathematical discovery.
Read more →
Why the Higgs Discovery Was Physics’ Greatest Detective Story
Precursor: Why the Higgs Story Matters
For more than half a century, physicists chased one gap in an otherwise triumphant theory. The Standard Model (SM) precisely describes quarks and leptons and the forces among them, yet it left a conceptual hole: why are the W and Z bosons heavy while the photon is massless? The Higgs mechanism answered this by positing a scalar field that permeates all of space. Particles interacting with this field acquire mass; particles that do not interact with it remain massless. Fluctuations of the field appear as a new particle—the Higgs boson.
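The mass pattern the paragraph describes can be made concrete with the standard textbook formulas. As a brief sketch: after electroweak symmetry breaking, the Higgs field settles into a vacuum expectation value $v \approx 246~\mathrm{GeV}$, and the gauge-boson masses follow from the couplings to that vacuum value:

```latex
\begin{align}
  m_W      &= \tfrac{1}{2}\, g\, v, \\
  m_Z      &= \tfrac{1}{2} \sqrt{g^2 + g'^2}\; v, \\
  m_\gamma &= 0,
\end{align}
```

where $g$ and $g'$ are the $SU(2)_L$ and $U(1)_Y$ gauge couplings. The photon combination of fields does not couple to the vacuum value, which is precisely why it stays massless while the W and Z are heavy.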
Read more →