Introducing the Takens-Based Transformer (MARINA)
MARINA is a generative language architecture that replaces transformer attention with an explicit Takens delay embedding, reducing per-token complexity from O(N²) to O(log N) and replacing the O(N) KV-cache with an O(1) circular buffer. Trained as a 15M-parameter proof of concept, it reaches a validation perplexity of 1.1 on factual Q&A and shows 100% basin separation between discourse channels, supporting its central claim that language is traced, not sampled.
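To make the two mechanisms concrete, here is a minimal sketch of a streaming Takens delay embedder backed by a fixed-size circular buffer. This is not MARINA's actual code: the delay `tau`, embedding dimension `d`, the clamping behavior during warm-up, and the buffer layout are all illustrative assumptions.

```python
# Sketch only: shows how a Takens delay embedding can be computed from an
# O(1) circular buffer during generation. Parameters are hypothetical.
import numpy as np

class TakensDelayEmbedder:
    """Streams hidden states and emits the delay vector
    (h_t, h_{t-tau}, ..., h_{t-(d-1)tau}) from a fixed-size ring buffer."""

    def __init__(self, hidden_dim: int, d: int = 4, tau: int = 2):
        self.d, self.tau = d, tau
        self.span = (d - 1) * tau + 1              # how far back delays reach
        self.buf = np.zeros((self.span, hidden_dim))  # O(1) in sequence length
        self.head = 0                              # next write position
        self.count = 0                             # states seen so far

    def step(self, h_t: np.ndarray) -> np.ndarray:
        """Ingest one hidden state; return the concatenated delay vector."""
        self.buf[self.head] = h_t
        parts = []
        for k in range(self.d):
            # Fetch h_{t - k*tau}; clamp to the oldest state during warm-up.
            offset = min(k * self.tau, self.count)
            idx = (self.head - offset) % self.span
            parts.append(self.buf[idx])
        self.head = (self.head + 1) % self.span
        self.count += 1
        return np.concatenate(parts)               # shape: (d * hidden_dim,)

embedder = TakensDelayEmbedder(hidden_dim=8)
for t in range(6):
    v = embedder.step(np.full(8, float(t)))
print(v.shape)  # (32,) -- memory stays fixed no matter how long the stream runs
```

The point of the sketch is the memory profile: unlike a KV-cache, which grows linearly with generated tokens, the buffer here is bounded by `(d - 1) * tau + 1` rows regardless of sequence length.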