Anthropic just released Claude 2.1, an improvement to its flagship large language model that keeps it competitive with the GPT series — and now has the useful added feature of "being developed by a company not actively at war with itself."

The update brings three major improvements: context window, accuracy and extensibility. On the context window front, meaning how much data the model can pay attention to at once, Anthropic has leapfrogged OpenAI: the embattled Sam Altman announced a 128,000-token window at the company's Dev Day (seems so long ago!), and Claude 2.1 can now handle 200,000 tokens.