Microsoft’s Copilot chatbot is rolling out the ‘Think Deeper’ feature to all users, including free accounts. It allows Copilot to handle more complex questions using multi-step reasoning, much like DeepSeek or ChatGPT’s o1 model.

Microsoft AI CEO Mustafa Suleyman announced on LinkedIn that Copilot’s Think Deeper feature is now available for free, dropping the previous requirement for a paid Copilot Pro subscription. The feature uses OpenAI’s o1 reasoning model, which breaks down questions into multiple steps to reduce hallucinations and other common LLM issues.

You can access it by typing a question in the Copilot app and clicking the ‘Think Deeper’ button in the text box. If you don’t see it, you might need to wait a bit longer or try refreshing/reopening Copilot. Prompts using Think Deeper/o1 require much more time to process, usually around 30 seconds.

[Screenshot: Copilot’s Think Deeper feature]

OpenAI said in September, “Similar to how a human may think for a long time before responding to a difficult question, o1 uses a chain of thought when attempting to solve a problem. Through reinforcement learning, o1 learns to hone its chain of thought and refine the strategies it uses. It learns to recognize and correct its mistakes. It learns to break down tricky steps into simpler ones. It learns to try a different approach when the current one isn’t working. This process dramatically improves the model’s ability to reason.”

This might, at least in part, be a response to the rapid rise in popularity of DeepSeek, a new AI model that also uses multi-step reasoning to produce answers and is available for free. An official cloud-based version of DeepSeek is offered free as web and mobile apps, and the model can also run locally on many PCs. Notably, this is the first time OpenAI’s o1 model has been available for free; it previously required a Copilot Pro or ChatGPT Plus subscription.

Responses from Think Deeper and o1 seem to be a lot better than typical AI chatbot answers, especially in writing style and explanations, but they’re still generative AI responses. They can still mess up, sometimes in ways that are difficult to notice unless you’re an expert in the question’s subject matter, so don’t rely on them for everything.

Source: Mustafa Suleyman (LinkedIn) via The Verge