AI Newsletter
🤖 AI Agents Weekly: Software 3.0, Gemini 2.5 Updates, Safer AI Agents, Deep Research Tutorial & Benchmark

Software 3.0, Gemini 2.5 Updates, Safer AI Agents, Deep Research Tutorial & Benchmark

Jun 21, 2025 ∙ Paid


In today’s issue:

  • Gemini 2.5 Updates

  • Andrej Karpathy's new talk on Software 3.0

  • New research on the future of work with AI agents

  • The lethal trifecta for AI agents

  • Claude Code now supports remote MCP server connections

  • Mistral releases Mistral-Small-3.2-24B-Instruct-2506

  • FlowiseAI’s new Deep Research tutorial

  • Hybrid Multi-Agent Prompting

  • New DeepResearch benchmark

  • Top AI dev news, research, product updates, and much more.



Top Stories


Gemini 2.5 Updates

Overview of the Gemini 2.5 family of thinking models

Google has made its Gemini 2.5 model family broadly available, emphasizing "thinking" models that can reason before responding and giving developers fine-grained control over latency, cost, and intelligence. Three key variants are now active:

  • Gemini 2.5 Pro is now generally available with no changes from the 06-05 preview. Optimized for high-intelligence tasks like coding and agentic workflows, it keeps the same price point on the cost-performance Pareto frontier and already powers GitHub Copilot competitors such as Cursor, Replit, and Windsurf. Google highlights it as their most adopted model to date.

  • Gemini 2.5 Flash (GA) is unchanged from its 05-20 preview but gets new pricing: input cost has doubled to $0.30/1M tokens, while output cost dropped to $2.50/1M tokens. The separate pricing tiers for thinking vs. non-thinking output have been removed for simplicity.

  • Gemini 2.5 Flash-Lite enters preview as the lowest-cost, lowest-latency option. Designed as a successor to Flash 1.5/2.0, it is ideal for classification or summarization tasks at scale. Thinking is off by default but can be enabled optionally, and it supports Google-native tools like Grounded Search, Code Execution, URL context, and function calling.
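To put the new Flash pricing in perspective, here is a quick back-of-the-envelope cost sketch. The per-million-token rates come from the announcement above; the request sizes in the example are made-up illustrative numbers, not figures from Google.

```python
# Cost sketch using the Gemini 2.5 Flash GA prices quoted above:
# $0.30 per 1M input tokens, $2.50 per 1M output tokens.
FLASH_INPUT_PER_M = 0.30   # USD per 1M input tokens
FLASH_OUTPUT_PER_M = 2.50  # USD per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost in USD of a single request at the flat GA rates."""
    return (input_tokens * FLASH_INPUT_PER_M
            + output_tokens * FLASH_OUTPUT_PER_M) / 1_000_000

# Hypothetical summarization call: 8k tokens in, 1k tokens out.
cost = request_cost(8_000, 1_000)
print(f"${cost:.6f} per request")  # $0.004900 per request
```

Even with the doubled input price, a sizeable summarization request stays at a fraction of a cent, which is why Google positions Flash (and the even cheaper Flash-Lite) for high-volume classification and summarization workloads.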

Blog

© 2025 elvis