Self-Improving Agents

Thoughts of Algorithms

How a self-evolving AI learns to reflect, score, and rewrite its own reasoning

🧪 Summary

What if an AI could truly think: not just solve problems, but reevaluate its beliefs in the face of new information?

In this post, we introduce a system that does exactly that. At the core of our pipeline is a lightweight scoring model called MR.Q, responsible for evaluating ideas and choosing the best ones. And when it encounters a new domain, a new goal, or a shift in task format, it doesn't freeze; it adapts.
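
To make that concrete, here is a minimal sketch of the idea behind a preference scorer like MR.Q: score each candidate with a small feature model, pick the best, and nudge the weights whenever a preference pair says the ranking was wrong. The `PreferenceScorer` class, its toy features, and the update rule are illustrative stand-ins, not MR.Q's actual implementation.

```python
# Minimal sketch of an MR.Q-style preference scorer (hypothetical API;
# the features and update rule are illustrative, not the real model).

def features(goal: str, hypothesis: str) -> list[float]:
    """Toy features: lexical overlap with the goal and a brevity prior."""
    goal_terms = set(goal.lower().split())
    hyp_terms = set(hypothesis.lower().split())
    overlap = len(goal_terms & hyp_terms) / max(len(goal_terms), 1)
    brevity = 1.0 / (1.0 + len(hypothesis.split()) / 50.0)
    return [overlap, brevity]

class PreferenceScorer:
    """Scores hypotheses and adapts from pairwise preferences."""

    def __init__(self, n_features: int = 2, lr: float = 0.1):
        self.weights = [0.5] * n_features
        self.lr = lr

    def score(self, goal: str, hypothesis: str) -> float:
        return sum(w * f for w, f in zip(self.weights, features(goal, hypothesis)))

    def best(self, goal: str, hypotheses: list[str]) -> str:
        return max(hypotheses, key=lambda h: self.score(goal, h))

    def update(self, goal: str, preferred: str, rejected: str) -> None:
        """Perceptron-style update: shift weights toward the preferred item."""
        fp = features(goal, preferred)
        fr = features(goal, rejected)
        if self.score(goal, preferred) <= self.score(goal, rejected):
            self.weights = [w + self.lr * (p - r)
                            for w, p, r in zip(self.weights, fp, fr)]

scorer = PreferenceScorer()
goal = "reduce inference latency"
candidates = ["cache model outputs to reduce latency",
              "a very long meandering answer about many unrelated topics"]
print(scorer.best(goal, candidates))
```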

Learning to Learn: A LATS-Based Framework for Self-Aware AI Pipelines

📖 Summary

In this post, we introduce the LATSAgent, an implementation of LATS (Language Agent Tree Search Unifies Reasoning, Acting, and Planning in Language Models) within the co_ai framework. Unlike prior agents that followed a single reasoning chain, this agent explores multiple reasoning paths in parallel, evaluates them using multidimensional scoring, and learns symbolic refinements over time. This is our most complete integration yet of search, simulation, scoring, and symbolic tuning, bringing together all of our previous work on sharpening, pipeline reflection, and symbolic rules into a unified, intelligent reasoning loop.
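
The heart of LATS is a Monte Carlo tree search loop over reasoning steps: select a promising node, expand it with candidate continuations, score the result, and propagate the reward back up the tree. Below is a minimal, self-contained sketch of that loop; `expand()` and `score()` stub out the LLM sampling and multidimensional scoring, and none of the names are the real co_ai API.

```python
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state, self.parent = state, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def uct(self, c=1.4):
        # Standard UCT: exploit average value, explore rarely-visited nodes.
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def expand(node):
    # Stand-in for sampling k candidate reasoning steps from an LLM.
    for i in range(2):
        node.children.append(Node(node.state + f" -> step{i}", parent=node))

def score(state):
    # Stand-in for multidimensional scoring (correctness, clarity, ...),
    # collapsed here to a single random reward.
    return random.random()

def search(root, iterations=20):
    for _ in range(iterations):
        node = root
        while node.children:                      # selection
            node = max(node.children, key=Node.uct)
        expand(node)                              # expansion
        leaf = random.choice(node.children)
        reward = score(leaf.state)                # simulation / evaluation
        while leaf:                               # backpropagation
            leaf.visits += 1
            leaf.value += reward
            leaf = leaf.parent
    return max(root.children, key=lambda n: n.visits).state

print(search(Node("goal")))
```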

Programming Intelligence: Using Symbolic Rules to Steer and Evolve AI

🧪 Summary

What if AI systems could learn how to improve themselves, not just at the level of weights or prompts, but at the level of strategy itself? In this post, we show how to build such a system, powered by symbolic rules and reflection.

The paper “Symbolic Agents: Symbolic Learning Enables Self-Evolving Agents” introduces a framework where symbolic rules guide, evaluate, and evolve agent behavior.
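
In that spirit, a symbolic rule can be as simple as a named condition plus a config rewrite. The sketch below shows rules steering a pipeline configuration; the `Rule` schema and the example rules are illustrative, not the paper's exact format.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    applies: Callable[[dict], bool]   # condition on the pipeline context
    apply: Callable[[dict], dict]     # rewrite of the pipeline config

rules = [
    Rule(
        name="use_code_for_math",
        applies=lambda ctx: ctx["task_type"] == "math",
        apply=lambda cfg: {**cfg, "strategy": "code"},
    ),
    Rule(
        name="short_cot_for_easy",
        applies=lambda ctx: ctx["difficulty"] == "easy",
        apply=lambda cfg: {**cfg, "max_steps": 3},
    ),
]

def steer(context: dict, config: dict) -> dict:
    """Apply every matching rule to evolve the pipeline configuration."""
    for rule in rules:
        if rule.applies(context):
            config = rule.apply(config)
    return config

print(steer({"task_type": "math", "difficulty": "easy"},
            {"strategy": "cot", "max_steps": 8}))
```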

Adaptive Reasoning with ARM: Teaching AI the Right Way to Think

Summary

Chain-of-thought is powerful, but which chain? Short explanations work for easy tasks, long reflections help on hard ones, and code sometimes beats them both. What if your model could adaptively pick the best strategy, per task, and improve as it learns?

The Adaptive Reasoning Model (ARM) is a framework for teaching language models how to choose the right reasoning format (direct answers, chain of thought, or code) depending on the task. It works by evaluating responses, scoring them on rarity, conciseness, and difficulty alignment, and then updating model behavior over time.
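
A rough sketch of that selection logic: score each candidate format on the three signals named above, and let the winner's growing usage count erode its rarity bonus over time. The weights, cost table, and function names here are illustrative, not ARM's actual scoring.

```python
FORMATS = ["direct", "cot", "code"]

# Running counts of how often each format has won (drives the rarity bonus).
usage = {f: 1 for f in FORMATS}

def rarity(fmt: str) -> float:
    total = sum(usage.values())
    return 1.0 - usage[fmt] / total      # rarer formats score higher

def conciseness(response: str) -> float:
    return 1.0 / (1.0 + len(response.split()) / 100.0)

def difficulty_alignment(fmt: str, difficulty: float) -> float:
    # Prefer cheap formats on easy tasks, heavier ones on hard tasks.
    cost = {"direct": 0.1, "cot": 0.5, "code": 0.9}[fmt]
    return 1.0 - abs(cost - difficulty)

def select_format(responses: dict[str, str], difficulty: float) -> str:
    def total(fmt):
        return (rarity(fmt)
                + conciseness(responses[fmt])
                + difficulty_alignment(fmt, difficulty))
    winner = max(responses, key=total)
    usage[winner] += 1                   # adapt: winning formats become less rare
    return winner

responses = {"direct": "42",
             "cot": "Step 1 ... Step 2 ... so the answer is 42",
             "code": "print(6 * 7)"}
print(select_format(responses, difficulty=0.2))
```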

General Reasoner: The Smarter Local Agent

🔧 Summary

The General Reasoner paper shows how to train LLMs to reason across domains using diverse data and a generative verifier. In this post, I walk through our open-source implementation, showing how we built a modular reasoning agent capable of generating multiple hypotheses, evaluating them with an LLM-based judge, and selecting the best answer.


🧠 What We Built

We built a GeneralReasonerAgent that does the following (see the sketch after this list):

  • Dynamically generates multiple hypotheses using different reasoning strategies (e.g., cot, debate, verify_then_answer)
  • Evaluates each pair of hypotheses using either a local LLM judge or our custom MR.Q evaluator
  • Classifies the winning hypothesis using rubric dimensions
  • Logs structured results to a PostgreSQL-backed system
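
Put together, the core loop looks roughly like the sketch below: generate one hypothesis per strategy, run a round-robin pairwise tournament with a judge, and keep the winner. `generate()` and `judge()` stub out the LLM calls; the names are placeholders, not the co_ai API.

```python
import itertools, random

STRATEGIES = ["cot", "debate", "verify_then_answer"]

def generate(goal: str, strategy: str) -> str:
    # Stand-in for prompting an LLM with a strategy-specific template.
    return f"[{strategy}] hypothesis for: {goal}"

def judge(goal: str, a: str, b: str) -> str:
    # Stand-in for a local LLM judge or MR.Q comparing two hypotheses.
    return a if random.random() < 0.5 else b

def run(goal: str) -> str:
    hypotheses = [generate(goal, s) for s in STRATEGIES]
    wins = {h: 0 for h in hypotheses}
    for a, b in itertools.combinations(hypotheses, 2):
        wins[judge(goal, a, b)] += 1     # round-robin pairwise tournament
    best = max(wins, key=wins.get)
    # In the real system the winner is classified by rubric dimensions
    # and logged to PostgreSQL; here we just return it.
    return best

print(run("why does the sky appear blue?"))
```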

All of this was integrated with our existing co_ai framework, which includes: