• Optimizing Prompt Generation with MARS and DSPy

    đź•’ TL;DR

    • We explore MARS, a multi-agent prompt optimizer using Socratic dialogue.
    • We implement it using DSPy + Fin-R1 + EDGAR, giving us an end-to-end financial reasoning pipeline.
    • We deploy the whole thing to Hugging Face Spaces with a Gradio UI.
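The multi-agent optimization loop summarized above can be sketched at a high level. This is a conceptual illustration only: the agent roles (Teacher proposing prompt variants, Critic scoring them in a Socratic back-and-forth) follow the MARS framing, but the function names and the toy scoring heuristic below are hypothetical stand-ins, not the actual MARS or DSPy API.

```python
# Conceptual sketch of MARS-style Socratic prompt optimization.
# teacher_propose / critic_score are hypothetical placeholders for
# LLM-backed agents; in a real pipeline each would call a model.

def teacher_propose(prompt: str, round_num: int) -> str:
    """Hypothetical Teacher agent: proposes a refined prompt variant."""
    return f"{prompt} (refined in round {round_num})"

def critic_score(prompt: str, examples: list) -> float:
    """Hypothetical Critic agent: scores a prompt against held-out
    question/answer pairs. A toy heuristic stands in for an LLM judge."""
    return sum(len(prompt) > len(q) for q, _ in examples) / len(examples)

def socratic_optimize(seed_prompt: str, examples: list, rounds: int = 3) -> str:
    """Iteratively refine the prompt, keeping the best-scoring variant."""
    best_prompt = seed_prompt
    best_score = critic_score(seed_prompt, examples)
    for r in range(1, rounds + 1):
        candidate = teacher_propose(best_prompt, r)
        score = critic_score(candidate, examples)
        if score >= best_score:
            best_prompt, best_score = candidate, score
    return best_prompt
```

In the real system the Critic's score would come from evaluating the candidate prompt on task examples with the target model, rather than from a string-length heuristic.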

    🌟 Introduction

    Prompt engineering has become the defining skill of the Large Language Model (LLM) era: a delicate balance between science and art. Crafting the perfect prompt often feels like an exercise in intuition, trial, and error. But what if we could take the guesswork out of the process? What if prompts could optimize themselves?

  • Fin-R1: a Financial Reasoning LLM with Reinforcement Learning and CoT

    Introduction

    Fin-R1 is a new model fine-tuned specifically for financial reasoning, with reported performance that beats much larger models like DeepSeek-R1.

    This post puts Fin-R1 to work and compares it with Phi-3 across a variety of tasks.

    • Phi-3 for comparison

    Phi-3: a lightweight, general-purpose model known for its efficiency and strong reasoning performance at smaller parameter scales. It serves as a great baseline for assessing how domain-specific tuning in Fin-R1 improves financial understanding and response structure.
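To make the comparison systematic, each question can be sent to both models through the same harness and the responses logged side by side. The sketch below is a minimal illustration, assuming some model-serving backend: the `ask` function is a hypothetical placeholder you would replace with a real client call (e.g. to an Ollama or Hugging Face endpoint serving Fin-R1 and Phi-3).

```python
# Minimal side-by-side comparison harness for Fin-R1 vs Phi-3.
# `ask` is a stub; swap in a real inference call for your backend.

from dataclasses import dataclass

@dataclass
class Result:
    model: str
    answer: str

def ask(model: str, question: str) -> str:
    # Placeholder: replace with an actual request to the serving backend.
    return f"[{model}] answer to: {question}"

def compare(question: str, models: tuple = ("fin-r1", "phi3")) -> list:
    """Run one question through every model and collect the answers."""
    return [Result(m, ask(m, question)) for m in models]
```

Keeping the harness model-agnostic makes it easy to drop in additional baselines later without changing the evaluation loop.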