AI News & Library Index
New AI libraries, framework releases, and model announcements. Curated for developers and learners, with source links for each item.
What is this index?
This page tracks new AI and ML library releases and AI news items. Each library entry includes a short summary and a source link; each news item includes a detailed summary and an official source. We update it regularly to help you stay current with Python AI frameworks, RAG tools, agent libraries, and model releases.
New libraries
- 15 Feb 2026
Recursive Language Models (RLMs) — MIT CSAIL
Novel inference paradigm for long-context AI: process 10M+ tokens via symbolic recursion and code execution. RLM-Qwen3-8B outperforms the base model by 28.3% on long-context tasks. Paper and code available.
Source: arXiv paper
- 1 Feb 2026
LiteMind v2026.2 — Unified multimodal AI framework
Unified API for OpenAI, Anthropic, Google Gemini, and Ollama. Agentic ReAct-style framework, built-in RAG, tool integration. Native support for text, images, audio, video, and PDFs. Python 3.10+.
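Unified-provider frameworks typically normalize every backend to one small interface so callers never depend on a specific vendor SDK. The sketch below shows that general pattern only; the `Provider` protocol and `EchoProvider` are hypothetical illustrations, not LiteMind's actual API:

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ChatResponse:
    """Normalized response shape shared across all backends."""
    text: str
    model: str


class Provider(Protocol):
    """The common interface a unified framework can target."""
    def chat(self, prompt: str) -> ChatResponse: ...


class EchoProvider:
    """Stand-in backend; a real one would call OpenAI, Anthropic, Gemini, or Ollama."""
    def __init__(self, model: str = "echo-1"):
        self.model = model

    def chat(self, prompt: str) -> ChatResponse:
        return ChatResponse(text=f"echo: {prompt}", model=self.model)


def ask(provider: Provider, prompt: str) -> str:
    """Provider-agnostic entry point: swap backends without changing callers."""
    return provider.chat(prompt).text
```

Because `ask` only sees the protocol, swapping Gemini for Ollama becomes a one-line change at construction time.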
Source: PyPI
- 1 Feb 2026
Helix — Production agent framework with budget limits
Semantic caching (40–70% API cost reduction), persistent memory, multi-agent teams, YAML pipelines. Supports OpenAI, Anthropic, Gemini, Groq, Mistral, and 8+ providers.
Source: GitHub
- 1 Feb 2026
PageIndex — Vectorless, reasoning-based RAG
RAG without vector DBs: builds a tree-structured, table-of-contents-style index from documents and uses LLM reasoning plus tree search for retrieval. 98.7% on FinanceBench; explainable, section-level references. Available via chat, API, and MCP. By VectifyAI.
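The vectorless tree-search idea can be sketched generically: index a document as a section tree, then descend it by scoring children against the query until a leaf section is reached. PageIndex uses LLM reasoning for the scoring step; the stub scorer below just counts shared words, and all names here are illustrative, not PageIndex's API:

```python
from dataclasses import dataclass, field


@dataclass
class Section:
    """A node in a table-of-contents-style index."""
    title: str
    text: str = ""
    children: list["Section"] = field(default_factory=list)


def relevance(query: str, section: Section) -> int:
    """Stub scorer: counts shared words. A reasoning-based system would
    instead ask an LLM which branch of the tree to descend into."""
    q = set(query.lower().split())
    return len(q & set((section.title + " " + section.text).lower().split()))


def tree_search(query: str, node: Section) -> Section:
    """Descend the tree, following the most relevant child until a leaf."""
    while node.children:
        node = max(node.children, key=lambda c: relevance(query, c))
    return node
```

Because retrieval follows the document's own outline, every answer can cite the exact section it descended into, which is where the "explainable, section-level references" claim comes from.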
Source: GitHub
- 1 Feb 2026
LlamaIndex 0.14 — RAG & agent updates
Security and crash fixes, TokenBudgetHandler for cost control, agent retry logic for empty LLM responses, LangChain 1.x support. RAG and workflow framework for production.
Source: LlamaIndex changelog
- 1 Feb 2026
AIST aiaccel — ML research acceleration
Toolkit for HPC clusters: PyTorch/Lightning training, hyperparameter optimization, OmegaConf config. For large-scale ML research.
Source: PyPI
- 15 Jan 2026
Voyage 4 — Embedding models & multimodal
voyage-4-large (RTEB leaderboard), voyage-4-lite, voyage-4-nano (open-weights). Shared embedding space; voyage-multimodal-3.5 with video retrieval. On Azure, AWS, GCP, MongoDB Atlas.
Source: Voyage AI blog
- 1 Jan 2026
RAGdb — Embeddable SQLite RAG (no vector DB)
Single-file .ragdb SQLite database: ingestion, multimodal extraction, hybrid retrieval (TF-IDF + keyword) in one portable file. No Docker/cloud; ~99.5% smaller than typical RAG stacks. Python 3.9+, pip install ragdb.
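RAGdb's actual schema and scoring are its own; as a rough illustration of the single-file idea, here is a minimal SQLite chunk store with a simple TF-IDF-plus-keyword scorer. Every name below is illustrative, not RAGdb's API:

```python
import math
import sqlite3


def build_store(path: str, chunks: list[str]) -> sqlite3.Connection:
    """Corpus and index live together in one portable SQLite file."""
    con = sqlite3.connect(path)
    con.execute("CREATE TABLE IF NOT EXISTS chunks (id INTEGER PRIMARY KEY, text TEXT)")
    con.executemany("INSERT INTO chunks (text) VALUES (?)", [(c,) for c in chunks])
    con.commit()
    return con


def search(con: sqlite3.Connection, query: str, k: int = 3) -> list[str]:
    """Hybrid scoring: TF-IDF term weight plus a small bonus for exact keyword hits."""
    rows = [r[0] for r in con.execute("SELECT text FROM chunks")]
    n = len(rows)
    terms = query.lower().split()

    def score(doc: str) -> float:
        words = doc.lower().split()
        s = 0.0
        for t in terms:
            tf = words.count(t) / len(words) if words else 0.0
            df = sum(1 for r in rows if t in r.lower().split())
            idf = math.log((n + 1) / (df + 1)) + 1
            s += tf * idf
            if t in words:
                s += 0.1  # exact-keyword bonus
        return s

    return sorted(rows, key=score, reverse=True)[:k]
```

No external services are involved: the `.ragdb`-style file can be copied, emailed, or checked into version control like any other SQLite database.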
Source: GitHub
- 1 Jan 2026
Orca AI SDK — Unified LLM interface
Provider-agnostic library for OpenAI, Anthropic, Google Gemini, OpenRouter. Full async/sync and streaming. Simplifies multi-provider apps.
Source: PyPI
- 1 Jan 2026
Trinity-RFT — Reinforcement fine-tuning for LLMs
Framework for training LLMs with reinforcement fine-tuning (RFT). Python 3.10+. For researchers and practitioners scaling RFT.
Source: PyPI
- 30 Sep 2025
Model Context Protocol (MCP) — Agent tool standard
Open standard for connecting AI assistants to tools and data. Adopted by major vendors. Safer, auditable integrations for agents.
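In MCP, a server advertises each tool with a name, a description, and a JSON Schema describing its inputs, which is what makes integrations auditable. The field names below follow the MCP specification; the weather tool itself is a made-up example:

```json
{
  "name": "get_weather",
  "description": "Return current weather for a city",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": { "type": "string" }
    },
    "required": ["city"]
  }
}
```

A client lists these declarations, shows them to the model, and validates every call against the schema before it reaches the tool.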
Source: MCP site
AI news
- 19 Feb 2026
Google Gemini 3.1 Pro — Flagship model release
Google launched Gemini 3.1 Pro in February 2026 as its most capable model to date. It delivers roughly twice the reasoning performance of Gemini 3 Pro and scores 77.1% on the ARC-AGI-2 benchmark. The model supports a 1 million token context window and can output up to 65K tokens, making it suitable for long-document and code-generation tasks. It ranks first on 12 of 18 tracked benchmarks and excels at software engineering (80.6% on SWE-Bench Verified). Developers can access it via the Gemini API, Google AI Studio, Android Studio, and consumer-facing products.
Source: Google AI for Developers
- 17 Feb 2026
Anthropic Sonnet 4.6 — 1M context, stronger coding & computer use
Anthropic released Claude Sonnet 4.6 in February 2026 with a doubled context window of 1 million tokens (up from 200K). The model scores 60.4% on ARC-AGI-2, a benchmark aimed at human-like reasoning. Improvements focus on coding, instruction-following, and computer use (screen understanding and control). Sonnet 4.6 became the default model for both Free and Pro plan users on claude.ai and via the API, offering a strong balance of speed and capability for developers and power users.
Source: Anthropic
- 16 Feb 2026
Alibaba Qwen 3.5 — Agentic AI model with vision
Alibaba unveiled Qwen 3.5 in February 2026, positioning it for the "agentic AI era." The company claims around 60% lower cost and up to 8× better performance on large workloads compared to the previous generation. The model includes visual agentic capabilities, allowing it to understand screens and take actions across applications independently. It targets enterprise and developer use with stronger reasoning and tool use while reducing inference cost, and is available through Alibaba Cloud and open-weight variants.
Source: Alibaba Cloud
- 1 Feb 2026
HyperNova 60B — Compressed open LLM on Hugging Face
Spanish startup Multiverse Computing released HyperNova 60B 2602 in February 2026, a 50% compressed version of OpenAI's gpt-oss-120B model. Memory footprint drops from 61GB to 32GB using the company's quantum-inspired CompactifAI compression technology. The model shows significant gains in tool-calling and agentic coding, with around 1.5× improvement on the BFCL v4 benchmark. It is freely available on Hugging Face, offering a smaller, faster alternative for teams that need strong reasoning and tool use without the full 120B footprint.
Source: Hugging Face
- 20 Jan 2026
India AI Summit 2026 — $1.1B fund, 7-Sutra governance
The India AI Summit (India AI Impact Summit) in January 2026 set the tone for India's "AI for All" push. The government announced a $1.1 billion state-backed venture capital fund targeting AI and advanced manufacturing startups, with a goal to attract over $200 billion in AI infrastructure investment within two years. Compute will expand by 20,000 GPUs on top of the existing 38,000. India also released AI Governance Guidelines built around seven principles (the "7 Sutras"): Trust is the Foundation, People First, Innovation over Restraint, Fairness & Equity, Accountability, Understandable by Design, and Safety, Resilience & Sustainability. New institutions include the AI Governance Group, Technology & Policy Expert Committee, and AI Safety Institute. OpenAI will open offices in Bengaluru and Mumbai; Anthropic opened its first Indian office in Bengaluru. Eighty-eight countries signed the New Delhi AI Declaration, and India joined the Pax Silica group for AI infrastructure supply chain resilience.
Source: PIB
Learn Python & AI with us
Stay ahead with live 1:1 classes on Python, ML, RAG, and modern AI. Book a free demo.