
Crafting an Intelligent Conference Assistant with .NET's Modular AI Toolkit

Last updated: 2026-05-09 01:34:25 · Data Science

Introduction

Integrating artificial intelligence into .NET applications has traditionally been a fragmented process. Developers often cobble together different models, vector databases, ingestion pipelines, and agent frameworks from disparate ecosystems, each with its own patterns and client libraries. The result is a brittle stack that can break with every new version. To address this, the .NET team has developed a set of composable, extensible building blocks that provide stable abstractions across the entire AI pipeline. This article demonstrates how these components come together in a real-world scenario: an interactive conference assistant built for MVP Summit.

Source: devblogs.microsoft.com

The ConferencePulse Application: An Overview

ConferencePulse is a Blazor Server application that transforms traditional conference sessions into dynamic, AI-powered experiences. Attendees scan a QR code to join a session, then interact with the presenter through live polls and a Q&A system. Behind the scenes, artificial intelligence enhances every aspect of the engagement. The AI generates poll questions based on session content, provides real-time answers to audience queries using a retrieval-augmented generation (RAG) pipeline, surfaces emerging patterns from poll results and questions, and automatically produces a comprehensive session summary when the presenter ends the session.

The goal was to create an interactive session without relying on static slides. Polls, audience insights, and even the preparation process are automated: the application can point to a GitHub repository, download its markdown content, process it through a pipeline, and build a searchable knowledge base. All poll questions, talking points, and Q&A answers are grounded in that specific content.

Live Polls and Q&A

During a session, the AI engine dynamically generates polling questions that align with the discussion topics. Attendees vote in real time, and the results are displayed instantly. For Q&A, the system employs a RAG pipeline that searches the session knowledge base, Microsoft Learn documentation, and GitHub wiki content to deliver accurate, context-aware answers. This helps ensure audience questions receive grounded answers even when the presenter is away from the podium.

Automated Insights and Summaries

As the session progresses, the AI analyzes incoming poll data and audience questions to identify patterns and trends. This provides the presenter with real-time insights into audience sentiment and understanding. When the session concludes, multiple AI agents collaboratively analyze the collected polls, questions, and insights, then merge their findings into a concise, actionable summary. This post-session report can be shared with attendees or used for future improvements.

The Composable AI Building Blocks

ConferencePulse is built on five key .NET libraries: Microsoft.Extensions.AI, Microsoft.Extensions.DataIngestion, Microsoft.Extensions.VectorData, the Model Context Protocol (MCP), and the Microsoft Agent Framework. Each plays a distinct role in the application's AI functionality.

Microsoft.Extensions.AI: Unified AI Abstractions

At the heart of all AI calls is IChatClient, a unified abstraction provided by Microsoft.Extensions.AI. This interface works with OpenAI, Azure OpenAI, Ollama, Foundry Local, and many other providers. It standardizes chat completion, embedding generation, and tool calling, allowing developers to switch providers without rewriting code. In ConferencePulse, every interaction—from generating poll questions to answering audience queries—uses this single interface, simplifying the codebase and future-proofing the application against provider changes.
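To make this concrete, here is a minimal sketch of how a feature like poll generation might sit on top of `IChatClient`. The `PollGenerator` class and the prompt text are illustrative, not taken from the ConferencePulse source; the method name `GetResponseAsync` follows recent Microsoft.Extensions.AI releases (earlier previews used `CompleteAsync`), so check the version you target.

```csharp
using Microsoft.Extensions.AI;

// The concrete client could be backed by OpenAI, Azure OpenAI, Ollama,
// Foundry Local, etc.; this code depends only on the IChatClient abstraction.
public sealed class PollGenerator(IChatClient chatClient)
{
    public async Task<string> GeneratePollQuestionAsync(string sessionTopic)
    {
        var messages = new List<ChatMessage>
        {
            new(ChatRole.System, "You generate one concise multiple-choice poll question."),
            new(ChatRole.User, $"Session topic: {sessionTopic}")
        };

        // The same call works regardless of which provider is registered in DI.
        ChatResponse response = await chatClient.GetResponseAsync(messages);
        return response.Text;
    }
}
```

Swapping providers then becomes a dependency-injection change rather than a code change: the registration picks the backend, and consumers like `PollGenerator` stay untouched.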

Data Ingestion and Vector Management

To build the knowledge base, the application relies on Microsoft.Extensions.DataIngestion and Microsoft.Extensions.VectorData. The ingestion pipeline downloads markdown files from a specified GitHub repository, cleans and chunks the content, and then indexes it into a vector database. Microsoft.Extensions.VectorData provides an abstraction over vector stores like Qdrant, enabling semantic search across the ingested content. This powers the RAG pipeline, so when an attendee asks a question, the system retrieves the most relevant passages and passes them to the AI model for answer generation.
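A sketch of the retrieval side, under the assumption that ingested markdown chunks are stored as records like `KnowledgeChunk` below. The attribute names (`VectorStoreKey`, `VectorStoreData`, `VectorStoreVector`), the `VectorStoreCollection<TKey, TRecord>` type, and `GenerateVectorAsync` follow recent Microsoft.Extensions.VectorData and Microsoft.Extensions.AI releases; earlier previews used different names, and the record shape here is hypothetical.

```csharp
using Microsoft.Extensions.AI;
using Microsoft.Extensions.VectorData;

// Hypothetical shape of one ingested markdown chunk.
public sealed class KnowledgeChunk
{
    [VectorStoreKey]
    public Guid Id { get; set; }

    [VectorStoreData]
    public string Text { get; set; } = "";

    [VectorStoreVector(Dimensions: 1536)]
    public ReadOnlyMemory<float> Embedding { get; set; }
}

public sealed class KnowledgeBaseSearch(
    VectorStoreCollection<Guid, KnowledgeChunk> collection,
    IEmbeddingGenerator<string, Embedding<float>> embeddingGenerator)
{
    public async Task<List<string>> FindRelevantPassagesAsync(string question, int top = 3)
    {
        // Embed the question with the same model used during ingestion,
        // then run a nearest-neighbour search against the (e.g. Qdrant-backed) store.
        var embedding = await embeddingGenerator.GenerateVectorAsync(question);

        var passages = new List<string>();
        await foreach (var result in collection.SearchAsync(embedding, top))
            passages.Add(result.Record.Text);
        return passages;
    }
}
```

Because both the collection and the embedding generator are abstractions, the same search code runs against Qdrant, an in-memory store for tests, or any other supported backend.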


Model Context Protocol and Agent Framework

The Model Context Protocol (MCP) standardizes how AI models interact with external tools and resources. In ConferencePulse, MCP servers expose tools for querying the knowledge base, fetching session metadata, and orchestrating workflows. The Microsoft Agent Framework builds on top of MCP, enabling the creation of autonomous agents that can plan, execute steps, and collaborate. For the session summary feature, multiple agents work in parallel—one analyzing polls, another reviewing questions, a third extracting insights—before a synthesis agent merges their outputs into a final report.
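An MCP tool of the kind described above might look like the following sketch, using the attribute-based model from the official ModelContextProtocol C# SDK. The tool name, its body, and the placeholder return value are illustrative; in the real application the method would delegate to the vector search component.

```csharp
using System.ComponentModel;
using ModelContextProtocol.Server;

// A hypothetical MCP tool exposing knowledge-base search to any MCP-capable model.
[McpServerToolType]
public static class SessionTools
{
    [McpServerTool, Description("Searches the session knowledge base for relevant passages.")]
    public static string SearchKnowledgeBase(string query)
    {
        // Placeholder so the sketch stays self-contained; the real tool
        // would run a vector search and return the top passages.
        return $"Top passages for: {query}";
    }
}
```

With the SDK's hosting extensions, registering such tools is typically a one-liner along the lines of `builder.Services.AddMcpServer().WithToolsFromAssembly();`, after which any connected model can discover and call them.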

Architecture and Implementation

The application runs on .NET 10 with Blazor Server and .NET Aspire for orchestration. The solution is organized into six projects:

  • ConferenceAssistant.Web – The Blazor Server UI and orchestration layer.
  • ConferenceAssistant.Core – Contains models, interfaces, and session state management.
  • ConferenceAssistant.Ingestion – Handles data ingestion and vector search operations.
  • ConferenceAssistant.Agents – Implements AI agents, workflows, and tools.
  • ConferenceAssistant.Mcp – Hosts MCP server tools and the MCP client.
  • ConferenceAssistant.AppHost – Orchestrates external dependencies like Qdrant, PostgreSQL, and Azure OpenAI via Aspire.

How the Pieces Fit Together

The workflow begins when the session is created. The Ingestion project points to a GitHub repository, downloads markdown content, processes it through the data ingestion pipeline, and loads the resulting vectors into a Qdrant store. During the session, the Web project uses Microsoft.Extensions.AI to call the AI model, which uses MCP tools to search the vector store and retrieve relevant context. The Agent Framework orchestrates more complex tasks like generating insights in real time and merging agent outputs for the final summary. Aspire handles the lifecycle of all services, from the vector database to the AI models, ensuring a smooth deployment experience.
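The Aspire side of this wiring can be sketched as an AppHost program like the one below. The resource names, the `Projects.ConferenceAssistant_Web` reference, and the connection-string approach for Azure OpenAI are illustrative assumptions; `AddQdrant` and `AddPostgres` come from the corresponding Aspire hosting packages.

```csharp
// AppHost sketch wiring the external dependencies the article lists.
var builder = DistributedApplication.CreateBuilder(args);

var qdrant = builder.AddQdrant("vectors");          // vector store for the knowledge base
var postgres = builder.AddPostgres("db");           // relational storage for sessions/polls
var openai = builder.AddConnectionString("openai"); // Azure OpenAI endpoint + key

builder.AddProject<Projects.ConferenceAssistant_Web>("web")
       .WithReference(qdrant)
       .WithReference(postgres)
       .WithReference(openai);

builder.Build().Run();
```

Aspire then starts and connects these resources together, injecting the connection details into the Web project so the ingestion, search, and agent components find their dependencies without hand-written configuration.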

This modular approach means that each component can be tested, replaced, or upgraded independently. The same building blocks that power a conference assistant can be reused for other AI-enhanced applications, from customer support chatbots to document analysis tools.

Conclusion

ConferencePulse demonstrates that building AI-powered .NET applications doesn't have to be a complex jumble of incompatible technologies. With composable building blocks like Microsoft.Extensions.AI, Microsoft.Extensions.DataIngestion, Microsoft.Extensions.VectorData, MCP, and the Agent Framework, developers can create intelligent, scalable applications that are easy to maintain and evolve. Whether you're building a conference assistant or any other AI-driven solution, this toolkit provides the abstractions and integrations needed to move fast while keeping your codebase clean and resilient.