Building Conversational AI with LLMs and Agents
Appendix P: Semantic Kernel: Enterprise AI Orchestration

Kernel Setup, Plugins, and Functions

Big Picture

Microsoft's Semantic Kernel (SK) is an open-source SDK that lets you orchestrate LLM calls alongside conventional code. Unlike prompt-only frameworks, SK treats AI capabilities as plugins with well-typed function signatures, making them composable, testable, and easy to integrate into enterprise .NET and Python applications. This section walks through kernel creation, plugin architecture, native versus semantic functions, and dependency injection patterns that keep your AI code maintainable at scale.

1. Installing Semantic Kernel

Semantic Kernel is distributed as a Python package on PyPI. The core library has minimal dependencies, with optional extras for specific connectors such as Azure OpenAI, Hugging Face, and various vector stores.

# Install the core package (the OpenAI connector ships with it)
pip install semantic-kernel

# Verify the installation
import semantic_kernel as sk
print(sk.__version__)  # e.g., "1.14.0"

For Azure OpenAI users, the same package includes the Azure connector. You only need to supply different environment variables (or configuration objects) to switch between OpenAI and Azure OpenAI endpoints.
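In typical setups, the connector settings classes read their configuration from environment variables. The sketch below shows the two configurations side by side; the variable names match what recent semantic-kernel Python releases document, but confirm them against your installed version.

```shell
# .env-style configuration consumed by SK's connector settings.
# Variable names per recent semantic-kernel Python releases;
# verify against your installed version.

# Plain OpenAI
OPENAI_API_KEY="sk-..."
OPENAI_CHAT_MODEL_ID="gpt-4o"

# Azure OpenAI: register AzureChatCompletion instead of
# OpenAIChatCompletion and supply these variables instead
AZURE_OPENAI_API_KEY="..."
AZURE_OPENAI_ENDPOINT="https://my-resource.openai.azure.com/"
AZURE_OPENAI_CHAT_DEPLOYMENT_NAME="gpt-4o"
```

With either set of variables in place, no code changes are needed beyond swapping the connector class at registration time.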

2. Creating and Configuring the Kernel

The Kernel object is the central orchestrator. It holds references to AI services, plugins, and configuration. You create one kernel per application (or per request scope in a web service) and register services on it.

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import (
    OpenAIChatCompletion,
)

# Create the kernel
kernel = sk.Kernel()

# Register an OpenAI chat completion service
kernel.add_service(
    OpenAIChatCompletion(
        service_id="chat",
        ai_model_id="gpt-4o",
        api_key="sk-...",          # or use env var OPENAI_API_KEY
    )
)

print(kernel.services)  # {'chat': OpenAIChatCompletion(...)}

The service_id string is an arbitrary label you choose. When you later invoke a function, SK resolves which service to use based on this identifier. You can register multiple services (for example, a fast model for drafts and a capable model for final outputs) and select between them at call time.
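The resolution mechanism is easiest to picture as a plain dictionary keyed by service_id. The following is an illustrative sketch of that idea, not SK's actual implementation; the class names are hypothetical.

```python
# Illustrative sketch of service_id resolution (not SK's real code):
# the kernel is, at its core, a registry mapping labels to services.

class FakeService:
    """Stand-in for a chat completion service."""
    def __init__(self, model: str):
        self.model = model

class MiniKernel:
    def __init__(self):
        self._services: dict[str, FakeService] = {}

    def add_service(self, service_id: str, service: FakeService) -> None:
        self._services[service_id] = service

    def get_service(self, service_id: str) -> FakeService:
        # Call sites select a service by its arbitrary label
        return self._services[service_id]

kernel = MiniKernel()
kernel.add_service("draft", FakeService("gpt-4o-mini"))  # fast, cheap
kernel.add_service("final", FakeService("gpt-4o"))       # capable

print(kernel.get_service("draft").model)  # gpt-4o-mini
print(kernel.get_service("final").model)  # gpt-4o
```

The real kernel layers type checks and defaults on top of this lookup, but the draft-versus-final selection pattern is exactly this: two registrations, one label chosen per call.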

Key Insight

The kernel itself is stateless with respect to conversation history. It does not automatically remember prior turns. If you need multi-turn chat, you manage a ChatHistory object and pass it on each invocation. This explicit design avoids hidden state bugs in production systems.
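The caller-owns-the-state pattern can be sketched without any SK types at all. The stub model below is hypothetical; the point is that the full history is passed explicitly on every call, so there is no hidden state to drift out of sync.

```python
# Sketch of explicit conversation-state management (hypothetical stub
# model, not SK's ChatHistory): the caller owns all history.

def stub_chat_model(history: list[dict]) -> str:
    """Pretend LLM: reports how many messages it was shown."""
    return f"reply after {len(history)} messages"

history: list[dict] = []
for user_turn in ["What is Python?", "What are its main libraries?"]:
    history.append({"role": "user", "content": user_turn})
    reply = stub_chat_model(history)   # full history passed every call
    history.append({"role": "assistant", "content": reply})

print(len(history))  # 4: two user turns, two assistant turns
```

SK's ChatHistory object plays the role of the list here: you append turns to it and hand it to each invocation yourself.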

3. Plugins: The Organizational Unit

A plugin in Semantic Kernel is a named collection of related functions. Think of it like a Python module or a REST controller: it groups capabilities by domain. For example, a "WriterPlugin" might contain functions for summarizing, translating, and rewriting text.

Plugins can contain two kinds of functions: native functions (regular Python code) and semantic functions (prompt templates that call an LLM). Both are registered in the same plugin namespace and can call each other.

from semantic_kernel.functions import kernel_function

class MathPlugin:
    """A plugin with native (code-only) functions."""

    @kernel_function(
        name="add",
        description="Adds two numbers together.",
    )
    def add(self, a: float, b: float) -> float:
        return a + b

    @kernel_function(
        name="multiply",
        description="Multiplies two numbers.",
    )
    def multiply(self, a: float, b: float) -> float:
        return a * b

# Register the plugin on the kernel
kernel.add_plugin(MathPlugin(), plugin_name="Math")

# List registered functions
for fn in kernel.get_full_list_of_function_metadata():
    print(f"{fn.plugin_name}.{fn.name}: {fn.description}")

The @kernel_function decorator marks a method as callable by the kernel (and by planners, which we cover in Section P.3). The description parameter is critical: planners and function-calling models use it to decide when to invoke your function.

4. Native Functions in Detail

Native functions are ordinary Python methods decorated with @kernel_function. They execute deterministically on your CPU, making them ideal for data validation, API calls, database queries, and mathematical operations. The kernel inspects type annotations to generate the function schema automatically.

import httpx
from semantic_kernel.functions import kernel_function

class WeatherPlugin:
    """Fetches current weather data from an external API."""

    def __init__(self, api_key: str):
        self.api_key = api_key
        self.client = httpx.AsyncClient()

    @kernel_function(
        name="get_current_weather",
        description="Gets the current weather for a given city.",
    )
    async def get_current_weather(self, city: str) -> str:
        resp = await self.client.get(
            "https://api.weatherapi.com/v1/current.json",
            params={"key": self.api_key, "q": city},
        )
        data = resp.json()
        temp = data["current"]["temp_c"]
        condition = data["current"]["condition"]["text"]
        return f"{city}: {temp}C, {condition}"

# Register with constructor arguments
kernel.add_plugin(
    WeatherPlugin(api_key="your-weather-key"),
    plugin_name="Weather",
)

Notice that native functions can be asynchronous. Semantic Kernel's invocation pipeline is async-first, so async def is the preferred signature for I/O-bound work. Synchronous functions are also supported; the kernel wraps them internally.
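The schema generation described above can be sketched in plain Python. This is an illustrative mock of the mechanism, not SK's implementation: a decorator attaches name and description, and introspection of the signature and type hints produces an OpenAI-style parameter schema.

```python
import inspect
import typing

def kernel_function(name: str, description: str):
    """Minimal stand-in for SK's decorator: attaches metadata only."""
    def wrap(fn):
        fn.sk_name, fn.sk_description = name, description
        return fn
    return wrap

@kernel_function(name="add", description="Adds two numbers together.")
def add(a: float, b: float) -> float:
    return a + b

def to_tool_spec(fn) -> dict:
    """Build a function-calling schema from signature + metadata."""
    hints = typing.get_type_hints(fn)
    params = {
        p: {"type": "number" if hints.get(p) is float else "string"}
        for p in inspect.signature(fn).parameters
    }
    return {
        "name": fn.sk_name,
        "description": fn.sk_description,  # what the model reads
        "parameters": {"type": "object", "properties": params},
    }

spec = to_tool_spec(add)
print(spec["description"])  # Adds two numbers together.
```

This is why accurate type annotations and descriptions matter: they are the only information a planner or function-calling model receives about your function.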

5. Semantic Functions: Prompt-Driven AI Calls

A semantic function is a prompt template that gets sent to an LLM service. You define it either inline (as a string) or from a directory structure. Semantic functions live alongside native functions inside the same plugin.

from semantic_kernel.prompt_template import PromptTemplateConfig

# Define a semantic function inline
prompt = """Summarize the following text in exactly three bullet points.

Text: {{$input}}

Summary:"""

summary_fn = kernel.add_function(
    plugin_name="Writer",
    function_name="summarize",
    prompt=prompt,
    prompt_template_config=PromptTemplateConfig(
        template_format="semantic-kernel",
        input_variables=[
            {"name": "input", "description": "Text to summarize"}
        ],
    ),
)

# Invoke the function
result = await kernel.invoke(summary_fn, input="Semantic Kernel is...")
print(result)
- Semantic Kernel is an open-source SDK that integrates LLMs into applications
- It provides a plugin architecture for combining AI services with native code
- The framework supports multiple LLM providers and offers built-in prompt management

The {{$input}} syntax uses SK's template language (covered in depth in Section P.2). Variables prefixed with $ are resolved from the arguments you pass at invocation time.
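Variable resolution can be mimicked in a few lines of plain Python. This is an illustrative mock only; SK's real template engine also supports function calls and other constructs beyond simple variables.

```python
import re

# Illustrative mock of {{$var}} resolution (SK's real engine does
# much more): substitute each variable from the argument dict.

def render(template: str, arguments: dict[str, str]) -> str:
    return re.sub(
        r"\{\{\$(\w+)\}\}",
        lambda m: arguments.get(m.group(1), ""),
        template,
    )

prompt = "Summarize the following text.\n\nText: {{$input}}\n\nSummary:"
print(render(prompt, {"input": "SK is an SDK for LLM orchestration."}))
```

Missing variables render as empty strings in this sketch; in practice you should validate that every declared input variable is supplied before invoking.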

Tip

For projects with many semantic functions, organize them in a directory structure: one folder per plugin, one subfolder per function, each containing a skprompt.txt and a config.json. Then load the entire plugin with kernel.add_plugin(parent_directory="./plugins/WriterPlugin"). This keeps prompts version-controlled and separate from application code.

6. Dependency Injection and Service Registration

In production applications, you rarely hard-code API keys or instantiate services directly. Semantic Kernel integrates with Python's dependency injection patterns. You can construct plugins with injected dependencies (database connections, HTTP clients, configuration objects) and register them on the kernel.

from dataclasses import dataclass

@dataclass
class AppConfig:
    weather_api_key: str
    db_connection_string: str
    default_model: str = "gpt-4o"

def build_kernel(config: AppConfig) -> sk.Kernel:
    """Factory function that wires up the kernel with dependencies."""
    kernel = sk.Kernel()

    # Register AI service
    kernel.add_service(
        OpenAIChatCompletion(
            service_id="chat",
            ai_model_id=config.default_model,
        )
    )

    # Register plugins with injected config
    kernel.add_plugin(
        WeatherPlugin(api_key=config.weather_api_key),
        plugin_name="Weather",
    )
    kernel.add_plugin(MathPlugin(), plugin_name="Math")

    return kernel

# In your application entry point
config = AppConfig(
    weather_api_key="...",
    db_connection_string="...",
)
kernel = build_kernel(config)

This factory pattern makes testing straightforward. In unit tests, you can pass a mock configuration that points to stub services instead of real APIs. The kernel does not care where its services come from; it only needs objects that satisfy the expected interfaces.
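A concrete example of that testability: because WeatherPlugin receives its dependencies through the constructor, a unit test can inject a fake client. The sketch below simplifies the earlier plugin (the stub returns parsed data directly rather than an HTTP response object, and runs synchronously); FakeClient is a hypothetical test double, not an SK class.

```python
# Sketch: injecting a fake client into a plugin for unit tests.
# FakeClient is an illustrative test double, not part of SK.

class FakeClient:
    """Stub client returning canned, already-parsed weather data."""
    def get(self, url: str, params: dict) -> dict:
        return {"current": {"temp_c": 21.0,
                            "condition": {"text": "Sunny"}}}

class WeatherPlugin:
    def __init__(self, client):
        self.client = client  # injected dependency

    def get_current_weather(self, city: str) -> str:
        data = self.client.get("https://example.invalid",
                               params={"q": city})
        temp = data["current"]["temp_c"]
        condition = data["current"]["condition"]["text"]
        return f"{city}: {temp}C, {condition}"

# In a unit test: no network access, no API key required
plugin = WeatherPlugin(client=FakeClient())
print(plugin.get_current_weather("Oslo"))  # Oslo: 21.0C, Sunny
```

The plugin code under test is unchanged between production and test; only the injected object differs.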

7. Invoking Functions and Handling Results

Every function invocation returns a FunctionResult object. This object carries the raw output, metadata about token usage, and any errors that occurred during execution.

# Invoke a native function
result = await kernel.invoke(
    kernel.get_function("Math", "add"),
    a=3.0,
    b=7.0,
)
print(result.value)  # 10.0

# Invoke a semantic function
result = await kernel.invoke(
    kernel.get_function("Writer", "summarize"),
    input="Large Language Models are neural networks trained on...",
)
print(str(result))         # The summary text
print(result.metadata)     # Token counts, model info, latency
Warning

Semantic function results are strings by default. If you need structured output (JSON, typed objects), use SK's structured output support or parse the result in a follow-up native function. Do not assume the LLM will always return valid JSON without explicit instructions and validation.

8. Putting It All Together: A Complete Example

The following example creates a kernel with both native and semantic functions, then chains them together to answer a question that requires both computation and language generation.

import asyncio
import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.functions import kernel_function

class UnitConverter:
    @kernel_function(name="celsius_to_fahrenheit",
                     description="Converts Celsius to Fahrenheit.")
    def celsius_to_fahrenheit(self, celsius: float) -> float:
        return celsius * 9 / 5 + 32

async def main():
    kernel = sk.Kernel()
    kernel.add_service(
        OpenAIChatCompletion(service_id="chat", ai_model_id="gpt-4o")
    )
    kernel.add_plugin(UnitConverter(), plugin_name="Units")

    # Add a semantic function that uses the conversion result
    explain_prompt = """The temperature {{$celsius}}C equals {{$fahrenheit}}F.
Write a brief, friendly explanation of what this temperature feels like
and what clothing someone should wear."""

    kernel.add_function(
        plugin_name="Writer",
        function_name="explain_temp",
        prompt=explain_prompt,
    )

    # Step 1: Convert
    conv_result = await kernel.invoke(
        kernel.get_function("Units", "celsius_to_fahrenheit"),
        celsius=28.0,
    )

    # Step 2: Explain
    explanation = await kernel.invoke(
        kernel.get_function("Writer", "explain_temp"),
        celsius="28",
        fahrenheit=str(conv_result.value),
    )
    print(explanation)

asyncio.run(main())
At 28C (82.4F), it's a warm and pleasant day! You'll be comfortable in a light t-shirt or blouse with shorts or a summer dress. If you're spending time outdoors, bring sunglasses and consider sunscreen. A light breeze would make this temperature feel perfect for a walk in the park.

This two-step pattern (compute, then narrate) is fundamental to Semantic Kernel applications. In Section P.3, you will learn how planners automate this sequencing, letting the LLM decide which functions to call and in what order.