This guide provides a quickstart for integrating the ai-coustics filter into your Pipecat applications.

Prerequisites

Before you start, make sure you have a valid license key from the developer portal.

Installation

To use AICFilter, install the aic extra for pipecat-ai. (This step is not needed when running the script with uv, because the dependencies are declared in the script's inline metadata.)
pip install "pipecat-ai[aic,local,webrtc]" loguru pyaudio fastapi uvicorn dotenv

Usage

The AICFilter attaches to an audio input transport (e.g., a microphone) via the audio_in_filter transport parameter, so incoming audio is enhanced before it enters the pipeline. Here’s a complete example of a simple Pipecat application that uses the AICFilter to loop the filtered microphone audio back to an audio output transport (the speaker).
# /// script
# requires-python = ">=3.10,<3.14"
# dependencies = [
#     "pipecat-ai[aic,local,webrtc]",
#     "loguru",
#     "llvmlite",
#     "pyaudio",
#     "fastapi",
#     "uvicorn",
#     "dotenv",
#     "pipecat-ai-small-webrtc-prebuilt",
# ]
# ///

import os

from loguru import logger

from pipecat.frames.frames import InputAudioRawFrame, OutputAudioRawFrame, Frame
from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineParams, PipelineTask
from pipecat.processors.frame_processor import FrameDirection, FrameProcessor
from pipecat.runner.types import RunnerArguments
from pipecat.runner.utils import create_transport
from pipecat.transports.base_transport import BaseTransport, TransportParams

# Import the AICFilter
from pipecat.audio.filters.aic_filter import AICFilter, AICModelType


# Loopback Processor
class AudioFrameConverter(FrameProcessor):
    """Re-wraps each InputAudioRawFrame as an OutputAudioRawFrame so the transport plays it back."""

    async def process_frame(self, frame: Frame, direction: FrameDirection):
        await super().process_frame(frame, direction)
        if isinstance(frame, InputAudioRawFrame):
            output_frame = OutputAudioRawFrame(
                audio=frame.audio,
                sample_rate=frame.sample_rate,
                num_channels=frame.num_channels,
            )
            await self.push_frame(output_frame, direction)
        else:
            await self.push_frame(frame, direction)


# Bot Logic
async def run_bot(transport: BaseTransport, runner_args: RunnerArguments):
    logger.info("Bot starting: Direct Audio Loopback with AIC Filter")

    converter = AudioFrameConverter()

    pipeline = Pipeline(
        [
            transport.input(),
            converter,
            transport.output(),
        ]
    )

    task = PipelineTask(
        pipeline,
        params=PipelineParams(),
    )

    @transport.event_handler("on_client_connected")
    async def on_client_connected(transport, client):
        logger.info("✅ WebRTC Client Connected")

    @transport.event_handler("on_client_disconnected")
    async def on_client_disconnected(transport, client):
        await task.cancel()

    runner = PipelineRunner(handle_sigint=runner_args.handle_sigint)
    await runner.run(task)


async def bot(runner_args: RunnerArguments):
    """Main bot entry point for the bot starter."""

    # Initialize the AIC Filter
    aic_filter = AICFilter(license_key=os.getenv("AIC_SDK_LICENSE", ""), model_type=AICModelType.QUAIL_STT)

    transport_params = {
        "webrtc": lambda: TransportParams(
            audio_in_enabled=True, audio_out_enabled=True, audio_in_filter=aic_filter
        )
    }

    transport = await create_transport(runner_args, transport_params)

    await run_bot(transport, runner_args)


if __name__ == "__main__":
    from pipecat.runner.run import main

    main()
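The AudioFrameConverter above does one simple thing: it re-wraps each input frame as an output frame, copying its fields. The pattern can be shown in isolation with plain dataclasses standing in for Pipecat's frame types (the classes below are illustrative stand-ins, not the real pipecat.frames.frames classes):

```python
from dataclasses import dataclass


# Minimal stand-ins for Pipecat's frame types (illustration only).
@dataclass
class InputAudioRawFrame:
    audio: bytes
    sample_rate: int
    num_channels: int


@dataclass
class OutputAudioRawFrame:
    audio: bytes
    sample_rate: int
    num_channels: int


def convert(frame):
    """Re-wrap an input audio frame as an output audio frame, copying its fields."""
    if isinstance(frame, InputAudioRawFrame):
        return OutputAudioRawFrame(
            audio=frame.audio,
            sample_rate=frame.sample_rate,
            num_channels=frame.num_channels,
        )
    # Any other frame type passes through unchanged.
    return frame


out = convert(InputAudioRawFrame(audio=b"\x00\x01", sample_rate=16000, num_channels=1))
print(type(out).__name__, out.sample_rate)
```

In the real pipeline, push_frame forwards the converted frame downstream, where transport.output() plays it back to the connected client.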

Running the Example

1. Save the code

Save the code above as bot.py.
2. Set Environment Variables

Set the necessary environment variable in your terminal:
export AIC_SDK_LICENSE="YOUR_AIC_LICENSE_KEY"
Replace the placeholder value with your actual license key.
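As a quick sanity check, you can confirm the variable is visible to Python the same way bot.py reads it (os.getenv with the AIC_SDK_LICENSE name used above). The setdefault line below only simulates the exported variable so the sketch runs standalone:

```python
import os

# Simulate the exported variable for this standalone sketch; in a real
# shell session it comes from `export AIC_SDK_LICENSE=...`.
os.environ.setdefault("AIC_SDK_LICENSE", "YOUR_AIC_LICENSE_KEY")

# Mirror how bot.py reads the key before constructing AICFilter.
license_key = os.getenv("AIC_SDK_LICENSE", "")
if not license_key:
    raise SystemExit("AIC_SDK_LICENSE is not set")
print(f"License key loaded ({len(license_key)} chars)")
```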
3. Run the Application

Execute the script from your terminal:
python bot.py
Or use uv:
uv run bot.py
You can now navigate to http://localhost:7860 and click the green ‘Connect’ button in the top right corner. Note that you will hear your filtered microphone audio processed by the Quail STT model; because this model is optimized for human-to-machine interaction, the audio is tuned for speech-to-text accuracy rather than listening comfort, and may sound distorted or unusual to your ears.

More information on AICFilter is available in the Pipecat documentation.