Version 2.0 separates model loading from audio processing, ships models via external downloads, and introduces breaking API changes. Use this guide to migrate safely and verify behavior before rolling to production.
License keys generated for version 1.3 do not work with 2.0. Generate a new key from the developer portal.

Quick migration checklist

  1. Regenerate your license key
     Create a new license key in the developer portal and store it in AIC_SDK_LICENSE.
  2. Update model IDs
     Replace 1.3 enums with 2.0 model IDs (for example QUAIL_VF_STT_L16 → quail-vf-l-16khz); see the Model Name Migration Guide.
  3. Adopt the new architecture
     Download models with aic.Model.download, load them with aic.Model.from_file, then create a ProcessorConfig and a Processor. Parameters are now set through a ProcessorContext instead of the processor, which enables safer multi-threaded usage.
  4. Rename parameters
     Update AICParameter/AICVadParameter to ProcessorParameter/VadParameter (CamelCase).
  5. Validate async usage
     Switch to ProcessorAsync for asynchronous processing and Model.download_async for downloads.

Breaking changes summary

  1. Import name changed from aic to aic_sdk (use import aic_sdk as aic for convenience).
  2. Model names changed (Quail → Sparrow, Quail-STT → Quail).
  3. New license keys required; old keys fail in 2.0.
  4. New architecture splits Model, ProcessorConfig, Processor and ProcessorContext.
  5. Parameter enums renamed from SNAKE_CASE to CamelCase.
  6. Models are downloaded separately; no longer bundled.

Model naming changes

Models were renamed to clarify their use cases; see the Model Name Migration Guide.

Architecture changes

# 1.3 - the Model handled loading, configuration, and processing in one object
from aic import Model, AICModelType

model = Model(
    AICModelType.QUAIL_VF_STT_L16,
    license_key=license_key,
    sample_rate=16000,
    channels=1,
    frames=160,
)
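
In 2.0 these responsibilities are split across Model, ProcessorConfig, and Processor. A minimal sketch, assuming the quail-vf-l-16khz model ID and a local ./models directory:

import os

import aic_sdk as aic

license_key = os.getenv("AIC_SDK_LICENSE")

# 2.0 - download and load the model separately from processing
model_path = aic.Model.download("quail-vf-l-16khz", "./models")
model = aic.Model.from_file(model_path)

# Configuration lives in ProcessorConfig; the Processor does the audio work
config = aic.ProcessorConfig.optimal(model, num_channels=1, allow_variable_frames=False)
processor = aic.Processor(model, license_key, config)
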
Benefits
  • Reuse one model across multiple processors.
  • Smaller package footprint because models download separately.

API changes (quick reference)

Operation | 1.3 | 2.0
Import | from aic import Model, AICModelType, AICParameter | import aic_sdk as aic
Model creation | Model(AICModelType.QUAIL_VF_STT_L16, ...) | Model.download() + Model.from_file()
Configuration | Inline constructor params | ProcessorConfig
Processing | model.process(audio) | processor.process(audio)
Set parameter | model.set_parameter(AICParameter.*, val) | proc_ctx.set_parameter(ProcessorParameter.*, val)
Get parameter | model.get_parameter(AICParameter.*) | proc_ctx.get_parameter(ProcessorParameter.*)
Get latency | model.processing_latency() | proc_ctx.get_output_delay()
Optimal frames | model.optimal_num_frames() | model.get_optimal_num_frames(sample_rate)
Optimal sample rate | model.optimal_sample_rate() | model.get_optimal_sample_rate()
Reset state | model.reset() | proc_ctx.reset()
Create VAD | model.create_vad() | processor.get_vad_context()
VAD parameters | vad.set_parameter(AICVadParameter.*, val) | vad.set_parameter(VadParameter.*, val)
Cleanup | model.close() | Automatic
Parameters are now controlled through ProcessorContext (obtained via processor.get_processor_context()) instead of directly on the processor. This design enables safer multi-threaded usage where each thread can maintain its own processing context.
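
A short illustration of this change, assuming the parameter enums are exposed on the aic_sdk namespace and a processor created as in the architecture sketch above:

proc_ctx = processor.get_processor_context()

# Replaces model.set_parameter(AICParameter.ENHANCEMENT_LEVEL, 0.8)
proc_ctx.set_parameter(aic.ProcessorParameter.EnhancementLevel, 0.8)
level = proc_ctx.get_parameter(aic.ProcessorParameter.EnhancementLevel)

output_delay = proc_ctx.get_output_delay()  # replaces model.processing_latency()
proc_ctx.reset()                            # replaces model.reset()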

Parameter changes

ProcessorParameter (was AICParameter)

1.3 | 2.0
AICParameter.BYPASS | ProcessorParameter.Bypass
AICParameter.ENHANCEMENT_LEVEL | ProcessorParameter.EnhancementLevel
AICParameter.VOICE_GAIN | ProcessorParameter.VoiceGain
AICParameter.NOISE_GATE_ENABLE | Removed (now used automatically by the VAD)

VadParameter (was AICVadParameter)

1.3 | 2.0
AICVadParameter.SPEECH_HOLD_DURATION | VadParameter.SpeechHoldDuration
AICVadParameter.SENSITIVITY | VadParameter.Sensitivity
AICVadParameter.MINIMUM_SPEECH_DURATION | VadParameter.MinimumSpeechDuration
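
The same renames apply when setting VAD parameters through the VAD context. A minimal sketch, again assuming the enums are exposed on the aic_sdk namespace and an existing processor:

vad_ctx = processor.get_vad_context()

# 1.3: vad.set_parameter(AICVadParameter.SENSITIVITY, 6.0)
vad_ctx.set_parameter(aic.VadParameter.Sensitivity, 6.0)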

Complete migration example

# 1.3 - complete example using the old API
import os
from aic import Model, AICModelType, AICParameter, AICVadParameter
import numpy as np

license_key = os.getenv("AIC_SDK_LICENSE")

model = Model(
    AICModelType.QUAIL_VF_STT_L16,
    license_key=license_key,
    sample_rate=16000,
    channels=1,
    frames=160,
)

print(f"Latency: {model.processing_latency() / 16000 * 1000:.2f}ms")
print(f"Optimal frames: {model.optimal_num_frames()}")

model.set_parameter(AICParameter.ENHANCEMENT_LEVEL, 0.8)

vad = model.create_vad()
vad.set_parameter(AICVadParameter.SENSITIVITY, 6.0)

audio = np.zeros((1, 160), dtype=np.float32)
model.process(audio)

if vad.is_speech_detected():
    print("Speech detected")

model.close()
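
A corresponding 2.0 sketch is shown below. Every call follows the API mapping earlier in this guide, assuming the enums are reachable from the aic_sdk namespace; whether the 2.0 VAD context still exposes is_speech_detected() is an assumption carried over from 1.3, so treat that check as illustrative.

import os

import aic_sdk as aic
import numpy as np

license_key = os.getenv("AIC_SDK_LICENSE")

# 2.0 - download and load the model, then build a processor around it
model_path = aic.Model.download("quail-vf-l-16khz", "./models")
model = aic.Model.from_file(model_path)

sample_rate = model.get_optimal_sample_rate()
frames = model.get_optimal_num_frames(sample_rate)

config = aic.ProcessorConfig(
    sample_rate=sample_rate,
    num_channels=1,
    num_frames=frames,
    allow_variable_frames=False,
)
processor = aic.Processor(model, license_key, config)

# Parameters now live on the processor context
proc_ctx = processor.get_processor_context()
proc_ctx.set_parameter(aic.ProcessorParameter.EnhancementLevel, 0.8)

# get_output_delay() replaces processing_latency(); assumed to be in samples
print(f"Output delay: {proc_ctx.get_output_delay() / sample_rate * 1000:.2f}ms")
print(f"Optimal frames: {frames}")

# The VAD is obtained from the processor, not the model
vad = processor.get_vad_context()
vad.set_parameter(aic.VadParameter.Sensitivity, 6.0)

audio = np.zeros((1, frames), dtype=np.float32)
processor.process(audio)

# Assumption: the VAD context keeps the 1.3-style speech query
if vad.is_speech_detected():
    print("Speech detected")

# No model.close() - cleanup is automatic in 2.0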

Async processing

# 1.3 - async methods on Model
result = await model.process_async(audio)
future = model.process_submit(audio)
result = await model.process_interleaved_async(audio, channels)
1.3 | 2.0
model.process_async(audio) | ProcessorAsync.process_async(audio)
model.process_submit(audio) | Use asyncio primitives directly
model.process_interleaved_async() | Not available (use process_async with a numpy array of shape (channels, frames))
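
A minimal 2.0 async sketch, assuming ProcessorAsync takes the same constructor arguments as Processor (only process_async and download_async are specified above):

import asyncio
import os

import aic_sdk as aic
import numpy as np


async def main():
    license_key = os.getenv("AIC_SDK_LICENSE")

    # 2.0 - asynchronous model download
    model_path = await aic.Model.download_async("quail-vf-l-16khz", "./models")
    model = aic.Model.from_file(model_path)

    config = aic.ProcessorConfig.optimal(model, num_channels=1, allow_variable_frames=False)

    # Assumption: ProcessorAsync mirrors the Processor constructor
    processor = aic.ProcessorAsync(model, license_key, config)

    frames = model.get_optimal_num_frames(model.get_optimal_sample_rate())
    audio = np.zeros((1, frames), dtype=np.float32)

    result = await processor.process_async(audio)


asyncio.run(main())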

Removed features

  • AICModelType enum (use model ID strings such as "quail-vf-l-16khz").
  • process_interleaved() and process_sequential() (use process with a numpy array of shape (channels, frames); see the sketch after this list).
  • Context manager support (with Model(...) as m:).
  • NOISE_GATE_ENABLE parameter (now used automatically by the VAD).
  • process_submit() future API (use asyncio with ProcessorAsync).
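
For the interleaved/sequential removals, the replacement is a single process() call on a (channels, frames) float32 array. A minimal deinterleaving sketch, assuming an existing processor and a stereo buffer laid out as [L0, R0, L1, R1, ...]:

import numpy as np

channels, frames = 2, 480

# 1.3-style interleaved buffer
interleaved = np.zeros(frames * channels, dtype=np.float32)

# 2.0 expects (channels, frames); deinterleave with reshape + transpose
audio = np.ascontiguousarray(interleaved.reshape(frames, channels).T)

processor.process(audio)  # replaces model.process_interleaved(...)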

New features

Model downloading

import aic_sdk as aic

# Synchronous
model_path = aic.Model.download("sparrow-xxs-48khz", "./models")

# Asynchronous
model_path = await aic.Model.download_async("sparrow-xxs-48khz", "./models")

Model reuse

import aic_sdk as aic

model = aic.Model.from_file(model_path)

processor1 = aic.Processor(model, license_key, config)
processor2 = aic.Processor(model, license_key, config)

Configure the Processor

# Get optimal configuration for the model
config = aic.ProcessorConfig.optimal(model, num_channels=1, allow_variable_frames=False)
print(config)  # ProcessorConfig(sample_rate=48000, num_channels=1, num_frames=480, allow_variable_frames=False)

# Or create from scratch
config = aic.ProcessorConfig(
    sample_rate=48000,
    num_channels=2,
    num_frames=480,
    allow_variable_frames=False  # when True, frame counts up to num_frames are accepted
)

# Option 1: Create and initialize in one step
processor = aic.Processor(model, license_key, config)

# Option 2: Create first, then initialize separately
processor = aic.Processor(model, license_key)
processor.initialize(config)

Model information

# Get model ID
model_id = model.get_id()

# Get optimal sample rate for the model
optimal_rate = model.get_optimal_sample_rate()

# Get optimal frame count for a specific sample rate
optimal_frames = model.get_optimal_num_frames(48000)

Specific exception types

import aic_sdk as aic

try:
    processor = aic.Processor(model, license_key, config)
except aic.LicenseFormatInvalidError as e:
    print(f"Invalid license format: {e.message}")
except aic.LicenseExpiredError as e:
    print(f"License expired: {e.message}")
except aic.ModelInvalidError as e:
    print(f"Invalid model: {e.message}")
Check the type stubs file for all error types and the complete API.

Need help?