In-App Processor

Mimi provides an additional library, MimiProcessor.framework, which can be used alongside the MimiSDK to provide personalized audio processing within an application.

This framework does much of the groundwork for integrating the MimiSDK and also provides Mimi Audio Processing. It is recommended only for applications that need personalized audio within their own environment.

Audio Integration

Processing of raw audio streams with the Mimi Processor framework is done via the C interface exposed in MimiAudioProcessor.h.

Setup & Teardown

For every stereo audio stream you want to process, you need an instance of a Mimi audio processor, represented by the MimiAudioProcessorPtr type. Audio processors are created via mimi_ap_create and must be set up for a specific sampling rate and the maximum number of frames expected to be passed into mimi_ap_process.

If the sampling rate changes, or the configured maximum number of frames would be exceeded, an audio processor can be reconfigured via mimi_ap_update. Note that this function must not be called while audio processing is running.

When you no longer need an audio processor, you can free its resources with mimi_ap_destroy.
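
As an illustration, the lifecycle might look like the sketch below. The exact signatures are assumptions derived from the names above (e.g. that mimi_ap_create takes a sampling rate and a maximum frame count); consult MimiAudioProcessor.h for the actual declarations:

#include "MimiAudioProcessor.h"

// Sketch of the processor lifecycle. Signatures are assumed from the
// function names above, not taken from the actual header.
MimiAudioProcessorPtr processor = NULL;

void audio_engine_setup(void) {
    // Create a processor for a 48 kHz stereo stream with at most
    // 512 frames per call to mimi_ap_process.
    processor = mimi_ap_create(48000.0, 512);
}

void audio_engine_reconfigure(double new_sample_rate, int new_max_frames) {
    // Reconfigure when the sample rate changes or the configured
    // maximum frame count would be exceeded. Must not be called
    // while audio processing is running.
    mimi_ap_update(processor, new_sample_rate, new_max_frames);
}

void audio_engine_teardown(void) {
    // Free the processor's resources once it is no longer needed.
    mimi_ap_destroy(processor);
    processor = NULL;
}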

Processing

The processing itself is done via mimi_ap_process. This function expects the audio data as a two-dimensional array of 2 audio buffers pointing to 32-bit floating-point data. Processing happens in-place, so the contents of those buffers are replaced with the processed data when mimi_ap_process returns. This function is real-time safe.
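
For illustration, a render callback could look like the following sketch; the parameter order of mimi_ap_process (processor, buffers, frame count) is an assumption based on the description above:

#include "MimiAudioProcessor.h"

// 'processor' is the instance created with mimi_ap_create above.
extern MimiAudioProcessorPtr processor;

void render_callback(float *left, float *right, int num_frames) {
    // Two buffers of 32-bit floating-point samples, one per channel.
    float *channels[2] = { left, right };

    // In-place: 'left' and 'right' contain the processed audio once
    // mimi_ap_process returns. Safe to call on the real-time thread.
    mimi_ap_process(processor, channels, num_frames);
}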

Parameter updates, i.e. turning the effect on or off, or changing the intensity or preset, are handled exclusively via the MimiSDK and automatically applied to the Mimi audio processor.

Latency & Debugging

The effective processing latency depends on the sample rate configured via mimi_ap_create or mimi_ap_update, and can be queried as a sample count via mimi_ap_get_processing_latency.
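
For example (assuming the query takes only the processor and returns the latency as an integer sample count):

// Latency in samples at the currently configured sample rate.
int latency_samples = mimi_ap_get_processing_latency(processor);

// At 48 kHz (the rate used in the sketch above) this corresponds to:
double latency_seconds = latency_samples / 48000.0;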

For debugging, you can obtain additional information via mimi_ap_get_debug_info and mimi_ap_log_debug_info. Both provide the same information, either returned as an AudioProcessingDebugInfo structure or printed directly to stdout, respectively. Note that printing to stdout is not suitable for the real-time audio thread.
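
A sketch of both variants, assuming each takes only the processor and that mimi_ap_get_debug_info returns the structure by value:

// Returned as a structure; suitable for inspection on any thread.
AudioProcessingDebugInfo info = mimi_ap_get_debug_info(processor);

// Prints the same information to stdout. Not real-time safe, so only
// call this from a non-real-time thread.
mimi_ap_log_debug_info(processor);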

Hooking up MimiSDK

As mentioned previously, the MimiProcessor framework does most of the integration work with the MimiSDK, so getting set up is easy:

import UIKit
import MimiSDK
import MimiProcessor

class AppDelegate: UIResponder, UIApplicationDelegate, MimiSDKDelegate {

    func application(_ application: UIApplication, 
                    didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {

        // Configure MimiSDK
        Mimi.start(credentials: .client(id: "YOUR_CLIENT_ID", secret: "YOUR_CLIENT_SECRET"),
                   delegate: self)

        // Activate Mimi Processing
        do {
            try MimiProcessor.shared.activate()
        } catch {
            // Handle activation error.
        }

        return true
    }
}

Important: for MimiProcessor to activate successfully, an instance of the Mimi audio processor must already exist at the time of activation (created via mimi_ap_create).

At this point, MimiProcessor is hooked up to the MimiSDK and will automatically receive any and all updates and configuration. You can now start integrating the audio processing into your audio stack.