How to apply audio effect to a file and write to filesystem – iOS

I’m building an app that should allow the user to apply audio filters, such as Reverb and Boost, to recorded audio.

I was unable to find any viable source of information on how to apply filters to the file itself, which is needed because the processed file has to be uploaded to a server later.

I’m currently using AudioKit for visualization, and I’m aware that it’s capable of doing audio processing, but only for playback. Please give any suggestions for further research.

    2 Answers

    AudioKit has an offline render node that doesn’t require iOS 11. Here’s an example. The player.schedule(…) and player.play(at:) calls are required because AKAudioPlayer’s underlying AVAudioPlayerNode will block the calling thread waiting for the next render if you start it with player.play().

    import UIKit
    import AudioKit
    import AVFoundation // for AVAudioTime
    
    class ViewController: UIViewController {
    
        var player: AKAudioPlayer?
        var reverb = AKReverb()
        var boost = AKBooster()
        var offlineRender = AKOfflineRenderNode()
    
        override func viewDidLoad() {
            super.viewDidLoad()
    
            guard let url = Bundle.main.url(forResource: "theFunkiestFunkingFunk", withExtension: "mp3") else {
                return
            }
            do {
                let audioFile = try AKAudioFile(forReading: url)
                player = try AKAudioPlayer(file: audioFile)
            } catch {
                print(error)
                return
            }
            guard let player = player else {
                return
            }
    
    
            player >>> reverb >>> boost >>> offlineRender
    
            AudioKit.output = offlineRender
            AudioKit.start()
    
    
            let docs = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask).first!
            let dstURL = docs.appendingPathComponent("rendered.caf")
    
            // pause realtime rendering while we pull samples offline
            offlineRender.internalRenderEnabled = false
            // schedule the whole file and start playback at sample time zero
            player.schedule(from: 0, to: player.duration, avTime: nil)
            let sampleTimeZero = AVAudioTime(sampleTime: 0, atRate: AudioKit.format.sampleRate)
            player.play(at: sampleTimeZero)
            do {
                // pull the rendered audio and write it to the destination file
                try offlineRender.renderToURL(dstURL, seconds: player.duration)
            } catch {
                print(error)
                return
            }
            offlineRender.internalRenderEnabled = true
    
            print("Done! Rendered to " + dstURL.path)
        }
    }
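
    Since the question mentions uploading the processed file to a server afterwards, here is a minimal sketch of doing that with URLSession once renderToURL has returned (for example at the end of viewDidLoad above); the endpoint URL and content type are placeholders, not part of the original answer.

    // Sketch only: upload the rendered file; "https://example.com/upload" is a placeholder endpoint.
    var request = URLRequest(url: URL(string: "https://example.com/upload")!)
    request.httpMethod = "POST"
    request.setValue("audio/x-caf", forHTTPHeaderField: "Content-Type")

    let task = URLSession.shared.uploadTask(with: request, fromFile: dstURL) { _, response, error in
        if let error = error {
            print("Upload failed: \(error)")
        } else if let http = response as? HTTPURLResponse {
            print("Upload finished with status \(http.statusCode)")
        }
    }
    task.resume()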
    

    You can use the newly introduced “manual rendering” features of AVAudioEngine / Audio Units, available from iOS 11 and macOS 10.13 (see the example below).

    If you need to support older macOS/iOS versions, I would be surprised if you couldn’t achieve the same with AudioKit (even though I haven’t tried it myself): for instance, use an AKSamplePlayer as your first node (which will read your audio file), then build and connect your effects, and use an AKNodeRecorder as your last node; a rough sketch of that chain follows.
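
    Here is a rough sketch of that recorder-based chain, assuming the AudioKit 4-era API used in the first answer; I use AKAudioPlayer instead of AKSamplePlayer, and the input file name is a placeholder. Unlike the offline approaches, AKNodeRecorder captures the processed signal in real time while the file plays through.

    import AudioKit

    // Sketch only: play a file through effects and record the processed output.
    do {
        let sourceFile = try AKAudioFile(readFileName: "recording.m4a") // placeholder name
        let player = try AKAudioPlayer(file: sourceFile)
        let reverb = AKReverb(player)
        let boost = AKBooster(reverb, gain: 3)

        // the recorder writes the processed audio into this file (temp directory by default)
        let processedFile = try AKAudioFile()
        let recorder = try AKNodeRecorder(node: boost, file: processedFile)

        AudioKit.output = boost
        AudioKit.start()

        try recorder.record()
        player.play()

        // ...once playback has finished:
        // recorder.stop()
        // processedFile.url now points to the processed audio on disk
    } catch {
        print(error)
    }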

    Example of manual rendering using the new audio unit features

    import AVFoundation
    
    //: ## Source File
    //: Open the audio file to process
    let sourceFile: AVAudioFile
    let format: AVAudioFormat
    do {
        let sourceFileURL = Bundle.main.url(forResource: "mixLoop", withExtension: "caf")!
        sourceFile = try AVAudioFile(forReading: sourceFileURL)
        format = sourceFile.processingFormat
    } catch {
        fatalError("could not open source audio file, \(error)")
    }
    
    //: ## Engine Setup
    //:    player -> reverb -> mainMixer -> output
    //: ### Create and configure the engine and its nodes
    let engine = AVAudioEngine()
    let player = AVAudioPlayerNode()
    let reverb = AVAudioUnitReverb()
    
    engine.attach(player)
    engine.attach(reverb)
    
    // set desired reverb parameters
    reverb.loadFactoryPreset(.mediumHall)
    reverb.wetDryMix = 50
    
    // make connections
    engine.connect(player, to: reverb, format: format)
    engine.connect(reverb, to: engine.mainMixerNode, format: format)
    
    // schedule source file
    player.scheduleFile(sourceFile, at: nil)
    //: ### Enable offline manual rendering mode
    do {
        let maxNumberOfFrames: AVAudioFrameCount = 4096 // maximum number of frames the engine will be asked to render in any single render call
        try engine.enableManualRenderingMode(.offline, format: format, maximumFrameCount: maxNumberOfFrames)
    } catch {
        fatalError("could not enable manual rendering mode, \(error)")
    }
    //: ### Start the engine and player
    do {
        try engine.start()
        player.play()
    } catch {
        fatalError("could not start engine, \(error)")
    }
    //: ## Offline Render
    //: ### Create an output buffer and an output file
    //: Output buffer format must be same as engine's manual rendering output format
    let outputFile: AVAudioFile
    do {
        let documentsPath = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
        let outputURL = URL(fileURLWithPath: documentsPath + "/mixLoopProcessed.caf")
        outputFile = try AVAudioFile(forWriting: outputURL, settings: sourceFile.fileFormat.settings)
    } catch {
        fatalError("could not open output audio file, \(error)")
    }
    
    // buffer to which the engine will render the processed data
    let buffer: AVAudioPCMBuffer = AVAudioPCMBuffer(pcmFormat: engine.manualRenderingFormat, frameCapacity: engine.manualRenderingMaximumFrameCount)!
    //: ### Render loop
    //: Pull the engine for desired number of frames, write the output to the destination file
    while engine.manualRenderingSampleTime < sourceFile.length {
        do {
            let framesToRender = min(buffer.frameCapacity, AVAudioFrameCount(sourceFile.length - engine.manualRenderingSampleTime))
            let status = try engine.renderOffline(framesToRender, to: buffer)
            switch status {
            case .success:
                // data rendered successfully
                try outputFile.write(from: buffer)
    
            case .insufficientDataFromInputNode:
                // applicable only if using the input node as one of the sources
                break
    
            case .cannotDoInCurrentContext:
                // engine could not render in the current render call, retry in next iteration
                break
    
            case .error:
                // error occurred while rendering
                fatalError("render failed")

            @unknown default:
                // statuses added in future SDKs
                fatalError("unknown render status")
            }
        } catch {
            fatalError("render failed, \(error)")
        }
    }
    
    player.stop()
    engine.stop()
    
    print("Output \(outputFile.url)")
    print("AVAudioEngine offline rendering completed")
    

    You can find more documentation and examples about these AVAudioEngine and Audio Unit updates in Apple’s developer documentation.
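
    The question also mentions a Boost effect, which the manual rendering example above doesn’t include. One way to add a gain stage (my own assumption, not part of either answer) is AVAudioUnitEQ’s globalGain, wired into the chain alongside the other connections, before the engine is started:

    // Sketch only: add a hypothetical boost stage to the engine graph built above
    // (reuses the engine, player, reverb and format constants from that example).
    let boost = AVAudioUnitEQ(numberOfBands: 0) // zero bands; we only need globalGain
    boost.globalGain = 6 // gain in dB; adjust to taste

    engine.attach(boost)

    // rewire: player -> reverb -> boost -> mainMixer
    engine.connect(player, to: reverb, format: format)
    engine.connect(reverb, to: boost, format: format)
    engine.connect(boost, to: engine.mainMixerNode, format: format)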