There are two questions that often arise for iOS developers new to Apple’s audio frameworks: (1) How do I play tracks from the iPod Library using the low level Core Audio APIs (as opposed to using an AVAudioPlayer or MPMediaPlayer) and (2) How do I play those tracks without first loading them into memory?
Taking the second question first: loading a file into memory in order to have immediate access to its sample data is the easy way to do things. Which is why you’ll see it done frequently in sample code and even in some venerable audio frameworks.
There’s a Dark Side, though: the bigger your file, the bigger your app’s memory footprint. A 3-minute MP3 music track can easily occupy 7MB or so, and if your app routinely deals with significantly larger files (maybe it’s an audio note-taking app?), well, you’ve got yourself a problem.
So it would be nice to know how to, instead, pull those audio samples straight from disk – as you’re playing them!
Getting back to question #1: why should playing iPod Library tracks (as opposed to playing bundled audio files) present a special challenge? Because the iPod Library and its contents fall under the jurisdiction of Apple’s Media frameworks, meaning they are treated as special objects – media assets – rather than as simple audio files. So getting to them requires a different kind of dance, and getting them out of the library and into Core Audio Land is not especially straightforward.
Of course, Apple has already given us a handful of options for playing iPod tracks, starting with MPMediaPlayer and AVAudioPlayer. More recently (iOS 8) the AVAudio constellation has been made significantly more flexible by exposing additional features of the low level APIs. So, we’re given a range of choices right out of the box.
Still, sometimes it’s not enough.
So that’s our theme for this post: How do we stream audio directly from disk? and How do we do that with tracks from our iPod Library? Let’s tackle both at the same time!
Doing It Different(ly)
Going back to the player thing for a moment, why wouldn’t you use a Media Player or AVAudioPlayer for iPod Library playback? After all, that’s kinda what MediaPlayer and AVFoundation – and especially the new AVAudioEngine stuff – excel at. And for straight audio playback or sequencing of playlists, these APIs can be a great convenience indeed! Playing a track with an AVAudioPlayer, for example, usually boils down to something like this:
- Launch an MPMediaPickerController to select a file from the iPod library;
- Extract the file handle as an MPMediaItem from the returned MPMediaItemCollection;
- Retrieve the media item’s url;
- Use the url to init an AVAudioPlayer;
- Play your track!
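In rough code form, that flow looks something like the sketch below (a sketch only - error handling is omitted, and mediaItemCollection is the collection handed to you by the picker’s mediaPicker:didPickMediaItems: delegate method):
// rough sketch: playing a picked track with AVAudioPlayer
// (assumes AVFoundation and MediaPlayer are imported, and that 'player'
// is retained somewhere, e.g. in a property, so it keeps playing)
MPMediaItem *mediaItem = [[mediaItemCollection items] firstObject];
NSURL *assetURL = [mediaItem valueForProperty:MPMediaItemPropertyAssetURL];

NSError *error = nil;
AVAudioPlayer *player = [[AVAudioPlayer alloc] initWithContentsOfURL:assetURL error:&error];
if (player && !error) {
    [player prepareToPlay];
    [player play];
}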
So why would you ever need anything more than that?
Well, let’s say you’ve got a slightly masochistic streak – you actually like mucking around down at the Core Audio level – and maybe you’d like to hammer up a little AUGraph with an AUFilePlayer audio unit attached as a node, and you’d like to use that to play some tracks from your personal library?
To do that you’ll first need to access the file – as an AVAsset – using the MediaPicker -> Media Item -> AVAssetURL part of the flow I loosely described a moment ago; then you’ll need to convert the AVAsset at the end of that url to a new PCM file and save it to your Documents directory; finally, you’ll need to load the PCM file into your AUFilePlayer and start your graph a-rockin’. (Several years ago – when it was the only way to access tracks in the iPod Library – Chris Adamson posted code for accomplishing this very thing using an AVAssetReader-AVAssetWriter combo. His code still works great and is in fact quite easily adaptable to Swift.)
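For reference, the reading half of that conversion looks roughly like the sketch below. This is my own loose outline rather than code lifted from Chris’s post; assetURL is assumed to be the MPMediaItemPropertyAssetURL value from the picked media item, and the AVAssetWriter half - along with all error handling - is left out:
// sketch: pulling LPCM samples out of the AVAsset behind an iPod Library URL
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:asset error:&error];

// ask for 16-bit interleaved LPCM on the way out
NSDictionary *outputSettings = @{ AVFormatIDKey              : @(kAudioFormatLinearPCM),
                                  AVSampleRateKey            : @44100,
                                  AVNumberOfChannelsKey      : @2,
                                  AVLinearPCMBitDepthKey     : @16,
                                  AVLinearPCMIsFloatKey      : @NO,
                                  AVLinearPCMIsBigEndianKey  : @NO,
                                  AVLinearPCMIsNonInterleaved: @NO };

AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeAudio] firstObject];
AVAssetReaderTrackOutput *trackOutput =
    [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:track outputSettings:outputSettings];
[reader addOutput:trackOutput];
[reader startReading];

// ...from here you'd pull CMSampleBuffers out of trackOutput with
// copyNextSampleBuffer and hand them to an AVAssetWriter configured to
// write a PCM (CAF) file into the Documents directory.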
Gathering a bit more courage, you might then decide you’d like to load a track from the Library and loop a little section of it. Well sure, AUFilePlayer can handle that too – after you’ve converted the track to a new PCM file, of course – and then after you’ve gone through the additional, somewhat unintuitive steps of: defining a region of the file to loop, stopping the Audio Unit/AUGraph, scheduling the region for future (as in ‘the immediate future’) playback, resetting the unit/graph and (at last!) restarting the whole kit and caboodle. Naturally, this all happens more or less instantaneously, and yes: it works pretty well, and it’s not too too horrible to set up, at least not for battle-scarred Core Audio jockeys.
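If you’re curious, that scheduling dance looks roughly like the sketch below. It assumes filePlayerUnit is an AUFilePlayer node living in your (momentarily stopped) AUGraph, audioFileID is an AudioFileID opened from the converted PCM file, and loopStartFrame/loopLengthInFrames describe the region you want to loop - none of which is code we’ll actually be writing in this post:
// sketch: schedule a looped region on an AUFilePlayer
// (graph assumed stopped at this point, e.g. via AUGraphStop(graph))
AudioUnitReset(filePlayerUnit, kAudioUnitScope_Global, 0);   // clear any previously scheduled regions

// hand the unit the file it will be playing from
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileIDs,
                     kAudioUnitScope_Global, 0, &audioFileID, sizeof(audioFileID));

// describe the region to loop
ScheduledAudioFileRegion region;
memset(&region, 0, sizeof(region));
region.mTimeStamp.mFlags = kAudioTimeStampSampleTimeValid;
region.mTimeStamp.mSampleTime = 0;            // region begins at the unit's sample time 0
region.mAudioFile = audioFileID;
region.mStartFrame = loopStartFrame;          // where the loop begins within the file
region.mFramesToPlay = loopLengthInFrames;    // how long the loop is
region.mLoopCount = 100;                      // how many times to repeat the region
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFileRegion,
                     kAudioUnitScope_Global, 0, &region, sizeof(region));

// prime the unit (0 = default priming), then tell it to start "as soon as possible"
UInt32 primeFrames = 0;
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduledFilePrime,
                     kAudioUnitScope_Global, 0, &primeFrames, sizeof(primeFrames));

AudioTimeStamp startTime;
memset(&startTime, 0, sizeof(startTime));
startTime.mFlags = kAudioTimeStampSampleTimeValid;
startTime.mSampleTime = -1;                   // -1 = start on the next render cycle
AudioUnitSetProperty(filePlayerUnit, kAudioUnitProperty_ScheduleStartTimeStamp,
                     kAudioUnitScope_Global, 0, &startTime, sizeof(startTime));

// (then restart the graph, e.g. AUGraphStart(graph))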
Right, so next you decide you’d like to load that track from the Library, loop a little section of it and play it backwards? Without loading bufferloads of samples into memory, making copies of the buffers, flipping the copies around and then reading from one set of in-memory buffers when you want forward playback and another set when you want reverse.
Well, here’s where you’re likely to hit a wall. AUFilePlayer – and AVAudioPlayer – are (for the most part) sealed, black – no, make that gray – boxes, with fairly specific use cases baked into their APIs. Yes, they allow you to conveniently perform the select set of tasks they were designed for (play, stop, seek… ho-hum) but for anything more adventurous you’re basically dead in the water.
Why? Because you don’t have access to the file reading process and you don’t have direct access to the stream of samples moving between the file on disk and the output hardware. This is their strength (convenience) but it’s also their weakness (if you’re a Mad Scientist they may not be general enough to do what you want them to do.)
No, for that kind of power you’re going to need to roll your own reader/player, harnessing Extended Audio File Services to do the heavy lifting. The good news: this is surprisingly straightforward to get up and running.
The Demo Project
First things first: we’re going to be doing this in Objective-C rather than Swift.
Why? Because Swift is still evolving, and one of the areas where it’s definitely still evolving is in how it interoperates with C (and no matter how you dress it up using higher level APIs, audio processing – crunching boatloads of numbers 44100 times per second with no room for error – means you are operating pretty darn close to the metal, with no time for method look-up tables, memory allocations and the like. And that means C.) So, Objective-C it is.
Also, in the interest of keeping the length of this post from spiraling out of control, I’m going to try to refrain from explaining every line of code. The demo project includes a fair amount of explanatory comments, so even if this is new territory for you, there should be enough there to give you a high level idea of what’s going on.
If you want to follow along tutorial-style, create a new Single View project in Xcode right now and call it whatever you like. If you’re happier reading code than a bunch of words, just grab a copy of the demo project here.
The Output class
Let’s start by wrapping our Audio Unit scaffolding in a new class. In Xcode, create a new Objective-C class file, call it “Output” and make sure you tell Xcode to also create a header file. Then open Output.h and edit it as follows:
#import <Foundation/Foundation.h>
#import <AudioToolbox/AudioToolbox.h>

@class Output;

@protocol OutputDataSource <NSObject>
- (void)readFrames:(UInt32)frames
   audioBufferList:(AudioBufferList *)audioBufferList
        bufferSize:(UInt32 *)bufferSize;
@end

@interface Output : NSObject

@property (strong, nonatomic) id<OutputDataSource> outputDataSource;

- (void)startOutputUnit;
- (void)stopOutputUnit;

@end
First we set up a data source protocol, consisting of a single method that will read sample frames from an opened iPod file and then load those samples into buffers which have been provided by the output unit’s render callback. We also declare an outputDataSource property, to hold a pointer to the adopter of our protocol (in our case, this will be the view controller), and a pair of public methods for starting and stopping the output audio unit.
Next, open Output.m and enter the following:
#import "Output.h"
#import <AudioToolbox/AudioToolbox.h>
#import "Utilities.m" // home of the CheckError() function we'll lean on below

@interface Output()
@property (nonatomic) AudioUnit audioUnit;
@end

@implementation Output

- (id)init
{
    self = [super init];
    if (!self) {
        return nil;
    }
    [self createAudioUnit];
    return self;
}
Nothing too special here: we declare a single private property for our output audio unit, and then in our init method we call createAudioUnit, which will configure and initialize the output unit. Enter the following after init:
- (void)createAudioUnit
{
// create a component description
AudioComponentDescription desc;
desc.componentType = kAudioUnitType_Output;
desc.componentSubType = kAudioUnitSubType_RemoteIO;
desc.componentManufacturer = kAudioUnitManufacturer_Apple;
desc.componentFlags = 0;
desc.componentFlagsMask = 0;
// use the description to find the component we're looking for
AudioComponent defaultOutput = AudioComponentFindNext(NULL, &desc);
// create an instance of the component and have our _audioUnit property point to it
CheckError(AudioComponentInstanceNew(defaultOutput, &_audioUnit),
"AudioComponentInstanceNew Failed");
// describe the output audio format... here we're using LPCM 32 bit floating point samples
AudioStreamBasicDescription outputFormat;
outputFormat.mFormatID = kAudioFormatLinearPCM;
outputFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat;
outputFormat.mSampleRate = 44100;
outputFormat.mChannelsPerFrame = 2;
outputFormat.mBitsPerChannel = 32;
outputFormat.mBytesPerPacket = (outputFormat.mBitsPerChannel / 8) * outputFormat.mChannelsPerFrame;
outputFormat.mFramesPerPacket = 1;
outputFormat.mBytesPerFrame = outputFormat.mBytesPerPacket;
// set the audio format on the input scope (kAudioUnitScope_Input) of the output bus (0) of the output unit - got that?
CheckError(AudioUnitSetProperty(_audioUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 0, &outputFormat, sizeof(outputFormat)),
"AudioUnitSetProperty StreamFormat Failed");
// set up a render callback struct consisting of our output render callback (above) and a reference to self (so we can access our outputDataSource reference from within the callback)
AURenderCallbackStruct callbackStruct;
callbackStruct.inputProc = OutputRenderCallback;
callbackStruct.inputProcRefCon = (__bridge void*)self;
// add the callback struct to the output unit (again, that's to the input scope of the output bus)
CheckError(AudioUnitSetProperty(_audioUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input, 0, &callbackStruct, sizeof(callbackStruct)),
"AudioUnitSetProperty SetRenderCallback Failed");
// initialize the unit
CheckError(AudioUnitInitialize(_audioUnit),
           "AudioUnitInitialize Failed");
}
If you’ve worked with Audio Units or AUGraphs at all, this should be familiar boilerplate stuff. If you haven't, the comments should give you a sense of what's going on. In a nutshell: we're setting up a RemoteIO audio unit, telling it what kind of samples we intend to feed it, giving it a render callback for requesting those samples, and initializing the whole shebang.
Note our use of Kevin Avila/Chris Adamson's ubiquitous CheckError function to help us out if anything breaks along the way. I've put that function in a separate Utilities.m file, to help keep my other classes lean. Here's the function:
static void CheckError(OSStatus error, const char *operation)
{
    if (error == noErr) return;

    char errorString[20];
    // see if it appears to be a 4-char code
    *(UInt32 *)(errorString + 1) = CFSwapInt32HostToBig(error);
    if (isprint(errorString[1]) && isprint(errorString[2]) &&
        isprint(errorString[3]) && isprint(errorString[4])) {
        errorString[0] = errorString[5] = '\'';
        errorString[6] = '\0';
    } else {
        // no, format it as an integer instead
        sprintf(errorString, "%d", (int)error);
    }
    fprintf(stderr, "Error: %s (%s)\n", operation, errorString);
    exit(1);
}
The render callback will automatically kick in when we start the output unit, and when it does it will begin requesting audio samples, as it needs them, from our outputDataSource delegate (again, to keep things simple for this demo, that's going to be our View Controller).
We'll come back to the render callback in a bit so let's just stub it out for now. At the top of the file, above the @interface line but below the #import statements, enter the following:
static OSStatus OutputRenderCallback (void *inRefCon,
AudioUnitRenderActionFlags * ioActionFlags,
const AudioTimeStamp * inTimeStamp,
UInt32 inOutputBusNumber,
UInt32 inNumberFrames,
AudioBufferList * ioData)
{
// Stuff's gonna happen here...
return noErr;
}
The callback is going to provide us with an empty AudioBufferList (*ioData) that can hold inNumberFrames frames of sample data. The inRefCon parameter is a reference to the Output class itself ('cause we said so back in callbackStruct.inputProcRefCon = (__bridge void*)self;). We'll need this reference to be able to access Output's outputDataSource property from within the callback. We're not using ioActionFlags, inTimeStamp or inOutputBusNumber, so those can be ignored. Finally, the callback is expected to return an OSStatus value, and we dutifully comply by returning noErr.
Lastly, we include a pair of methods for starting and stopping the output audio unit. Core Audio works by using a pull model - each audio unit gets its data by requesting it from whatever sits upstream of it (in our case, a render callback). So to get the whole thing rolling you tell the last unit in the audio chain - the output unit - to start pulling. Add these two methods below the createAudioUnit method:
- (void)startOutputUnit
{
CheckError(AudioOutputUnitStart(_audioUnit), "Audio Output Unit Failed To Start");
}
- (void)stopOutputUnit
{
CheckError(AudioOutputUnitStop(_audioUnit), "Audio Output Unit Failed To Stop");
}
And that's about it for Output. Let's move on to our View Controller, where we begin by editing the header file like so:
#import <UIKit/UIKit.h>
#import <MediaPlayer/MediaPlayer.h>
#import <AudioToolbox/AudioToolbox.h>
#import "Output.h"

@interface ViewController : UIViewController <MPMediaPickerControllerDelegate, OutputDataSource>

@property (strong, nonatomic) Output* output;
@property (weak, nonatomic) IBOutlet UIButton* playPauseButton;

- (void)readFrames:(UInt32)frames
   audioBufferList:(AudioBufferList *)audioBufferList
        bufferSize:(UInt32 *)bufferSize;

@end
First we modify the @interface declaration to announce our adoption of two protocols: one for our media picker (MPMediaPickerControllerDelegate) and the other for Output's data source (OutputDataSource). Next we declare two public properties: one for an instance of the Output class we just created, and the other an IBOutlet to the Play/Pause button we'll later add to the UI (we want an outlet to this button because we intend to communicate with it at runtime.) Finally, we declare our data source method.
On to the .m file. Begin by modifying it as follows:
#import "ViewController.h"
#import "Utilities.m"
@interface ViewController() {
AudioStreamBasicDescription _clientFormat;
}
@property (assign, nonatomic) ExtAudioFileRef audioFile;
@property (assign, nonatomic) BOOL isPlaying;
@property (assign, nonatomic) SInt64 frameIndex;
@end
@implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.isPlaying = NO;
self.output = [[Output alloc] init];
self.output.outputDataSource = self;
}
Pretty basic. The Utilities.m file contains the CheckError() function I mentioned earlier, so we import it here. We then declare a handful of private properties: an ExtAudioFileRef, which will hold a reference to our selected iPod track; a BOOL to act as a flag telling us if we're currently playing audio or not; and a SInt64 to act as a frame index, keeping track of our current location in the file. Then, once the view has loaded, we set the isPlaying flag to NO, create an Output object and set the view controller as that object's outputDataSource.
Next up we add a pair of actions:
- (IBAction)browseIpodLibrary {
MPMediaPickerController *pickerController = [[MPMediaPickerController alloc] initWithMediaTypes: MPMediaTypeMusic];
pickerController.showsCloudItems = NO;
pickerController.prompt = @"Pick Something To Play";
pickerController.allowsPickingMultipleItems = NO;
pickerController.delegate = self;
[self presentViewController:pickerController animated:YES completion:NULL];
}
- (IBAction)playPause:(UIButton*)sender
{
if (!self.isPlaying) {
[self.output startOutputUnit];
[sender setTitle:@"Pause" forState:UIControlStateNormal];
self.isPlaying = YES;
} else {
[self.output stopOutputUnit];
[sender setTitle:@"Play" forState:UIControlStateNormal];
self.isPlaying = NO;
}
}
The browseIpodLibrary action launches a media picker controller so we can pick a track from the iPod Library (this is more or less boilerplate straight out of the Apple docs), and playPause: starts and stops the output unit (along with changing the title of our play button and flipping the value of our isPlaying flag.)
Now we get to the meat of the thing. Add the following method below the two actions we just created:
- (void)openFileAtURL:(NSURL*)url
{
// get a reference to the selected file and open it for reading by first (a) NULLing out our existing ExtAudioFileRef...
self.audioFile = NULL;
// then (b) casting the track's NSURL to a CFURLRef ('cause that's what ExtAudioFileOpenURL requires)...
CFURLRef cfurl = (__bridge CFURLRef)url;
// and finally (c) opening the file for reading.
CheckError(ExtAudioFileOpenURL(cfurl, &_audioFile),
"ExtAudioFileOpenURL Failed");
// get the total number of sample frames in the file (we're not actually doing anything with totalFrames in this demo, but you'll nearly always want to grab the file's length as soon as you open it for reading)
SInt64 totalFrames;
UInt32 dataSize = sizeof(totalFrames);
CheckError(ExtAudioFileGetProperty(_audioFile, kExtAudioFileProperty_FileLengthFrames, &dataSize, &totalFrames),
"ExtAudioFileGetProperty FileLengthFrames failed");
// get the file's native format (so ExtAudioFileRead knows what format it's converting _from_)
AudioStreamBasicDescription asbd;
dataSize = sizeof(asbd);
CheckError(ExtAudioFileGetProperty(_audioFile, kExtAudioFileProperty_FileDataFormat, &dataSize, &asbd), "ExtAudioFileGetProperty FileDataFormat failed");
// set up a client format (so ExtAudioFileRead knows what format it's converting _to_ - here we're converting to LPCM 32-bit floating point, which is what we've told the output audio unit to expect!)
AudioStreamBasicDescription clientFormat;
clientFormat.mFormatID = kAudioFormatLinearPCM;
clientFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsPacked | kAudioFormatFlagIsFloat;
clientFormat.mSampleRate = 44100;
clientFormat.mChannelsPerFrame = 2;
clientFormat.mBitsPerChannel = 32;
clientFormat.mBytesPerPacket = (clientFormat.mBitsPerChannel / 8) * clientFormat.mChannelsPerFrame;
clientFormat.mFramesPerPacket = 1;
clientFormat.mBytesPerFrame = clientFormat.mBytesPerPacket;
// set the client format on our ExtAudioFileRef
CheckError(ExtAudioFileSetProperty(_audioFile, kExtAudioFileProperty_ClientDataFormat, sizeof(clientFormat), &clientFormat), "ExtAudioFileSetProperty ClientDataFormat failed");
_clientFormat = clientFormat;
// finally, set our _frameIndex property to 0 in prep for the first read
self.frameIndex = 0;
}
The comments tell all: we open the file that was selected with the picker (we'll get to the picker momentarily) and we prepare it for reading. Note that no reading is actually taking place here - that won't occur until we start the output unit and its render callback begins requesting samples.
Below the openFileAtURL: method, add the following:
- (void)readFrames:(UInt32)frames
   audioBufferList:(AudioBufferList *)audioBufferList
        bufferSize:(UInt32 *)bufferSize
{
    if (self.audioFile) {
        // seek to the current frame index
        CheckError(ExtAudioFileSeek(_audioFile, _frameIndex),
                   "ExtAudioFileSeek Failed");
        // do the read
        CheckError(ExtAudioFileRead(self.audioFile, &frames, audioBufferList),
                   "Failed to read audio data from audio file");
        *bufferSize = audioBufferList->mBuffers[0].mDataByteSize / sizeof(float);
        // update the frame index so we know where to pick up when the next request for samples comes in from the render callback
        _frameIndex += frames;
    }
}
This is our data source method, and it's where we read sample frames from the file and get them over to the render callback. Extended Audio File Services makes this quite straightforward: we seek to _frameIndex, which is where the last read left off (or to 0, if this is the first read), read a given number of frames, shove them into the provided buffer list and update _frameIndex so we know where to pick up on the next read.
A Key Grokking Point: the incoming audioBufferList here is the audioBufferList (*ioData) from the callback - they both point to the same location in memory. In short: the render callback tells us where to find ioData (an AudioBufferList containing empty buffers ready to be filled), we fill those buffers from here with freshly read/converted samples and then the render callback ships those filled buffers on to the hardware, out to the speakers, through the air, into your ears, down your spine.
Likewise, 'frames' is 'inNumberFrames' (also from the callback), which tells ExtAudioFileRead how many frames there are in those empty buffers, and therefore how many frames to read from the file.
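To put numbers on it: a typical render request might ask for 512 frames, and since our client format is interleaved stereo 32-bit float (8 bytes per frame), the buffer the callback hands us arrives sized at 512 × 2 × 4 = 4096 bytes - so the bufferSize we report back works out to 4096 / sizeof(float) = 1024 floats. (The actual inNumberFrames you see will depend on the hardware I/O buffer duration.)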
Almost done!
MPMediaPicker
Add the following below readFrames:
- (void)mediaPicker: (MPMediaPickerController *)mediaPicker
   didPickMediaItems:(MPMediaItemCollection *)mediaItemCollection
{
    [self dismissViewControllerAnimated:YES completion:NULL];
    // if we come up dry, we bail
    if ([mediaItemCollection count] < 1) {
        NSLog(@"Sorry, the returned mediaItemCollection appears to be empty");
        return;
    }
    // otherwise grab the first mediaItem in the returned mediaItemCollection
    MPMediaItem* mediaItem = [[mediaItemCollection items] objectAtIndex:0];
    [self logMediaItemAttributes:mediaItem]; // just logging some file attributes to the console
    // get the internal url to the file
    NSURL* assetURL = [mediaItem valueForProperty:MPMediaItemPropertyAssetURL];
    // if the url comes back nil, throw up an alert view and return gracefully
    if (!assetURL) {
        [self createNilFileAlert];
        return;
    }
    // if the url points to an iCloud item, throw up an alert view and return gracefully
    if ([[mediaItem valueForProperty:MPMediaItemPropertyIsCloudItem] integerValue] == 1) {
        [self createICloudFileAlert];
        return;
    }
    // otherwise, we're ready to open the file for reading
    [self openFileAtURL:assetURL];
}

- (void)mediaPickerDidCancel:(MPMediaPickerController *)mediaPicker {
    [self dismissViewControllerAnimated:YES completion:NULL];
}
Again, mostly boilerplate stuff, this time for interacting with the Media Picker Controller.
The first method, mediaPicker:didPickMediaItems:, is called when we select a track from the library (using our earlier browseIpodLibrary action.) First, we check to make sure the mediaItemCollection passed back to us actually contains at least one item. If it does, we extract the first mediaItem in the collection (this will point to the selected track) and we get its url.
We then check if that url is nil - you never know, right? - and also if the mediaItem is an iCloud item - meaning it shows up as a selectable track in the Library, but it isn't actually present on the device. If either of these checks fails, we throw up an alert to the user so they know what's happened, and return to normal running state gracefully.
Finally, assuming we get this far, we call openFileAtURL: and get on with opening the file for reading.
The other delegate method, mediaPickerDidCancel:, handles the case where we cancel browsing without picking a file. Be sure to always include it when implementing the MPMediaPickerControllerDelegate protocol.
Whoops, almost forgot: here are those helper methods for launching the alerts and logging asset attributes to the console - add them to the bottom of ViewController.m:
#pragma mark - Helpers
- (void)createNilFileAlert
{
UIAlertController *nilFileAlert = [UIAlertController alertControllerWithTitle:@"File Not Available" message:@"The track you selected failed to load from the iPod Library. Please try loading another track." preferredStyle:UIAlertControllerStyleAlert];
UIAlertAction *OKAction = [UIAlertAction actionWithTitle:@"OK"
style:UIAlertActionStyleDefault
handler:^(UIAlertAction *action){
[nilFileAlert dismissViewControllerAnimated:YES completion:nil];
}];
[nilFileAlert addAction:OKAction];
[self presentViewController:nilFileAlert animated:YES completion:nil];
}
- (void)createICloudFileAlert
{
UIAlertController *iCloudItemAlert = [UIAlertController alertControllerWithTitle:@"iCloud File Not Available" message:@"Sorry, that selection appears to be an iCloud item and is not presently available on this device." preferredStyle:UIAlertControllerStyleAlert];
UIAlertAction *OKAction = [UIAlertAction actionWithTitle:@"OK"
style:UIAlertActionStyleDefault
handler:^(UIAlertAction *action){
[iCloudItemAlert dismissViewControllerAnimated:YES completion:nil];
}];
[iCloudItemAlert addAction:OKAction];
[self presentViewController:iCloudItemAlert animated:YES completion:nil];
}
- (void)logMediaItemAttributes:(MPMediaItem *)item
{
NSLog(@"Title: %@", [item valueForProperty:MPMediaItemPropertyTitle]);
NSLog(@"Artist: %@", [item valueForProperty:MPMediaItemPropertyArtist]);
NSLog(@"Album: %@", [item valueForProperty:MPMediaItemPropertyAlbumTitle]);
NSLog(@"Duration (in seconds): %@", [item valueForProperty:MPMediaItemPropertyPlaybackDuration]);
}
Render Callback
Finally, let's fill in the render callback as follows:
static OSStatus OutputRenderCallback (void *inRefCon,
                                      AudioUnitRenderActionFlags * ioActionFlags,
                                      const AudioTimeStamp * inTimeStamp,
                                      UInt32 inOutputBusNumber,
                                      UInt32 inNumberFrames,
                                      AudioBufferList * ioData)
{
    Output *self = (__bridge Output*)inRefCon;
    if (self.outputDataSource)
    {
        if ([self.outputDataSource respondsToSelector:@selector(readFrames:audioBufferList:bufferSize:)])
        {
            @autoreleasepool
            {
                UInt32 bufferSize;
                [self.outputDataSource readFrames:inNumberFrames
                                  audioBufferList:ioData
                                       bufferSize:&bufferSize];
            }
        }
    }
    return noErr;
}
The business end of the callback is a single line calling our dataSource's readFrames:audioBufferList:bufferSize: method. We've wrapped that call in an @autoreleasepool block, to improve ARC performance on repeated calls, and we've then wrapped that block inside a couple of checks to ensure that (a) we actually have a data source, and (b) that data source implements our readFrames: method.
The last thing you'll need to do is add a couple of buttons to your project's storyboard, one wired to browseIpodLibrary and one wired to playPause:. Don't forget to connect the playPauseButton outlet to the play button, so playPause: can update the button's text depending on play state.
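If you'd rather not bother with the storyboard, you can create and wire the buttons up in code instead - here's a sketch you could drop into viewDidLoad, with purely arbitrary frames:
// a code-only alternative to storyboard wiring (frames here are arbitrary)
UIButton *browseButton = [UIButton buttonWithType:UIButtonTypeSystem];
browseButton.frame = CGRectMake(20, 80, 280, 44);
[browseButton setTitle:@"Browse iPod Library" forState:UIControlStateNormal];
[browseButton addTarget:self action:@selector(browseIpodLibrary) forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:browseButton];

UIButton *playButton = [UIButton buttonWithType:UIButtonTypeSystem];
playButton.frame = CGRectMake(20, 140, 280, 44);
[playButton setTitle:@"Play" forState:UIControlStateNormal];
[playButton addTarget:self action:@selector(playPause:) forControlEvents:UIControlEventTouchUpInside];
[self.view addSubview:playButton];
self.playPauseButton = playButton; // safe to assign the weak outlet once the view is retaining the button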
And that wraps it up! We’ve now effectively got ourselves the beginnings of a custom file player - and one with a very light memory footprint at that!
To underscore this last point, here are some Xcode numbers (taken while running on an iPhone 6) which give the difference in memory usage before and after opening (and playing) a randomly selected iPod Library track (Sonny Terry's 'Airplane Blues' - a 7MB mp3 file - if you wanna know). My admittedly off-the-cuff logic being that this should provide some indication of the memory hit incurred by playback of a given file, for each reader/player scenario. The envelope please:
- Audio file converted to LPCM and loaded into an AUFilePlayer node: 4.2MB
- Audio file pulled in as an AVAudioFile and loaded into an AVAudioPlayerNode: 3MB
- Our version with audio file streamed from disk, as described above: 1MB
Not too shabby a gain, yah?
The Github repo can be found here.
Can We Do That Backwards And In Heels?
I speculated earlier that one might find occasion for wanting to play an audio file (or some section of it) in reverse. This is another frequent audio developer question and one oft-quoted solution is to load the file (or the section of it) into memory, store a reference to it, reverse the order of the samples and then read from that when you want reverse playback.
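In sketch form - assuming you've already decoded the section you care about into an interleaved stereo float buffer called forward, frameCount frames long - that approach amounts to something like this:
// the oft-quoted solution: build a reversed copy of the samples up front
UInt32 channels = 2;
float *reversed = malloc(frameCount * channels * sizeof(float));
for (UInt32 frame = 0; frame < frameCount; frame++) {
    UInt32 sourceFrame = frameCount - 1 - frame;  // walk the source back to front, frame by frame
    for (UInt32 channel = 0; channel < channels; channel++) {
        reversed[frame * channels + channel] = forward[sourceFrame * channels + channel];
    }
}
// ...then read from 'reversed' whenever backwards playback is requested
// (and remember to free() both buffers eventually). That's two full copies
// of the section sitting in memory - exactly the footprint we've been trying to avoid.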
Nah! Now that we know how to grab the sample data stream as it makes its way from file to device output, we have it in us to cook up something a bit more elegant. We’ll take a swing at that next time around.