I’m trying to make a 16-operator additive synth using just the AUDIO tag for a game I’m slowly putting together… In general I’m finding the implementation of the AUDIO tag a bit… lacking in capabilities for serious use on things like this.
The whole concept is to have EVERYTHING from the graphics to the audio 100% generated by the javascript.
My first idea was to simply loop a generated track… I’m generating it thus:
function makeWavFile(data, sampleRate, bitDepth, channels) {
	var
		byteDepth = Math.floor(bitDepth / 8),
		numChannels = channels,
		calcSize = data.length * numChannels * byteDepth;
	var dataStream = (
		// RIFF HEADER
		'RIFF' + // ChunkID, should always be 'RIFF'
		dwordToString4(calcSize + 36) + // ChunkSize
		'WAVE' + // Format, should always be 'WAVE'
		// Format Subchunk
		'fmt ' + // SubChunk1ID, should always be 'fmt ' -- that trailing space is important!
		dwordToString4(16) + // SubChunk1Size, should always be 16 for PCM
		wordToString2(1) + // AudioFormat, 1 = PCM
		wordToString2(numChannels) + // NumChannels
		dwordToString4(sampleRate) + // SampleRate
		dwordToString4(sampleRate * numChannels * byteDepth) + // ByteRate
		wordToString2(numChannels * byteDepth) + // BlockAlign
		wordToString2(bitDepth) + // BitsPerSample
		// Data SubChunk
		'data' + // SubChunk2ID, should always be 'data'
		dwordToString4(calcSize) // SubChunk2Size, aka the actual DATA size
	);
	switch (bitDepth) {
		case 16:
			dataStream += stringEncodeWords(data);
			break;
		case 8:
			dataStream += stringEncodeBytes(data);
			break;
	}
	ccopy = dataStream; // debug copy -- note this is an implicit global
	return 'data:audio/wav;base64,' + btoa(dataStream);
}
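The helper functions aren’t shown above, but all they do is pack values little-endian into character strings, since that’s what RIFF expects. Roughly like this (assuming the sample data is floats in the -1..1 range; if yours are already integers you’d skip the scaling):

```javascript
// Pack a 16-bit value as 2 little-endian characters
function wordToString2(v) {
	return String.fromCharCode(v & 0xFF, (v >> 8) & 0xFF);
}

// Pack a 32-bit value as 4 little-endian characters
function dwordToString4(v) {
	return String.fromCharCode(v & 0xFF, (v >> 8) & 0xFF, (v >> 16) & 0xFF, (v >> 24) & 0xFF);
}

// 16-bit PCM is signed; clamp each float sample and scale to -32768..32767
function stringEncodeWords(data) {
	var s = '';
	for (var i = 0; i < data.length; i++) {
		var v = Math.max(-1, Math.min(1, data[i]));
		s += wordToString2(Math.round(v < 0 ? v * 0x8000 : v * 0x7FFF) & 0xFFFF);
	}
	return s;
}

// 8-bit PCM is unsigned; scale each float sample to 0..255 around a 128 midpoint
function stringEncodeBytes(data) {
	var s = '';
	for (var i = 0; i < data.length; i++) {
		var v = Math.max(-1, Math.min(1, data[i]));
		s += String.fromCharCode(Math.round((v + 1) * 127.5) & 0xFF);
	}
	return s;
}
```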
That works fine in FF and Opera for creating my audio… but the problems are two-fold. The first problem is that Chrome doesn’t support RIFF (audio/wav) as an audio type, NOR does it appear to even support generated content… So, does anyone have an idea how I can take my raw script-generated datastream and make webkit play it?
The second problem is that, well…
var noisePlayer = document.createElement('audio');
noisePlayer.controls = true;
noisePlayer.autoplay = false; // note: the DOM property is all-lowercase 'autoplay'
noisePlayer.loop = true;
noisePlayer.src = noise; // the data URI from makeWavFile
document.body.appendChild(noisePlayer);
Doesn’t automatically loop. I’m able to trap the end and loop back, but in Opera there’s this 200ms+ delay that’s unacceptable for what I’m using it for. In FF there’s no delay.
I did discover that if I track position and force the play position to zero Opera has no delay, while Gecko does.
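In case it helps, here’s roughly what those two workarounds look like on my end (the 0.05s margin in the second one is just a guessed safety value, not anything official):

```javascript
// Workaround 1: restart when 'ended' fires (gapless in FF, ~200ms gap in Opera)
function loopOnEnded(player) {
	player.addEventListener('ended', function () {
		player.currentTime = 0;
		player.play();
	}, false);
}

// Workaround 2: watch the position and force it back to zero just before the
// end (gapless in Opera, gap in Gecko) -- 'margin' is a guessed safety value
function loopOnPosition(player, margin) {
	player.addEventListener('timeupdate', function () {
		if (player.duration - player.currentTime <= margin) {
			player.currentTime = 0;
		}
	}, false);
}

// usage: loopOnEnded(noisePlayer); -- or -- loopOnPosition(noisePlayer, 0.05);
```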
The audio being generated is all going to be flat predictable waveforms, so overbuffering and just looping a small inner section would work fine – if I can figure out how to make gecko and opera behave.
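By “overbuffering” I mean something like this: round to a whole number of cycles so the loop point always lands on a zero crossing, then tile that cycle enough times that the restart glitch is rare (sine wave and the parameter values here are just for illustration; the rounding slightly detunes the frequency, which is the tradeoff for a seamless join):

```javascript
// Build a buffer holding an exact whole number of wave cycles, repeated
// 'repeats' times, so looping the file never cuts off mid-cycle.
function buildLoopBuffer(freq, sampleRate, repeats) {
	var cycleLength = Math.round(sampleRate / freq); // samples per cycle (rounded)
	var samples = [];
	for (var r = 0; r < repeats; r++) {
		for (var i = 0; i < cycleLength; i++) {
			samples.push(Math.sin(2 * Math.PI * i / cycleLength));
		}
	}
	return samples; // feed to makeWavFile(samples, sampleRate, 16, 1)
}
```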
Gotta say, for VIDEO they’re ‘close’ to matching flash’s capabilities, but for AUDIO this is a ***** joke.
Been in “GIS mode” for the better part of a week on this, most of the examples online so far seem sketchy at best, overcomplicated train wrecks at worst… It shouldn’t be THIS difficult to do something so simple.
Oh, semi-unrelated, I tossed up the graphics side of things for people to have a look-see.
To sum up my questions:

- script generated sound in Chrome – possible?
- delays in looping in Opera/FF – am I stuck browser sniffing?
- is there any DECENT documentation in plain English that actually lists the javascript properties for the HTML AUDIO tag? What I’m coming across is either gibberish (W3C) or code without explanations (Mozilla’s page on the subject).
I feel like I did when I first started dealing with HTML, before I discovered the WDG reference… or when I started with javascript, before I picked up the old O’Reilly pocket reference… I already know how to program this ****, I just need a decent function reference!