
DSP / Manual mixing and pan law

I am mixing four buffers and applying panning. However, when I trigger a pan change I hear a clip. Can anybody see what is potentially wrong with the following code:

for (int i = 0 ; i < numFrames; i++) {
    //Convert buffer to float
    float s1 = track1[0][i] / 32768.0f;
    float s2 = track2[0][i] / 32768.0f;
    float s3 = track3[0][i] / 32768.0f;
    float s4 = track4[0][i] / 32768.0f;

    //Apply pan on track one
    float s1R = s1 * sqrt( 1 - panA ); 
    float s1L = s1 * sqrt( panA );

    //Apply pan on track two
    float s2R = s2 * sqrt( 1 - panB ); 
    float s2L = s2 * sqrt( panB ); 

    //Apply pan on track three
    float s3R = s3 * sqrt( 1 - panC ); 
    float s3L = s3 * sqrt( panC );

    //Apply pan on track four
    float s4R = s4 * sqrt( 1 - panD ); 
    float s4L = s4 * sqrt( panD );

    //Mix the right channel
    float mixedR = s1R + s2R + s3R + s4R;

    mixedR *= 0.6f;
    if(mixedR > 1.0f) mixedR = 1.0f; 
    if(mixedR < -1.0f) mixedR = -1.0f;

    //Mix the Left channel
    float mixedL = s1L + s2L + s3L + s4L;

    mixedL *= 0.6f;
    if(mixedL > 1.0f) mixedL = 1.0f; 
    if(mixedL < -1.0f) mixedL = -1.0f;

    //Apply the Left channel 
    audioIn[0][i] = (short) (mixedL * 32768.0f);

    //Apply the right channel
    audioIn[1][i] = (short) (mixedR * 32768.0f);
}

The panning algorithm could be improved; I lifted it from here:

http://www.kvraudio.com/forum/viewtopic.php?t=181222&postdays=0&postorder=asc&start=0

numFrames is 512; once the audio has been mixed I am applying a time-stretching algorithm using Dirac.

The clipping occurs even without processing by Dirac.


You are taking four potentially full-scale signals, adding them together and then scaling by 0.6 before saturating the resulting output signal. So prior to saturation your maximum range is ±(4 × 0.6) = ±2.4. Hence it's not too surprising that you hear some clipping. If you multiply by 0.25 instead of 0.6 then that will eliminate clipping even in the most extreme cases, although the output level may be a little low in the general case.

To verify this you could add some debug logging in your saturation code, e.g.

#if DEBUG
    if (mixedR > 1.0f || mixedR < -1.0f)
        fprintf(stderr, "Clipping occurred for mixedR = %g\n", mixedR);
#endif
    if(mixedR > 1.0f) mixedR = 1.0f; 
    if(mixedR < -1.0f) mixedR = -1.0f;


What you hear is probably not clipping but popping from discontinuities every 4096 samples (or whatever your buffer length is). You need to smoothly interpolate your panning values to avoid abrupt changes. Two easy ways to do this: 1) define a maximum change per sample or 2) interpolate to the new value over the entire buffer length.

Either way, the essential idea is to store both the current actual pan value and the destination pan value to move toward.


Your conversion between short and float is wrong. When converting back to short you must scale by 32767 instead of 32768: a clamped full-scale sample of +1.0f multiplied by 32768 yields 32768, which does not fit in a 16-bit short and wraps around to -32768, flipping the MSB and producing a loud click.

