I am creating a basic signal generator and decided to use my audio card as the analogue output. I chose to use DirectSound because... it seemed like a good option.
I have encountered a bug in DirectSound for .NET where I create a secondary buffer with a sample rate of 8 kHz, and on playback the sound plays at approximately 8.1 kHz instead.
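For reference, a minimal native-DirectSound sketch of how an 8 kHz secondary buffer is usually described and filled with a test tone (in the spirit of the signal generator above). The initialised device pointer, the one-second length, and the 1 kHz sine fill are illustrative assumptions, not the poster's code.

```cpp
// Sketch: create an 8 kHz mono secondary buffer and fill it with a 1 kHz sine.
// pDS is an already-initialised IDirectSound8*; buffer length is an assumption.
#include <windows.h>
#include <dsound.h>
#include <cmath>
#pragma comment(lib, "dsound.lib")

bool CreateToneBuffer(IDirectSound8* pDS, IDirectSoundBuffer** ppBuf)
{
    WAVEFORMATEX wfx = {};
    wfx.wFormatTag      = WAVE_FORMAT_PCM;
    wfx.nChannels       = 1;
    wfx.nSamplesPerSec  = 8000;                 // 8 kHz
    wfx.wBitsPerSample  = 16;
    wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
    wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

    DSBUFFERDESC desc = {};
    desc.dwSize        = sizeof(desc);
    desc.dwFlags       = DSBCAPS_GLOBALFOCUS;
    desc.dwBufferBytes = wfx.nAvgBytesPerSec;   // one second of audio
    desc.lpwfxFormat   = &wfx;

    if (FAILED(pDS->CreateSoundBuffer(&desc, ppBuf, NULL)))
        return false;

    void* p1 = NULL; DWORD b1 = 0; void* p2 = NULL; DWORD b2 = 0;
    if (FAILED((*ppBuf)->Lock(0, desc.dwBufferBytes, &p1, &b1, &p2, &b2, 0)))
        return false;

    short* samples = static_cast<short*>(p1);
    for (DWORD i = 0; i < b1 / sizeof(short); ++i)
        samples[i] = static_cast<short>(
            30000 * std::sin(2.0 * 3.14159265 * 1000.0 * i / 8000.0));
    (*ppBuf)->Unlock(p1, b1, p2, b2);
    return true;
}
```

Playing the buffer with `Play(0, 0, DSBPLAY_LOOPING)` and timing a known number of tone cycles against a wall clock is one way to confirm whether the output really runs fast.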
I am trying to write a program to play a small .wav file in C++. I programmed it following the DirectX SDK documentation, writing the data to a secondary static buffer and playing it from there. It runs correctly except that
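For comparison, a sketch of the usual write-and-play sequence on a static secondary buffer. It assumes the PCM data and `WAVEFORMATEX` have already been parsed out of the .wav file; the function and variable names are mine, not from the SDK sample.

```cpp
// Sketch: copy already-parsed PCM data into a static secondary buffer and play it once.
// pDS is an initialised IDirectSound8*; wfx/pcmData/pcmBytes come from a .wav parser (assumed).
#include <windows.h>
#include <dsound.h>
#include <cstring>

bool PlayPcm(IDirectSound8* pDS, const WAVEFORMATEX* wfx,
             const BYTE* pcmData, DWORD pcmBytes)
{
    DSBUFFERDESC desc = {};
    desc.dwSize        = sizeof(desc);
    desc.dwFlags       = DSBCAPS_STATIC | DSBCAPS_GLOBALFOCUS;
    desc.dwBufferBytes = pcmBytes;
    desc.lpwfxFormat   = const_cast<WAVEFORMATEX*>(wfx);

    IDirectSoundBuffer* pBuf = NULL;
    if (FAILED(pDS->CreateSoundBuffer(&desc, &pBuf, NULL)))
        return false;

    void* p1 = NULL; DWORD b1 = 0; void* p2 = NULL; DWORD b2 = 0;
    if (FAILED(pBuf->Lock(0, pcmBytes, &p1, &b1, &p2, &b2, 0)))
        return false;
    std::memcpy(p1, pcmData, b1);
    if (p2) std::memcpy(p2, pcmData + b1, b2);  // wrap-around part, if any
    pBuf->Unlock(p1, b1, p2, b2);

    return SUCCEEDED(pBuf->Play(0, 0, 0));      // play once, no looping
}
```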
I'm writing a managed wrapper around DirectSound. (It's a simple partial wrapper that solves my specific problem and nothing more. Don't tell me about NAudio or whatever.) Should a
I need help: I have a problem splitting the left and right channels of a stream captured from the microphone (or sound card).
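A minimal sketch of the splitting step itself, assuming the captured data is the usual interleaved 16-bit stereo PCM (L R L R ...); the capture code and function names here are illustrative only.

```cpp
// Sketch: split interleaved 16-bit stereo PCM captured from the microphone
// into separate left/right sample vectors. frameCount is the number of
// stereo frames, i.e. half the number of int16 samples.
#include <cstdint>
#include <vector>

void SplitStereo(const int16_t* interleaved, std::size_t frameCount,
                 std::vector<int16_t>& left, std::vector<int16_t>& right)
{
    left.resize(frameCount);
    right.resize(frameCount);
    for (std::size_t i = 0; i < frameCount; ++i)
    {
        left[i]  = interleaved[2 * i];      // even indices: left channel
        right[i] = interleaved[2 * i + 1];  // odd indices: right channel
    }
}
```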
I want to be able to detect audio from a stream, microphone, or sound card and begin recording when the audio level rises above a settable threshold. Is this possible using only C#?
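The gating logic itself is language-agnostic; a sketch of one common approach (RMS level over a block of samples compared against a threshold) is shown below in C++ like the other examples, and it ports directly to C#. The function name and the 0.0-1.0 threshold scale are my assumptions.

```cpp
// Sketch: decide whether a block of 16-bit samples is "loud enough" to start
// recording, by comparing its RMS level to a settable threshold in 0.0 .. 1.0.
// The block would come from whatever capture buffer the application uses.
#include <cstdint>
#include <cstddef>
#include <cmath>

bool AboveThreshold(const int16_t* samples, std::size_t count, double threshold)
{
    if (count == 0) return false;
    double sumSquares = 0.0;
    for (std::size_t i = 0; i < count; ++i)
    {
        double s = samples[i] / 32768.0;    // normalise to -1.0 .. 1.0
        sumSquares += s * s;
    }
    double rms = std::sqrt(sumSquares / count);
    return rms >= threshold;                // true -> start (or keep) recording
}
```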
Back again with yet another DirectSound question, this one regarding the ways DirectSound Buffers can be used:
Right-o, I'm working on implementing DirectSound in our Delphi VoIP app (the app allows for radios to be used by multiple users over a network connection)
Please help me make up my mind between two buffer-allocation strategies: allocate at application start and free at exit, or allocate when streaming starts and free as soon as streaming stops.
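The two policies are easier to compare if the buffer's lifetime is tied to a single owning object, so switching strategies only means moving where that object is declared. A small RAII-style sketch, with the creation details omitted and the class name my own:

```cpp
// Sketch: tie the secondary buffer's lifetime to a scope, so "lives as long as
// the app" vs. "lives as long as one streaming session" differ only in where
// this object is declared. Buffer creation is not shown here.
#include <dsound.h>

class ScopedStreamBuffer
{
public:
    explicit ScopedStreamBuffer(IDirectSoundBuffer* buf) : buf_(buf) {}
    ~ScopedStreamBuffer() { if (buf_) buf_->Release(); }   // freed when the scope ends

    IDirectSoundBuffer* get() const { return buf_; }

    ScopedStreamBuffer(const ScopedStreamBuffer&) = delete; // non-copyable: one owner
    ScopedStreamBuffer& operator=(const ScopedStreamBuffer&) = delete;

private:
    IDirectSoundBuffer* buf_;
};
```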
I'm trying to work out a simple DirectSound app, but the lack of .NET documentation has me frustrated.
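As a starting point, the minimal native start-up sequence a DirectSound app needs before any buffers can be created is sketched below; the managed wrappers expose the same two steps (create a device object, set the cooperative level). The window handle and the choice of `DSSCL_PRIORITY` are illustrative assumptions.

```cpp
// Sketch: minimal DirectSound initialisation - create the device on the default
// playback endpoint and set its cooperative level against the app's main window.
#include <windows.h>
#include <dsound.h>
#pragma comment(lib, "dsound.lib")

IDirectSound8* InitDirectSound(HWND hwnd)
{
    IDirectSound8* pDS = NULL;
    if (FAILED(DirectSoundCreate8(NULL, &pDS, NULL)))    // NULL = default device
        return NULL;
    if (FAILED(pDS->SetCooperativeLevel(hwnd, DSSCL_PRIORITY)))
    {
        pDS->Release();
        return NULL;
    }
    return pDS;                                          // caller Releases when done
}
```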