
Java - Broadcast voice over Java sockets

I have created a server app that receives sound from a client. I then broadcast this sound, which is stored as bytes, back to the clients that are connected to the server. At the moment I am only using one client for testing, and the client does receive the voice back, but the sound stutters the whole time. Could someone please tell me what I am doing wrong?

I think I understand part of why the sound isn't playing smoothly, but I don't understand how to fix the problem.

The code is below.

The Client:

The part that sends the voice to the server:

     public void captureAudio()
     {


      Runnable runnable = new Runnable(){

     public void run()
     {
          first=true;
          try {
           final AudioFileFormat.Type fileType = AudioFileFormat.Type.AU;                      
           final AudioFormat format = getFormat();
           DataLine.Info info = new DataLine.Info(TargetDataLine.class, format);
           line = (TargetDataLine)AudioSystem.getLine(info);               
           line.open(format);
           line.start();
                int bufferSize = (int) format.getSampleRate()* format.getFrameSize();
                byte buffer[] = new byte[bufferSize];           

                    out = new ByteArrayOutputStream();
                    objectOutputStream = new BufferedOutputStream(socket.getOutputStream());
                    running = true;
                    try {                      
                        while (running) {                         
                            int count = line.read(buffer, 0, buffer.length);
                            if (count > 0) {
                                // raw PCM is written straight to the socket
                                // with no framing or length prefix
                                objectOutputStream.write(buffer, 0, count);
                                out.write(buffer, 0, count);
                                // note: this AudioInputStream is never used
                                InputStream input = new ByteArrayInputStream(buffer);
                                final AudioInputStream ais = new AudioInputStream(input, format, buffer.length / format.getFrameSize());
                            }
                        }
                        out.close();
                        objectOutputStream.close();
                    }
                    catch (IOException e) {
                        System.out.println("exit");
                        System.exit(-1);
                    }
          }
          catch(LineUnavailableException e) {
            System.err.println("Line Unavailable:"+ e);
            e.printStackTrace();
            System.exit(-2);
          }
          catch (Exception e) {
           System.out.println("Direct Upload Error");
           e.printStackTrace();
          }
     }

     };

     Thread t = new Thread(runnable);
     t.start();

     }

The part that receives the bytes of data from the server:

    private void playAudio() {


    Runnable runner = new Runnable() {

    public void run() {
        try {
            InputStream in = socket.getInputStream();
            Thread playThread = new Thread();

            int count;
            byte[] buffer = new byte[100000];
            // read() returns however many bytes happen to be available, so
            // each chunk is an arbitrary slice of the PCM stream; note that
            // count is not passed on, so PlaySentSound plays the whole buffer
            while ((count = in.read(buffer, 0, buffer.length)) != -1) {
                PlaySentSound(buffer, playThread);
            }
        }
        catch(IOException e) {
                System.err.println("I/O problems:" + e);
                System.exit(-3);
        }
      }
    };

    Thread playThread = new Thread(runner);
    playThread.start();
    }//End of playAudio method

    private void PlaySentSound(final byte buffer[], Thread playThread)
    {

    synchronized(playThread)
    {

    Runnable runnable = new Runnable(){

    public void run(){
        try
        {

                InputStream input = new ByteArrayInputStream(buffer);
                final AudioFormat format = getFormat();
                final AudioInputStream ais = new AudioInputStream(input, format, buffer.length /format.getFrameSize());
                DataLine.Info info = new DataLine.Info(SourceDataLine.class, format);
                // a new SourceDataLine is opened, drained and closed for
                // every chunk, which inserts an audible gap between chunks
                sline = (SourceDataLine)AudioSystem.getLine(info);
                sline.open(format);
                sline.start();
                // duration in seconds is frames / frameRate (this value is unused)
                float audioLen = (buffer.length / format.getFrameSize()) / format.getFrameRate();

                int bufferSize = (int) format.getSampleRate() * format.getFrameSize();
                byte buffer2[] = new byte[bufferSize];
                // only write the bytes actually read; writing the whole buffer
                // would also play whatever stale data is left in it
                int count2 = ais.read(buffer2, 0, buffer2.length);
                if (count2 > 0) {
                    sline.write(buffer2, 0, count2);
                }
                // flush() discards queued data, so calling it before drain()
                // throws away unplayed audio; drain() alone waits for playback
                sline.drain();
                sline.stop();
                sline.close();
                buffer2 = null;


        }
        catch(IOException e)
        {
            e.printStackTrace();
        }
        catch(LineUnavailableException e)
        {
            e.printStackTrace();
        }
    }
    }; 
   playThread = new Thread(runnable);
   playThread.start();
   }


   }


In addition to HefferWolf's answer, I'd add that you're wasting a lot of bandwidth by sending the raw audio samples you read from the microphone. You don't say whether your app is restricted to a local network, but if you're going over the Internet it's common to compress the audio when sending and decompress it when receiving.

A commonly used compression scheme is the Speex codec (a Java implementation, JSpeex, is available here), which is relatively easy to use even though the documentation looks a bit scary if you're not familiar with audio sampling/compression.

On the client side, you can use org.xiph.speex.SpeexEncoder to do the encoding (a minimal sketch follows the list):

  • Use SpeexEncoder.init() to initialise an encoder (this must match the sample rate, number of channels and endianness of your AudioFormat), then
  • SpeexEncoder.processData() to encode a frame, and
  • SpeexEncoder.getProcessedDataByteSize() and SpeexEncoder.getProcessedData() to retrieve the encoded bytes.
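
For illustration, here is a minimal encoding sketch. It assumes the JSpeex API named above and narrowband 8 kHz, 16-bit mono audio; the SpeexSender class and its constants are hypothetical, so check the frame size against the JSpeex documentation:

    import org.xiph.speex.SpeexEncoder;

    public class SpeexSender {
        // narrowband Speex: mode 0; one frame is 160 samples (20 ms at 8 kHz)
        private static final int MODE = 0, QUALITY = 8;
        private static final int SAMPLE_RATE = 8000, CHANNELS = 1;
        private static final int FRAME_BYTES = 160 * CHANNELS * 2; // 16-bit PCM

        private final SpeexEncoder enc = new SpeexEncoder();

        public SpeexSender() {
            enc.init(MODE, QUALITY, SAMPLE_RATE, CHANNELS);
        }

        // encodes exactly one frame of little-endian 16-bit PCM
        public byte[] encodeFrame(byte[] pcm, int offset) {
            enc.processData(pcm, offset, FRAME_BYTES);
            byte[] encoded = new byte[enc.getProcessedDataByteSize()];
            enc.getProcessedData(encoded, 0);
            return encoded;
        }
    }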

On the receiving side, use org.xiph.speex.SpeexDecoder to decode the frames you receive (again, a sketch follows the list):

  • SpeexDecoder.init() to initialise the decoder using the same parameters as the encoder,
  • SpeexDecoder.processData() to decode a frame, and
  • SpeexDecoder.getProcessedDataByteSize() and SpeexDecoder.getProcessedData() to get the decoded data.
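
And a matching decoding sketch for the receiving side (same caveats; in the JSpeex build I've used, SpeexDecoder.init() takes the mode, sample rate, channel count and a perceptual-enhancement flag, but verify against your version):

    import java.io.StreamCorruptedException;

    import org.xiph.speex.SpeexDecoder;

    public class SpeexReceiver {
        private final SpeexDecoder dec = new SpeexDecoder();

        public SpeexReceiver() {
            // must mirror the encoder: narrowband (mode 0), 8 kHz, mono;
            // the final flag enables the decoder's perceptual enhancement
            dec.init(0, 8000, 1, true);
        }

        // decodes one received Speex frame back to 16-bit PCM
        public byte[] decodeFrame(byte[] encoded) throws StreamCorruptedException {
            dec.processData(encoded, 0, encoded.length);
            byte[] pcm = new byte[dec.getProcessedDataByteSize()];
            dec.getProcessedData(pcm, 0);
            return pcm;
        }
    }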

There's a bit more to it than I've outlined. E.g., you'll have to split the data into the correct frame size for the encoder, which depends on the sample rate, channels and bits per sample, but you'll see a dramatic drop in the number of bytes you're sending over the network.


You split the sound into chunks of up to 100,000 bytes quite arbitrarily and play these back on the client side without taking into account the sample rate and frame size calculated on the capture side, so you end up splitting pieces of sound that belong together in two.

You need to play back the same chunks on the receiving side as you send them from the capturing side. Maybe it is easier to send them using HTTP multipart (where splitting up data is quite easy) than doing it the basic way via raw sockets. The easiest way to do this is Apache Commons HttpClient; have a look here: http://hc.apache.org/httpclient-3.x/methods/multipartpost.html
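
If you stay with plain sockets, the simplest way to preserve chunk boundaries is to length-prefix each chunk. Here is a sketch using the standard DataOutputStream/DataInputStream; the Framing class and method names are made up for illustration, and in real code you would wrap each socket's streams once rather than on every call:

    import java.io.DataInputStream;
    import java.io.DataOutputStream;
    import java.io.IOException;
    import java.net.Socket;

    class Framing {
        // sender side: prefix each captured chunk with its byte count
        static void sendChunk(Socket socket, byte[] buffer, int count) throws IOException {
            DataOutputStream dos = new DataOutputStream(socket.getOutputStream());
            dos.writeInt(count);           // how many bytes follow
            dos.write(buffer, 0, count);   // only the bytes actually read from the line
            dos.flush();
        }

        // receiver side: read back exactly one chunk, never more, never less
        static byte[] readChunk(Socket socket) throws IOException {
            DataInputStream dis = new DataInputStream(socket.getInputStream());
            int len = dis.readInt();
            byte[] chunk = new byte[len];
            dis.readFully(chunk);          // blocks until the whole chunk has arrived
            return chunk;
        }
    }

This way the receiver can hand exactly one captured chunk at a time to the playback code instead of whatever read() happened to return.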
