
How to implement low-latency n-to-n audio chat

I am investigating how to implement an n-to-n audio chat (so, let's say, 4 people who all hear each other). This is fairly trivial using Flash or Wowza Media Server. The real problem is latency, because the 4 people in the chat have to do things as synchronously as possible (for example, something like singing together). Every millisecond matters.

What is your experience with ultra-low-latency audio chats?

  • What will be the lowest latency achievable?
  • How do you achieve it (which software, protocol, media server, bitrate)?

Thank you very much!


The lowest achievable latency depends on many factors your code has no control over, chiefly the network between the participants.

Now, if I were the one doing this project, I'd first see what algorithms and protocols are available for clock synchronization. Once that's in place, each host should likely just send time-stamped packets to the server. At the server end, you combine the packets that fall into the same time slot (summing the PCM samples from each machine is the usual way to mix audio) and send the result out again via multicast; there's a rough sketch of that below.
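Just to make "combine per time slot" concrete, here is a minimal Python sketch of that server loop. The port, the multicast group, the 20 ms frame size, and the 8-byte timestamp header are all my own assumptions, not anything Flash or Wowza defines, and a real server would also need to time out slots whose senders never show up.

    # Minimal sketch (packet format and addresses are assumptions): the server
    # groups incoming audio frames by timestamp slot, mixes each complete slot
    # by summing 16-bit PCM samples, and forwards the mix via multicast.
    import socket
    import struct
    import array

    SLOT_MS = 20                                  # assumed: 20 ms of audio per packet
    SAMPLES_PER_SLOT = 48000 * SLOT_MS // 1000    # assumed 48 kHz, 16-bit mono PCM
    PARTICIPANTS = 4                              # the 4 people from the question

    def mix_slot(frames):
        """Sum the PCM samples from each participant for one time slot, with clipping."""
        mixed = array.array('h', [0] * SAMPLES_PER_SLOT)
        for frame in frames:
            samples = array.array('h')
            samples.frombytes(frame)
            for i, s in enumerate(samples[:SAMPLES_PER_SLOT]):
                mixed[i] = max(-32768, min(32767, mixed[i] + s))
        return mixed.tobytes()

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 5004))                  # assumed ingest port
    slots = {}                                    # slot number -> {sender: frame bytes}

    while True:
        data, addr = sock.recvfrom(4096)
        # assumed header: 8-byte big-endian timestamp in milliseconds, then raw PCM
        ts_ms, = struct.unpack("!Q", data[:8])
        slot = ts_ms // SLOT_MS
        slots.setdefault(slot, {})[addr] = data[8:]
        if len(slots[slot]) == PARTICIPANTS:
            # every participant contributed: mix this slot and multicast it back out
            payload = struct.pack("!Q", slot * SLOT_MS) + mix_slot(slots.pop(slot).values())
            sock.sendto(payload, ("239.0.0.1", 5005))   # assumed multicast group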

The trouble is, even your own code will have problems: there is no way to reliably get those packets to the server in real time. UDP will drop packets, so you'll have to build in tolerance for late arrivals and no-shows. TCP is no better in this regard; sure, the packets are guaranteed to arrive in order, but at what cost in time? On top of that, compressing the sound at each host, decompressing it at the server, doing your combining, and re-compressing, all while maintaining the feel of real time, sounds awfully ambitious.
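To show what "tolerance for late arrivals and no-shows" might look like on the receiving side, here is a tiny jitter-buffer sketch, again with assumed frame size and delay budget: each frame is held for a small fixed delay, and silence is substituted for anything that misses its deadline.

    # Minimal jitter-buffer sketch (sizes are assumptions): hold each incoming
    # frame for a fixed budget; when a slot's deadline passes, play whatever
    # arrived and substitute silence for late packets or no-shows.
    import time

    SLOT_MS = 20                 # assumed 20 ms frames, matching the sketch above
    JITTER_BUDGET_MS = 40        # assumed: accept frames up to 40 ms late
    SILENCE = b"\x00" * 1920     # 20 ms of silence at 48 kHz, 16-bit mono

    pending = {}                 # slot number -> frame bytes

    def on_frame(slot, frame):
        """Called when a frame arrives from the network."""
        now_slot = int(time.time() * 1000) // SLOT_MS
        if slot >= now_slot - JITTER_BUDGET_MS // SLOT_MS:
            pending[slot] = frame        # still within the budget: keep it
        # otherwise the frame is too late; drop it rather than stall playback

    def next_frame_for_playback():
        """Called by the audio callback once per slot; never blocks."""
        play_slot = int(time.time() * 1000) // SLOT_MS - JITTER_BUDGET_MS // SLOT_MS
        return pending.pop(play_slot, SILENCE)   # silence covers a lost packet

The catch is that the jitter budget is itself added latency, so for "every millisecond matters" you want it as small as your network will tolerate.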

I am by no means an expert, nor do I have any experience doing this type of thing, but it just sounds tough.
