
How to find out when to increase bit rate? (TCP streaming solution)

We have an app that generates data: capturing live video from a camera, for example, and encoding it. We need to know how large one encoded frame can be so that it is sent over the network and received without delay; in other words, a live video stream over TCP. The main constraints are the user's other traffic and the overall load on the server. Our frames must be sized (size here == bit rate) so that the server receives them with minimal delay. TCP must be used in my case (we have to send every captured frame even if quality falls).

We have a stream of "frames". Each frame has a timestamp, and its bit rate property is effectively its size. We generate frames with our app and stream them one by one to our TCP server socket. The server posts replies, so after each sent frame we read from the socket and learn which timestamp the server is currently at. If that timestamp is behind the previous frame's, we lower the bit rate by 20%. This scheme seems to work and gives me one-way VBR (lowering only), but I wonder how to implement the increase. We could always try adding 5% per frame up to some desired limit, but every time that produces a delay we lose the real-time nature of the stream...

Generally, the scheme is meant to find out how much of the network is currently used by the user's other apps and, at the same time, how loaded the server is, so we can stream just the right amount of data for everyone to receive it in real time. So what should I do to add an increase to my scheme? Starting from a current bit rate A, I thought we could send 3 frames at +7% and then one at -20%; if all 3 frames at +7% arrived in time, we add 14% to A and repeat the cycle, and hopefully it would not be very noticeable if the 2nd frame arrived with a delay... A sketch of this cycle follows below.
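Roughly, I picture one probe cycle like the following Python sketch. The +7%, -20% and +14% steps are the ones described above; encode_next(), send_frame(), read_server_timestamp() and MAX_LAG are just placeholders for our encoder, our TCP socket code and whatever lag tolerance we settle on.

MAX_LAG = 0.2            # seconds the server may lag before a frame counts as late
MAX_BITRATE = 4_000_000

def arrived_on_time(frame, server_ts):
    return frame.timestamp - server_ts <= MAX_LAG

def probe_cycle(base_bitrate, encoder, send_frame, read_server_timestamp):
    """One cycle: 3 frames at +7%, one frame at -20%, then commit or not."""
    probes_ok = 0
    for _ in range(3):                                  # three probe frames at +7%
        frame = encoder.encode_next(base_bitrate * 1.07)
        send_frame(frame)
        if arrived_on_time(frame, read_server_timestamp()):
            probes_ok += 1

    frame = encoder.encode_next(base_bitrate * 0.80)    # one -20% cool-down frame
    send_frame(frame)
    read_server_timestamp()

    if probes_ok == 3:                                  # every probe arrived on time
        return min(MAX_BITRATE, base_bitrate * 1.14)    # commit the +14% increase
    # otherwise keep the old rate; the existing -20% rule still applies
    # whenever the server falls behind
    return base_bitrate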


Network conditions are too unstable, so you need to adapt to them constantly. Measure the round-trip delay: let the server send back acknowledgements carrying a packet id instead of a timestamp, so you avoid any time synchronization with the server (a complicated task). Compare the time the acknowledgement is received with the time the packet was sent to obtain the round-trip delay. Then analyze it: if it is lower than some threshold (e.g. 500 ms), you can increase the bitrate slightly; if it is higher than some other threshold (e.g. 1 s), reduce the bitrate. Keeping a gap between the two thresholds avoids endless bitrate modification.
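Something along these lines, as a rough Python sketch; the 500 ms / 1 s thresholds, the step sizes and the RttController / on_send / on_ack names are only illustrative, and time.monotonic() is used so that only the sender's clock is involved:

import time

LOW_THRESHOLD  = 0.5     # RTT below this: there is room to increase the bitrate
HIGH_THRESHOLD = 1.0     # RTT above this: reduce the bitrate
INCREASE_STEP  = 0.05
DECREASE_STEP  = 0.20

class RttController:
    def __init__(self, bitrate, min_bitrate=200_000, max_bitrate=4_000_000):
        self.bitrate = bitrate
        self.min_bitrate = min_bitrate
        self.max_bitrate = max_bitrate
        self.sent_at = {}                    # packet id -> local send time

    def on_send(self, packet_id):
        self.sent_at[packet_id] = time.monotonic()

    def on_ack(self, packet_id):
        sent = self.sent_at.pop(packet_id, None)
        if sent is None:
            return
        rtt = time.monotonic() - sent        # no clock sync with the server needed
        if rtt > HIGH_THRESHOLD:
            self.bitrate = max(self.min_bitrate, self.bitrate * (1 - DECREASE_STEP))
        elif rtt < LOW_THRESHOLD:
            self.bitrate = min(self.max_bitrate, self.bitrate * (1 + INCREASE_STEP))
        # RTTs between the two thresholds leave the bitrate unchanged,
        # which is what prevents endless oscillation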


If you can accept dropped frames in the video, you might want to consider using UDP instead. TCP guarantees delivery of packets, but you pay some overhead for that.


Your first instinct is right - just as you decrease the bitrate when the latency goes up too high, you can increase the bitrate when the latency drops very low.

This does mean you have to determine a maximum desirable latency - you are trading off image quality against latency. If you want the lowest latency your network can supply, set the maximum desirable latency to the network round-trip time.
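A rough way to calibrate that is to measure the idle round-trip time at startup and use it, plus a small margin, as the maximum desirable latency. In the sketch below, ping_server() is a hypothetical helper that sends a tiny probe packet over the existing connection and waits for its acknowledgement.

import time

def measure_baseline_rtt(ping_server, samples=10):
    rtts = []
    for _ in range(samples):
        start = time.monotonic()
        ping_server()                        # tiny packet, acknowledged by the server
        rtts.append(time.monotonic() - start)
    return min(rtts)                         # the best case approximates the path RTT

def max_desirable_latency(baseline_rtt, margin=0.05):
    # the margin is the trade-off knob: a larger margin buys more image
    # quality at the cost of extra latency
    return baseline_rtt + margin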
