How do I implement sender-side TCP traffic limiting?
I'm about to implement a webcam video chat system for multiple users in C++ (Windows/Linux). As the 'normal' user is usually connected via DSL/cable, there is a strong bandwidth limitation for my (preferred) TCP/IP connections.
The basic idea is to transmit the highest possible framerate given a bandwidth limitation on the sender side. (Other applications may still require internet bandwidth in the background.) In a second step, the camera capture rate shall be automatically adjusted to the network limitations to avoid unnecessary CPU overhead.
What I have is a constant stream of compressed images (with strongly varying buffer sizes) that have to be transmitted to the remote side. Given a limitation of, let's say, 20 kB/s, how do I best implement that limitation? (Note that the user shall define this limit!)
Thanks in advance, Mayday
Edit: Question clarifications (sorry!)
- It's about how to traffic-shape an arbitrary TCP/IP connection.
- It's not how to implement image rate/quality reduction, as my use case suggests. (Although I didn't consider automatically adjusting image compression yet. (Thanks, Jon))
There are two things you can do to reduce your bandwidth:
- Send smaller images (more compression)
- Send fewer images
When implementing an algorithm that picks image size and frequency to honor the user-selected limit, you have to balance between a simple, robust algorithm and a performant one (one that makes maximum use of the limit).
The first approach I would try is to use a rolling average of the bandwidth you are using at any point in time to "seed" your algorithm. Every once in a while, check the average. If it becomes more than your limit, instruct the algorithm to use less (in proportion to how much you overstepped the limit). If it becomes significantly lower than your limit, say less than 90%, instruct the algorithm to use more.
The less/more instruction might be a variable (maybe an int or a float; really, there is much scope for inventiveness here) used by your algorithm to decide:
- How often to capture an image and send it
- How hard to compress that image
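The feedback loop described above might be sketched like this. Note this is an illustrative design, not a tested implementation: the class and method names are my own, the window length and the 10% step sizes are arbitrary tuning knobs, and the resulting "quality" value is just a number the capture/compression code would consult.

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <deque>
#include <numeric>

// Hypothetical sketch: keep a rolling average of bytes sent per second
// and derive a throttle value in (0, 1] for the capture/compression code.
class BandwidthGovernor {
public:
    BandwidthGovernor(double limitBytesPerSec, std::size_t windowSecs)
        : limit_(limitBytesPerSec), window_(windowSecs) {}

    // Call once per second with the number of bytes sent in that second.
    void recordSecond(double bytesSent) {
        samples_.push_back(bytesSent);
        if (samples_.size() > window_) samples_.pop_front();

        const double avg =
            std::accumulate(samples_.begin(), samples_.end(), 0.0)
            / static_cast<double>(samples_.size());

        if (avg > limit_) {
            // Over the limit: back off in proportion to the overshoot.
            quality_ *= limit_ / avg;
        } else if (avg < 0.9 * limit_) {
            // Significantly under the limit: cautiously ramp back up.
            quality_ = std::min(1.0, quality_ * 1.1);
        }
    }

    // 1.0 = capture/compress at full rate; smaller = throttle harder.
    double quality() const { return quality_; }

private:
    double limit_;
    std::size_t window_;
    std::deque<double> samples_;
    double quality_ = 1.0;
};
```

The capture thread would map `quality()` onto a capture interval and a JPEG quality setting, however your codec exposes those.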
You need a buffer / queue of at least 3 frames:
- One frame currently being sent to the network;
- One complete frame to be sent next;
- One frame currently being copied from the camera.
When the network sender finishes sending a frame, it copies the "to be sent next" frame to the "currently sending" slot. When the camera reader finishes copying a frame from the camera, it replaces the "to be sent next" frame with the copied frame. (Obviously, synchronisation is required around the "to be sent next" frame).
The sender can then modulate its sending rate as it sees fit. If it's running slower than the camera, it will simply drop frames.
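A minimal sketch of the synchronised "to be sent next" slot might look like the following (names are illustrative). The sender holds the "currently sending" frame as its own local copy after taking it, and the camera holds the frame it is currently filling, so only the shared slot needs a lock; publishing over an unconsumed frame is exactly the frame-dropping behaviour described above.

```cpp
#include <cassert>
#include <mutex>
#include <utility>
#include <vector>

using Frame = std::vector<unsigned char>;

// Shared "to be sent next" slot between the camera thread and the
// network sender thread. Only this slot requires synchronisation.
class FrameExchange {
public:
    // Camera side: replace the "to be sent next" frame. If the sender
    // is running slower than the camera, the old frame is dropped.
    void publish(Frame f) {
        std::lock_guard<std::mutex> lock(m_);
        next_ = std::move(f);
        hasNext_ = true;
    }

    // Sender side: take the latest complete frame, if one is available.
    bool take(Frame& out) {
        std::lock_guard<std::mutex> lock(m_);
        if (!hasNext_) return false;
        out = std::move(next_);
        hasNext_ = false;
        return true;
    }

private:
    std::mutex m_;
    Frame next_;
    bool hasNext_ = false;
};
```

In the real application, `publish` would be called from the camera reader and `take` from the sender loop, with the sender sleeping between sends according to its current pacing decision.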