Real examples: what is a latency-driven or performance-driven application in Java?
Could you please give me a real example of a latency-driven or performance-driven application? What are the differences between the two, and what requirements do they place on system design in Java?
Thanks.
Examples
An example of a latency-driven Java application is a signal processor or command-and-control unit for a radar. The JEOPARD project recently implemented such a thing, and the AN/FPS-85 radar is another example. Both are Java systems, and both run on a Real-Time Java implementation; the latter uses RTSJ.
Why are they "latency-driven"? Well, computations are only correct if they are delivered on time -- when the computation is intended to steer a phased-array radar beam so that it hits the predicted location of an object under track, the computation is incorrect if it is late. There is therefore a latency bound on the loop that runs from the last paint of the object to the control that steers the beam onto the next predicted location.
These types of systems do have throughput requirements, but they tend not to be the driving requirements. Instead, specific latencies for specific activities must be met for correct operation, and that is the primary correctness metric.
Design techniques for these systems
There are two common approaches. The first is basically to ignore the time requirements (latency, etc.), get the code "working" in the sense of being computationally correct, and then performance-engineer/optimize until the system implicitly behaves as you want. The second is to articulate clear timeliness requirements and design each component with those requirements in mind. Given my background, I'm strongly biased toward the second path, because the cost of taking a random conventional development through integration and test and then tuning it into correct behavior tends to be very high and very risky. The more performance/latency-dependent the system is, the more you should ignore the rule "avoid premature optimization." It's not optimization if it's a correctness criterion. (This is not an excuse to write murky, fast code, but a bias.)
If some measure of end-to-end latency is required, a typical approach is to analyze the conditions you expect to be stressing and develop a "latency budget" that allocates portions of the total latency to the sequential bits of computation. As the system evolves the budget may shift around, but it becomes a useful design and test tool.
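To make the idea concrete, here is a minimal sketch of how a latency budget might be encoded and checked in plain Java; the stage names and millisecond allocations are hypothetical, not taken from any particular system:

```java
import java.util.LinkedHashMap;
import java.util.Map;

/** Hypothetical end-to-end latency budget for one track-update cycle. */
public class LatencyBudget {
    // Sequential stages and the slice of the end-to-end budget each one gets.
    private final Map<String, Long> allocationsNanos = new LinkedHashMap<>();

    public void allocate(String stage, long millis) {
        allocationsNanos.put(stage, millis * 1_000_000L);
    }

    /** Warn when a measured stage overruns its allocation. */
    public void check(String stage, long elapsedNanos) {
        Long budget = allocationsNanos.get(stage);
        if (budget != null && elapsedNanos > budget) {
            System.err.printf("%s overran budget: %d us > %d us%n",
                    stage, elapsedNanos / 1000, budget / 1000);
        }
    }

    public static void main(String[] args) {
        LatencyBudget budget = new LatencyBudget();
        budget.allocate("ingest", 2);  // illustrative allocations only
        budget.allocate("track", 5);
        budget.allocate("steer", 3);   // total end-to-end budget: 10 ms

        long t0 = System.nanoTime();
        // ... do the "ingest" work here ...
        budget.check("ingest", System.nanoTime() - t0);
    }
}
```

The point is less the mechanism than the discipline: each sequential stage owns a slice of the end-to-end bound, so overruns become visible during test rather than at integration.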
Finally, in Java, this might manifest itself in three different approaches, which really lie on a spectrum:
1) Just build the damn thing, and tune it once it more or less works. (Conventional design usually works this way.)
2) Build the thing, but also build in instrumentation/metrics to explicitly carry latency context along as work units progress through your software. A simple example is to timestamp arriving data and pass that timestamp along with the packet/unit as it is operated on (see the first sketch after this list). This is really easy for some systems and basically impossible for others. Where it's possible, it's highly recommended, because the timeliness context is then explicitly available and can be used when making resource-management decisions (i.e., assigning thread priorities, deadlines, queue priorities, etc.)
3) Do the analysis up-front, and use a real-time stack with formal timeliness parameters (a rough RTSJ sketch follows below). This is the heavyweight solution, and it is appropriate when you have high-criticality, safety-critical, or simply hard real-time constraints. Even if you aren't in that world, RTSJ implementations like Oracle's JavaRTS still offer benefits for soft real-time systems simply because they reduce jitter/non-determinism. There is usually a tradeoff here against raw throughput performance.
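For approach 2, here is a minimal sketch of carrying an arrival timestamp along with each work unit through a simple queue-based pipeline; the class names and the 5 ms deadline are illustrative, not from any real system:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

/** A work unit that carries its own timeliness context. */
final class TimedPacket {
    final byte[] payload;
    final long arrivalNanos;  // stamped once, at the edge of the system

    TimedPacket(byte[] payload) {
        this.payload = payload;
        this.arrivalNanos = System.nanoTime();
    }

    long ageNanos() {
        return System.nanoTime() - arrivalNanos;
    }
}

public class TimedPipeline {
    private static final long DEADLINE_NANOS = 5_000_000L; // 5 ms, illustrative

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<TimedPacket> queue = new LinkedBlockingQueue<>();
        queue.put(new TimedPacket(new byte[]{1, 2, 3}));

        TimedPacket p = queue.take();
        // The timeliness context travels with the packet, so downstream
        // stages can make resource-management decisions with it.
        if (p.ageNanos() > DEADLINE_NANOS) {
            // e.g., drop stale data, raise priority, or log a deadline miss
            System.err.println("stale packet, age " + p.ageNanos() / 1000 + " us");
        }
    }
}
```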
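For approach 3, a rough sketch of what formal timeliness parameters look like under RTSJ (javax.realtime). This assumes an RTSJ implementation (e.g., JavaRTS) is installed; the priority value and the 10 ms period/deadline are illustrative only:

```java
import javax.realtime.PeriodicParameters;
import javax.realtime.PriorityParameters;
import javax.realtime.RealtimeThread;
import javax.realtime.RelativeTime;

public class PeriodicSteeringTask {
    public static void main(String[] args) {
        // Illustrative value: the valid real-time priority range is
        // implementation-specific (query the scheduler in real code).
        PriorityParameters priority = new PriorityParameters(38);

        // Release every 10 ms; here the deadline equals the period.
        PeriodicParameters period = new PeriodicParameters(
                null,                     // start: first release is immediate
                new RelativeTime(10, 0),  // period: 10 ms
                null,                     // cost estimate (optional)
                new RelativeTime(10, 0),  // deadline: 10 ms
                null,                     // cost-overrun handler
                null);                    // deadline-miss handler

        RealtimeThread rt = new RealtimeThread(priority, period) {
            @Override
            public void run() {
                do {
                    // ... compute the next beam-steering command here ...
                } while (waitForNextPeriod()); // blocks until the next release
            }
        };
        rt.start();
    }
}
```

The key difference from approach 2 is that the deadline and period are first-class scheduling parameters the runtime enforces, rather than data your own code inspects.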
I have only addressed the computational side here. Obviously, if your system includes or is even defined by networks, there's a whole world of latency/QoS management on that side. Common interfaces for time-sensitive Java applications there might include RSVP, or perhaps specific middleware like DDS or CORBA. Probably half of the existing time-sensitive applications eschew middleware in favor of their own TCP-, UDP-, raw-IP-, or even specialized low-level solutions, or are built on top of a proprietary/special-purpose bus.
Worst Case vs. Common Case
In networking terms, throughput and latency are distinct dimensions of system performance. Throughput measures the rate (units per second) at which the system can process/transfer information. Latency measures the time (seconds) within which a computation or communication completes. Both can be used in common- or worst-case descriptions of performance, though it's a little hard to get your arms around "worst-case throughput" in many settings. For a specific example, consider the difference between a satellite link and a copper link over the same distance. The satellite link has high latency (tens to hundreds of milliseconds) because of speed-of-light delay, but it may also have very high bandwidth, and thus higher throughput. A single copper cable might have lower latency, but also lower throughput (due to lower bandwidth).
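The propagation numbers are easy to sanity-check with a back-of-the-envelope calculation. The orbit altitude below is the standard geostationary figure; the 1000 km copper run and the 2/3 c propagation factor are illustrative assumptions:

```java
public class LinkComparison {
    public static void main(String[] args) {
        final double C_KM_PER_S = 299_792;      // speed of light in vacuum
        final double GEO_ALTITUDE_KM = 35_786;  // geostationary orbit

        // One-way path is ground -> satellite -> ground.
        double satLatencyMs = 2 * GEO_ALTITUDE_KM / C_KM_PER_S * 1000;
        System.out.printf("GEO satellite one-way latency: ~%.0f ms%n", satLatencyMs);
        // prints ~239 ms, before any queuing or processing delay

        // Copper over 1000 km: signals propagate at roughly 2/3 c.
        double copperLatencyMs = 1000 / (0.66 * C_KM_PER_S) * 1000;
        System.out.printf("1000 km copper one-way latency: ~%.1f ms%n", copperLatencyMs);
        // prints ~5.1 ms -- far lower latency, even though the cable's
        // bandwidth (and hence throughput) may be much lower than the
        // satellite link's
    }
}
```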
In the computational setting, latency tends to be a measure of worst-case computation (though you often care about average latency, too), while throughput tends to be a measure of common-case computation rate. Examples of common latency metrics might be task-switch latency, interrupt service latency, packet service latency, etc.
Real-time or "time-critical" systems TEND to be dominated by concern for worst-case behaviors, and worst-case latencies in particular. Conventional/general-purpose systems TEND to be dominated by concern for maximum throughput. Soft real-time systems (e.g., VOIP or media) tend to manage both simultaneously, and tolerate a wider range of tradeoffs. There are corner cases like user interfaces, where perceived performance is a complicated mixture of both.
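One cheap way to see why worst-case behavior dominates the real-time conversation is to measure timer jitter on a stock JVM; the 1 ms sleep and the iteration count below are arbitrary choices:

```java
public class JitterProbe {
    public static void main(String[] args) throws InterruptedException {
        final long periodNanos = 1_000_000L; // ask for a 1 ms sleep
        long worstOvershoot = 0;

        for (int i = 0; i < 1_000; i++) {
            long t0 = System.nanoTime();
            Thread.sleep(1);
            long overshoot = (System.nanoTime() - t0) - periodNanos;
            worstOvershoot = Math.max(worstOvershoot, overshoot);
        }
        // The average overshoot is usually small; the worst case -- one GC
        // pause or scheduler hiccup -- is what real-time systems must bound.
        System.out.printf("worst overshoot over 1000 sleeps: %d us%n",
                worstOvershoot / 1000);
    }
}
```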
Edit to add: some related, Java-specific SO questions: "Coded using primitives only?" and "RTSJ implementations."
Latency is a networking term; think of it as "time to get the first byte."
Bandwidth is the other, related networking term. Think of it as "time to transfer a large block of data."
These two things are more or less independent factors. For example, Netflix sending you a Blu-ray is high latency (it takes a long time to get the first bit) but also high bandwidth (you get lots and lots of data in one fell swoop).
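Putting rough numbers on that (the 50 GB disc capacity and one-day shipping time are illustrative assumptions):

```java
public class SneakernetMath {
    public static void main(String[] args) {
        double discBytes = 50e9;         // dual-layer Blu-ray, ~50 GB
        double shippingSeconds = 86_400; // one day in the mail

        // Latency: time to the first byte is the whole shipping delay.
        System.out.println("latency: ~24 hours to the first byte");

        // Throughput: total bits divided by total time.
        double mbps = discBytes * 8 / shippingSeconds / 1e6;
        System.out.printf("effective throughput: ~%.1f Mbit/s%n", mbps);
        // ~4.6 Mbit/s sustained -- and mailing a whole box of discs
        // multiplies the throughput without improving the latency at all
    }
}
```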
Performance is a higher-level concept. Performance is totally subjective -- it can really only be discussed as a delta compared to another system.
Latency, bandwidth, CPU, memory, bus, disk, and of course the code itself are all factors in overall performance.