high performance virtual CPU - brainstorming + feedback?
I'm thinking of writing a virtual CPU for the Linux kernel that decides when it is a good idea to execute a thread or process on another machine. Any feedback or ideas are highly welcome.
The overall workflow would be something like this:
- Heuristically profile a thread/process. If it is lightweight (no CPU-heavy work), run it on the local physical CPUs and cache the result to speed up the lookup the next time the same process/thread comes around.
- If the process/thread is heavyweight (CPU-intensive), ship it off to be executed on another machine sitting somewhere on a nearby network.
- Adjust the decisions according to network delay; e.g. if the network is too slow, more tasks get executed locally (see the rough sketch after this list).
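
To make the heuristic concrete, here is a minimal userspace C sketch of the kind of cost model and decision cache I have in mind. Everything in it is a placeholder of my own invention (the names `should_run_remote` and `place_task`, the cache layout, the example numbers); a real implementation would live in the kernel and measure these values instead of receiving them as parameters.

```c
#include <stdbool.h>
#include <stdio.h>

/* Per-executable cache entry: remembers the last placement decision so a
 * lightweight process is not re-profiled on every fork/exec. */
struct placement_cache_entry {
    unsigned long exe_id;     /* hypothetical key, e.g. a hash of the executable path */
    bool          run_remote; /* cached decision */
    bool          valid;
};

#define CACHE_SIZE 256
static struct placement_cache_entry cache[CACHE_SIZE];

/* Estimated costs for the task, all in microseconds:
 * cpu_us         - CPU time the heuristic predicts the task needs
 * migrate_us     - cost of shipping the task's state to the remote machine
 * net_rtt_us     - currently measured round-trip time to the remote machine
 * remote_speedup - how much faster (>1.0) the remote CPUs are, if at all */
static bool should_run_remote(double cpu_us, double migrate_us,
                              double net_rtt_us, double remote_speedup)
{
    double local_cost  = cpu_us;
    double remote_cost = cpu_us / remote_speedup + migrate_us + net_rtt_us;

    /* If the network is slow, net_rtt_us dominates and the task stays local. */
    return remote_cost < local_cost;
}

/* Look up a cached decision; fall back to the cost model on a miss. */
static bool place_task(unsigned long exe_id, double cpu_us, double migrate_us,
                       double net_rtt_us, double remote_speedup)
{
    struct placement_cache_entry *e = &cache[exe_id % CACHE_SIZE];

    if (e->valid && e->exe_id == exe_id)
        return e->run_remote;

    e->exe_id     = exe_id;
    e->run_remote = should_run_remote(cpu_us, migrate_us, net_rtt_us, remote_speedup);
    e->valid      = true;
    return e->run_remote;
}

int main(void)
{
    /* Lightweight task: 200 us of CPU work, 5 ms RTT -> stays local (prints 0). */
    printf("light task remote? %d\n", place_task(1, 200.0, 1000.0, 5000.0, 2.0));

    /* Heavyweight task: 5 s of CPU work -> worth shipping out (prints 1). */
    printf("heavy task remote? %d\n", place_task(2, 5e6, 1000.0, 5000.0, 2.0));
    return 0;
}
```

The hard part is obviously where those numbers come from: the kernel would have to sample per-executable CPU usage, keep a moving average of the network RTT, and invalidate cached decisions when the estimates drift.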
In other words, from a high-level view there would be a single virtual CPU in the kernel that all applications run on. From within the kernel, the virtual CPU decides where to execute a given process/thread so as to maximize system throughput.
Certainly this task would be simpler if the application/process/thread were designed for it (e.g. using MPI), but my goal is to create something for generic applications, such as Apache HTTPD. Apache HTTPD, for example, spawns a process per request. What if each of those processes were executed on the ideal CPU, be it local or remote, to maximize throughput? Many other applications fork processes or spawn threads, and depending on the nature of the beast the virtual CPU could decide for them.
Any hints? Advice? Problems? Must-read documents? Rants that this won't work?
My most awesome regards