
Is it possible to use files as a bidirectional communication channel between two remote processes (sort of "sockets over files")?

Here is the scenario:

  • A user has access to two machines

  • These machines can't communicate with network sockets because of firewall restrictions

  • But both have access to a common network share, with read/write permissions, on a third machine

My question is: is it possible to write a small application, executed on both machines, that establishes a communication channel between the two using only files on the network share? Ideally it would emulate stream and socket behaviour.

I imagine that:

1) it would involve two files used for communication, one for each direction,

2) and the ability to read a file while another process is writing to it... over the network.

But I'm not sure if it is feasible, mainly because I have doubts about point 2). Maybe it is possible in Unix-like environments with NFS, though.

Is it possible? Is it already existing?


I think it's a good idea to split the stream into packets first. These packets then appear as files in the common directory. There must be two "namespaces" for the two directions (a->b and b->a). For easy debugging, file names should contain a timestamp, not just an incrementing counter.

There's only one issue with files: even if the file is small, the receiver can pick it up before it is fully flushed, meaning the file is only halfway written (a common case: 0 bytes long), or a network error occurred during the transfer. To avoid this situation, the sender should:

  • create the file under a temporary name,
  • write the content and close it,
  • then rename it to the final name (with timestamp etc.).

So, the receiver will only pick up the file after the rename, when it is certain that the file is fully written.
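A minimal sketch of this write-then-rename protocol in Python (the directory layout and naming scheme here are my own assumptions; `rename` is atomic on POSIX file systems and usually behaves atomically on NFS as well, but that is worth verifying on the share in question):

```python
import os
import time

def send_packet(share_dir, direction, payload):
    """Write a packet atomically: temp file first, then rename."""
    tmp_path = os.path.join(share_dir, direction + ".tmp")
    with open(tmp_path, "wb") as f:
        f.write(payload)
        f.flush()
        os.fsync(f.fileno())  # push the bytes out before renaming
    # The receiver only ever sees fully-written files.
    final_path = os.path.join(share_dir, "%s_%d" % (direction, time.time_ns()))
    os.rename(tmp_path, final_path)
    return final_path
```

A leftover `.tmp` file at startup then directly signals an aborted transmit.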

Before sending a new file, the sender may check for the temporary file; if it exists, it means that the last transmit was aborted.

When creating a new transmit file, the sender should also create an info file containing information about the packet being sent, so that after an aborted transfer it describes the failed packet. Maybe the only information is the time of the transmit (remember: the temporary file name does not contain a timestamp), but it's better than nothing.


How about this:
When machine A wants to send a message to machine B, it creates a file called _toBfromA_12782 (where 12782 is the current timestamp). It writes the content to the file and then renames it to remove the leading underscore.
Every few seconds, each machine checks for files whose names start with toX, where X is its own name. If multiple messages are found, they can be ordered by their timestamps. After reading a message file, the recipient deletes it.
This assumes that all participants have synchronized clocks, but if they don't, this too can be worked around (or simply ignored).
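The scheme above can be sketched in a few lines of Python (the helper names are mine; `time.time_ns()` stands in for the timestamp, and since the values all have the same number of digits, sorting the file names lexicographically doubles as timestamp ordering):

```python
import os
import time

def send(share, sender, recipient, data):
    name = "to%sfrom%s_%d" % (recipient, sender, time.time_ns())
    tmp = os.path.join(share, "_" + name)  # leading underscore: still being written
    with open(tmp, "wb") as f:
        f.write(data)
    os.rename(tmp, os.path.join(share, name))  # visible to the receiver from now on

def receive(share, me):
    """Collect queued messages addressed to `me`, oldest first, deleting each."""
    messages = []
    for name in sorted(os.listdir(share)):
        if name.startswith("to" + me):
            path = os.path.join(share, name)
            with open(path, "rb") as f:
                messages.append(f.read())
            os.remove(path)
    return messages
```
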


I vaguely remembered something about FIFO files on UNIX. A web search confirmed that they were what I remembered: a way of performing communication via files. I haven't tested whether they work between two distinct machines that have access to the same file system, but I think they probably will. I may have a go at mocking something up later, when I have access to a Unix system, to satisfy my curiosity.

Essentially, you have to create a FIFO file using mkfifo. Then you can open the file and use blocking reads/writes to process it (each 'open' can either read or write, not both at the same time, so you would need two, one for each direction). Some other descriptions of the process, which include some code samples, can be found here and here.
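For the record, the same blocking behaviour looks like this from Python on a POSIX system (opening a FIFO blocks until the other end is opened too, so the writer runs in a separate thread here; the path and message are just placeholders):

```python
import os
import tempfile
import threading

fifo_path = os.path.join(tempfile.mkdtemp(), "mypipe")
os.mkfifo(fifo_path)  # POSIX only

def writer():
    # open() for writing blocks until a reader opens the FIFO
    with open(fifo_path, "w") as f:
        f.write("hello over the fifo\n")

t = threading.Thread(target=writer)
t.start()

# open() for reading blocks until a writer shows up, then read one line
with open(fifo_path) as f:
    line = f.readline()
t.join()
```
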

I tested mkfifo, using standard unix commands:

Create the pipe:

mkfifo mypipe

Write everything from one window to the pipe:

cat > mypipe

Read everything from the pipe to another window:

cat mypipe

The pipe worked as expected: type in one window, and it appears in the other. Sadly, however, this only seems to work (at least for me) when the processes are running on the same machine, so it doesn't really help with your problem. But I'll leave the answer in case it's helpful to somebody in the future...


See whether one of these solutions works for you; it depends on whether your network file system allows one of the monitoring methods mentioned:

Monitoring a folder for new files in Windows

Make two folders; each process writes to one of them, using a timestamp for the filename and opening the file in exclusive mode. Each process monitors the other folder, and when a file appears, waits until it is completely written, then reads it and deletes it.
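A polling version of that monitor might look like this in Python (the size-stability check is my own heuristic for "written completely"; on Windows you could instead rely on an exclusive-mode open failing while the writer still holds the file):

```python
import os
import time

def wait_until_stable(path, interval=0.2):
    """Heuristic: treat the file as complete once its size stops growing."""
    size = -1
    while True:
        current = os.path.getsize(path)
        if current == size and current > 0:
            return
        size = current
        time.sleep(interval)

def watch(folder, poll=0.5):
    """Poll `folder` forever, yielding (name, content) for each new file."""
    while True:
        for name in sorted(os.listdir(folder)):
            path = os.path.join(folder, name)
            wait_until_stable(path)
            with open(path, "rb") as f:
                content = f.read()
            os.remove(path)  # consume the message
            yield name, content
        time.sleep(poll)
```
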


mmap() can be used to share a file between two processes; it is a classic IPC strategy, and historically some kernel implementations of shared-memory APIs used temporary inodes as a backing store.

From the kernel's perspective, the only difference in your situation is that the file being used for IPC would be backed by inodes that go through the VFS subsystem to the NFS share.
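As a local illustration of the mmap() idea in Python (two shared mappings of the same file stand in for the two processes; note that mmap coherence between different NFS client machines is not guaranteed, so this only demonstrates the single-host case):

```python
import mmap
import os
import tempfile

path = os.path.join(tempfile.mkdtemp(), "ipc.bin")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)  # fixed-size backing file for the mapping

# Two independent shared mappings stand in for two processes.
fd_a = open(path, "r+b")
fd_b = open(path, "r+b")
map_a = mmap.mmap(fd_a.fileno(), 4096)
map_b = mmap.mmap(fd_b.fileno(), 4096)

map_a[0:5] = b"hello"         # "process A" writes into the shared page
message = bytes(map_b[0:5])   # "process B" reads the same page
```
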

