AsynchronousByteCharChannel and Timeouts
cowwoc at bbs.darktech.org
Thu Aug 6 06:45:31 PDT 2009
Alan Bateman wrote:
> It's done using overlapped I/O with WSARecv/WSASend. A completion port
> is used to receive the notifications when the I/O operations complete.
> We don't make use of I/O cancellation (Windows Vista did bring a new
> Win32 call to cancel specific I/O operations but it doesn't provide the
> guarantees that we require and we don't use it).
> If it were implemented as blocking sockets then the same issue would
> arise (from the Win32 docs: "If a send or receive operation times out on
> a socket, the socket state is indeterminate, and should not be used; TCP
> sockets in this state have a potential for data loss, since the
> operation could be canceled at the same moment the operation was to be
> completed.")
Weird. COM ports (and sockets too, I believe!) can be treated as HANDLEs
in the Win32 API. You can then use ReadFile() and CancelIo() to read and
cancel overlapped operations. CancelIo() has the limitation that it can
only cancel I/O issued from the thread that initiated the read, but you
can use completion ports to work around this and essentially get the
Windows Vista functionality back on Windows XP. That's what I'm using
for COM ports. Couldn't you do the same for sockets?
> For the case that the timer expires at just around the time that 5 bytes
> have been read into your buffer then the best solution is to have the
> I/O operation complete (successfully) and the buffer position moved on
> by 5 bytes. If you are saying that this is not feasible (because the
> underlying read returns 0) then are those 5 bytes lost?
To be honest, it's not clear. CancelIo() claims to cancel any pending
I/O operations. My interpretation is that if I issue an overlapped read
for 10 bytes but call CancelIo() after 5 have arrived, the bytes already
read would get buffered for future reads (i.e. I don't lose any data).
I posted this question to
> If this were a stream/socket then this data loss would be a disaster if the application
> were to read bytes after the timeout and data loss. For a serial
> connection (we are talking RS232 or descendants, right?) then it's not
> too bad as there will likely be a high level protocol to detect errors.
While this is true at a higher level, I can't assume that all serial
protocols will have error detection/correction, and users would
probably refuse to use a library that silently eats bytes ;) At the
very least I would need to make such failures more explicit.
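[For readers browsing the archive: the NIO.2 API that came out of this discussion makes the timeout semantics explicit. AsynchronousSocketChannel.read() accepts a timeout, and if the timer fires before the read completes, the handler's failed() method is invoked with InterruptedByTimeoutException; per the javadoc, further reads on that channel may then fail with a runtime exception, so the safe recovery is to close it. A minimal sketch (the TimeoutDemo class name and the 200 ms timeout are illustrative choices, not from the thread):]

```java
import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.AsynchronousServerSocketChannel;
import java.nio.channels.AsynchronousSocketChannel;
import java.nio.channels.CompletionHandler;
import java.nio.channels.InterruptedByTimeoutException;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

public class TimeoutDemo {
    public static void main(String[] args) throws Exception {
        // Server accepts the connection but deliberately writes nothing,
        // so the client's timed read is guaranteed to time out.
        AsynchronousServerSocketChannel server = AsynchronousServerSocketChannel.open()
                .bind(new InetSocketAddress("127.0.0.1", 0));
        AtomicReference<AsynchronousSocketChannel> accepted = new AtomicReference<>();
        server.accept(null, new CompletionHandler<AsynchronousSocketChannel, Void>() {
            public void completed(AsynchronousSocketChannel ch, Void att) {
                accepted.set(ch); // hold the connection open, send no bytes
            }
            public void failed(Throwable exc, Void att) { }
        });

        AsynchronousSocketChannel client = AsynchronousSocketChannel.open();
        client.connect(server.getLocalAddress()).get();

        ByteBuffer buf = ByteBuffer.allocate(10);
        CountDownLatch done = new CountDownLatch(1);
        AtomicReference<Throwable> failure = new AtomicReference<>();
        // Timed read: no data will arrive, so failed() should be invoked
        // with InterruptedByTimeoutException after ~200 ms.
        client.read(buf, 200, TimeUnit.MILLISECONDS, null,
                new CompletionHandler<Integer, Void>() {
                    public void completed(Integer bytesRead, Void att) {
                        done.countDown();
                    }
                    public void failed(Throwable exc, Void att) {
                        failure.set(exc);
                        done.countDown();
                    }
                });
        done.await();
        System.out.println(failure.get() instanceof InterruptedByTimeoutException);

        // After a timed-out read the channel's state is unspecified for
        // further reads; the only safe recovery is to close it.
        client.close();
        server.close();
    }
}
```

[Note that this sidesteps the partial-read question raised above: the API punts by declaring the channel unusable for further reads after a timeout, rather than guaranteeing that partially read bytes remain buffered.]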
More information about the nio-discuss mailing list