I replaced sun.io converters by an adaptor to sun.nio.cs coders

Ulf Zibis Ulf.Zibis at gmx.de
Mon Sep 15 12:26:00 UTC 2008

Hi Sherman,

after finishing my work on tests, comparing my reengineered coders 
against the legacy sun.nio.cs encoders, and rethinking the malformed 
sequence length, I come back to the sun.io package...

On 05.09.2008 00:59, Xueming Shen wrote:
> Ulf, you are really productive:-) thanks for working so hard on the 
> "adapter" idea!

Oh, thanks for the compliment; it was nearly 2 weeks of work.
Also thanks for reviewing my code. The bugs you have found are serious.

> Took a very quick scan of the ByteToChar adapter; here are some 
> comments for your consideration:
> (1) In convert(), decoder.decode(src, dst, true) is used instead of 
> decode(src, dst, false), which probably goes against the specified 
> "buffer by buffer conversion" use scenario; consider the possibility 
> that we have some "incomplete" bytes in the "input" stream, which 
> might be "completed" by sequential "input" in a second invocation 
> of convert().

I see only one solution: ByteToCharConverter#flush() should first invoke 
decoder.decode(src, dst, true) before decoder.flush(), because there is 
no compatible endOfInput logic in the sun.io package.
In this context, I must admit that I don't understand the necessity of 
this endOfInput logic. It forces an additional invocation of 
decodeLoop() even if there is nothing to do in most cases. Why can't 
decoder.flush() do this job, as in the sun.io package?
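For reference, here is a small standalone example of what the flag 
actually changes, written against the plain java.nio API (my own demo, 
not adapter code): with endOfInput = false an incomplete byte sequence 
is simply left in the input buffer as UNDERFLOW, and only the final 
call with endOfInput = true would report it as malformed.

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;

public class EndOfInputDemo {

    // Decodes UTF-8 "é" (bytes C3 A9) delivered in two separate chunks.
    static String decodeInTwoChunks() {
        CharsetDecoder dec = Charset.forName("UTF-8").newDecoder();
        CharBuffer out = CharBuffer.allocate(8);
        ByteBuffer in = ByteBuffer.allocate(8);

        in.put((byte) 0xC3);               // chunk one: lone lead byte
        in.flip();
        CoderResult r = dec.decode(in, out, false);
        // endOfInput = false: UNDERFLOW, and the incomplete byte stays
        // in the input buffer instead of being reported as malformed.
        assert r.isUnderflow() && in.remaining() == 1;

        in.compact();                      // keep the pending lead byte
        in.put((byte) 0xA9);               // chunk two completes the sequence
        in.flip();
        dec.decode(in, out, true);         // last chunk: endOfInput = true
        dec.flush(out);

        out.flip();
        return out.toString();
    }

    public static void main(String[] args) {
        System.out.println(decodeInTwoChunks());   // prints: é
    }
}
```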

> (2) In flush(), the spec says you need to reset() before throwing the 
> MalformedInputException, so charOff needs to be zero.


> (3) reset() does not set badInputLength to 0.

1.) The legacy ByteToCharXXX converters don't do this, so why should I?
2.) If flush() has to reset before throwing MalformedInputException, 
then badInputLength might be invalid.
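To make the order concrete, here is a self-contained sketch of the 
flush() sequence I have in mind, written against plain java.nio. The 
class and the leftover field are illustrative stand-ins of my own; only 
charOff and badInputLength mirror the real ByteToCharConverter fields:

```java
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.nio.charset.CharsetDecoder;
import java.nio.charset.CoderResult;
import java.nio.charset.MalformedInputException;

// Self-contained sketch, not the real adapter; field names mirror
// sun.io.ByteToCharConverter (charOff, badInputLength).
class FlushOrderSketch {
    final CharsetDecoder decoder = Charset.forName("UTF-8").newDecoder();
    ByteBuffer leftover = ByteBuffer.allocate(0);  // bytes a convert() left unconsumed
    int badInputLength;
    int charOff;

    int flush(char[] output, int outStart, int outEnd)
            throws MalformedInputException {
        CharBuffer out = CharBuffer.wrap(output, outStart, outEnd - outStart);
        // 1. Signal the real end of input first: an incomplete sequence
        //    left over from the last convert() becomes malformed only now.
        CoderResult cr = decoder.decode(leftover, out, true);
        if (cr.isError()) {
            badInputLength = cr.length();
            charOff = outStart;      // per spec: reset state before throwing
            decoder.reset();
            throw new MalformedInputException(badInputLength);
        }
        // 2. Only then let the decoder emit any final output of its own.
        decoder.flush(out);
        return out.position() - outStart;   // number of chars written
    }
}
```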



More information about the core-libs-dev mailing list