Request for review, 6766644: Redefinition of compiled method fails with assertion "Can not load classes with the Compiler thread"

David Holmes David.Holmes at
Tue Feb 1 15:38:08 PST 2011

Keith McGuigan said the following on 02/01/11 23:11:
> On Jan 31, 2011, at 9:20 PM, David Holmes wrote:
>> As per your other email NULL check is pointless as "new" will never 
>> return null but will instead call vm_exit_out_of_memory to abort the 
>> VM - which is exactly my point. I don't believe that a failure to 
>> allocate in this code should be considered a fatal error for the VM.
> I agree, there may be situations where memory allocation failures should 
> be considered non-fatal, and the Hotspot allocation routines should 
> offer some support for that.  However, it's not clear to me that the 
> various specifications we adhere to have "escape-clause" situations 
> where events can be dropped or features scaled back when/if we run into 
> memory constraints.  They should, no doubt, but I don't recall seeing 
> anything like that at the present.  It's a can of worms that is probably 
> worth opening up, but it's a bigger issue than this bugfix.

Given that our allocation routines leave a lot to be desired, we should 
strive to avoid the need for dynamic allocation, as it introduces new 
abort points. A program that runs fine today may abort with this fix due 
to the additional C-heap usage.
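To make the "pointless NULL check" point concrete, here is a minimal sketch (not the actual HotSpot code; `vm_exit_out_of_memory` below is a stand-in for the real abort path): when the allocator itself aborts on failure, the caller's NULL check is dead code, and every allocation becomes a potential abort point.

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Stand-in for HotSpot's abort-on-OOM path; never returns.
static void vm_exit_out_of_memory(const char* msg) {
  std::fprintf(stderr, "fatal: %s\n", msg);
  std::exit(1);
}

// If allocation fails we abort the whole VM, so this function can
// never return NULL -- a NULL check at the call site is dead code.
static void* allocate_or_die(size_t size) {
  void* p = std::malloc(size);
  if (p == NULL) {
    vm_exit_out_of_memory("C-heap allocation failed");
  }
  return p;
}
```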

>> I don't understand why we would have to allocate the event objects in 
>> C-heap, all we want is a link field so that they can form their own 
>> queue without the need for wrapper Nodes.
>> 447 class JvmtiDeferredEvent VALUE_OBJ_CLASS_SPEC {
>> 448   friend class JvmtiDeferredEventQueue;
>> 449  private:
>>       JvmtiDeferredEvent _next; // for chaining events together
>>      public:
>>       JvmtiDeferredEvent next() { return _next; }
>>       void set_next(JvmtiDeferredEvent e) {
>>        assert(_next == NULL, "overwriting linked event!");
>>        _next = e;
>>       }
>> Further this would allow you to dequeue all the events at once instead 
>> of the service thread going through pulling them off one at a time.
> That code doesn't work in C++.  You can't have a type contain itself. 

Okay, my bad: they were supposed to be pointer types (it is a linked-list, 
after all!)
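With the link as a pointer, the intrusive version of the snippet above would look roughly like this (a sketch only; the `_kind` payload field is invented for illustration, and the real class carries the actual event data):

```cpp
#include <cassert>
#include <cstddef>

class JvmtiDeferredEvent {
  friend class JvmtiDeferredEventQueue;
 private:
  JvmtiDeferredEvent* _next;  // intrusive link: events chain themselves
  int _kind;                  // hypothetical payload, for illustration

 public:
  explicit JvmtiDeferredEvent(int kind) : _next(NULL), _kind(kind) {}

  JvmtiDeferredEvent* next() const { return _next; }

  void set_next(JvmtiDeferredEvent* e) {
    assert(_next == NULL && "overwriting linked event!");
    _next = e;
  }

  int kind() const { return _kind; }
};
```

Because each event carries its own link, no separate wrapper node has to be allocated just to put an event on the queue.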

>  How big would it be?  I suppose what you're suggesting is the Java 
> semantics where the '_next' field is a reference/pointer.  If so, you've 
> just described the 'QueueNode' class.   Could I collapse the definitions 
> and have just one instead of both JvmtiDeferredEvent and QueueNode? 
>  Sure, but I see value in having them separate.  For one,  you can have 
> different allocation schemes - QueueNodes go into the C-heap and 
> JvmtiDeferredEvents can be value objects.  Another reason is to 
> separately encapsulate two different concepts that aren't necessarily 
> related (list functionality and event specification).

The point is to avoid the need to dynamically allocate the QueueNodes. 
If the elements to be queued are self-linking then you don't need the 
extra wrapper objects. Take a look at VM_Operations as an example of 
self-linking objects.

It also makes it easy to return a list of events for batch processing, 
if you so choose (a la VM_Operations again).
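As a sketch of that batching idea (with `Node` as a stand-in for any intrusively linked event type, and locking elided): under the queue lock you can detach the entire chain in O(1), then process it outside the lock.

```cpp
#include <cassert>
#include <cstddef>

struct Node {
  Node* _next;
  int _id;
  explicit Node(int id) : _next(NULL), _id(id) {}
};

struct Queue {
  Node* _head;
  Node* _tail;
  Queue() : _head(NULL), _tail(NULL) {}

  void enqueue(Node* n) {
    if (_tail == NULL) {
      _head = _tail = n;
    } else {
      _tail->_next = n;
      _tail = n;
    }
  }

  // Detach the whole chain at once; in real code this would be the
  // only step done while holding the queue lock.
  Node* dequeue_all() {
    Node* list = _head;
    _head = _tail = NULL;
    return list;
  }
};
```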

> And regardless of where you put the link, the event specification has to 
> be allocated in the C-heap ...

I don't care about where the events are allocated, that is a totally 
different issue. The point is to avoid the need to allocate nodes.

> Batching is a separate issue altogether.  The existing code could be 
> written to batch-process events using the existing classes.  

Yes but I think the batching is simpler if the events themselves are linked.

>>>> 988   if (_queue_head == NULL) {
>>>> 989     // Just in case this happens in product; it shouldn't be 
>>>> let's not crash
>>>> 990     return JvmtiDeferredEvent();
>>>> 991   }
>>>> doesn't look right. Are we returning a stack allocated instance 
>>>> here? (There's also a typo in the comment.)
>>> Yes, we're returning a stack-allocated instance, which returns to the 
>>> caller as a temp and is copied into a local variable there.  I'll fix 
>>> the comment.
>> It isn't "legal" to return a stack-allocated object from a method. I 
>> expect we'll get away with it but still ...
>> Couldn't you just allocate a static instance to use in this case?
> There's nothing wrong with returning an object by value.

Sorry, I was thinking this was returning a pointer. So this is utilizing 
C++ copy semantics to make a valid copy in the caller when the method 
returns. It still seems an odd construct to me, but I assume C++ deals 
with it correctly.
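For the record, a minimal illustration of the return-by-value point (with `Event` as a stand-in type, not the real JvmtiDeferredEvent): the method returns a temporary that is copied (or copy-elided) into the caller's local, so no dangling stack address is ever handed out.

```cpp
#include <cassert>

struct Event {
  int _tag;
  Event() : _tag(0) {}             // "empty" event, like JvmtiDeferredEvent()
  explicit Event(int t) : _tag(t) {}
};

// Returns by value: the caller receives a copy, never a pointer into
// this function's stack frame.
Event dequeue(bool queue_empty) {
  if (queue_empty) {
    return Event();                // the "just in case" empty event
  }
  return Event(42);
}
```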

>> Now it seems that flush_queue is also called by the service thread, 
>> but I can't see where from?
> It's not called directly, but it could happen if a user's JVMTI 
> compiled-method-load/unload event handler method calls JVMTI's 
> GenerateEvents(), so we need to code defensively here to prevent any 
> inadvertent self-deadlocks.
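The defensive pattern described is, roughly, a thread-identity check before blocking: if the flushing caller *is* the service thread, it must drain the queue directly rather than wait for itself. A sketch using standard C++ thread IDs rather than HotSpot's thread model, purely for illustration:

```cpp
#include <cassert>
#include <thread>

// Stand-in for "the service thread's identity", recorded at startup.
static std::thread::id service_thread_id;

// If the caller is the service thread itself, it must not block waiting
// for the service thread to drain the queue -- that would self-deadlock.
bool must_drain_directly() {
  return std::this_thread::get_id() == service_thread_id;
}
```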


>> I'm a little concerned about all the locking/unlocking that seems to 
>> happen; and the fact that there are potentially a lot of spurious 
>> notifications. ...
> The enqueue code does not know the state of the service thread (nor 
> should it have to), so it will always have to send a notification to 
> wake it up if it's sleeping (even if the events are batched) And all the 
> waiting code loops checking for conditions, so a spurious notification 
> ought to be harmless.  

enqueue() is not an issue for notifications; it has to do them. The 
spurious wakeups are only a performance concern, not a correctness one.
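The reason a spurious notification is only a performance cost is the standard wait-in-a-loop pattern: the waiter re-checks its condition after every wakeup, so an unnecessary notify just costs one wakeup and re-check. A sketch using standard C++ primitives rather than HotSpot's Monitor:

```cpp
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>

std::mutex mtx;
std::condition_variable cv;
bool event_ready = false;

void wait_for_event() {
  std::unique_lock<std::mutex> lock(mtx);
  while (!event_ready) {   // re-check on every wakeup: a spurious or
    cv.wait(lock);         // redundant notify is harmless
  }
}

void post_event() {
  {
    std::lock_guard<std::mutex> g(mtx);
    event_ready = true;    // state change made under the lock...
  }
  cv.notify_all();         // ...so this wakeup can never be lost
}
```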

> There is a single notifyAll that occurs when flushing is complete.  This 
> is implemented by the flush pseudo-event. (see JvmtiDeferredEvent::post()).

But this is a notifyAll per flush event, which was my point. Combined 
with the single-event-per-iteration processing, you do get a lot 
of locking/unlocking (due to the need to unlock when calling

> No doubt this could be restructured a number of different ways, but this 
> way is rather straightforward (easy to analyze) and appears correct.   
> The possibility to batch-process the events in the future is there if we 
> need it for some reason, but I don't see a need for it at the moment.

My mental model for much of this was the vm-operations queue and the 
batch processing we do there. But if events are sparse then this is not 
an immediate issue.


> --
> - Keith

More information about the hotspot-runtime-dev mailing list