Enable GUI Events

Enable mouse events are represented by the MouseEvent type, and their event names (which are the suffixes used by enable.interactor.Interactor.dispatch()) can be divided into two groups: mouse clicks and mouse movements. The mouse click events have names ending in _down, _up and _dclick, and names beginning with left, middle and right. This means that Enable only supports three mouse buttons (plus wheel events).
Mouse Event types

- left_down, middle_down, right_down: A mouse button was pressed.
- left_up, middle_up, right_up: A mouse button was released.
- left_dclick, middle_dclick, right_dclick: A mouse button was double-clicked.
- mouse_move: The mouse moved within a component.
- mouse_enter: The mouse moved into a component's bounds.
- mouse_leave: The mouse moved out of a component's bounds.
- mouse_wheel: The mouse wheel moved.
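The dispatch scheme above can be sketched in plain Python. This is an illustrative model only, not Enable's actual Interactor implementation; SimpleInteractor and ClickLogger are invented names, and the state-prefixed handler convention (e.g. normal_left_down) follows Enable's tool dispatch.

```python
class Event:
    """Minimal stand-in for an Enable mouse event."""
    def __init__(self, **traits):
        self.handled = False
        for name, value in traits.items():
            setattr(self, name, value)

class SimpleInteractor:
    """Dispatches an event to a '<state>_<suffix>' handler method."""
    event_state = "normal"

    def dispatch(self, event, suffix):
        handler = getattr(self, self.event_state + "_" + suffix, None)
        if handler is not None:
            handler(event)

class ClickLogger(SimpleInteractor):
    def __init__(self):
        self.clicks = []

    def normal_left_down(self, event):
        self.clicks.append((event.x, event.y))
        event.handled = True

tool = ClickLogger()
tool.dispatch(Event(x=10, y=20), "left_down")
tool.dispatch(Event(x=3, y=4), "right_down")   # no handler defined: ignored
print(tool.clicks)  # [(10, 20)]
```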
Below is a listing of the traits on a MouseEvent instance.

- alt_down: If True, the ALT key is pressed on the keyboard.
- control_down: If True, the CTRL key is pressed on the keyboard.
- left_down: If True, the left button is pressed on the mouse.
- middle_down: If True, the middle button is pressed on the mouse.
- mouse_wheel: If a wheel event, holds the number of units moved by the wheel.
- mouse_wheel_axis: If a wheel event, contains either "horizontal" or "vertical".
- mouse_wheel_delta: If a wheel event, contains a 2D movement vector as a tuple.
- right_down: If True, the right button is pressed on the mouse.
- shift_down: If True, a SHIFT key is pressed on the keyboard.
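As a sketch of how a handler might consult these traits, the toy tool below extends a selection on shift-click and replaces it otherwise. SelectionTool is a made-up class, and SimpleNamespace merely mimics the modifier traits of a real MouseEvent.

```python
from types import SimpleNamespace

class SelectionTool:
    """Hypothetical tool: shift-click extends, plain click replaces."""
    def __init__(self):
        self.selection = []

    def normal_left_down(self, event):
        item = (event.x, event.y)
        if event.shift_down:
            self.selection.append(item)   # extend current selection
        else:
            self.selection = [item]       # replace current selection

tool = SelectionTool()
tool.normal_left_down(SimpleNamespace(x=1, y=2, shift_down=False))
tool.normal_left_down(SimpleNamespace(x=3, y=4, shift_down=True))
print(tool.selection)  # [(1, 2), (3, 4)]
```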
Previous versions of Enable had just one keyboard event, a ‘key_pressed’ event. This event conflated which key(s) were physically being pressed with the text which should be generated by that keypress. All the underlying windowing/event systems have a way of distinguishing these two pieces of information, either by issuing separate events for each (Wx, Pyglet, Vtk), or by storing them as different attributes on the event object (Qt).
In addition, there was no way to detect a key-up event. This is a comparatively minor use case, but it is potentially useful for things like toggling pointer states to indicate different mouse behaviour when modifier keys are pressed.
We will keep the ‘key_pressed’ event. However, the semantics will change so that it tracks the physical key being pressed rather than the text being entered. Thus pressing the ‘a’ key with the shift key down will result in an event with the ‘character’ attribute set to ‘a’, instead of ‘A’ as it would be now. The lower-case version of the character will be the canonical representation of the key, since that will cause minimal problems with existing event handlers in Enable and Chaco.
The character mapping will be used as it is now to map special keys to standard strings (like right arrow generating the string ‘right’).
In addition, events will now be generated by pressing modifier keys by themselves (e.g. pressing shift will generate an event).
Most Chaco code will continue to use key_pressed as the primary event to listen to; it will just work more consistently than it did before. Some code may need to change to reflect the change in what the character attribute returns (XXX or we add a ‘key’ attribute and deprecate the ‘character’ attribute).
On Wx this event will be generated by EVT_KEY_DOWN; on Qt it will be generated by keyPressEvent, but will access the ‘key()’ attribute rather than the ‘text()’ attribute.
We will introduce a new ‘key_released’ event, which is essentially the mirror image of the ‘key_pressed’ event.
On Wx this event will be generated by EVT_KEY_UP; on Qt it will be generated by keyReleaseEvent, but will access the ‘key()’ attribute rather than the ‘text()’ attribute.
We will introduce a new ‘character’ event, which will hold the unicode characters, if any, that are generated by the key press event. It is possible that this event may also be generated by other mechanisms in the future, like pasting text. The event may hold multiple characters.
This event will not be generated if the key_pressed event is handled. This is largely because this is the way that wx works, but it also makes a certain amount of sense from an interaction modelling point of view.
This will never do key mapping to standard Enable names for non-printable characters: if there is no appropriate unicode representation, no event will be emitted.
The handful of Enable widgets which expect actual text input will be changed to use the character event instead of the key_pressed event.
On Wx this event will be generated by EVT_CHAR; on Qt it will be generated by keyPressEvent, but will access the ‘text()’ attribute rather than the ‘key()’ attribute.
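The split between the two events can be sketched as follows. The KeyEvent, CharacterEvent and TextBox classes here are toy stand-ins, not Enable API; the point is that key_pressed carries the canonical lower-case physical key, while character carries the generated text.

```python
class KeyEvent:
    def __init__(self, character):
        self.character = character   # canonical lower-case physical key
        self.handled = False

class CharacterEvent:
    def __init__(self, character):
        self.character = character   # generated text, e.g. 'A'

class TextBox:
    """Widget that wants actual text input: listens to 'character'."""
    def __init__(self):
        self.text = ""

    def key_pressed(self, event):
        # Ignores physical keys and leaves the event unhandled, so a
        # 'character' event will follow for printable keys.
        pass

    def character(self, event):
        self.text += event.character

box = TextBox()
# Shift+A: key_pressed reports the physical key 'a'; the subsequent
# character event carries the generated text 'A'.
box.key_pressed(KeyEvent("a"))
box.character(CharacterEvent("A"))
print(box.text)  # A
```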
Why Three Events?
Qt’s two-event model is generally nicer and cleaner, but adapting the other backends to use that model would require holding global state from EVT_KEY_DOWN (or its equivalent) to fold into the Enable event generated from EVT_CHAR. This seems potentially fragile. The other alternative would be to try to work out from EVT_KEY_DOWN what text would be generated, which seems even more fragile.
On the other hand, adapting Qt to a 3-event model is straightforward: on keyPressEvent we generate a ‘key_pressed’ event, and if text() is non-empty, we subsequently also emit a ‘character’ event.
Drag and Drop Events
Enable has support for objects being dropped onto components built on top of the backend’s (and, in practice, PyFace’s) support for drag and drop.
All drag-and-drop related events have type DragEvent and provide the x and y coordinates of the event, and the object being dragged or dropped (if it is available from the backend; otherwise this will be None).
The system generates 3 types of drag and drop events:

- drag_over: These events are generated when the user is moving a dragged object over the Enable window. Tools or other Interactors which want to indicate that they can accept the drag should do so by calling the set_drag_result() method of the Enable window, indicating the type of operation that will be performed on the object (the default is "copy", but other possibilities include "move" or "link"; the full list of possibilities is found in the appropriate constants module). The value of the drag result influences the way that the operating system displays the dragged objects and cursor while dragging.
- drag_leave: This event is generated when the user drags objects out of the window.
- dropped_on: This event is generated when the user releases the mouse button over the Enable window while dragging. Tools or other Interactors should handle this event to perform whatever operations need to be performed with the dropped objects.
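A tool handling these events might look like the sketch below. Everything except set_drag_result() (taken from the description above) is invented: FakeWindow simulates the Enable window, and the state-prefixed handler names are assumptions about how such a tool would be wired up.

```python
from types import SimpleNamespace

class FakeWindow:
    """Stand-in for an Enable window's drag-result API."""
    def __init__(self):
        self.drag_result = None

    def set_drag_result(self, result):
        self.drag_result = result

class DropHandler:
    def __init__(self):
        self.dropped = []

    def normal_drag_over(self, event):
        # Indicate to the OS that we would copy the dragged object.
        event.window.set_drag_result("copy")

    def normal_dropped_on(self, event):
        self.dropped.append(event.obj)

window = FakeWindow()
handler = DropHandler()
handler.normal_drag_over(SimpleNamespace(x=5, y=5, obj=None, window=window))
handler.normal_dropped_on(
    SimpleNamespace(x=5, y=5, obj="payload.txt", window=window))
print(window.drag_result, handler.dropped)  # copy ['payload.txt']
```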
As a convenience, there is a BaseDropTool class which handles most of the drag and drop interactions for you correctly. To use this, you need to subclass it and override at least the following two methods:

- accept_drop(position, obj): This method is given the position and object instance, and should return True if the drop is accepted or False if it is not.
- handle_drop(position, obj): If the drop is accepted, this method is called with the position and the objects, and should perform whatever actions are required with the dropped objects.
The behaviour is slightly different between the Wx and Qt backends: Qt provides a reference to the dragged objects during drag_over events, and so the BaseDropTool can check whether the drop would be accepted during drag_over events to give better feedback about the state of the drag and drop operation; whereas Wx does not provide that information, and so will always indicate to the operating system that a drop is possible. The type of drag result returned during drag_over events is controlled by this acceptance check.
If you want more control over the response to drag_over events, then you can additionally override the get_drag_result method to return one of the drag result constants depending on the position and (possibly) the objects being dragged. If you want cross-toolkit compatibility, you must handle the case where the get_drag_result method is called with the object being None, which indicates that the object is not known yet.
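Putting this together, a drop-tool subclass might look like the sketch below. It does not import the real enable.tools base class; SimulatedBaseDropTool simulates the decision logic so the example is self-contained, and the accept_drop/handle_drop/get_drag_result signatures are assumptions based on the description above.

```python
class SimulatedBaseDropTool:
    """Toy stand-in for the base drop tool's decision logic."""
    def get_drag_result(self, position, obj):
        # Cross-toolkit rule: obj may be None (e.g. on Wx) when the
        # dragged object is not known yet; accept provisionally.
        if obj is None or self.accept_drop(position, obj):
            return "copy"
        return "none"

class ImageDropTool(SimulatedBaseDropTool):
    """Hypothetical tool that only accepts .png file names."""
    def __init__(self):
        self.images = []

    def accept_drop(self, position, obj):
        return isinstance(obj, str) and obj.endswith(".png")

    def handle_drop(self, position, obj):
        self.images.append(obj)

tool = ImageDropTool()
print(tool.get_drag_result((10, 10), None))         # copy (object unknown yet)
print(tool.get_drag_result((10, 10), "notes.txt"))  # none
if tool.accept_drop((10, 10), "photo.png"):
    tool.handle_drop((10, 10), "photo.png")
print(tool.images)  # ['photo.png']
```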