Kernel Event Vector / Map


The event vector is a vector-like data structure for kernel-level drivers, in particular the monitors used in Capture. Currently the kernel drivers in Capture are hard to extend, cannot hold generic types of information, and require many memory allocations. The event vector addresses these problems.


The event vector has the following characteristics:

  • O(1) insertion
  • Multithreaded insertion
  • Linked list containing both events and data elements
  • Easy processing by Capture (a single event list is passed up to Capture)
  • Key/value mapping between event data

The event vector is similar to a std::vector, but designed for kernel events that carry a varying amount of named event data. One key difference is that events are not inserted into the vector immediately; they are instead assigned a temporary buffer where information about the event that has occurred can be added dynamically. The vector also contains a small hashmap holding the string descriptions of the data stored in an event. This provides a key/value mapping, which makes the event list easier to process when it is passed up to Capture.


The general structure of the event vector is a number of event lists which contain the events and their associated data. An event list is a large block of memory (usually 1024KB) that holds a linked list of events. The event vector can contain any number of event lists, but usually starts off with k (k = number of processors).

It also contains a series of event buffers of a configurable size (usually 128KB). These buffers are used to temporarily store an incoming event while information about it is gathered. The current creation policy is that on initialisation the event vector contains 2k buffers (k = number of processors), which allows for multithreaded access to the event vector. When an event is created a buffer is assigned to it; if no buffer is available, the number of buffers is doubled. Some initial testing showed that on a dual-core machine the event buffers only grew to 4 under heavy load (10,000 events a second). The main limitation of the buffers is that the maximum event size is bounded by the buffer size; however, events usually do not exceed 1KB, so 128KB should be enough.
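The buffer-growth policy above can be sketched in userspace as follows. This is a minimal, illustrative model, not the actual driver code: the class name, the use of std::vector for buffer storage, and the locking scheme are all assumptions.

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <mutex>
#include <vector>

// Hypothetical sketch of the event-buffer pool policy: start with a
// number of buffers proportional to the processor count and double the
// pool whenever every buffer is in use.
class BufferPool {
public:
    explicit BufferPool(std::size_t processors,
                        std::size_t bufferSize = 128 * 1024)
        : bufferSize_(bufferSize) {
        grow(processors * 2);  // initial 2k buffers, k = processor count
    }

    // Acquire a free buffer; double the pool if none are available.
    std::vector<char>* acquire() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (free_.empty())
            grow(buffers_.size());  // grow by a power of two (double)
        std::vector<char>* buf = free_.back();
        free_.pop_back();
        return buf;
    }

    // Return a buffer to the pool once its event has been copied out.
    void release(std::vector<char>* buf) {
        std::lock_guard<std::mutex> lock(mutex_);
        free_.push_back(buf);
    }

    std::size_t size() const { return buffers_.size(); }

private:
    void grow(std::size_t count) {
        for (std::size_t i = 0; i < count; ++i) {
            buffers_.emplace_back(new std::vector<char>(bufferSize_));
            free_.push_back(buffers_.back().get());
        }
    }

    std::size_t bufferSize_;
    std::mutex mutex_;
    std::vector<std::unique_ptr<std::vector<char>>> buffers_;
    std::vector<std::vector<char>*> free_;
};
```

In a real kernel driver the allocations would come from non-paged pool rather than the heap, but the doubling policy is the same.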

Once an event has been allocated a buffer, data can then be added to it. The event contains a linked list of offsets into the buffer where the data is stored. A datum consists of a string describing what the data is, the data type, and the actual data.
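One plausible userspace layout for this scheme is sketched below: each named, typed datum is appended to the event's buffer, and its offset is chained into a linked list so the event can later be walked as key/value pairs. The field and type names here are assumptions for illustration, not the driver's actual types.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

enum class DataType { String, UInt32 };

// One entry in the event's linked list: where the payload sits in the
// buffer, what it is, and what it is called.
struct DataEntry {
    std::size_t offset;  // offset of the payload within the buffer
    std::size_t size;    // payload size in bytes
    DataType type;
    std::string name;    // key describing the datum, e.g. "ProcessId"
    int next = -1;       // index of the next entry; -1 means end of list
};

struct Event {
    std::vector<char> buffer;        // temporary per-event buffer
    std::vector<DataEntry> entries;  // entries pointing into buffer
    int head = -1, tail = -1;

    // Append a named datum to the buffer and link it into the list.
    int addData(const std::string& name, DataType type,
                const void* data, std::size_t size) {
        std::size_t offset = buffer.size();
        const char* bytes = static_cast<const char*>(data);
        buffer.insert(buffer.end(), bytes, bytes + size);
        entries.push_back({offset, size, type, name, -1});
        int idx = static_cast<int>(entries.size()) - 1;
        if (tail >= 0) entries[tail].next = idx; else head = idx;
        tail = idx;
        return idx;
    }
};
```

Because entries store offsets rather than pointers, the whole buffer can be copied into an event list in one operation without fixing anything up.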


The only memory allocations that occur are when new event lists are created. This should happen rarely, as with a size of 1024KB an event list can usually hold about 0.5 seconds worth of events. The vector uses a double-buffered scheme: when a list is acquired (to be sent to Capture), events are added to the next event list, and if no next list exists at acquire time a new one is created. If Capture acquires an event list every 0.25 seconds the system works well. If events occur too quickly, new event lists are created to store the events while Capture catches up. See the following diagram of how the system works.
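The acquire/hand-off behaviour can be modelled in userspace as below. This is a hedged sketch under assumed names: the real driver manages raw memory blocks, while here an event list is just a byte vector.

```cpp
#include <cassert>
#include <list>
#include <memory>
#include <mutex>
#include <vector>

using EventList = std::vector<char>;  // stand-in for a 1024KB event list

// Sketch of the double-buffered scheme: the consumer (Capture) takes the
// oldest list, and producers continue inserting into the next one, which
// is created on demand.
class EventVector {
public:
    // Producers insert into the newest list; create one if none exists.
    EventList& currentList() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (lists_.empty())
            lists_.emplace_back(new EventList());
        return *lists_.back();
    }

    // Consumer takes the oldest list; a fresh list is created if the
    // producers would otherwise be left with nothing to write into.
    std::unique_ptr<EventList> acquire() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (lists_.empty())
            lists_.emplace_back(new EventList());
        std::unique_ptr<EventList> acquired = std::move(lists_.front());
        lists_.pop_front();
        if (lists_.empty())
            lists_.emplace_back(new EventList());  // next list for producers
        return acquired;
    }

private:
    std::mutex mutex_;
    std::list<std::unique_ptr<EventList>> lists_;
};
```

If the consumer falls behind, `lists_` simply grows: producers keep appending new lists, which matches the catch-up behaviour described above.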


If Capture dies for some reason and the kernel driver is not stopped, event lists will continuously be created until the system runs out of memory! It is up to the kernel drivers to stop inserting new events into the vector if they have not received communication from Capture within a certain amount of time. The new CaptureRegistryMonitor will stop adding events after not hearing from Capture for 5 seconds. A ConnectionChecker providing this functionality is built into \KernelDrivers\Common\Common.h.
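The idea behind the ConnectionChecker can be sketched as follows. This is a userspace model of the concept only; the actual interface in Common.h is not reproduced here, and the method names are assumptions.

```cpp
#include <cassert>
#include <chrono>

// Record the last time Capture communicated, and report the driver as
// disconnected once a timeout (5 seconds for CaptureRegistryMonitor) has
// elapsed without contact. Drivers check this before inserting events.
class ConnectionChecker {
public:
    explicit ConnectionChecker(std::chrono::seconds timeout)
        : timeout_(timeout),
          lastContact_(std::chrono::steady_clock::now()) {}

    // Called whenever a message from Capture arrives.
    void notifyContact() {
        lastContact_ = std::chrono::steady_clock::now();
    }

    // False once the timeout has passed without hearing from Capture;
    // the driver should then stop inserting new events.
    bool isConnected() const {
        return std::chrono::steady_clock::now() - lastContact_ < timeout_;
    }

private:
    std::chrono::seconds timeout_;
    std::chrono::steady_clock::time_point lastContact_;
};
```

This caps memory growth: once `isConnected()` returns false, no further event lists are created regardless of event volume.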

Testing in userspace using 4 threads to insert 4,000,000 events into event lists of size 10MB took 4 seconds. Lowering the event list size to 1024KB increased the time to 6 seconds because of the extra time spent allocating new event lists.
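A simplified analogue of that test is shown below: several threads insert events into a shared, lock-protected list and the total is verified afterwards. This reproduces the methodology (multithreaded insertion in userspace), not the original benchmark code or its timings.

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

struct TestEvent { int id; };

// Spawn `threads` workers, each inserting `eventsPerThread` events into
// one shared list, and return the final event count.
std::size_t insertConcurrently(int threads, int eventsPerThread) {
    std::vector<TestEvent> list;
    std::mutex m;
    std::vector<std::thread> workers;
    for (int t = 0; t < threads; ++t) {
        workers.emplace_back([&, t] {
            for (int i = 0; i < eventsPerThread; ++i) {
                std::lock_guard<std::mutex> lock(m);
                list.push_back({t * eventsPerThread + i});
            }
        });
    }
    for (std::thread& w : workers) w.join();
    return list.size();
}
```

Timing such a loop with different list sizes is what produced the 4-second versus 6-second figures above.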