Impact Acquire SDK Java
The 'LostImagesCount' increases

Symptoms

Especially when working with 10 GigE devices at high data rates on a Windows system, the property LostImagesCount sporadically increases while the system is more or less idle. This can especially be observed when

  • One CPU's load is medium or high because of the 10G network traffic it processes
  • All other CPUs are more or less idle

The effect is reduced, or even disappears completely, when additional load is introduced to the system (e.g. through image processing or other processes).

Cause

The exact cause is not yet known. It seems that when a single CPU in a multi-core Windows system is heavily loaded while all the others are not, the operating system suspends the process causing most of the load for a longer period of time (100 - 300 ms have been observed on Windows 10). While the network traffic in kernel space (thus inside the GigE Vision™ capture filter driver) continues to be processed, the user-space portion of the process starves. As a result, no more complete images are reported to the application, and the filter driver runs out of buffers after a while.

Resolution

The most obvious way to overcome problems caused by this behavior is to increase the request count for the affected data stream. Providing enough request objects to bridge roughly 500 ms has proven to work reliably. So when this problem is encountered while acquiring at 50 Hz, increasing the request count to 25 buffers should resolve it.
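The required buffer count follows directly from the frame rate and the stall duration to bridge. A minimal sketch of that calculation (the helper name requiredRequestCount is illustrative and not part of the SDK; the resulting value would then be written to the data stream's request count property via the Impact Acquire API):

```java
// Compute how many request objects are needed to survive a stall of a
// given duration at a given acquisition frame rate.
public final class RequestCountHelper {

    // frameRateHz: acquisition frame rate of the device
    // stallMs: worst-case stall to bridge (e.g. the 500 ms suggested above)
    public static int requiredRequestCount(double frameRateHz, double stallMs) {
        // Frames produced during the stall, rounded up to stay on the safe side.
        return (int) Math.ceil(frameRateHz * stallMs / 1000.0);
    }

    public static void main(String[] args) {
        // 50 Hz with a 500 ms safety margin -> 25 request objects
        System.out.println(requiredRequestCount(50.0, 500.0));
    }
}
```

For example, a device running at 50 Hz needs 50 × 0.5 = 25 request objects to bridge a 500 ms stall; at 30 Hz, 15 would suffice.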

Generating additional load on the system that gets processed on different CPUs also improves the situation considerably. For example, displaying all the images or operating another device in parallel usually makes this effect disappear.