Balluff - BVS CA-GT Technical Documentation
Using the correct network interface card is crucial. In the case of the BVS CA-GT this should be a 10 Gigabit Ethernet controller capable of full duplex transmission.
In the properties of the network interface card, please check whether it is operating as a "10GBit Full Duplex Controller" and whether this mode is activated.
Network interface card manufacturers also provide driver updates for their cards from time to time. Using the latest NIC drivers is always recommended and might improve the overall performance of the system dramatically!
Please check whether the GigE Vision™ capture filter driver has been installed correctly by GigEConfigure.
All unused protocol drivers should be disabled in order to improve the overall performance and stability of the system! In the following screenshot only the minimum set of recommended protocol drivers is enabled. If others are needed they can be switched on, but it is best to reduce the enabled drivers to a bare minimum.
The camera automatically sets the MTU to the maximum value reported by the NIC or switch and supports a maximum MTU of 8K. You can manually change the network packet size the camera uses for transmitting data using this property: "Setting → Base → Camera → GenICam → Transport Layer Control → Gev Stream Channel Selector → Gev SCPS Packet Size":
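The same property can also be written from application code. The following minimal Python sketch assumes the Impact Acquire Python bindings (mvIMPACT.acquire) and that the GenICam TransportLayerControl wrapper exposes the "Gev SCPS Packet Size" feature as gevSCPSPacketSize; the exact class and property names should be verified against the Impact Acquire API reference of your installation.

```python
# Minimal sketch (names not verified against a specific Impact Acquire
# version): open the first device found and adjust its packet size.
from mvIMPACT import acquire

devMgr = acquire.DeviceManager()
pDev = devMgr.getDevice(0)        # assumption: the BVS CA-GT is the first device
pDev.open()

# Assumption: the GenICam TransportLayerControl wrapper exposes the
# "Gev SCPS Packet Size" feature as 'gevSCPSPacketSize'.
tlc = acquire.TransportLayerControl(pDev)
print("current packet size:", tlc.gevSCPSPacketSize.read())
tlc.gevSCPSPacketSize.write(8192) # 8K, the maximum MTU supported by the camera
```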
As a general rule of thumb, the higher the MTU, the better the overall performance, as fewer network packets are needed to transmit a full image, which results in less overhead from handling each arriving network packet in the device driver etc. However, every component involved in the data transmission (that includes every switch, router and other network component installed between the device and the system receiving the packets) must support the MTU or packet size, as otherwise the first component not supporting the packet size will silently discard all packets that are larger than it can handle. Thus the weakest link here determines the overall performance of the full system!
On the network interface card's side, this might look like this:
The behavior of the auto negotiation algorithm can be configured manually, or it can be disabled completely if needed. The AutoNegotiatePacketSize property determines whether Impact Acquire should try to find optimal settings at all, and the way this is done can be influenced by the value of the AutoNegotiatePacketSizeMode property. The following modes are available:
Value | Description |
HighToLow | The MTU is automatically negotiated starting from the NIC's current MTU down to a value supported by every component in the transmission path |
LowToHigh | The negotiation starts with a small value and then tries larger values with each iteration until the optimal value has been found |
To disable the MTU auto negotiation just set the AutoNegotiatePacketSize property to "No".
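Conceptually the two modes walk the candidate packet sizes in opposite directions. The following Python sketch is purely illustrative; probe_packet_size is a hypothetical stand-in for whatever test Impact Acquire performs internally and is not part of any real API.

```python
# Illustrative sketch of the two AutoNegotiatePacketSizeMode strategies.
# probe_packet_size() is a hypothetical stand-in for the driver-internal
# check whether a candidate packet size makes it through the network path.
def negotiate_high_to_low(nic_mtu, candidates, probe_packet_size):
    # HighToLow: start at the NIC's current MTU and step down until a size works.
    for size in sorted((c for c in candidates if c <= nic_mtu), reverse=True):
        if probe_packet_size(size):
            return size
    return None

def negotiate_low_to_high(candidates, probe_packet_size):
    # LowToHigh: start small and keep the largest size that still works.
    best = None
    for size in sorted(candidates):
        if probe_packet_size(size):
            best = size
        else:
            break
    return best
```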
In the properties of the network interface card, please check whether "Interrupt Moderation" is activated.
An interrupt causes the CPU to - well - interrupt what it is currently doing. This of course consumes time and other resources, so usually a system should process as few interrupts as possible. In the case of high data rates the number of interrupts might affect the overall performance of the system. The "Interrupt Moderation" setting allows combining the interrupts of multiple packets into just a single interrupt. The downside is that this might slightly increase the latency of an individual network packet, but that should usually be negligible.
Once "Interrupt moderation" is enabled there even might be the possibility to configure the interrupt moderation rate of the network interface.
Usually there are multiple possible options, each reflecting a different trade-off between latency and CPU usage. Depending on the expected data rate, different values might be suitable here. Usually the NIC knows best, so setting the value to "Adaptive" is mostly the best option. Other values should only be used after careful consideration.
In the case of high frame rates the best option sometimes might be "Extreme", when individual network packets do not need to be processed right away. In that case however one needs to be aware of the consequences: If an application receives an incomplete image every now and then, "Extreme" might not be the right choice, since it basically means "wait until the last moment before the NIC receive buffer overflows, then generate an interrupt". While this is good in terms of minimal CPU load, it is of course bad if this interrupt is then not served at once because of other interrupts, as packet data will then be lost. It really depends on the whole system and sometimes needs some trial and error work. The amount of buffer reserved for receiving network data is configured by the Receive/Transmit Descriptors, and combined with the Maximum transmission unit (MTU) / Jumbo Frames parameter these 3 values are the most important ones to consider! A rough estimate of the interrupt rates involved is sketched below.
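To get a feeling for the orders of magnitude involved, the following Python sketch estimates packet and interrupt rates on a 10 Gigabit Ethernet link; the assumed throughput and the number of packets combined per interrupt are illustrative values only.

```python
# Rough estimate of packet and interrupt rates on a 10 GigE link.
# Throughput and moderation factor are illustrative assumptions.
throughput_bytes_per_s = 1.0e9   # ~1 GB/s of payload (assumption)
mtu = 8192                       # packet size when using 8K jumbo frames
packets_per_interrupt = 64       # packets combined per interrupt (assumption)

packets_per_s = throughput_bytes_per_s / mtu
print(f"packets/s:                    {packets_per_s:,.0f}")
print(f"interrupts/s (no moderation): {packets_per_s:,.0f}")   # one interrupt per packet
print(f"interrupts/s (moderated):     {packets_per_s / packets_per_interrupt:,.0f}")
```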
Some NICs might also offer the possibility to configure the number of "RSS (Receive Side Scaling) Queues". In certain cases this technology might help to improve the performance of the system, but in other cases it might even reduce it.
In general the feature offers the possibility to distribute the CPU load caused by network traffic across different CPU cores instead of handling the full load on just one CPU core.
Usually a single network stream (source and destination port and IP address) will always be processed on a single CPU core in order to benefit from cache locality etc. Switching on RSS will not change this, but it will try to distribute the network streams across the CPU cores more evenly, and combined with NIC properties like the base CPU etc. it is even possible to configure which NIC shall use which CPU(s) in the host system. The sketch below illustrates the basic idea.
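The basic idea can be illustrated as follows: each stream, identified by its IP address/port tuple, is mapped to one fixed receive queue and therefore to one CPU core. Real NICs use a Toeplitz hash and an indirection table for this; the Python sketch below only mimics the concept with a simple hash, and the addresses and ports are arbitrary example values.

```python
# Conceptual illustration of RSS: each stream (IP/port tuple) is mapped to a
# fixed receive queue, so one stream stays on one CPU core while different
# streams spread across cores. Real NICs use a Toeplitz hash plus an
# indirection table; this simple hash only mimics the idea.
NUM_RSS_QUEUES = 4

def rss_queue(src_ip, src_port, dst_ip, dst_port):
    return hash((src_ip, src_port, dst_ip, dst_port)) % NUM_RSS_QUEUES

# Two different camera streams may end up on different queues/cores ...
print(rss_queue("192.168.0.10", 50010, "192.168.0.1", 50010))
print(rss_queue("192.168.0.11", 50011, "192.168.0.1", 50011))
# ... while all packets of one stream always hit the same queue.
```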
Please check whether the number of "Receive Descriptors" (RxDesc) and "Transmit Descriptors" (TxDesc) of the NIC is set to the maximum value!
"Receive Descriptors" are data segments either describing a segment of memory on the NIC itself or in the host systems RAM for incoming network packets. Each incoming packet needs either one or more of these descriptors in order to transfer the packets data into the host systems memory. Once the number of free receive descriptors is insufficient packets will be dropped thus leads to data losses. So for demanding applications usually the bigger this value is configured the better the overall stability.
"Transmit Descriptors" are data segments either describing a segment of memory on the NIC itself or in the host systems RAM for outgoing network packets. Each outgoing packet needs either one or more of these descriptors in order to transfer the packets data out into the network. Once the number of free transmit descriptors is insufficient packets will be dropped thus leads to data losses. So for demanding applications usually the bigger this value is configured the better the overall stability.
These values also have a close relationship to the Maximum transmission unit (MTU) / Jumbo Frames. You can get a feeling for suitable values with the following formula:
network packets needed >= (1.1 * PixelPerImg * BytesPerPixel) / MTU

Example 1: MTU 1500 at 1.3 MPixel (1 byte per pixel)
network packets needed >= (1.1 * 1.3M * 1) / 1500 ≈ 954

Example 2: MTU 8192 (8k) at 1.3 MPixel (1 byte per pixel)
network packets needed >= (1.1 * 1.3M * 1) / 8192 ≈ 175
Both examples show the required number of network packets per image, which is NOT to be confused with the number of available receive descriptors! A descriptor (receive or transmit) usually describes a fixed size piece of memory (e.g. 2048 bytes). With increasing numbers of images per second the "Receive Descriptors" might be consumed very quickly, given that the NIC is not always served immediately by the operating system. Usually the default values reserved by the NIC driver vary between 64 and 256, and for acquiring uncompressed image data at average speed this is usually not enough. Also, processing a network packet consumes some time which is independent of its size (e.g. generating an interrupt, calling upper software layers, etc.), which is why larger packets usually result in a better overall performance. The sketch below puts these numbers together.
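Putting the packet formula and the descriptor counts together, the following sketch estimates how quickly a default-sized receive descriptor ring fills up if the host does not service the NIC in time; the frame rate and descriptor count are illustrative assumptions.

```python
# Rough comparison of packet load versus receive descriptor count.
# Frame rate and descriptor count are assumptions chosen for illustration.
pixels_per_image = 1.3e6   # 1.3 MPixel sensor
bytes_per_pixel = 1        # e.g. Mono8
mtu = 8192                 # 8K jumbo frames
frames_per_s = 100         # assumed frame rate
rx_descriptors = 256       # typical NIC driver default (64..256)

# Rule of thumb from above: ~10% protocol overhead on top of the raw image data.
packets_per_image = 1.1 * pixels_per_image * bytes_per_pixel / mtu
packets_per_s = packets_per_image * frames_per_s

print(f"packets per image:  {packets_per_image:.0f}")
print(f"packets per second: {packets_per_s:,.0f}")
# Time until the descriptor ring overflows if the host does not refill it:
print(f"ring full after:    {rx_descriptors / packets_per_s * 1e3:.2f} ms")
```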
The "Receive Descriptors" a.k.a. "Receive Buffers" and "Transmit Descriptors" a.k.a. "Transmit Buffers" (Intel Ethernet Server Adapter I350-T2 network interface) can usually be found under "Performance Options":