Balluff - BVS CA-GX0 / BVS CA-GX2 Technical Documentation
Using the correct network interface card is crucial. For BVS CA-GX0 / BVS CA-GX2 devices this means a 1 Gigabit Ethernet controller capable of full duplex transmission. If an application shall use both links of a BVS CA-GX2, at least a dual-port 1 Gigabit NIC is needed.
In the properties of the network interface card, please check whether the card is operating as a "1000 MBit Full Duplex Controller" and whether this mode is activated.
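On Windows, the negotiated link speed can also be checked quickly from PowerShell using the built-in NetAdapter module. This is just a sketch; the "FullDuplex" property is exposed by most, but not all, NIC drivers:

# List all physical network adapters with their negotiated link speed and duplex mode
Get-NetAdapter -Physical | Format-Table Name, InterfaceDescription, LinkSpeed, FullDuplex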
Network interface card manufacturers also provide driver updates for their cards every now and then. Using the latest NIC drivers is always recommended and might improve the overall performance of the system dramatically!
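To see which driver version a NIC is currently using, the same module can help (a sketch; no particular adapter is assumed):

# Show the installed driver version, date and vendor for each physical adapter
Get-NetAdapter -Physical | Format-List Name, InterfaceDescription, DriverProvider, DriverVersionString, DriverDate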
Please check whether the GigE Vision™ capture filter driver has been installed correctly by GigEConfigure.
All unused protocol drivers should be disabled in order to improve the overall performance and stability of the system! In the following screenshot only the minimum set of recommended protocol drivers is enabled. If others are needed they can be switched on, but it is best to reduce the enabled drivers to a bare minimum.
The camera automatically sets the MTU to the maximum value offered by the NIC or switch and supports a maximum MTU of 8K. You can manually change the network packet size the camera uses for transmitting data using the property "Setting → Base → Camera → GenICam → Transport Layer Control → Gev Stream Channel Selector → Gev SCPS Packet Size":
As a general rule of thumb it can be said that the higher the MTU, the better the overall performance: fewer network packets are needed to transmit a full image, which results in less overhead from handling each arriving network packet in the device driver. However, every component involved in the data transmission (including every switch, router and other network component installed between the device and the system receiving the packets) must support the chosen MTU or packet size. Otherwise the first component not supporting the packet size will silently discard all packets larger than it can handle. Thus the weakest link determines the overall performance of the full system!
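Besides the driver dialog shown below, the NIC side of this setting can be queried and changed from PowerShell on Windows. The following is just a sketch: the display name "Jumbo Packet" and the value strings are driver specific (the ones shown are common for Intel drivers), and "Ethernet" stands for the name of your adapter:

# Show the current jumbo frame setting of the adapter (the display name is driver specific)
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet"
# Enable jumbo frames, e.g. 9014 bytes as offered by many Intel drivers
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Jumbo Packet" -DisplayValue "9014 Bytes"

Afterwards a large non-fragmenting ping (e.g. ping -f -l 8000 <camera IP>) can be used to verify that every component in the path really supports the chosen packet size.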
On the network interface card's side, this might look like this:
The behavior of the auto negotiation algorithm can be configured manually, or it can be disabled completely if needed. The AutoNegotiatePacketSize property determines whether Impact Acquire should try to find optimal settings at all, and the way this is done can be influenced by the value of the AutoNegotiatePacketSizeMode property. The following modes are available:
Value | Description |
HighToLow | The MTU is automatically negotiated, starting from the NIC's current MTU and stepping down until a value supported by every component in the network path is found. |
LowToHigh | The negotiation starts with a small value and tries larger values with each iteration until the optimal value has been found. |
To disable the MTU auto negotiation just set the AutoNegotiatePacketSize property to "No".
In the properties of the network interface card, please check whether "Interrupt Moderation" is activated.
An interrupt causes the CPU to, well, interrupt what it is currently doing. This consumes time and other resources, so usually a system should process as few interrupts as possible. At high data rates the number of interrupts might affect the overall performance of the system. The "Interrupt Moderation" setting allows interrupts for multiple packets to be combined into a single interrupt. The downside is that this might slightly increase the latency of an individual network packet, but that should usually be negligible.
Once "Interrupt moderation" is enabled there even might be the possibility to configure the interrupt moderation rate of the network interface.
Usually there are multiple options, each reflecting a different trade-off between latency and CPU usage. Depending on the expected data rate, different values might be suitable. Usually the NIC knows best, so setting the value to "Adaptive" is mostly the best option. Other values should only be used after careful consideration.
At high frame rates the best option sometimes might be "Extreme", provided individual network packets do not need to be processed right away. In that case, however, one needs to be aware of the consequences: if an application receives an incomplete image every now and then, "Extreme" might not be the right choice, since it basically means "wait until the last moment before the NIC receive buffer overflows, then generate an interrupt". While this is good in terms of minimal CPU load, it is bad if that interrupt is not served at once because of other interrupts, as packet data will then be lost. It really depends on the whole system and sometimes needs some trial and error. The amount of buffer reserved for receiving network data is configured by the Receive/Transmit Descriptors, and combined with the Maximum Transmission Unit (MTU) / Jumbo Frames parameter these three values are the most important ones to consider!
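On Windows, these driver options can also be set from PowerShell. Again a sketch only: the display names "Interrupt Moderation" and "Interrupt Moderation Rate" as well as the value strings are driver specific (typical for Intel drivers), and "Ethernet" stands for your adapter name:

# Check whether interrupt moderation is currently enabled
Get-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation"
# Enable it and let the driver adapt the rate to the current load
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation" -DisplayValue "Enabled"
Set-NetAdapterAdvancedProperty -Name "Ethernet" -DisplayName "Interrupt Moderation Rate" -DisplayValue "Adaptive"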
Some NICs might also offer to configure the number of "RSS (Receive Side Scaling) Queues". In certain cases this technology might help to improve the performance of the system, but in some cases it might even reduce it.
In general this feature offers the possibility to distribute the CPU load caused by network traffic across multiple CPU cores instead of handling the full load on a single core.
Usually a single network stream (defined by source and destination port and IP address) will always be processed on a single CPU core in order to benefit from cache locality. Switching on RSS will not change this, but it will try to distribute the network streams across the CPU cores more evenly, and combined with NIC properties like the base CPU it is even possible to configure which NIC shall use which CPU(s) in the host system.
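On Windows, the RSS configuration can be inspected and adjusted via the NetAdapter module as well (a sketch; "Ethernet" and the processor numbers are example values only):

# Inspect the current RSS configuration of the adapter
Get-NetAdapterRss -Name "Ethernet"
# Enable RSS and pin it to a range of CPU cores
Enable-NetAdapterRss -Name "Ethernet"
Set-NetAdapterRss -Name "Ethernet" -BaseProcessorNumber 2 -MaxProcessors 4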
Please check whether the number of "Receive Descriptors" (RxDesc) and "Transmit Descriptors" (TxDesc) of the NIC is set to the maximum value!
"Receive Descriptors" are data segments either describing a segment of memory on the NIC itself or in the host systems RAM for incoming network packets. Each incoming packet needs either one or more of these descriptors in order to transfer the packets data into the host systems memory. Once the number of free receive descriptors is insufficient packets will be dropped thus leads to data losses. So for demanding applications usually the bigger this value is configured the better the overall stability.
"Transmit Descriptors" are data segments either describing a segment of memory on the NIC itself or in the host systems RAM for outgoing network packets. Each outgoing packet needs either one or more of these descriptors in order to transfer the packets data out into the network. Once the number of free transmit descriptors is insufficient packets will be dropped thus leads to data losses. So for demanding applications usually the bigger this value is configured the better the overall stability.
These values are also closely related to the Maximum Transmission Unit (MTU) / Jumbo Frames setting. You can get a feeling for sensible values with the following formula:
network packets needed >= (1.1 * PixelPerImg * BytesPerPixel) / MTU

Example 1: MTU of 1500 at 1.3 MPixel (1 byte per pixel):

network packets needed >= (1.1 * 1.3M * 1) / 1500 >= 950

Example 2: MTU of 8192 (8k) at 1.3 MPixel:

network packets needed >= (1.1 * 1.3M * 1) / 8192 >= 175
Both examples show the required network packets per image, which are NOT to be confused with the number of available receive descriptors! A descriptor (receive or transmit) usually describes a fixed-size piece of memory (e.g. 2048 bytes). With increasing numbers of images per second the available "Receive Descriptors" might be consumed very quickly, given that the NIC is not always served immediately by the operating system. The default values reserved by the NIC driver usually vary between 64 and 256, which is typically not enough for acquiring uncompressed image data at average speed. Also, processing a network packet consumes some time that is independent of its size (e.g. generating an interrupt, calling upper software layers, etc.), which is why larger packets usually result in better overall performance.
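To get a feeling for how quickly these descriptors are consumed, the numbers from the examples above can be combined with an assumed frame rate. The following sketch uses a purely hypothetical 50 images/s:

# Rough packet rate estimate for the 1.3 MPixel example above
$pixelPerImg   = 1.3e6   # 1.3 MPixel sensor
$bytesPerPixel = 1       # 8 bit mono pixel format
$mtu           = 8192    # negotiated packet size
$framesPerSec  = 50      # assumed frame rate

# Packets per image including roughly 10 percent protocol overhead
$packetsPerImage  = [math]::Ceiling(1.1 * $pixelPerImg * $bytesPerPixel / $mtu)
$packetsPerSecond = $packetsPerImage * $framesPerSec
"packets per image: $packetsPerImage, packets per second: $packetsPerSecond"
# At 175 packets/image and 50 images/s (= 8750 packets/s) a default of 256
# receive descriptors buffers only about 30 ms of traffic before data is lost.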
The "Receive Descriptors" a.k.a. "Receive Buffers" and "Transmit Descriptors" a.k.a. "Transmit Buffers" (Intel Ethernet Server Adapter I350-T2 network interface) can usually be found under "Performance Options":
While it is possible to operate BVS CA-GX2 Dual-GigE devices with a single network cable (which requires using connector LAN 1 only), using two network cables operated in static LAG (Link Aggregation) is recommended when the device should run at full speed. Link Aggregation, also called SLA (Static Link Aggregation), is a technology invented to increase the performance between two network devices by combining two single network interfaces into one virtual link which provides the bandwidth of both single links (e.g. two 1 GBit links result in a virtual single 2 GBit link). Switching the modes (single or double link) requires a reboot of the camera.
If two network cables are to be used, either a network interface card with two network interfaces or two identical network interface cards that can be teamed are needed. Using a single card is recommended. In addition, the network interface card has to support so-called link aggregation, teaming, or bonding. Normally, Windows installs a standard device driver which does not support link aggregation, so you have to download and install a driver from the manufacturer's website. When using e.g. an "Intel Ethernet Server Adapter I350-T2" network controller, this will be:
After installation you have to combine the two interfaces which are used by the BVS CA-GX2. For this, open the device driver settings of the network controller via the "Device Manager":
"BVS CA-GX2"
After closing the group assistant, you can connect the camera using the two network interfaces. No special assignment between the camera connectors and the network controller interfaces is necessary.
It is also possible to use the camera with one network interface only. This, of course, results in a reduced data rate. In this case use only the right (primary) network interface of the camera, i.e. LAN 1 (the right connector when the label on the back can be read normally).
# Load the IntelNetCmdlets PowerShell module shipped with Intel's PROSet software
Import-Module -Name "C:\Program Files\Intel\Wired Networking\IntelNetCmdlets\IntelNetCmdlets"
# Combine the two ports of the adapter into one static link aggregation team
New-IntelNetTeam -TeamName "MyLAGTeam" -TeamMemberNames "Intel(R) Ethernet Server Adapter I350-T2","Intel(R) Ethernet Server Adapter I350-T2 #2" -TeamMode StaticLinkAggregation

Replace "MyLAGTeam" with your preferred team name and "Intel(R) Ethernet Server Adapter I350-T2" with the name of the NIC you are using.
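The newly created team can then be verified from the same module (assuming your IntelNetCmdlets version provides the Get-IntelNetTeam cmdlet):

# List all existing Intel teams together with their members and teaming mode
Get-IntelNetTeam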