Impact Acquire SDK C++
GigE Vision™

Unicast Device Discovery

This section is meant to explain how GigE Vision™ devices residing in a different subnet can also be accessed using the Impact Acquire framework.

Attention
Please be aware that network traffic which needs to be routed through different subnets usually does not work as reliably as a setup involving only switches and cables, so the full bandwidth delivered by one or multiple GigE Vision™ devices might not be achievable. Condition monitoring or slow streaming for analysis might still be an interesting use case, though. If possible, routers supporting a higher link speed than the attached cameras should be used in order to ensure a reliable connection.

More details about unicast device discovery can be found in the Impact Acquire GUI manual in the "Detecting Devices Residing In A Different Subnet" chapter belonging to the IPConfigure tool, as well as in the GigE Vision™ device manuals under "Use Cases", where the chapter "Discovering Devices In Different Subnets" is of particular interest.

When using the Impact Acquire API it is important to note that everything configured through the API will be stored permanently for every user working on a particular system. This allows, for example, configuring everything up front using IPConfigure if the network setup is static. After this has been done, each application using the Impact Acquire framework will detect the remote devices just like any other device. To clear or change this configuration, the API or IPConfigure has to be used again.

The properties needed for this configuration are:
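
The exact set of properties can be taken from the interface's nodemap. As a minimal and purely hypothetical sketch (the mvUnicastDeviceDiscovery* names used below are assumptions for illustration only and must be verified against the InterfaceModule nodemap of the installed producer), adding the IP address of a remote device to the discovery configuration could look roughly like this:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;
using namespace mvIMPACT::acquire::GenICam;

void addRemoteDeviceForDiscovery( InterfaceModule& im, int64_type remoteDeviceIP )
{
    im.mvUnicastDeviceDiscoveryCommandCount.write( 1 );                      // assumed name: number of unicast discovery targets
    im.mvUnicastDeviceDiscoveryCommandSelector.write( 0 );                   // assumed name: select the first entry
    im.mvUnicastDeviceDiscoveryDestinationIPAddress.write( remoteDeviceIP ); // assumed name: IP address of the remote device
}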

Actions and their Balluff extensions

This section is meant to explain the basics of Actions and the details of Balluff extensions to them.

General description for action properties

The GigE Vision™ standard defines a packet type "ACTION_CMD" specifically for the purpose of directly invoking functionalities in cameras as a kind of network-based event. With these packets, any party connected to the network may invoke certain predefined actions in a number of cameras, allowing for quasi-synchronous or scheduled activities. For details on configuring and arming actions, see the corresponding chapter in the GigE Vision™ specification. For configuring actions in the camera, some features are defined by the SFNC (Standard Features Naming Convention) in chapter "Action Control", which may be accessed by the following properties:

Note
The device asserts the selected action signal only if:
  • the selected ActionDeviceKey is equal to the action device key in the action protocol message,
  • the bitwise AND of the action group mask in the action protocol message and the selected ActionGroupMask is non-zero, and
  • the selected ActionGroupKey is equal to the action group key in the action protocol message.
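
The matching rule above can be expressed compactly in code. The following is a minimal illustrative sketch (not part of the SDK) of the decision a device makes for each received ACTION_CMD packet:

#include <cstdint>

// A device asserts the selected action signal only if all three conditions hold.
bool deviceAssertsActionSignal( uint32_t msgDeviceKey, uint32_t msgGroupKey, uint32_t msgGroupMask,
                                uint32_t selectedDeviceKey, uint32_t selectedGroupKey, uint32_t selectedGroupMask )
{
    return ( msgDeviceKey == selectedDeviceKey ) &&         // device key matches exactly
           ( ( msgGroupMask & selectedGroupMask ) != 0 ) && // group masks overlap (bitwise AND non-zero)
           ( msgGroupKey == selectedGroupKey );             // group key matches exactly
}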

This way, a predefined number of GigE Vision™ devices may be set up to react to a number of action signals by configuring them as TriggerSource, CounterTriggerSource, CounterEventSource, CounterResetSource, or TimerTriggerSource.
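
For example, besides triggering a frame directly, an action signal can drive a counter. The following sketch assumes an opened device 'pDev' that exposes the SFNC counter features through the mvIMPACT::acquire::GenICam::CounterAndTimerControl class; the selected names ("Counter1", "Action1") are illustrative and depend on the device:

CounterAndTimerControl ctc( pDev );
ctc.counterSelector.writeS( "Counter1" );
ctc.counterEventSource.writeS( "Action1" ); // count every received Action1 signal
ctc.counterReset.call();                    // start counting from zero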

The Interface module of the GenICam™ producer, on the other hand, contains standard-compliant functionality to emit action signals. You may set the ActionDeviceKey, ActionGroupKey and ActionGroupMask as a preparation before invoking the mvIMPACT::acquire::GenICam::ActionControl::actionCommand property itself, and you may set scheduling parameters to issue scheduled actions. The following properties are available:

In addition, you may set the GevActionDestinationIPAddress to unicast an action command to one camera only, or you may broadcast them if you select a broadcast address (like 169.254.255.255). See property:
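
GevActionDestinationIPAddress expects the address as an integer value. A small plain C++ helper along the following lines (purely illustrative, not part of the SDK) can be used to convert a dotted IPv4 string, e.g. to unicast the command to a single camera or to broadcast it into the subnet:

#include <cstdint>
#include <sstream>
#include <string>

// Convert a dotted IPv4 string such as "169.254.255.255" into the integer representation
// expected by gevActionDestinationIPAddress.
int64_t ipv4ToInt64( const std::string& ip )
{
    std::istringstream iss( ip );
    std::string octet;
    int64_t result = 0;
    while( std::getline( iss, octet, '.' ) )
    {
        result = ( result << 8 ) | ( std::stoll( octet ) & 0xFF );
    }
    return result;
}

// im.gevActionDestinationIPAddress.write( ipv4ToInt64( "169.254.255.255" ) ); // broadcast
// im.gevActionDestinationIPAddress.write( ipv4ToInt64( "169.254.1.42" ) );    // unicast (example address)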

If you need an acknowledge for the action that you signal, you may enable the property mvActionAcknowledgeEnable. The necessary handling depends on whether you unicast the action to just one camera or broadcast the action command. In case of a unicast, you will see the success of the command immediately in the return code of the command. In case of a broadcast, you have to specify the finalization condition for the command: first, specify how many responses you expect (mvActionAcknowledgesExpected), and second, define a maximum wait time (mvActionAcknowledgeTimeout), which is necessary to terminate the command if the requested number of responses has not arrived. In case of an error, there is a property for the number of received acknowledges (mvActionAcknowledgesReceived) as well as a property for the number of packets among them that signaled an error status (mvActionAcknowledgesFailed).

The related properties are:

Note
Balluff/MATRIX VISION cameras with an older firmware might send the acknowledge packet with a "GEV_STATUS_NOT_IMPLEMENTED" error status, and they may even send it in more cases than you expect. To avoid this, update your camera to a newer firmware.

Use cases for Actions and their acknowledges

Example 1: Suppose you have two cameras available in your application, both located at different spots beside a running lane. Both cameras shall take an image at a predefined time interval, but there is no possibility to synchronize the internal times with PTP, so scheduled Actions cannot be used. A small time difference in the interval is OK, and both cameras are controlled by a different PC, so actions shall be used. The time of exposure is determined at this PC, which will be broadcasting the action commands into the subnet with the two cameras. One camera will be triggered directly by setting the TriggerSource to its Action1 input, while on the other camera, the action signal triggers a 500ms timer, which in turn triggers the FrameStart of an image. Using action acknowledges, you can instantly receive confirmation for the success of the operation before even the first image is taken instead of having to wait for the end of exposure and the transfer of the image. In addition to the usual setup with the Keys and Mask properties and setting the gevActionDestinationIPAddress to the correct broadcast address, set mvActionAcknowledgeEnable to true, set mvActionAcknowledgesExpected to 2 and mvActionAcknowledgeTimeout to a suitable time (the default value is 20 ms). If the ActionCommand succeeds, you have the confirmation that the process has been successfully started and afterwards, mvActionAcknowledgesReceived holds the value of 2. Checking mvActionAcknowledgesFailed for a value of 0 asserts that both Acknowledge packets returned with a SUCCESS status.

Example 2: Suppose you have four cameras available in your application, and you'd like to use exactly two of them for a certain action command whose success you want an acknowledge for. Furthermore, you have a low-light environment resulting in long exposure times, and you need to know at once if the action command has been received, to be able to trigger the other two cameras with a different action command in case of an unsuccessful acknowledge. In addition to the usual setup with the Key and Mask properties and setting the gevActionDestinationIPAddress to a broadcast address, set mvActionAcknowledgeEnable to true, set mvActionAcknowledgesExpected to 2 and mvActionAcknowledgeTimeout to a suitable time (the default value is 20 ms). By setting up the standard action properties, you ensure that only the desired two cameras receive the first action command, and by enabling the acknowledge, you have a near-instant confirmation that the action command has arrived at the cameras and has triggered the desired activities (instead of, e.g. having to wait for the end of exposure and the transfer of the image). You check the result of the ActionCommand to see if the action was successful, and as an additional confirmation, assert that property mvActionAcknowledgesReceived holds the value 2.
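
A compact sketch of the fallback logic described in example 2, assuming 'im' is the prepared InterfaceModule instance and that the group key 0x1 addresses the two primary cameras while the hypothetical group key 0x2 addresses the two backup cameras:

im.actionGroupKey.write( 0x1 );  // group containing the two primary cameras
im.actionGroupMask.write( 0x1 );
if( ( im.actionCommand.call() != mvIMPACT::acquire::DMR_NO_ERROR ) ||
    ( im.mvActionAcknowledgesReceived.read() < 2 ) ||
    ( im.mvActionAcknowledgesFailed.read() > 0 ) )
{
    // the primary cameras did not confirm the trigger in time, so address the backup group instead
    im.actionGroupKey.write( 0x2 ); // hypothetical key of the backup group
    im.actionCommand.call();
}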

Example 3: Suppose you have four cameras in a low-light scene, the action command is sent to all of the cameras, and the minimum condition is that at least three of them acknowledge the action command. All four cameras are now in the same action group, and you set mvActionAcknowledgesExpected to 3. In case that fewer than three cameras acknowledge the action command within the time defined by mvActionAcknowledgeTimeout, the ActionCommand fails. On arrival of the third acknowledge, the command is terminated successfully, and additional acknowledges are irrelevant.

Coding example for Actions

For illustration, example 1 is implemented in a simplified manner; the two cameras in this example are now controlled from the same place where the action command is configured and invoked.

As a first step the user selects the cameras. They must be GEV cameras, and they must be connected to the same interface.

cout << "Please select the first camera that you want to use:" << endl;
Device* pDev0 = getDeviceFromUserInput( devMgr );
cout << "Using camera with " << pDev0->serial << endl;
if( !isCameraSuitableForSample( pDev0, systemIndex, interfaceIndex ) )
{
return 1;
}

The two cameras must be set up for action-based triggering, which means opening each camera and configuring its Action properties as well as its Acquisition properties accordingly.

//-----------------------------------------------------------------------------
bool prepareCam0ForActionTriggering( Device* pDev )
//-----------------------------------------------------------------------------
{
    if( !pDev->isOpen() )
    {
        pDev->open();
    }
    // configure the action features so that the device reacts to the action command sent later
    ActionControl ac( pDev );
    if( !ac.actionSelector.isValid() || !ac.actionDeviceKey.isValid() || !ac.actionGroupKey.isValid() || !ac.actionGroupMask.isValid() )
    {
        printNotSupportedAndWait( pDev, "Not all action features found in device." );
        return false;
    }
    ac.actionDeviceKey.write( 0x1 );
    ac.actionSelector.write( 0 );
    ac.actionGroupKey.write( 0x1 );
    ac.actionGroupMask.write( 0x1 );
    // let the FrameStart trigger be generated by the Action1 signal
    AcquisitionControl acC( pDev );
    if( !acC.triggerSelector.isValid() || !acC.triggerSource.isValid() || !acC.triggerMode.isValid() )
    {
        printNotSupportedAndWait( pDev, "Not the necessary Trigger features found in device." );
        return false;
    }
    acC.triggerSelector.writeS( "FrameStart" );
    acC.triggerSource.writeS( "Action1" );
    acC.triggerMode.writeS( "On" );
    return true;
}
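
A hypothetical call site for this helper, assuming pDev0 was selected as shown earlier (the second camera is prepared analogously, with the additional timer shown below):

if( !prepareCam0ForActionTriggering( pDev0 ) )
{
    return 2; // illustrative error code
}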

The second camera has an additional timer between the action signal and the frame start, delaying the frame start trigger by half a second.

...
// 'ctc' is assumed to be an instance of mvIMPACT::acquire::GenICam::CounterAndTimerControl for this device
ctc.timerSelector.writeS( "Timer1" );
ctc.timerReset.call();
ctc.timerDuration.write( 500000 ); // 500000 us = 0.5 s delay
ctc.timerTriggerSource.writeS( "Action1" );
...
AcquisitionControl acC( pDev );
acC.triggerSource.writeS( "Timer1End" );
...

Next, the action command is prepared on the interface, setting it up to broadcast the correct action on this subnet and to expect two acknowledges:

//-----------------------------------------------------------------------------
void prepareInterfaceForActionTriggering( const InterfaceModule& im )
//-----------------------------------------------------------------------------
{
    // the key/mask values must match the ones configured in the cameras
    im.actionDeviceKey.write( 0x1 );
    im.actionGroupKey.write( 0x1 );
    im.actionGroupMask.write( 0x1 );
    // derive the broadcast address of the subnet the interface is connected to
    const uint64_type subnetMask = im.gevInterfaceSubnetMask.read();
    const int64_type broadcast = ( ( im.gevInterfaceSubnetIPAddress.read() & subnetMask ) | ~subnetMask ) & 0x00000000ffffffff;
    im.gevActionDestinationIPAddress.write( broadcast );
    // enable acknowledges: 2 cameras are expected to answer within 20 ms
    im.mvActionAcknowledgeEnable.write( TBoolean::bTrue );
    im.mvActionAcknowledgesExpected.write( 2 );
    im.mvActionAcknowledgeTimeout.write( 20 );
}
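
Assuming 'im' denotes the mvIMPACT::acquire::GenICam::InterfaceModule instance of the network interface both cameras are attached to (see GigEVisionActionFeatures.cpp for how the sample obtains it), this helper is simply called once before the acquisition is started:

prepareInterfaceForActionTriggering( im );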

After this, acquisition is set up:

// 'fi' is a mvIMPACT::acquire::FunctionInterface instance for the camera
fi.imageRequestSingle(); // queue a single image request
fi.acquisitionStart();   // start the acquisition engine
Finally, the action command is invoked. Note the error handling that is possible with the returned result of the action command and with the properties for evaluating the acknowledges; it allows you to handle problematic situations better and faster.

if( ( im.actionCommand.call() != mvIMPACT::acquire::DMR_NO_ERROR ) ||
    ( im.mvActionAcknowledgesReceived.read() < im.mvActionAcknowledgesExpected.read() ) ||
    ( im.mvActionAcknowledgesFailed.read() > 0 ) )
{
    cout << "Action was not successful. " << im.mvActionAcknowledgesReceived.read() << " of ";
    cout << im.mvActionAcknowledgesExpected.read() << " expected acknowledges have arrived ";
    cout << "and " << im.mvActionAcknowledgesFailed.read() << " had a non-SUCCESS status code" << endl;
    cout << "Aborting image reception." << endl;
    return 4;
}
// Now fetch the images because we know that the commands have been delivered correctly
...
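
Fetching the images can then be done in the usual way. A minimal sketch for one camera, assuming fi0 from above and a hypothetical timeout of 1000 ms:

const int timeout_ms = 1000;
const int requestNr = fi0.imageRequestWaitFor( timeout_ms );
if( fi0.isRequestNrValid( requestNr ) )
{
    const Request* pRequest = fi0.getRequest( requestNr );
    if( pRequest->isOK() )
    {
        // process the image data here (pRequest->imageData, pRequest->imageWidth, ...)
    }
    fi0.imageRequestUnlock( requestNr ); // hand the buffer back to the driver
}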

The complete code used for this use case can be found at: GigEVisionActionFeatures.cpp

Setting Up Resend Behaviour

Note
Before configuring anything related to the GigE Vision™ resend concept it is recommended to work through everything discussed in the Optimizing The Overall System Performance chapter. Even though requesting the retransmission of lost data is well defined within the GigE Vision™ standard, in almost every case it is not needed. Whenever possible the root cause of the data loss should be eliminated instead of requesting the lost data again. Only if image data is still lost from time to time as described here, and everything recommended in that chapter did not solve the issue, should this section be used as a last resort!

Checking How Much Data Actually Is Lost

The Impact Acquire SDK provides two properties to check how severe the problem actually is:

These properties should be used to get a rough idea about the severity of the problem. As a rule of thumb, whenever more than roughly 3 percent of a buffer is lost on a regular basis, requesting the retransmission of that data will not be helpful. On the contrary, chances are that doing so will make matters worse, since retransmissions put even more pressure on the link. In this case either reducing the bandwidth or changing the network components or the data routing is more likely to help.
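
As a rough sketch of such a check (assuming the counters referred to here are accessible through the mvIMPACT::acquire::Statistics class; the property name missingDataAverage_pc is an assumption and has to be verified against the installed SDK version):

Statistics statistics( pDev );
// ... capture images for a while, then check how much data was lost on average (in percent) ...
if( statistics.missingDataAverage_pc.read() > 3.0 )
{
    cout << "More than 3 percent of the data is lost on a regular basis - fix the root cause instead of enabling resends!" << endl;
}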

When only small amounts of data are lost from time to time, fine-tuning the resend behaviour of the system might produce good results.

The most important properties are:

Other properties within the DataStreamModule nodemap whose names start with mvResend can also improve the situation. Their documentation provides hints on when and how to use them.