Impact Acquire SDK C++
Chunk Data Format

In addition to the image data itself, certain devices might be capable of sending additional information (e.g. the state of all digital inputs and outputs at the beginning of the frame exposure) together with the image, and some devices might even send NO image but only information for each captured frame. A typical scenario for the latter case would be a smart device with a local application running on the device. In such a case a remote application could configure the device and the application, and in order to minimize the amount of data to be captured/transmitted, the remote application could decide to receive only e.g. the result of a barcode decoding application. An image might then be sent to the remote application only if the local application could not run successfully. For example, if the local application is meant to detect barcodes but could not succeed in doing so for some images, the remote application might want to store the images in which no barcode could be found for later analysis.

When a device is initialized by the framework, the framework will retrieve information about which additional data can be delivered by the device and will add the corresponding properties to the Info list of each request object. This, however, can only be done if the GenICam™ description file of the device provides this information.

Most devices will be capable of sending data in several formats. If NO additional information is configured for transmission, the features for accessing this information will be invisible and/or will not contain 'real' data. However, when the transmission of additional data has been enabled and the decoding information is provided by the device's description file, each request will contain up-to-date data for all transmitted features once the request object has been received completely and has been returned to the application/the user.
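For GenICam™ compliant devices the transmission of chunk data is typically enabled through the device's chunk related features. The following is a minimal sketch assuming the SFNC-style mvIMPACT::acquire::GenICam::ChunkDataControl wrapper applies to the device and that a 'Timestamp' chunk is offered; the actual chunk names and available selectors are device specific:

```cpp
#include <iostream>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

// Enable the transmission of chunk data for an already initialised device.
// 'pDev' is assumed to be a valid mvIMPACT::acquire::Device pointer. The
// chunk name "Timestamp" is just an example and device specific.
void enableChunkData( Device* pDev )
{
    GenICam::ChunkDataControl cdc( pDev );
    if( cdc.chunkModeActive.isValid() )
    {
        cdc.chunkModeActive.write( bTrue );
        if( cdc.chunkSelector.isValid() && cdc.chunkEnable.isValid() )
        {
            cdc.chunkSelector.writeS( "Timestamp" );
            cdc.chunkEnable.write( bTrue );
        }
    }
    else
    {
        std::cout << "This device does not offer chunk data features." << std::endl;
    }
}
```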

When a device does not support the transmission of additional chunk data, or only supports a subset of the features published in this interface, accessing any of the unsupported features without checking whether the feature is available will either raise an exception (in object-oriented programming languages) or fail and report an error.
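A minimal sketch of such a check, assuming the request exposes the chunk features as the C++ members chunkTimestamp and chunkExposureTime (which chunk properties are actually present depends on the device and on what has been enabled for transmission):

```cpp
#include <iostream>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Access chunk information from a completely captured request only after
// verifying that the corresponding feature is actually available.
// 'pRequest' is assumed to be a request that has been returned to the user.
void printChunkInfo( const Request* pRequest )
{
    if( pRequest->chunkTimestamp.isValid() )
    {
        std::cout << "chunk timestamp: " << pRequest->chunkTimestamp.read() << std::endl;
    }
    if( pRequest->chunkExposureTime.isValid() )
    {
        std::cout << "chunk exposure time: " << pRequest->chunkExposureTime.read() << std::endl;
    }
    // Calling e.g. pRequest->chunkExposureTime.read() without the check above
    // would raise an exception if the device did not transmit this chunk.
}
```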

In order to distinguish between image data and additionally transmitted information, a certain buffer format is used. The device framework will internally parse each buffer for additional information and update the corresponding features in the request object of the Impact Acquire framework. The exact layout of the buffer format depends on the internal transfer technology used as well as on the interface design; it is therefore not part of this documentation, and accessing the raw memory of a request by bypassing interface functions is not recommended. Sometimes (for example if the device description file does not contain the required information) this might, however, be the only way to access the meta-data at all, so it can be done if an application knows about the internal details.

The most common additional features have been added to the interface. If a device offers additional features that can be transmitted but are not part of the current user interface, these can still be obtained by an application by iterating over all the features in the Info list of the request object (as shown in the sketch below). How this can be done can e.g. be seen in the source code of ImpactControlCenter (look for occurrences of getInfoIterator), or have a look at the documentation of the function mvIMPACT::acquire::Request::getInfoIterator() for a simple code snippet. Another, more convenient approach for known custom features is to derive from mvIMPACT::acquire::RequestFactory (an example can be found there).
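A possible sketch of such an iteration, assuming a completely captured request and that each property value can be read as a string via readS(); error handling has been omitted for brevity:

```cpp
#include <iostream>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Print the name and value of every feature attached to the 'Info' list of a
// request, including chunk features that are not part of the C++ interface.
// 'pRequest' is assumed to be a request object returned to the application.
void dumpInfoList( const Request* pRequest )
{
    ComponentIterator it( pRequest->getInfoIterator() );
    while( it.isValid() )
    {
        if( it.isProp() )
        {
            const Property prop( it.hObj() );
            std::cout << prop.name() << ": " << prop.readS() << std::endl;
        }
        else if( it.isList() )
        {
            // Sub-lists such as 'ChunkData' could be entered here recursively.
            std::cout << "list: " << it.name() << std::endl;
        }
        ++it;
    }
}
```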

All features that are transmitted as chunk information will be attached to a sub-list "Info/ChunkData" of a request object. The general concept of chunk data has been developed within the GigE Vision™ and GenICam™ working groups. Detailed information about the layout of chunk data can be found in the GigE Vision™ specification or in any other standard supporting the chunk data format. Detailed knowledge should in general not be required for receiving or accessing data provided in chunk format, as Impact Acquire does all the parsing and processing internally whenever possible.

In rare cases (e.g. for proprietary meta-data) an application might want to do the parsing on its own. The property BufferSizeFilled of a request object can then be used to find the entry point for a custom chunk parser: when chunk data has been received, the first chunk tag can be located by jumping BufferSizeFilled bytes beyond the start address of the captured buffer, which is described by the pointer stored in the ImageData property of a request. The chunk data can then be parsed according to the transmission standard or detailed knowledge about the captured buffer's memory layout.
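A minimal sketch of locating this entry point, assuming the properties mentioned above are exposed as the C++ members imageData and bufferSizeFilled of the request object; how the bytes at the returned address must be interpreted is up to the custom parser:

```cpp
#include <cstdint>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Locate the entry point for a custom chunk parser as described above:
// the address BufferSizeFilled bytes beyond the start of the captured buffer.
// 'pRequest' is assumed to be a request that contains chunk data.
const uint8_t* locateChunkParserEntryPoint( const Request* pRequest )
{
    const uint8_t* pBufferStart = reinterpret_cast<const uint8_t*>( pRequest->imageData.read() );
    return pBufferStart + pRequest->bufferSizeFilled.read();
}
```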