Impact Acquire SDK .NET
Working With Frame Grabber Boards (deprecated)
Note
The last minor release containing frame grabber related Impact Acquire packages was version 2.47.0. If frame grabber boards are part of the application, the last released package for these boards can be mixed with newer releases of other Impact Acquire packages up to version 3.0.0. Mixing 2.x installations with 3.x and higher installations will not be supported. See Migrating Applications To Impact Acquire 3.0.0 And Higher as well! Frame grabber board packages have entered maintenance mode!

Certain capture devices (e.g. frame grabbers) can process data from a wide range of imaging devices (e.g. cameras). However, in order to interpret the incoming data from an imaging device correctly, the capture device needs a certain amount of information about the structure of the video signal.

The Impact Acquire interface addresses this necessity by introducing so-called "camera descriptions". A "camera description" is a set of parameters that enables the capture device to interpret the incoming image data and to reconstruct a correct image from the imaging device in the memory of the host system. For instance, this information may specify whether the image is transmitted as a whole or as individual blocks (e.g. when dealing with interlaced cameras) that need to be reconstructed in a certain way to form the complete image.

Each capture device will support different sets of parameters. For example, some capture devices will only be able to capture image data from standard video sources such as PAL or NTSC compliant cameras, while others might only be capable of acquiring data from digital image sources such as CameraLink compliant cameras. To reflect these device specific capabilities, the "camera descriptions" have been grouped into different base classes. See mv.impact.acquire.CameraDescriptionBase and derived classes to find out how the basic structure of these objects looks. Which basic "camera description" classes are supported by an individual device can be seen directly after the device has been initialized by creating an instance of mv.impact.acquire.CameraDescriptionManager. This object will contain a separate list for each camera description class. One or more of these lists might be empty for a device, which indicates that this device doesn't support this particular group of descriptions.

Note
For devices that don't support camera descriptions at all, creating an instance of mv.impact.acquire.CameraDescriptionManager will raise an exception that should be handled appropriately.

To select a certain camera description to be used to prepare the capture device for the expected data, the property mv.impact.acquire.CameraSettingsFrameGrabber.type can be modified. Its translation dictionary (e.g. see mv.impact.acquire.EnumPropertyI<T>.getTranslationDictString) will contain every camera description currently available.
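
The following C# sketch puts these steps together: it creates the mv.impact.acquire.CameraDescriptionManager inside a try/catch block (as the note above suggests), lists the available descriptions via the translation dictionary of the 'type' property and then selects one by name. Only the classes and functions named in this chapter are taken from the text; the device bootstrapping via DeviceManager/Device, the 'dictSize' accessor and the description name "Generic" are assumptions that may need adapting to the actual interface.

    using System;
    using mv.impact.acquire;

    class CameraDescriptionSelection
    {
        static void Main()
        {
            // Assumption: the first device in the system is the frame grabber board to configure.
            Device pDev = DeviceManager.getDevice(0);
            pDev.open();

            CameraSettingsFrameGrabber cs = new CameraSettingsFrameGrabber(pDev);
            try
            {
                // Raises an exception for devices that don't support camera descriptions at all.
                CameraDescriptionManager cdm = new CameraDescriptionManager(pDev);
                // List every camera description currently available via the translation
                // dictionary of the 'type' property ('dictSize' follows the C++ naming and
                // is an assumption here).
                for (int i = 0; i < cs.type.dictSize; i++)
                {
                    Console.WriteLine(cs.type.getTranslationDictString(i));
                }
                // Select one of the listed descriptions by name ("Generic" is a placeholder).
                cs.type.writeS("Generic");
            }
            catch (ImpactAcquireException e)
            {
                // Assumption: exceptions raised by the interface derive from ImpactAcquireException.
                Console.WriteLine("No camera descriptions supported: " + e.Message);
            }
        }
    }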

Property Interaction

Certain properties will affect other properties depending on the value written to them. This section is meant to help answer some of the questions that might arise from this behaviour.

mv.impact.acquire.CameraSettingsFrameGrabber.acquisitionField <-> mv.impact.acquire.CameraDescriptionStandard.startField:

While mv.impact.acquire.CameraSettingsFrameGrabber.acquisitionField is set to mv.impact.acquire.TAcquisitionField.afAuto, the field selected in the property mv.impact.acquire.CameraDescriptionStandard.startField will be used to trigger the acquisition.

mv.impact.acquire.CameraSettingsFrameGrabber.interlacedMode <-> mv.impact.acquire.CameraDescriptionNonStandard.interlacedType:

When the latter property is set to mv.impact.acquire.TCameraInterlacedType.citNone, the property mv.impact.acquire.CameraSettingsFrameGrabber.interlacedMode will be invisible, as it doesn't make sense to define how an interlaced signal has to be reconstructed if no interlaced signal is present.

If mv.impact.acquire.CameraDescriptionNonStandard.interlacedType is set to either mv.impact.acquire.TCameraInterlacedType.citInterlaced or mv.impact.acquire.TCameraInterlacedType.citInvertedInterlaced, mv.impact.acquire.CameraSettingsFrameGrabber.interlacedMode can be used to define whether the resulting image shall be reconstructed from the odd and even field of the interlaced signal (mv.impact.acquire.CameraSettingsFrameGrabber.interlacedMode must then be set to mv.impact.acquire.TInterlacedMode.imOn) or whether the single fields shall be treated as individual images (mv.impact.acquire.CameraSettingsFrameGrabber.interlacedMode must then be set to mv.impact.acquire.TInterlacedMode.imOff).

In the latter situation either just one particular field (either odd or even) or every field can be captured. This again can be defined via the two properties mv.impact.acquire.CameraSettingsFrameGrabber.acquisitionField and mv.impact.acquire.CameraDescriptionStandard.startField. The following table shows the behaviour for a camera signal depending on these selections (a configuration sketch follows the table):

Value of "interlacedType" Value of "interlacedMode" Value of "acquisitionField" Value of "startField"
Result of this setting / behaviour
citNone no influence(will be invisible) no influence(will be invisible) no influence INTERLACED VIDEO SOURCES: All fields will be treated like individual images no matter whether it's an odd or even field. NON-INTERLACED VIDEO SOURCES: Here we are just dealing with full frames thus every frame will be captured.
citInterlaced or citInvertedInterlaced imOn no influence(will be invisible) afEven full images merged from one even and one odd field will be captured. The acquisition will start with the next detected even field after an image has been requested.
citInterlaced or citInvertedInterlaced imOn no influence(will be invisible) afOdd full images merged from one even and one odd field will be captured. The acquisition will start with the next detected odd field after an image has been requested.
citInterlaced or citInvertedInterlaced imOn no influence(will be invisible) afAny full images merged from one even and one odd field will be captured. The acquisition will start with the next detected field after an image has been requested.
citInterlaced or citInvertedInterlaced imOff afAuto (the value of "startField" will be used) afEven Only even fields will be captured. These will be treated as individual images.
citInterlaced or citInvertedInterlaced imOff afEven no influence
citInterlaced or citInvertedInterlaced imOff afAuto (the value of "startField" will be used) afOdd Only odd fields will be captured. These will be treated as individual images.
citInterlaced or citInvertedInterlaced imOff afOdd no influence
citInterlaced or citInvertedInterlaced imOff afAuto (the value of "startField" will be used) afAny all fields will be captured and treated like individual images (alternating odd and even).
citInterlaced or citInvertedInterlaced imOff afAny afAny
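
As a rough illustration of these interactions, the following fragment configures the combination "citInterlaced / imOff / afEven" from the table above, i.e. only even fields are captured and each field is delivered as an individual image. The property and enum names are those used in this chapter; the accessor used to obtain the description object from the CameraDescriptionManager is hypothetical and must be checked against the class documentation.

    // 'pDev' is an already opened frame grabber device (see the sketch above).
    CameraSettingsFrameGrabber cs = new CameraSettingsFrameGrabber(pDev);
    CameraDescriptionManager cdm = new CameraDescriptionManager(pDev);

    // Hypothetical accessor: fetch the first non-standard description from the manager.
    CameraDescriptionNonStandard cd = cdm.getNonStandardCameraDescription(0);

    // Row "citInterlaced / imOff / afEven / no influence" of the table above:
    cd.interlacedType.write(TCameraInterlacedType.citInterlaced); // an interlaced signal is present
    cs.interlacedMode.write(TInterlacedMode.imOff);               // don't merge the two fields
    cs.acquisitionField.write(TAcquisitionField.afEven);          // capture even fields only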

When dealing with CameraLink camera descriptions there are some additional dependencies between CameraLink-specific properties (a configuration sketch follows the tables below).

Dependency between mv.impact.acquire.CameraDescriptionCameraLink.bitsPerPixel and mv.impact.acquire.CameraDescriptionCameraLink.pixelsPerCycle:

Value of "BitsPerPixel" Allowed values for "PixelsPerCycle"
8 1, 2, 3, 4, 8
10 1, 2, 3, 4
12
14 1
16
24
30
36

Dependency between mv.impact.acquire.CameraDescriptionCameraLink.pixelsPerCycle, mv.impact.acquire.CameraDescriptionCameraLink.tapsXGeometry and mv.impact.acquire.CameraDescriptionCameraLink.tapsYGeometry:

Value of "PixelsPerCycle" Allowed values for "TapsXGeometry" Allowed values for "TapsYGeometry"
1 cltxg1X cltyg1Y
2 cltxg1X cltyg1Y2, cltyg2YE
cltxg1X2, cltxg2X, cltxg2XE, cltxg2XM cltyg1Y
3 cltxg1X3, cltxg3X cltyg1Y
4 cltxg1X4, cltxg4X, cltxg2X2, cltxg2X2E, cltxg2X2M, cltxg4XE cltyg1Y
cltxg1X2, cltxg2X, cltxg2XE, cltxg2XM cltyg1Y2, cltyg2YE
8 cltxg1X8, cltxg8X cltyg1Y
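
A hedged sketch of one combination permitted by both tables (8 bits per pixel, 2 pixels per cycle, a two-tap horizontal geometry) could look like this; how the CameraDescriptionCameraLink object is obtained and the exact enum type names of the tap geometry constants are assumptions.

    // 'clDesc' is a mv.impact.acquire.CameraDescriptionCameraLink instance obtained from the
    // CameraDescriptionManager of an opened device (accessor not shown in this chapter).
    clDesc.bitsPerPixel.write(8);     // 8 bits per pixel allows 1, 2, 3, 4 or 8 pixels per cycle
    clDesc.pixelsPerCycle.write(2);   // 2 pixels per cycle allows e.g. cltxg2X with cltyg1Y
    // Assumption: the tap geometry values live in enums named TCameraTapsXGeometry and
    // TCameraTapsYGeometry; verify against the API reference.
    clDesc.tapsXGeometry.write(TCameraTapsXGeometry.cltxg2X);
    clDesc.tapsYGeometry.write(TCameraTapsYGeometry.cltyg1Y);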

Camera Descriptions

Create A New Camera Description

When a camera is connected that differs in one or more parameters from the defaults offered by one of the available base classes, and no special description for the imaging device in question is available, a new matching description must be generated.

Note
It's also possible to modify one of the standard descriptions to adapt the parameter set to the used imaging device, but this method is not recommended as it would define something to be 'standard' which in fact is not. Therefore it is not possible to store the standard descriptions permanently. It is, however, possible to modify and work with the changed parameters, but these changes will be lost once the device is closed.

The recommended way of adapting an imaging source to a capture device is to create a new description for an imaging device that does not completely fit one of the offered standard descriptions. The first thing to decide when creating a new camera description is which existing description offers the closest match to the new one. Once this has been decided, a copy of this description can be created with an arbitrary name (which must be unique within the family the description is created from).

Note
For an example how to create a new camera description see the description of mv.impact.acquire.CameraDescriptionBase.

Afterwards the newly created camera description will be added to the list of existing ones. It will therefore be available via the instance of mv.impact.acquire.CameraDescriptionManager and also selectable via mv.impact.acquire.CameraSettingsFrameGrabber.type. At this point its parameters will completely match the "parent" description (the one the function mv.impact.acquire.CameraDescriptionBase.copyDescription was executed from).

Since the reason for creating a new camera description was that the parameters of the existing description didn't exactly match the connected imaging device, the next step will usually be to modify some of the parameters.
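
A minimal sketch of these two steps (copying the closest matching description and modifying the copy), assuming the device offers at least one standard description; the manager accessors, the list index of the copy and the modified property are hypothetical, and only mv.impact.acquire.CameraDescriptionBase.copyDescription itself is taken from this chapter:

    CameraDescriptionManager cdm = new CameraDescriptionManager(pDev);
    // Hypothetical accessor: take the standard description that matches the camera best.
    CameraDescriptionStandard parent = cdm.getStandardCameraDescription(0);

    // Create a copy under a new, unique name within this description family
    // ("MyInterlacedCamera" is an arbitrary example name).
    parent.copyDescription("MyInterlacedCamera");

    // The copy is now part of the manager's list and selectable via
    // CameraSettingsFrameGrabber.type; fetch it and adapt it to the connected camera.
    // The accessor, the index and the modified property ('aoiHeight') are assumptions.
    CameraDescriptionStandard myDescription = cdm.getStandardCameraDescription(1);
    myDescription.aoiHeight.write(576);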

Storing Camera Descriptions

Note
A new camera description will NOT be stored permanently by default. In order to make this description available the next time the capture device is initialized, the newly created description must be exported via a function call.

To store a camera description permanently the function mv.impact.acquire.CameraDescriptionBase.exportDescription of the new camera description must be invoked.

As a direct result the modified settings will become the new default values of this particular camera description.

Note
Please note that this will NOT work for one of the standard camera descriptions. Whenever the user tries to export one of these, the error mv.impact.acquire.TDMR_ERROR.DMR_EXECUTION_PROHIBITED will be returned. This reflects the fact that a standard can't be modified manually; modifications must ALWAYS be done by creating a new description.
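
A sketch of the export step, assuming 'myDescription' is the user-created description from the previous section and assuming mv.impact.acquire.CameraDescriptionBase.exportDescription reports problems via a numeric TDMR_ERROR code; only exportDescription and DMR_EXECUTION_PROHIBITED are taken from this chapter:

    // Write the description to a camera description file so that it is available again
    // the next time the device is initialized (requires 'using System;' for Console).
    int result = myDescription.exportDescription();
    if (result == (int)TDMR_ERROR.DMR_EXECUTION_PROHIBITED)
    {
        // Standard descriptions cannot be exported; create and export a copy instead.
        Console.WriteLine("Exporting a standard camera description is prohibited.");
    }
    else if (result != (int)TDMR_ERROR.DMR_NO_ERROR)
    {
        Console.WriteLine("exportDescription failed with error code " + result);
    }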

When exporting a camera description a file in XML format will be written to disk. On Windows camera descriptions will be stored under "%PUBLIC%/Documents/Balluff/ImpactAcquire/CameraFiles" (or "%MVIMPACT_ACQUIRE_DATA_DIR%/CameraFiles", which points to the same folder), on Linux this directory will be "/opt/ImpactAcquire/data/camerafiles", while on other platforms these files will end up in the current working directory.

Now when closing and re-opening a device, only the default camera descriptions and the one selected before the settings were saved will appear in the list of camera descriptions of an instance of mv.impact.acquire.CameraDescriptionManager. This is to save memory. However, all detected camera descriptions will be available via the property mv.impact.acquire.CameraSettingsFrameGrabber.type.

Once a description that hasn't been in the list of camera descriptions before is selected, it will be created and thus becomes available for modifications again via the class mv.impact.acquire.CameraDescriptionManager.

Again: for a different camera a new description should be generated. To operate complex cameras in different modes, either a new description can be generated or an existing one can be modified.

After a camera description has been modified, the function mv.impact.acquire.CameraDescriptionBase.importDescription can be used to fall back to the values stored in the camera description file. This will restore the default settings for this description. The function mv.impact.acquire.ComponentCollection.restoreDefault serves the same purpose, but works for default descriptions as well.
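
For completeness, a short sketch of both ways to discard modifications, again assuming 'myDescription' is an already obtained description object:

    // Reload the values stored in the camera description file on disk; this discards
    // any modifications made since the last export.
    myDescription.importDescription();

    // For default descriptions (which cannot be exported) the same effect can be
    // achieved via the method inherited from ComponentCollection:
    myDescription.restoreDefault();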