Impact Acquire SDK C
The image acquisition using Impact Acquire with Balluff hardware is handled driver-side by two queues: the request queue and the result queue.
The main advantage of using queues is that the capture process becomes independent of side effects. Whenever the application and/or the operating system is temporarily busy doing other things, the framework can continue to work in the background as long as it has image requests to process. Thus, short intervals during which the system doesn't schedule the application's threads can be buffered.
The following figure shows what the initial state of the driver's internal queues will look like when no request for an image has been made by the user:
Within the user application, the number of request objects can be controlled with dedicated functions and properties of the driver.
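As a minimal sketch, assuming the requestCount property of mvIMPACT::acquire::SystemSettings is used for this purpose (exact names may differ between Impact Acquire versions), the number of request objects could be adjusted like this:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Adjust the number of request objects (and thus capture buffers) the driver
// allocates for an already opened device.
void configureRequestCount( Device* pDev, int desiredRequestCount )
{
    SystemSettings ss( pDev );
    if( ss.requestCount.isWriteable() )
    {
        ss.requestCount.write( desiredRequestCount );
    }
}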
Whenever the user application requests an image, one of the request objects is placed in the request queue.
When working with the device specific interface layout (see the property InterfaceLayout of the device object or the enumeration mvIMPACT::acquire::Device::interfaceLayout for details), all the settings (e.g. gain, exposure time, ...) that shall be used for this particular image request are buffered during this step, so the driver framework guarantees that these settings will be used to process this request.
In contrast, when working with the GenICam™ interface layout, all modifications applied to settings are written directly to the device. Therefore, in order to guarantee that a certain change is already reflected in the next image, the capture process must be stopped, the changes must be applied and the capture process must be restarted.
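As an illustration only, such a stop-modify-restart sequence for an exposure time change might look like the following sketch; the GenICam::AcquisitionControl class, its exposureTime property and the user controlled acquisitionStart()/acquisitionStop() calls (described at the end of this section) are assumptions based on typical Impact Acquire C++ sample code:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>

using namespace mvIMPACT::acquire;

// Stop streaming, apply the new exposure time, then restart streaming so the
// change is guaranteed to affect every following image (user controlled
// start/stop behaviour assumed).
void changeExposureSafely( Device* pDev, FunctionInterface& fi, double exposure_us )
{
    fi.acquisitionStop();
    GenICam::AcquisitionControl ac( pDev );
    ac.exposureTime.write( exposure_us );
    fi.acquisitionStart();
}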
The function to send the request for one image down to the driver is named: mvIMPACT::acquire::FunctionInterface::imageRequestSingle
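As a sketch of the typical usage pattern, an application often pre-fills the request queue right after opening the device; the DEV_NO_FREE_REQUEST_AVAILABLE return code used to terminate the loop is an assumption based on common Impact Acquire sample code:

// Send image requests down to the driver until every free request object has
// been placed in the request queue.
void queueAllRequests( mvIMPACT::acquire::FunctionInterface& fi )
{
    using namespace mvIMPACT::acquire;
    TDMR_ERROR result = DMR_NO_ERROR;
    while( ( result = static_cast<TDMR_ERROR>( fi.imageRequestSingle() ) ) == DMR_NO_ERROR )
    {
        // keep queueing until the driver reports no free request object
    }
    // 'result' is typically DEV_NO_FREE_REQUEST_AVAILABLE here, which simply
    // means that every request object now resides in the request queue.
}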
Whenever there are outstanding requests in the request queue, the framework automatically removes the oldest request from this queue as soon as it has either finished processing the previous request OR is in idle mode. It will then try to capture data according to the desired capture parameters (such as gain, exposure time, etc.) into the buffer associated with this request.
When the framework has finished processing the current request object, it locks the contents (and therefore also the attached image) and thus guarantees not to change them until the user unlocks the request again.
When the user starts to wait for a request, the next available result queue entry will be returned to the application. The wait is a blocking function call. The function will return when either a request has been placed in the result queue by the framework or the user supplied timeout passed to the wait function call has elapsed. If there is at least one entry in the result queue already when the wait function is called, the function will return immediately. In order to check whether there are entries in the result queue without waiting, an application can pass 0 as the timeout value. However, when the result queue is NOT empty, this call will still remove one entry and return it to the application, just as it would when passing a timeout value larger than 0, so the application must always handle requests returned by the wait function. Requests extracted from the result queue but NOT explicitly unlocked by the application will not be usable by the framework anymore!
This can be achieved by calling the function:
mvIMPACT::acquire::FunctionInterface::imageRequestWaitFor
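A minimal sketch of such a non-blocking check, assuming the FunctionInterface instance fi created for the opened device; the helper name resultAvailable is made up for illustration:

// Check the result queue without blocking (timeout 0). If an entry is
// available it is removed from the queue and MUST be handled by the caller,
// exactly as with a blocking wait.
bool resultAvailable( mvIMPACT::acquire::FunctionInterface& fi, int& requestNr )
{
    requestNr = fi.imageRequestWaitFor( 0 ); // 0: do not wait at all
    return fi.isRequestNrValid( requestNr );
}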
Request objects returned to the user are locked for the driver. If the user does not unlock them, the framework can't use these request objects anymore, and thus a permanent acquisition won't be possible: sooner or later all available requests will have been processed by the framework and returned to the user. Therefore, it is crucial to unlock request objects once the user is done with them in order to allow the framework to use them again. This mechanism makes sure that the framework can't overwrite data the user still has to process, and also makes sure that the user can't write into memory the framework will use to capture images into.
The function to unlock the request buffer is called:
mvIMPACT::acquire::FunctionInterface::imageRequestUnlock
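A minimal sketch of the unlock step inside a typical processing helper; the helper name processAndUnlock and the immediate re-queueing via imageRequestSingle() are illustrative choices, not requirements of the API:

// Process one result queue entry and give the request object back to the
// framework afterwards so it can be reused for further acquisitions.
void processAndUnlock( mvIMPACT::acquire::FunctionInterface& fi, int requestNr )
{
    using namespace mvIMPACT::acquire;
    if( !fi.isRequestNrValid( requestNr ) )
    {
        return; // e.g. the wait call timed out: there is nothing to unlock
    }
    Request* pRequest = fi.getRequest( requestNr );
    if( pRequest->isOK() )
    {
        // The buffer stays untouched by the driver while the request is
        // locked, e.g. pRequest->imageData.read(), pRequest->imageWidth.read(), ...
    }
    pRequest->unlock();      // hand the buffer back to the framework
    fi.imageRequestSingle(); // and immediately request the next image
}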
To restore the driver's internal initial state (i.e. to remove all queue entries from the chain of processing), this method can be used:
mvIMPACT::acquire::FunctionInterface::imageRequestReset
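A possible shutdown sequence is sketched below; it assumes the common Impact Acquire sample pattern of collecting and unlocking any entries that are still sitting in the result queue after the reset:

// Possible shutdown sequence: clear the request queue, then pick up and
// unlock everything that already made it into the result queue.
void stopAcquisition( mvIMPACT::acquire::FunctionInterface& fi )
{
    fi.imageRequestReset( 0, 0 ); // discard all pending requests
    int requestNr = -1;
    while( ( requestNr = fi.imageRequestWaitFor( 0 ) ) >= 0 )
    {
        fi.getRequest( requestNr )->unlock();
    }
}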
When the request object's number is not explicitly passed to the framework when calling the imageRequestSingle function, the driver framework will use any of the request objects currently available to it, so the order of the request objects might change. An application should NOT rely on request objects arriving in a certain order unless it explicitly enforces that order itself.
This section explains the different timeout conditions that can occur during the capture process.
Two different timeouts influence the acquisition process. The first one is defined by the property ImageRequestTimeout_ms. This can be accessed like this:
mvIMPACT::acquire::BasicDeviceSettings::ImageRequestTimeout_ms
This property defines the maximum time the framework tries to process a request. If this time elapses before e.g. an external trigger event has occurred or an image has been transmitted by the camera, the request will be returned nevertheless: the framework will place the request object in the result queue and the user will get a valid request object when calling the corresponding 'wait for' function, but this request will NOT contain a valid image; its result property will contain mvIMPACT::acquire::rrTimeout instead.
The other timeout parameter is the timeout value passed to the 'wait for' function (described above). This value defines the maximum time in ms the function will wait for a valid image. After this timeout has elapsed the function will return, but it will not necessarily return a request object if none has been placed in the result queue.
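The following sketch shows how both timeouts might be configured, assuming the C++ spelling imageRequestTimeout_ms on mvIMPACT::acquire::BasicDeviceSettings; the 2000 ms and 500 ms values are only examples matching the discussion below:

// Driver-side timeout: a request not satisfied within 2000 ms is returned
// with rrTimeout. Wait-side timeout: a single wait call blocks for at most
// 500 ms, independently of the driver-side value.
void configureTimeouts( mvIMPACT::acquire::Device* pDev )
{
    using namespace mvIMPACT::acquire;
    BasicDeviceSettings bds( pDev );
    bds.imageRequestTimeout_ms.write( 2000 ); // per-request processing timeout
}

// ... later, in the capture loop:
// const int requestNr = fi.imageRequestWaitFor( 500 ); // per-call wait timeout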
Now assume that ImageRequestTimeout_ms is set to a value of 2000 ms, that this is a triggered application and that a single image request has been sent to the driver. The request queue will then contain one entry, as shown in the following figure:
If the next external trigger signal occurs AFTER the timeout of 2000 ms has elapsed (or a trigger signal never occurs), the framework will place the unprocessed request object in its result queue.
From there, it will be returned to the user immediately the next time mvIMPACT::acquire::FunctionInterface::imageRequestWaitFor is called.
mvIMPACT::acquire::Request::requestResult will then contain mvIMPACT::acquire::rrTimeout (figure 11).
If the next external trigger occurs BEFORE the timeout of 2000 ms has elapsed, an image will be captured and the next call to mvIMPACT::acquire::FunctionInterface::imageRequestWaitFor will return as soon as the image is ready, passing a valid image back to the user.
mvIMPACT::acquire::Request::requestResult will contain mvIMPACT::acquire::rrOK.
It's important to realize that several calls to mvIMPACT::acquire::FunctionInterface::imageRequestWaitFor may be needed to wait for a single request object. As long as no image has been placed in the result queue, each call will return after its timeout has elapsed, returning mvIMPACT::acquire::DEV_WAIT_FOR_REQUEST_FAILED. However, waiting multiple times for an image has NO effect on the position of the request objects in their queues. Their position only changes when either an image has been captured or the timeout defined by the property ImageRequestTimeout_ms has elapsed.
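A minimal sketch of how an application might evaluate these outcomes; error handling is reduced to comments and the helper name handleWaitResult is made up for illustration:

// Evaluate one wait cycle: a valid image (rrOK), a request that ran into the
// driver-side timeout (rrTimeout) or a wait call that elapsed without any
// request reaching the result queue.
void handleWaitResult( mvIMPACT::acquire::FunctionInterface& fi, int waitTimeout_ms )
{
    using namespace mvIMPACT::acquire;
    const int requestNr = fi.imageRequestWaitFor( waitTimeout_ms );
    if( fi.isRequestNrValid( requestNr ) )
    {
        Request* pRequest = fi.getRequest( requestNr );
        if( pRequest->requestResult.read() == rrOK )
        {
            // valid image: process it here
        }
        else if( pRequest->requestResult.read() == rrTimeout )
        {
            // no trigger/image within ImageRequestTimeout_ms: no valid image data
        }
        pRequest->unlock(); // always return the request to the framework
    }
    else
    {
        // typically DEV_WAIT_FOR_REQUEST_FAILED: simply wait again; the
        // request objects keep their position in the driver's queues
    }
}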
A couple of other features need to be mentioned when explaining Impact Acquire's buffer handling and data acquisition behaviour:
Some devices (e.g. GigE Vision™ devices) use a streaming approach to send data to the host system. This means that, once started, the device will send its data to the host system as it becomes ready and does NOT wait for the host system to ask for data (such as an image). For the host system this can result in data loss if it does not provide a sufficient number of buffers to acquire the data into. Thus, whenever a device generates images, frames or data buffers faster than the host system can provide capture buffers, data is lost.
Some device/framework combinations therefore support two different ways to control the start/stop behaviour of data streaming from a device: an automatic (default) mode and a user controlled mode.

In the automatic mode the streaming is controlled by the framework using the AcquisitionIdleTimeMax_ms property: if no new request command is issued during that time, the framework will automatically send a 'stop streaming' command to the device.

While the automatic mode keeps the application code fairly simple, the user controlled mode will sometimes allow much better control over the capture process. Especially for applications where NO data loss can be accepted, the user controlled mode might be the better choice, even though it requires a bit of additional code on the application side.
If the user controlled mode is supported, the AcquisitionStartStopBehaviour property's translation dictionary will contain the assbUser value. The default mode will be assbDefault, but if supported, an application can switch to assbUser BEFORE the device is opened. AcquisitionStartStopBehaviour will become read-only after the device has been opened.

Situations where an application controlled start and stop of the device's data transmission might be beneficial:
When the application controls the start and stop of the streaming from the device, some additional code must be added to the application:
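As a sketch under the assumptions that Device::acquisitionStartStopBehaviour and FunctionInterface::acquisitionStart()/acquisitionStop() are available for the device in question (exact requirements may vary between device families), this additional code could look like this:

#include <mvIMPACT_CPP/mvIMPACT_acquire.h>

using namespace mvIMPACT::acquire;

// Select the user controlled start/stop behaviour BEFORE opening the device,
// then start and stop the data streaming explicitly around the capture loop.
void runUserControlledAcquisition( Device* pDev )
{
    if( pDev->acquisitionStartStopBehaviour.isValid() )
    {
        pDev->acquisitionStartStopBehaviour.write( assbUser );
    }
    pDev->open();

    FunctionInterface fi( pDev );
    while( fi.imageRequestSingle() == DMR_NO_ERROR ) {} // queue buffers first

    fi.acquisitionStart();  // the device starts streaming now
    // ... wait for, process and unlock requests as shown above ...
    fi.acquisitionStop();   // the device stops streaming
}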