Impact Acquire SDK C++
ContinuousCapture_BVS-3D-RV1.cpp

The ContinuousCapture_BVS-3D-RV1 program is based on the ContinuousCapture.cpp example as far as the acquisition implementation is concerned. The sample is intended for BVS 3D-RV1 devices only, as it makes use of device-specific settings that are not supported by the majority of other devices. The sample sets up the device so that multi-part format data is delivered and displays the intensity image buffer part and the disparity image buffer part in two separate display windows.

Since 3.4.0
Program location
The source file ContinuousCapture_BVS-3D-RV1.cpp is only part of this document so far, so in order to experiment with it the code must be copied from here!
Note
The following device families are currently supported by this sample:
  • BVS 3D-RV1
The ContinuousCapture_BVS-3D-RV1 example (a condensed sketch of this flow is shown after the list):
  1. Checks if bvs_sgm_producer is available on the system.
  2. Opens a BVS 3D-RV1 device.
  3. Asks whether the bandwidth should be decreased (required when a single GigE connection is used instead of the default 2.5 Gbit/s).
  4. Configures the device so that it delivers multi-part data, or terminates with an error message if the configuration fails.
  5. Captures buffers continuously using software triggering and displays (on Windows) or outputs information about (on other platforms) both parts of each delivered buffer.
  6. Converts the image information into real-world point cloud information (if enabled).
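In condensed form, main() roughly boils down to the following sketch; exception handling, user prompts and the GenICam interface layout setup are omitted, and all functions, classes and members used here appear in the full listing in the Source code section:

if( !isSGMProducerAvailable() )
{
    return 1;                                         // the required producer is missing
}
DeviceManager devMgr;
Device* pDev = getDeviceFromUserInput( devMgr, isDeviceSupportedBySample );
pDev->open();                                         // open the selected BVS 3D-RV1 device
if( !configureDevice( pDev ) )                        // set up multi-part delivery
{
    return 1;
}
setBandwidth( pDev );                                 // optionally reduce the link bandwidth
ThreadParameter threadParam( pDev );
threadParam.dctl_.depthAcquisitionTrigger.call();     // issue the first software trigger
helper::RequestProvider requestProvider( pDev );
requestProvider.acquisitionStart( myThreadCallback, ref( threadParam ) );
cin.get();                                            // capture until [ENTER] is pressed
requestProvider.acquisitionStop();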
How it works
  1. First the program checks in the isSGMProducerAvailable() function whether the required bvs_sgm_producer.cti file is available. If not, the program outputs an error message and aborts.
  2. After getting the device from user input the sample opens the device (pDev->open()).
  3. Then the sample will try to set up the device accordingly by calling the configureDevice() function.
  4. The bandwidth can be decreased so that the device works on a usual single GigE connection (setBandwidth()).
  5. Image acquisition makes use of the mvIMPACT::acquire::helper::RequestProvider class discussed in the ContinuousCapture.cpp example application; the images are continuously software triggered. If necessary, the device disables the pattern projector once the acquisition is stopped.
  6. After a successful image acquisition, the function disparityToPointCloud() converts every pixel value into a 3D point (if this has been enabled before); the underlying formula is sketched below.
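The conversion in step 6 is the usual stereo reprojection. The required calibration values (baseline, focal length, principal point and coordinate scale) are read from the device's Scan3dControl category, as shown in disparityToPointCloud() in the full listing. For each pixel ( x, y ) with a raw disparity value the calculation roughly looks like this (variable names have been chosen here to match the Scan3dControl features):

const double d = rawDisparity * scan3dCoordinateScale;                      // disparity
// note: a disparity of 0 would lead to a division by zero and needs special handling
const double Z = scan3dFocalLength * scan3dBaseline / d;                    // depth
const double X = ( x + 0.5 - scan3dPrincipalPointU ) * scan3dBaseline / d;
const double Y = ( y + 0.5 - scan3dPrincipalPointV ) * scan3dBaseline / d;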
Note
To end the image acquisition loop, [ENTER] must be pressed.
Used device settings:
//-----------------------------------------------------------------------------
bool configureDevice( Device* pDev )
//-----------------------------------------------------------------------------
{
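// select and enable the 'Intensity' and the 'Disparity' image components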
ImageFormatControl ifc( pDev );
if( ifc.componentSelector.isValid() && ifc.componentSelector.isWriteable() )
{
if( !supportsEnumStringValue( ifc.componentSelector, "Intensity" ) )
{
return false;
}
ifc.componentSelector.writeS( "Intensity" );
ifc.componentEnable.write( TBoolean::bTrue );
if( !supportsEnumStringValue( ifc.componentSelector, "Disparity" ) )
{
return false;
}
ifc.componentSelector.writeS( "Disparity" );
ifc.componentEnable.write( TBoolean::bTrue );
}
else
{
return false;
}
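// use continuous auto exposure and deliver the enabled components as synchronized multi-part data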
AcquisitionControl acq( pDev );
if( !acq.exposureAuto.isValid() ||
!acq.exposureAuto.isWriteable() ||
!acq.acquisitionMultiPartMode.isValid() ||
!acq.acquisitionMultiPartMode.isWriteable() )
{
return false;
}
if( !supportsEnumStringValue( acq.exposureAuto, "Continuous" ) ||
!supportsEnumStringValue( acq.acquisitionMultiPartMode, "SynchronizedComponents" ) )
{
return false;
}
acq.exposureAuto.writeS( "Continuous" );
acq.acquisitionMultiPartMode.writeS( "SynchronizedComponents" );
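// enable chunk data so that additional meta information is attached to each buffer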
ChunkDataControl cdc( pDev );
if( !cdc.chunkModeActive.isValid() || !cdc.chunkModeActive.isWriteable() )
{
return false;
}
cdc.chunkModeActive.write( TBoolean::bTrue );
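// set up the depth processing: 'SingleFrameOut1' acquisition mode, medium quality, no exposure adapt timeout and a depth range from 1.0 to 3.0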
DepthControl dctl( pDev );
if( !dctl.depthAcquisitionMode.isValid() ||
!dctl.depthAcquisitionMode.isWriteable() ||
!dctl.depthExposureAdaptTimeout.isValid() ||
!dctl.depthExposureAdaptTimeout.isWriteable() ||
!dctl.depthQuality.isValid() ||
!dctl.depthQuality.isWriteable() ||
!dctl.depthMinDepth.isValid() ||
!dctl.depthMinDepth.isWriteable() ||
!dctl.depthMaxDepth.isValid() ||
!dctl.depthMaxDepth.isWriteable() )
{
return false;
}
if( !supportsEnumStringValue( dctl.depthAcquisitionMode, "SingleFrameOut1" ) ||
!supportsEnumStringValue( dctl.depthQuality, "Medium" ) ||
!supportsValue( dctl.depthExposureAdaptTimeout, 0.0 ) ||
!supportsValue( dctl.depthMinDepth, 1.0 ) ||
!supportsValue( dctl.depthMaxDepth, 3.0 ) )
{
return false;
}
dctl.depthAcquisitionMode.writeS( "SingleFrameOut1" );
dctl.depthQuality.writeS( "Medium" );
dctl.depthExposureAdaptTimeout.write( 0.0 );
dctl.depthMinDepth.write( 1.0 );
dctl.depthMaxDepth.write( 3.0 );
return true;
}
Handling multi-part data
Once the request object is ready, the number of buffer parts it contains can be obtained by calling mvIMPACT::acquire::Request::getBufferPartCount(). A specific buffer part can be accessed by calling mvIMPACT::acquire::Request::getBufferPart() with the index of the desired part, which returns a BufferPart object. Afterwards mvIMPACT::acquire::BufferPart::getImageBufferDesc() can be used in the same way as it is used for mvIMPACT::acquire::Request objects.
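A minimal, stripped-down fragment of the callback shown below therefore looks like this:

const unsigned int bufferPartCount = pRequest->getBufferPartCount();
for( unsigned int i = 0; i < bufferPartCount; i++ )
{
    const BufferPart& bufferPart( pRequest->getBufferPart( i ) );
    // every buffer part carries its own dimensions and data type
    cout << bufferPart.width.read() << "x" << bufferPart.height.read()
         << ": " << bufferPart.dataType.readS() << endl;
    // the image data of the part is accessed via its image buffer descriptor,
    // e.g. bufferPart.getImageBufferDesc().getBuffer() can be passed to a display window
}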
//-----------------------------------------------------------------------------
void myThreadCallback( shared_ptr<Request> pRequest, ThreadParameter& threadParameter )
//-----------------------------------------------------------------------------
{
// re-trigger the depth acquisition after a new image has been acquired (continuous software trigger)
threadParameter.dctl_.depthAcquisitionTrigger.call();
// display some statistical information every 100th image
if( threadParameter.requestsCaptured_ % 100 == 0 )
{
const Statistics& s = threadParameter.statistics_;
cout << "Info from " << threadParameter.pDev_->serial.read()
<< ": " << s.framesPerSecond.name() << ": " << s.framesPerSecond.readS()
<< ", " << s.errorCount.name() << ": " << s.errorCount.readS()
<< ", " << s.captureTime_s.name() << ": " << s.captureTime_s.readS() << endl;
}
if( pRequest->isOK() )
{
const unsigned int bufferPartCount = pRequest->getBufferPartCount();
if( bufferPartCount > 0 )
{
// data has been delivered in multi-part format
for( unsigned int i = 0; i < bufferPartCount; i++ )
{
const BufferPart& bufferPart( pRequest->getBufferPart( i ) );
#ifdef USE_DISPLAY
const TBufferPartDataType bufferDataType = bufferPart.dataType.read();
cout << "Image captured: " << bufferPart.width.read() << "x" << bufferPart.height.read() << " buffer contains: " << bufferPart.dataType.readS() << " data" << endl;
if( bufferDataType == bpdt2DImage )
{
threadParameter.displayWindowPrimary_.GetImageDisplay().SetDisplayMode( TDisplayMode::DM_Default );
threadParameter.displayWindowPrimary_.GetImageDisplay().SetImage( bufferPart.getImageBufferDesc().getBuffer() );
threadParameter.displayWindowPrimary_.GetImageDisplay().Update();
}
else if( bufferDataType == bpdt3DImage )
{
threadParameter.displayWindowSecondary_.GetImageDisplay().SetDisplayMode( TDisplayMode::DM_Default );
threadParameter.displayWindowSecondary_.GetImageDisplay().SetImage( bufferPart.getImageBufferDesc().getBuffer() );
threadParameter.displayWindowSecondary_.GetImageDisplay().Update();
if( threadParameter.pointCloutCalculationAllowed )
{
disparityToPointCloud( pRequest, threadParameter.pDev_ );
}
}
else
{
cout << "The data type of buffer part " << i << " of the current request is reported as " << bufferPart.dataType.readS() << ", which will NOT be handled by this example application" << endl;
}
#else
cout << "Image captured: " << bufferPart.width.read() << "x" << bufferPart.height.read() << "buffer contains: " << bufferPart.dataType.readS() << " data" << endl;
#endif // #ifdef USE_DISPLAY
}
}
}
else
{
cout << "Error: " << pRequest->requestResult.readS() << endl;
}
}
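The callback itself is registered with the mvIMPACT::acquire::helper::RequestProvider instance created in main() (see the full listing below) and is then invoked for every request that becomes ready:

helper::RequestProvider requestProvider( pDev );
requestProvider.acquisitionStart( myThreadCallback, ref( threadParam ) );
// ... capture until [ENTER] is pressed ...
cin.get();
requestProvider.acquisitionStop();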
Source code
#include <apps/Common/exampleHelper.h>
#include <common/crt/mvstdlib.h>
#include <fstream>
#include <string>
#include <iostream>
#include <mvIMPACT_CPP/mvIMPACT_acquire_helper.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire.h>
#include <mvIMPACT_CPP/mvIMPACT_acquire_GenICam.h>
#ifdef _WIN32
# include <mvDisplay/Include/mvIMPACT_acquire_display.h>
# define USE_DISPLAY
#endif // #ifdef _WIN32
using namespace mvIMPACT::acquire;
using namespace std;
//-----------------------------------------------------------------------------
// The function checks whether the file in the given path is available on
// the host system.
bool checkFileExists( const string& fullPath )
//-----------------------------------------------------------------------------
{
ifstream file( fullPath.c_str() );
return file.good();
}
//-----------------------------------------------------------------------------
/// \brief Gets an environment variable
///
/// \return
/// - a positive value if the variable exists
/// - 0 if the variable does \b NOT exist
inline int getenv(
/// The environment variable wanted
const string& name,
/// Pointer to the string that will contain the value of the environment variable when the
/// variable exists. This can be NULL if the user just wants to know if the variable is there
string* pVal = 0 )
//-----------------------------------------------------------------------------
{
size_t bufSize = 0;
int result = mv_getenv_s( &bufSize, 0, 0, name.c_str() );
if( result == 0 )
{
if( bufSize > 0 )
{
auto_array_ptr<char> buf( bufSize );
result = mv_getenv_s( &bufSize, buf.get(), buf.parCnt(), name.c_str() );
if( ( result == 0 ) && pVal )
{
*pVal = string( buf.get() );
}
return 1;
}
}
return 0;
}
//-----------------------------------------------------------------------------
// This function checks if the GenTL producer path environment variable has been
// set and if the required producer is available on the host system.
// Basically this function is redundant with the isDeviceSupportedBySample() function,
// but without the isSGMProducerAvailable() function the program would stop without
// any hint of the root cause: the bvs_sgm_producer.cti file is required to find
// the BVS 3D-RV1 camera.
bool isSGMProducerAvailable( void )
//-----------------------------------------------------------------------------
{
static const string s_pathVariable( ( sizeof( void* ) == 8 ) ? "GENICAM_GENTL64_PATH" : "GENICAM_GENTL32_PATH" );
#if defined(linux) || defined(__linux) || defined(__linux__) || defined(__APPLE__)
const static string PATH_SEPARATOR( ":" );
#elif defined(_WIN32) || defined(WIN32) || defined(__WIN32__)
const static string PATH_SEPARATOR( ";" );
#else
# error Unsupported target platform
#endif
string pathValue;
if( getenv( s_pathVariable, &pathValue ) > 0 )
{
cout << "Checking for SGM Producer presence. This might take some time..." << endl;
const static string s_libName = "bvs_sgm_producer.cti";
if( !pathValue.empty() )
{
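// split the path variable at each separator and check every directory for the producer library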
string::size_type posStart = 0;
string::size_type posEnd = 0;
while( ( posStart = pathValue.find( PATH_SEPARATOR, posStart ) ) != string::npos )
{
const string filePath = pathValue.substr( posEnd, posStart - posEnd );
if( checkFileExists( filePath + "/" + s_libName ) )
{
return true;
}
posEnd = posStart + PATH_SEPARATOR.length();
posStart += 1;
}
if( checkFileExists( pathValue.substr( posEnd ) + "/" + s_libName ) )
{
return true;
}
cout << "Error: " << s_libName << " not found" << endl;
}
else
{
cout << "Unable to continue! No " << s_pathVariable << " environment variable is empty. Please follow manual or ask " << COMPANY_NAME << " technical support for advice";
}
}
else
{
cout << "Unable to continue! No " << s_pathVariable << " environment variable found. Please follow manual or ask " << COMPANY_NAME << " technical support for advice";
}
return false;
}
//-----------------------------------------------------------------------------
bool isPointCloudCalculationAllowed( void )
//-----------------------------------------------------------------------------
{
cout << "Do you want to calculate the point cloud for each received image (please note, this will consume quite some CPU resources)? (Y/N): ";
char pointCloudSelection;
cin >> pointCloudSelection;
cin.get();
return tolower( pointCloudSelection ) == 'y';
}
//-----------------------------------------------------------------------------
// The processed point cloud data will be stored in a map and can be used
// e.g. to create an .ply file which can display real 3D data on a monitor.
void disparityToPointCloud( shared_ptr<Request> pRequest, Device* pDev )
//-----------------------------------------------------------------------------
{
Scan3dControl s3dc( pDev );
const double scan3DPrincipalPointU = s3dc.scan3dPrincipalPointU.read();
const double scan3DPrincipalPointV = s3dc.scan3dPrincipalPointV.read();
const double baseline = s3dc.scan3dBaseline.read();
const double scan3dCoordScale = s3dc.scan3dCoordinateScale.read();
const double scan3DFocalLength = s3dc.scan3dFocalLength.read();
map<double, pair<double, double>> pointCloudData;
for( int y = 0; y < pRequest->imageHeight.read(); y++ )
{
unsigned short* p = reinterpret_cast< unsigned short* >( ( char* )pRequest->imageData.read() + y * pRequest->imageLinePitch.read() );
for( int x = 0; x < pRequest->imageWidth.read(); x++ )
{
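// standard stereo reprojection: Z = focalLength * baseline / disparity; X and Y follow from the pixel's offset to the principal point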
const double dik = *p++ * scan3dCoordScale;
const double px = ( x + 0.5 - scan3DPrincipalPointU ) * ( baseline / dik );
const double py = ( y + 0.5 - scan3DPrincipalPointV ) * ( baseline / dik );
const double pz = ( scan3DFocalLength * ( baseline / dik ) );
pointCloudData.emplace( pz, make_pair( px, py ) );
}
}
}
//-----------------------------------------------------------------------------
// This function makes it possible to use e.g. a single GigE switch by reducing
// the overall data rate of the camera, even though a 2.5 GigE switch and network
// port are recommended.
int setBandwidth( Device* pDev )
//-----------------------------------------------------------------------------
{
cout << "Do you want to limit your network bandwidth (required if you use a single GigE connection instead of 2.5 GigE)? (Y/N): ";
char selection;
cin >> selection;
if( tolower( selection ) == 'y' )
{
DeviceControl dc( pDev );
dc.deviceLinkThroughputLimitMode.write( TBoolean::bTrue );
dc.deviceLinkThroughputLimit.write( 90000000 );
}
cin.get();
return 0;
}
//-----------------------------------------------------------------------------
struct ThreadParameter
//-----------------------------------------------------------------------------
{
Device* pDev_;
unsigned int requestsCaptured_;
Statistics statistics_;
DepthControl dctl_;
const bool pointCloutCalculationAllowed;
#ifdef USE_DISPLAY
ImageDisplayWindow displayWindowPrimary_;
ImageDisplayWindow displayWindowSecondary_;
#endif // #ifdef USE_DISPLAY
explicit ThreadParameter( Device* pDev ) : pDev_( pDev ), requestsCaptured_( 0 ), statistics_( pDev ), dctl_( pDev ), pointCloutCalculationAllowed { isPointCloudCalculationAllowed() }
#ifdef USE_DISPLAY
// initialise display window
// IMPORTANT: It's NOT safe to create multiple display windows in multiple threads!!!
, displayWindowPrimary_( "mvIMPACT_acquire sample, Device " + pDev_->serial.read() + " 2D Image" )
, displayWindowSecondary_( "mvIMPACT_acquire sample, Device " + pDev_->serial.read() + " Disparity Image" )
#endif // #ifdef USE_DISPLAY
{}
ThreadParameter( const ThreadParameter& src ) = delete;
ThreadParameter& operator=( const ThreadParameter& rhs ) = delete;
};
//-----------------------------------------------------------------------------
void myThreadCallback( shared_ptr<Request> pRequest, ThreadParameter& threadParameter )
//-----------------------------------------------------------------------------
{
// re-trigger the depth acquisition after a new image has been acquired (continuous software trigger)
threadParameter.dctl_.depthAcquisitionTrigger.call();
// display some statistical information every 100th image
if( threadParameter.requestsCaptured_ % 100 == 0 )
{
const Statistics& s = threadParameter.statistics_;
cout << "Info from " << threadParameter.pDev_->serial.read()
<< ": " << s.framesPerSecond.name() << ": " << s.framesPerSecond.readS()
<< ", " << s.errorCount.name() << ": " << s.errorCount.readS()
<< ", " << s.captureTime_s.name() << ": " << s.captureTime_s.readS() << endl;
}
if( pRequest->isOK() )
{
const unsigned int bufferPartCount = pRequest->getBufferPartCount();
if( bufferPartCount > 0 )
{
// data has been delivered in multi-part format
for( unsigned int i = 0; i < bufferPartCount; i++ )
{
const BufferPart& bufferPart( pRequest->getBufferPart( i ) );
#ifdef USE_DISPLAY
const TBufferPartDataType bufferDataType = bufferPart.dataType.read();
cout << "Image captured: " << bufferPart.width.read() << "x" << bufferPart.height.read() << " buffer contains: " << bufferPart.dataType.readS() << " data" << endl;
if( bufferDataType == bpdt2DImage )
{
threadParameter.displayWindowPrimary_.GetImageDisplay().SetDisplayMode( TDisplayMode::DM_Default );
threadParameter.displayWindowPrimary_.GetImageDisplay().SetImage( bufferPart.getImageBufferDesc().getBuffer() );
threadParameter.displayWindowPrimary_.GetImageDisplay().Update();
}
else if( bufferDataType == bpdt3DImage )
{
threadParameter.displayWindowSecondary_.GetImageDisplay().SetDisplayMode( TDisplayMode::DM_Default );
threadParameter.displayWindowSecondary_.GetImageDisplay().SetImage( bufferPart.getImageBufferDesc().getBuffer() );
threadParameter.displayWindowSecondary_.GetImageDisplay().Update();
if( threadParameter.pointCloutCalculationAllowed )
{
disparityToPointCloud( pRequest, threadParameter.pDev_ );
}
}
else
{
cout << "The data type of buffer part " << i << " of the current request is reported as " << bufferPart.dataType.readS() << ", which will NOT be handled by this example application" << endl;
}
#else
cout << "Image captured: " << bufferPart.width.read() << "x" << bufferPart.height.read() << "buffer contains: " << bufferPart.dataType.readS() << " data" << endl;
#endif // #ifdef USE_DISPLAY
}
}
}
else
{
cout << "Error: " << pRequest->requestResult.readS() << endl;
}
}
//-----------------------------------------------------------------------------
// This function makes sure that only devices this sample is intended for (BVS 3D-RV1
// devices) will be offered for selection. Other devices will not be listed as the
// code of the example relies on device specific features.
bool isDeviceSupportedBySample( const Device* const pDev )
//-----------------------------------------------------------------------------
{
const string product = pDev->product.readS();
return( product.find( "BVS 3D-RV1" ) != string::npos );
}
//-----------------------------------------------------------------------------
bool configureDevice( Device* pDev )
//-----------------------------------------------------------------------------
{
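// select and enable the 'Intensity' and the 'Disparity' image components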
ImageFormatControl ifc( pDev );
if( ifc.componentSelector.isValid() && ifc.componentSelector.isWriteable() )
{
if( !supportsEnumStringValue( ifc.componentSelector, "Intensity" ) )
{
return false;
}
ifc.componentSelector.writeS( "Intensity" );
ifc.componentEnable.write( TBoolean::bTrue );
if( !supportsEnumStringValue( ifc.componentSelector, "Disparity" ) )
{
return false;
}
ifc.componentSelector.writeS( "Disparity" );
ifc.componentEnable.write( TBoolean::bTrue );
}
else
{
return false;
}
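// use continuous auto exposure and deliver the enabled components as synchronized multi-part data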
AcquisitionControl acq( pDev );
if( !acq.exposureAuto.isValid() ||
!acq.exposureAuto.isWriteable() ||
!acq.acquisitionMultiPartMode.isValid() ||
!acq.acquisitionMultiPartMode.isWriteable() )
{
return false;
}
if( !supportsEnumStringValue( acq.exposureAuto, "Continuous" ) ||
!supportsEnumStringValue( acq.acquisitionMultiPartMode, "SynchronizedComponents" ) )
{
return false;
}
acq.exposureAuto.writeS( "Continuous" );
acq.acquisitionMultiPartMode.writeS( "SynchronizedComponents" );
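// enable chunk data so that additional meta information is attached to each buffer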
ChunkDataControl cdc( pDev );
if( !cdc.chunkModeActive.isValid() || !cdc.chunkModeActive.isWriteable() )
{
return false;
}
cdc.chunkModeActive.write( TBoolean::bTrue );
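// set up the depth processing: 'SingleFrameOut1' acquisition mode, medium quality, no exposure adapt timeout and a depth range from 1.0 to 3.0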
DepthControl dctl( pDev );
if( !dctl.depthAcquisitionMode.isValid() ||
!dctl.depthAcquisitionMode.isWriteable() ||
!dctl.depthExposureAdaptTimeout.isValid() ||
!dctl.depthExposureAdaptTimeout.isWriteable() ||
!dctl.depthQuality.isValid() ||
!dctl.depthQuality.isWriteable() ||
!dctl.depthMinDepth.isValid() ||
!dctl.depthMinDepth.isWriteable() ||
!dctl.depthMaxDepth.isValid() ||
!dctl.depthMaxDepth.isWriteable() )
{
return false;
}
if( !supportsEnumStringValue( dctl.depthAcquisitionMode, "SingleFrameOut1" ) ||
!supportsEnumStringValue( dctl.depthQuality, "Medium" ) ||
!supportsValue( dctl.depthExposureAdaptTimeout, 0.0 ) ||
!supportsValue( dctl.depthMinDepth, 1.0 ) ||
!supportsValue( dctl.depthMaxDepth, 3.0 ) )
{
return false;
}
dctl.depthAcquisitionMode.writeS( "SingleFrameOut1" );
dctl.depthQuality.writeS( "Medium" );
dctl.depthExposureAdaptTimeout.write( 0.0 );
dctl.depthMinDepth.write( 1.0 );
dctl.depthMaxDepth.write( 3.0 );
return true;
}
//-----------------------------------------------------------------------------
int main( void )
//-----------------------------------------------------------------------------
{
if( !isSGMProducerAvailable() )
{
cout << "Couldn't locate bvs_sgm_producer.cti file on your system. Please follow manual or ask " << COMPANY_NAME << " technical support for advice" << endl;
return 1;
}
DeviceManager devMgr;
Device* pDev = getDeviceFromUserInput( devMgr, isDeviceSupportedBySample );
if( pDev == nullptr )
{
cout << "Unable to continue! Press [ENTER] to end the application" << endl;
cin.get();
return 1;
}
// if this device offers the 'GenICam' interface layout switch it on, as this will
// allow better control over GenICam compliant devices
conditionalSetProperty( pDev->interfaceLayout, dilGenICam, true );
// if this device offers a user defined acquisition start/stop behavior
// enable it as this allows finer control over the streaming behaviour
conditionalSetProperty( pDev->acquisitionStartStopBehaviour, assbUser, true );
try
{
pDev->open();
}
catch( const ImpactAcquireException& e )
{
// this e.g. might happen if the same device is already opened in another process...
cout << "An error occurred while opening the device(error code: " << e.getErrorCode() << ")." << endl
<< "Press [ENTER] to end the application" << endl;
cin.get();
return 1;
}
cout << "Initialising the device. This might take some time..." << endl;
if( !configureDevice( pDev ) )
{
cout << "Unable to continue! The selected device does not support some of the required features. Press [ENTER] to end the application" << endl;
cin.get();
return 1;
}
setBandwidth( pDev );
ThreadParameter threadParam( pDev );
cout << "Press [ENTER] to stop the acquisition thread" << endl;
threadParam.dctl_.depthAcquisitionTrigger.call();
helper::RequestProvider requestProvider( pDev );
requestProvider.acquisitionStart( myThreadCallback, ref( threadParam ) );
cin.get();
requestProvider.acquisitionStop();
return 0;
}