Image Acquisition
When Are Images Acquired?
Many vision systems require a separate command or step to acquire an image before processing can begin. Vision Guide 8.0 removes this extra step because images are acquired at the start of a vision sequence.
Normally, you only need to set the RuntimeAcquire property appropriately. For most applications you will never need to set it at all, because it defaults to Stationary, which means an image is acquired at the beginning of vision sequence execution. Other sequence properties are also used to configure how images are acquired.
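As a minimal sketch (assuming a sequence named seq1 exists in the current project), the property can be checked and set from SPEL+ with VGet and VSet before running the sequence:
' Minimal sketch: make sure seq1 acquires a new image when it runs
Integer acq
VGet seq1.RuntimeAcquire, acq
If acq <> VISION_ACQUIRE_STATIONARY Then
  VSet seq1.RuntimeAcquire, VISION_ACQUIRE_STATIONARY
EndIf
VRun seq1 ' An image is acquired at the start of the sequence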
Using the Same Image with Multiple Vision Sequences
If you want to use the same image for two or more vision sequences, set the RuntimeAcquire property of the first vision sequence to Stationary and set the RuntimeAcquire property of the other vision sequences to None. Setting RuntimeAcquire to None prevents a sequence from acquiring another image, so all processing is done on the image acquired by the first vision sequence, as shown in the sketch below.
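A minimal sketch, assuming two sequences named findPart and inspectPart already exist in the project (the sequence names are placeholders):
' findPart acquires a new image; inspectPart reuses it
VSet findPart.RuntimeAcquire, VISION_ACQUIRE_STATIONARY
VSet inspectPart.RuntimeAcquire, VISION_ACQUIRE_NONE
VRun findPart    ' Acquires an image and runs findPart
VRun inspectPart ' Runs on the image acquired by findPart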
Using Image Buffers
There is one image buffer for each camera (buffer 0), and 10 image buffers that can be shared between sequences (buffers 1 to 10).
Set the ImageBuffer sequence property to specify which buffer to use. The default buffer is 0. You can use image buffers to grab and store several images in memory, then process them later.
For example, you can create a sequence with no objects that grabs images into multiple buffers, and then process the images in other sequences.
' seq1 has no objects - it is used to grab images
' into multiple buffers
VSet seq1.RuntimeAcquire, VISION_ACQUIRE_STATIONARY
VSet seq1.ImageBuffer, 1
VRun seq1 ' Grab an image into buffer 1
Go Image2Pos
VSet seq1.ImageBuffer, 2
VRun seq1 ' Grab an image into buffer 2
Go Image3Pos
VSet seq1.ImageBuffer, 3
VRun seq1 ' Grab an image into buffer 3
...
' Now process the previously grabbed images
VSet seq2.RuntimeAcquire, VISION_ACQUIRE_NONE
VSet seq2.ImageBuffer, 1
VRun seq2 ' Process the image in buffer 1
VGet seq2.AllPassed, allPassed
...
When using image buffers 1 to 10 for sequences that do not acquire an image (processing only), the Camera property value is ignored during VRun.
Using External Trigger Image Acquisition
Vision Guide 8.0 supports a trigger input that allows a vision sequence to acquire an image and search in response to an external signal. To use a trigger input, follow these steps:
1. Wire the trigger signal to the camera connector. If you are also using a strobe light, you can wire it to the strobe output signal of the camera.
2. Set the RuntimeAcquire property to Strobed for the sequence that will use the trigger input.
3. In your SPEL+ program, execute VRun as usual, then wait for the AcquireState property to change to the value 3, which indicates that image acquisition is complete. In the example below, the trigger is signaled from an external device.
Integer state
Boolean found
' AcquireState value 3 means image acquisition has completed
#define PICTURE_DONE 3

TmReset 0   ' Start the timeout timer
VRun seq1   ' The sequence waits for the external trigger
Do
  Wait 0.01
  VGet seq1.AcquireState, state
  If Tmr(0) > 10 Then
    Error ER_STROBE_OT ' User-defined timeout error
  EndIf
Loop Until state = PICTURE_DONE
VGet seq1.obj1.Found, found
- If you do not wait for the image to be grabbed by checking AcquireState, the next vision command in the task automatically waits for the image to be acquired before executing. In that case, processing cannot continue until the image is acquired, unless you abort the task. It is recommended that you check AcquireState so that your program can continue even if the trigger never fires and no image is acquired.
- When you run a sequence that uses the trigger input from the Vision Guide 8.0 GUI, the system waits until the trigger is input. You can abort by clicking the [Abort] button.
CAUTION
Ambient lighting and electrical noise from external equipment may affect the vision sequence image and results. A corrupted image may be acquired, and the detected position could then be anywhere within an object's search area. Be sure to create image processing sequences whose objects use search areas that are no larger than necessary.
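For example, an object's search area can be restricted at run time from SPEL+. The following is a minimal sketch, assuming a sequence seq1 with a grayscale object blob01 and assuming the SearchWin runtime property takes Left, Top, Width, and Height in pixels; the window values shown are placeholders:
Boolean found

' Limit the search area of blob01 to a small window around
' the expected part position (placeholder values, in pixels)
VSet seq1.blob01.SearchWin, 200, 150, 240, 180
VRun seq1
VGet seq1.blob01.Found, found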
Working with Color
If a color camera is configured, you can acquire color images or load color images from disk.
Some vision tools can process color images, and other tools only work with grayscale images, as shown in the table below.
Vision Tool | Color Processing | Grayscale Processing
---|---|---
Blob | | ×
Correlation | | ×
Geometric | | ×
Edge | | ×
Polar | | ×
Code Reader | | ×
OCR | | ×
ImageOp | × | ×
ColorMatch | × | ×
LineFinder | | ×
LineInspector | | ×
ArcFinder | | ×
ArcInspector | | ×
DefectFinder | | ×
BoxFinder | | ×
CornerFinder | | ×
Contour | | ×
The ImageOp tool has a ColorFilter operation and a ColorStretch operation that process a color image. All other ImageOp operations use the grayscale image.
The ColorMatch tool is normally used to process color images. However, it can also be used with grayscale images, using colors that are different levels of gray.
When a color image is acquired, an internal grayscale image is also created for use with tools that require a grayscale image. When a sequence is run, the objects that perform color processing use the color image, and the objects that perform grayscale processing use the grayscale image.
A color image consists of three color bands: Red, Green and Blue. Use the ImageColor sequence property to select which color band(s) to acquire. The default setting is All, which means a full color image is acquired using all three bands.
You can also select Red, Green, Blue, or Grayscale. When Red, Green, or Blue is selected, the grayscale image is derived from the selected color band's monochrome image. This allows you to search one color band with the grayscale processing tools, as in the sketch below.
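A minimal sketch of this technique, assuming a sequence seq1 that contains a grayscale object blob01; the constant name VISION_IMAGECOLOR_RED is an assumption here, so check the ImageColor property reference for the exact values supported by your system:
Boolean found

' Acquire only the red band so grayscale tools search that band
VSet seq1.ImageColor, VISION_IMAGECOLOR_RED ' Assumed constant name
VRun seq1
VGet seq1.blob01.Found, found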
If you click the [Run Object] button on the [Vision Guide] window and the current object is a grayscale processing tool, then the video image is shown in grayscale, just as the tool would see it during processing.