Using Vision Objects

This section describes the details of vision objects, such as the various vision object layouts and differences in operating methods.

CAUTION


Ambient lighting and external equipment noise may affect vision sequence images and results. A corrupt image may be acquired, and the detected position could be any position within an object's search area. Be sure to create image processing sequences with objects that use search areas no larger than necessary.

ImageOp Object

ImageOp Object Description
ImageOp objects enable you to perform morphology (including open, close, minify, magnify), convolution (including sharpen, smooth), flip, binarize, and rotate for a specified region of interest.
Other vision objects that are placed in the ImageOp region of interest will perform their operations on the output of ImageOp. For example, you can execute a scale-down operation on the entire video image with an ImageOp tool, and then place Blob objects inside the ImageOp search window to search the minified image.
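
The sketch below illustrates this nesting from a SPEL+ program. The sequence name "mySeq", the object names, and the property values are hypothetical, and the Operation property is assumed to have been set in the Vision Guide GUI; treat this as a minimal sketch, not a complete application.

    ' Minimal SPEL+ sketch; sequence and object names are hypothetical.
    ' ImgOp01 processes its region first; Blob01, placed inside that
    ' region, then searches the processed image.
    Function ImageOpDemo
        Boolean found
        VSet mySeq.ImgOp01.Iterations, 2   ' apply the operation twice
        VRun mySeq                         ' ImgOp01 runs, then Blob01
        VGet mySeq.Blob01.Found, found     ' Blob01 searched the output
        Print "Blob found on processed image: ", found
    Fend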

ImageOp Object Layout
The ImageOp object has an image processing window, as shown below.

ImageOp Object Properties
The following list is a summary of the ImageOp object properties with brief descriptions. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.
AngleObject

Defines the object from which to perform automatic rotation.

The image will be rotated counter-clockwise using the Angle result of the specified object. The Operation property must be set to Rotation, otherwise this property is ignored.

Default: None

AngleObjectResult Specifies the result for the AngleObject property to use.
Caption

Used to assign a caption to the ImageOp object.

Default: Empty String

ColorMode

Sets which color space (RGB/HSV) to use for color operations.

Default: RGB

CurrentModel

Runtime only.

When Operation is set to ColorFilter, this specifies which model to use for the ModelColor property and also for VTeach. When CurrentModel = 0, the background color is selected.

Default: 0

Description

Sets a user description.

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

FailColor Selects the color of an object when it is not accepted.
Frame

Specifies which positioning frame to use.

Default: None

FillHoles

Specifies whether to fill small holes in the binary image.

This property is displayed when Operation is set to Binarize.

Default: False

FrameResult

Specifies which Frame result number to use.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

ImageBuffer1

Specifies the first image by the buffer number when the Operation property is set to SubtractAbs.

SubtractAbs calculates the absolute value image of (first image - second image).

ImageBuffer1File

Specifies the path of the image file for the buffer when ImageBuffer1 is set to “File”.

Default: None

ImageBuffer2 Specifies the second image by the buffer number when the Operation property is set to SubtractAbs.
ImageBuffer2File

Specifies the path of the image file for the buffer when ImageBuffer2 is set to “File”.

Default: None

Iterations

Defines how many times to perform the specified operation.

Default: 1

KeepRGBRatio

Specifies whether to maintain the RGB ratio for the ColorStretch operation.

Default: True

LabelBackColor

Sets the background color for the object's label.

Default: Transparent

MaxRGB

Defines the maximum color for the ColorStretch operation.

Default: 255, 255, 255

MinRGB

Defines the minimum color for the ColorStretch operation.

Default: 0, 0, 0

ModelColor

Runtime only. When Operation is set to ColorFilter, this property is used at runtime to teach a model or the background color manually by setting the RGB color value directly.

Default: RGB (0, 0, 0)

ModelColorTol

Runtime only. This property is used at runtime to set the color tolerance for a model color. If a pixel color is within the tolerance of a model color, then the pixel is unchanged.

Default: 10 (ColorMode = RGB), 0,0,50 (ColorMode = HSV)

ModelName Runtime only. This property is used at runtime to set the name of the current model.
ModelWin

Runtime only.

Sets or returns the model window left, top, height, width parameters in one call.

ModelWinHeight Defines the height of the model window.
ModelWinLeft Defines the left most position of the model window.
ModelWinTop Defines the top most position of the model window.
ModelWinWidth Defines the width of the model window.
Name

Used to assign a unique name to the ImageOp object.

Default: ImgOp01

NumberOfModels

Runtime only. This is the number of color models used. At runtime, you can set the NumberOfModels, then use CurrentModel and VTeach to teach each color model.

Default: 1

Operation

Sets the type of image processing to perform.

For more details on the Operation property, refer to the following.

"Vision Guide 8.0 Properties & Result Reference"

Default: Open

PassColor

Defines the color of the detected object when it is accepted.

Default: LightGreen

PassType Selects the rule that determines if the object passed.
Polarity Defines the differentiation between objects and background. (Either “Dark Object on Light Background” or “Light Object on Dark Background”.)
RotationAngle

Defines how many degrees to rotate the image when the Operation property is set to Rotation.

Default: 0

RotationDirection Specifies the direction of rotation for rotation operation.
SearchWin Runtime only. Sets or returns the search window left, top, height, width parameters in one call.
SearchWinAngle Defines the angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight

Defines the height of the area to be searched in pixels.

Default: 100

SearchWinLeft Defines the left most position of the area to be searched in pixels.
SearchWinTop Defines the upper most position of the area to be searched in pixels.
SearchWinType Defines the type of the area to be searched (i.e. Rectangle, RotatedRectangle, Circle).
SearchWinWidth

Defines the width of the area to be searched in pixels.

Default: 100

ShiftObject Sets the object used for the Shift operation.
ShiftX Sets the amount of shift in the X direction.
ShiftY Sets the amount of shift in the Y direction.
ThresholdAuto Specifies whether to automatically set the threshold value of the gray level that represents the feature (or object), the background, and the edges of the image.
ThresholdBlockSize

Defines the size of the neighborhood area referenced to set the threshold when the Operation property is set to BinarizeAdaptive.

Default: 1/16ROI

ThresholdColor

Defines the color assigned to pixels within the thresholds.

Default: Black

ThresholdHigh

Defines the upper threshold setting to use when the Operation property is set to Binarize.

Default: 128

ThresholdLevel

Defines the ratio of the luminance difference to the neighborhood area to use when the Operation property is set to BinarizeAdaptive.

Default: 15%

ThresholdLow

Defines the lower threshold setting to use when the Operation property is set to Binarize.

Default: 0

ThresholdMethod Sets the processing method for binarization.
ZoomFactor

Defines the zoom value to use when the Operation property is set to Zoom.

Default: 1

ImageOp Object Results
The following list is a summary of the ImageOp object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Result Description
Passed Returns whether the object detection result was accepted.
Time Returns the amount of time required to process the object (unit: millisecond).
FocusValue

Returns the relative focus level.

Focus is optimal when the value is at its minimum.

This result is displayed only when Operation property is set to DetectFocus.
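
As a hedged example, the FocusValue result can be read from a SPEL+ program with VGet. The sequence and object names below are hypothetical, and Operation is assumed to be set to DetectFocus:

    ' Hedged sketch; names are hypothetical. Focus is optimal when
    ' FocusValue is at its minimum.
    Function CheckFocus
        Real fv
        VRun focusSeq
        VGet focusSeq.ImgOp01.FocusValue, fv
        Print "Relative focus level: ", fv
    Fend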

Using ImageOp Objects
Now we have set the foundation for understanding how to use Vision Guide ImageOp objects. This next section will describe the steps required to use ImageOp objects as listed below:

  • Create a new ImageOp object
  • Position and size the search window
  • Configure the properties associated with the ImageOp object
  • Test the ImageOp object and examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button on the Vision Guide toolbar. You can also select a sequence which was created previously by clicking on the [Sequence] tab in the Vision Guide window and clicking on the dropdown list box which is located towards the top of the [Sequence] tab. See the following for more details on how to create a new vision sequence or select one which was previously defined.
Creating a New Vision Sequence

Step 1: Create a New ImageOp Object

  1. Click the [All Tools] button on the Vision Guide toolbar, then click the [New ImageOp] button.
  2. Move the mouse over the image display. You will see the mouse pointer change to the ImageOp object icon.
  3. Continue to move the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called "ImgOp01" because this is the first ImageOp object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see an ImageOp object similar to the one shown below:

New ImageOp Object Layout

  1. Click the name label of the ImageOp object and while holding the mouse down, drag the ImageOp object to the position where you would like the top left position of the search window to reside.
  2. Resize the ImageOp object search window as required using the search window size handles. Move the mouse pointer over a size handle, then while holding down the left mouse button, drag the handle to resize the window.

Step 3: Configure the ImageOp Object Properties
Now, set the property values for the ImageOp object. Some of the commonly used properties that are specific to the ImageOp object are described below. Explanations for other properties, such as Graphics, which are used on many of the different vision objects, can be found in the Vision Guide 8.0 Properties and Results Reference Manual or in the ImageOp properties list.

Item Description
Name property

The default name given to a newly created ImageOp object is “ImgOp**” where ** is a number which is used to distinguish between multiple ImageOp objects within the same vision sequence.

If this is the first ImageOp object for this vision sequence, the default name will be “ImgOp01”.

To change the name, click the Value field of the Name property, type a new name and press the return key. Once the name property is changed, everywhere the ImageOp object's name is displayed is updated to reflect the new name.

Operation property Determines which image operation to perform. This is the most important property for the ImageOp object.
Iterations property Determines the number of iterations to perform.
Polarity property

Determines whether the operation will be performed on dark objects or light objects.

The default setting is dark objects. If you want to change it, click the Value field of the Polarity property and you will see a dropdown list with 2 choices: DarkOnLight or LightOnDark. Click the choice you want to use.

You can test the ImageOp object and then come back to set any other properties as required later.
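
The same properties can also be set and read from a SPEL+ program with VSet and VGet. A minimal sketch, assuming a hypothetical sequence "mySeq" containing "ImgOp01":

    ' Hedged sketch; sequence and object names are hypothetical.
    Function ConfigureImgOp
        Integer iter
        VSet mySeq.ImgOp01.Iterations, 3     ' run the operation 3 times
        VGet mySeq.ImgOp01.Iterations, iter  ' properties can be read back
        Print "Iterations now: ", iter
    Fend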

Step 4: Teach colors for the ImageOp object
When the Operation property is set to ColorFilter, then you will need to teach one or more colors to be filtered along with a background color. The [Teach] button on the Vision Guide window is enabled and a rectangular model window appears inside the ImageOp main window.
When the Teach button is clicked, the following modeless dialog box is displayed:

Clicking the [Add] button adds a new filter color with a default color of black. After selecting the background color or one of the filter colors, click the [Teach] button to use the average color of the pixels in the model window, or you can enter the RGB value of the color directly.
You can change the size and position of the model window while this dialog box remains open. So you can add a color, size and position the model window, and then click Teach to teach the new color without closing the dialog box.
You can change the ColorMode property while the Teach dialog box is open.
When ColorMode is RGB, then the Tolerance for each color has one value. When ColorMode is HSV, then the Tolerance for each color has three values (hTol, sTol, vTol).
The [Delete] button is used to delete filter colors. The background color cannot be deleted.
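
Colors can also be taught at runtime using the runtime-only properties described earlier (NumberOfModels, CurrentModel, ModelWin) together with VTeach. The sketch below uses hypothetical names and window coordinates:

    ' Hedged sketch; names and coordinates are hypothetical.
    ' CurrentModel = 0 selects the background color.
    Function TeachFilterColors
        VSet mySeq.ImgOp01.NumberOfModels, 2           ' two filter colors
        VSet mySeq.ImgOp01.CurrentModel, 0             ' background first
        VTeach mySeq.ImgOp01
        VSet mySeq.ImgOp01.ModelWin, 100, 100, 40, 40  ' left, top, height, width
        VSet mySeq.ImgOp01.CurrentModel, 1             ' first filter color
        VTeach mySeq.ImgOp01
        VSet mySeq.ImgOp01.CurrentModel, 2             ' second filter color
        VTeach mySeq.ImgOp01
    Fend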

Step 5: Test the ImageOp Object
To run the ImageOp object, click the [Run Object] button located at the bottom left of the [Object] tab. You will be able to see the effect of your ImageOp tool on the image.

Step 6: Make Adjustments to Properties and Test Again
After running the ImageOp object a few times, you may have encountered problems or may simply want to fine-tune some of the property settings. Fine-tuning of the ImageOp object may be required for some applications. The primary properties associated with fine-tuning an ImageOp object are described below:

Item Description
Iterations Determines how many times to perform the desired image operation.
ThresholdColor, ThresholdLow, ThresholdHigh, ThresholdAuto

These properties adjust parameters for the Binarize operation.

Refer to the descriptions of these properties in the Vision Guide 8.0 Properties and Results Reference Manual.

Once you have finished adjusting and testing the ImageOp object and are satisfied with the results, this vision object is complete. Go on to creating other vision objects or to configuring and testing an entire vision sequence.

Geometric Object

Geometric Object Description
The Geometric object finds a model based on geometric features. It employs an algorithmic approach that finds models using edge-based geometric features instead of pixel-to-pixel correlation. As such, the Geometric object offers several advantages over correlation pattern matching, including greater tolerance of lighting variations (including specular reflection) and model degradation, as well as of variations in scale and angle.
The Geometric object is normally used for determining the position and orientation of an object by locating features on the object. This is commonly used to find part positions to help guide the robot to pickup and placement positions.
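
As a hedged illustration of this use, the sketch below runs a sequence and moves the robot to the found part using the RobotXYU result (described later in this section). The sequence name, object name, and Z height are hypothetical:

    ' Hedged sketch; sequence, object, and Z height are hypothetical.
    Function PickPart
        Boolean found
        Real x, y, u
        VRun partSeq
        VGet partSeq.Geom01.RobotXYU, found, x, y, u
        If found Then
            Jump XY(x, y, -50, u)  ' -50 is a hypothetical pick Z height
        EndIf
    Fend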

Geometric Object Layout
The Geometric object has a search window and a model window, as shown below.

Geometric Object Properties
The following table lists general descriptions of Geometric object properties. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score value to determine that the feature has been detected.

Objects whose scores exceed the set value are detected. Setting the value too low may result in false detections.

Default: 700

AngleEnable

Specifies whether a Geometric search will do a search with angle (rotation).

Specified prior to teaching a Model of the Geometric object.

AngleOffset

Specifies the offset value for rotation.

Default: 0.000

AngleRange Specifies the range within which to train a series of rotated models.
AngleStart Specifies the center of the angle search.
CalRobotPlacePos Calibrates RobotPlacePos when designing and running the program.
Caption Assigns a caption to the Geometric object.
CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be positioned anywhere. When another vision object is specified, the center point is set to that object's PixelX and PixelY results.

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the Geometric object is applied to all found results (NumberFound) of the specified vision object.

Default: 1

CenterPntOffsetX Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.
CenterPntOffsetY Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.
CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

If SearchWinType is set to RotatedRectangle, the search window rotates based on the Angle result.

Default: False

CheckClearanceFor Sets the object for which clearance is confirmed.
ClearanceCondition Specifies how the clearance decision is made.
Confusion

Indicates the amount of confusion expected in the image to be searched.

This is the highest shape score a feature can get that is not an instance of the feature for which you are searching.

CoordObject

Specifies the Coordinates object to which the result is copied. The copy occurs when this object is executed; if the object is skipped by the branching function of a Decision object, the copy is not performed.

Default: None

CurrentResult Defines which result to display in the Results list on the Object window or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description.

Default: Blank

DetailLevel Selects the level at which an edge is considered found during the search.
EditWindow Defines the don’t care pixels of the area to be searched.
Enabled

Specifies whether to execute the object.

Default: True

FailColor Selects the color of an object when it is not accepted.
Frame

Defines the current object searching position with respect to the specified frame.

(Allows the Geometric object to be positioned with respect to a frame.)

FrameResult

Specifies which Frame result number to use.

Default: 1

Graphics Specifies a graphic to be displayed.
LabelBackColor Sets the background color for an object label.
ModelObject Determines which model to use for searching.
ModelOrgAutoCenter A model has a fixed reference point by which a location of the model in the image is indicated. This point is referred to as the model origin. The ModelOrgAutoCenter property causes the model origin to be placed at the center of the model window.
ModelOrgFindCenter Sets the model origin at the center of the registered model’s edge.
ModelOrgX Contains the X coordinate value of the model origin. (subpixels can be used)
ModelOrgY Contains the Y coordinate value of the model origin. (subpixels can be used)
ModelWin

Runtime only.

Sets or returns the model window left, top, height, width parameters in one call.

ModelWinAngle Defines the angle of the model window.
ModelWinCenterX Defines the X coordinate value of the center of the model window.
ModelWinCenterY Defines the Y coordinate value of the center of the model window.
ModelWinHeight Defines the height of the model window.
ModelWinLeft Defines the left most position of the model window.
ModelWinTop Defines the top most position of the model window.
ModelWinType Defines the model window type.
ModelWinWidth Defines the width of the model window.
Name Used to assign a unique name to the Geometric object. Default: Geom01
NumberToFind

Defines the number of objects to find in the current search window.

(Geometric objects can find more than 1 object at once.)

PassColor Selects the color for passed objects.
PassType

Selects the rule that determines if the object passed.

Default: SomeFound.

RejectOnEdge

Determines whether the part will be rejected if found on the edge of the search window.

Normally, this property should be set to True to avoid false detection caused by parts that are not completely within the search window.

SaveTeachImage Sets whether the camera image should be saved to a file when the model is taught.
ScaleEnable Enables scaling.
ScaleFactorMax Sets or returns the maximum scale factor applied to the ScaleTarget value.
ScaleFactorMin Sets or returns the minimum scale factor applied to the ScaleTarget value.
ScaleTarget Sets or returns the expected scale of the model you are searching for.
ScaleTargetPriority Sets or returns whether to detect objects near the ScaleTarget preferentially.
ScoreMode Sets or returns threshold for displaying the result at the time of Fail.
SearchReducedImage Sets or returns whether to use a size reduced image when searching.
SearchPolarity Sets or returns the search polarity.
SearchWin

Runtime only.

Sets or returns the following parameters in one call: search window left, top, height, and width; X and Y coordinates of the center; and the inner and outer circumference radii.

SearchWinAngle Defines the angle of the area to be searched.
SearchWinAngleEnd Defines the end angle of the area to be searched.
SearchWinAngleStart Defines the start angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight Defines the height of the area to be searched. (unit: pixel)
SearchWinLeft Defines the left most position of the area to be searched. (unit: pixel)
SearchWinPolygonPointX1 Defines the X coordinate value of the first vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY1 Defines the Y coordinate value of the first vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX2 Defines the X coordinate value of the second vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY2 Defines the Y coordinate value of the second vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX3 Defines the X coordinate value of the third vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY3 Defines the Y coordinate value of the third vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX4 Defines the X coordinate value of the fourth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY4 Defines the Y coordinate value of the fourth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX5 Defines the X coordinate value of the fifth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY5 Defines the Y coordinate value of the fifth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX6 Defines the X coordinate value of the sixth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY6 Defines the Y coordinate value of the sixth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX7 Defines the X coordinate value of the seventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY7 Defines the Y coordinate value of the seventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX8 Defines the X coordinate value of the eighth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY8 Defines the Y coordinate value of the eighth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX9 Defines the X coordinate value of the ninth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY9 Defines the Y coordinate value of the ninth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX10 Defines the X coordinate value of the tenth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY10 Defines the Y coordinate value of the tenth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX11 Defines the X coordinate value of the eleventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY11 Defines the Y coordinate value of the eleventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX12 Defines the X coordinate value of the twelfth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY12 Defines the Y coordinate value of the twelfth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinRadiusInner Defines the circle inner radius of the area to be searched.
SearchWinRadiusOuter Defines the circle outer radius of the area to be searched.
SearchWinTop Defines the upper most position of the area to be searched. (unit: pixel)
SearchWinType Defines the type of the area to be searched (i.e. Rectangle, RotatedRectangle, Circle, Arc, Polygon).
SearchWinWidth Defines the width of the area to be searched. (unit: pixel)
SeparationAngle Sets or returns the minimum angle allowed between found objects.
SeparationMinX Sets or returns the minimum distance along the X axis allowed between found objects.
SeparationMinY Sets or returns the minimum distance along the Y axis allowed between found objects.
SeparationScale Sets or returns the minimum scale difference allowed between found objects.
SharedEdges Sets or returns whether to allow edges to be shared between found objects.
ShowModel

Displays a previously taught model at various zoom settings.

Can be used to change the model origin and don’t care pixels.

SkewFitEnable

Specifies whether to allow skew of the model.

Default: False

Smoothness Sets or returns the smoothness level for the geometric edge extraction filter.
Sort Selects the sort order used for the results of an object.
Timeout Sets or returns the maximum search time for a Geometric object.

Geometric Object Results
The following table lists general descriptions of Geometric object results. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Result Description
Angle

Returns the amount of rotation associated with a part that was found.

(i.e. This defines the amount of rotation a part may have in relation to the originally taught orientation.)

CameraX Returns the X coordinate position of the found part's position (referenced by model origin) in the camera coordinate system.
CameraY Returns the Y coordinate position of the found part's position (referenced by model origin) in the camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the found part's position in the camera coordinate system.

ClearanceOK Returns the result of the clearance decision.
Found Returns whether the object was found. (i.e. whether the feature or part has a shape score above the Accept property's current setting.)
FoundOnEdge

Returns True when the Geometric object is found too close to the edge of the search window.

When FoundOnEdge is True the Found result is set to False.

NumberFound

Returns the number of objects found.

(The detected number can be from 0 up to the number set with the NumberToFind property.)

Overlapped Returns True when the detected objects overlap.
Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate position of the found part's position (referenced by model origin) in pixels.
PixelY Returns the Y coordinate position of the found part's position (referenced by model origin) in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part's position in pixels.

Reversed Returns True when the detected object has reverse polarity to the model.
RobotX Returns the X coordinate position of the found part's position (referenced by model origin) with respect to the Robot's Coordinate System.
RobotY Returns the Y coordinate position of the found part's position (referenced by model origin) with respect to the Robot's Coordinate System.
RobotU Returns the U coordinate position of the found part's position with respect to the Robot's Coordinate System.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the found part’s position with respect to the robot’s coordinate system.

Scale

Returns the scale factor.

ScaleEnable must be set to True for this result to be valid.

Score Returns an Integer value which represents the level at which the feature found at runtime matches the model for which Geometric is searching.
ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

SkewDirection Returns the direction of skew on the object detected during execution.
SkewRatio Returns the skew ratio of the object detected during execution.
Time Returns the amount of time required to process the object (unit: millisecond).
TimedOut Returns whether object execution was terminated due to a timeout.

Understanding Geometric Search
The purpose of the Geometric object is to locate a previously trained model in a search window. Geometric objects can be used to find parts, detect presence and absence of parts or features, detect flaws, and a wide variety of other things.
This section will explain the basics of Geometric objects, covering the following topics:

  • Geometric Object Models and Features
  • Basic Searching Concepts
  • Setting the Accept and Confusion property
  • Additional Search Parameters
  • Using the Multiple Results Dialog box to Debug Searching Problems
  • Geometric objects and Rotation
  • Geometric objects and Scale
  • Model Training for Angle Searching
  • Searching Repeatability and Accuracy
  • Calibrating the Camera to Subject Distance

Geometric Object Models and Features
It is important to understand the difference between features and models when using Geometric objects. A feature is any specific pattern of edges in a search window. A feature can be anything from a simple edge a few pixels in area to a complex pattern of tens of thousands of pixels in area. The Geometric operation measures the extent to which a feature in a search window matches a previously taught Model of that feature. A feature is something within an actual search window as opposed to a model or template that is an idealized representation of a feature.
It is common to train a representative model from one search window to be used to search for similar features in that search window. The picture below shows a search window containing a feature of interest (a cross). To train a model of the cross, you define the model window and click the [Teach] button on the execution panel. (For details on teaching models, refer to the section Using Geometric Objects.)
As a result, the model is created as shown on the right side of the picture and can be used to search for other crosses within the search window.


Search window containing several features of interest (left) and model trained from image (right)

While finding models based on geometric features is a robust and reliable technique, there are a few pitfalls you should be aware of when selecting your models, so that you choose the best possible model.

  • Make sure your images have enough contrast
    Contrast is necessary for identifying edges in your model source and target image with sub-pixel accuracy. Images with weak contrast should be avoided since the Geometric search tool uses a geometric edge-based searching algorithm; the weaker the contrast, the less the amount and accuracy of edge-based information with which to perform a search. For this reason, it is recommended that you avoid models that contain slow gradations in grayscale values. You should maintain a difference in grayscale values of at least 10 between the background and the edges in your images.

  • Avoid poor geometric models
    Poor geometric models suffer from a lack of clearly defined geometric characteristics, or from geometric characteristics that do not distinguish themselves sufficiently from other image features. These models can produce unreliable results.

    For poor geometric models as shown in the figure, simple curves lack distinguishing features and can produce false matches.

  • Avoid ambiguous models
    Certain types of geometric models provide ambiguous results in terms of position, angle, or scale.
    Models that are ambiguous in position are usually composed of one or more sets of parallel lines only. For example, models consisting of only parallel lines should be avoided since it is impossible to establish an accurate position for them. An infinite number of matches can be found since the actual number of line segments in any particular line is theoretically limitless. Line segments should always contain some distinguishing contours that make them distinct from other image details.

    Models consisting of sets of parallel lines, without any distinguishing features as shown in the figure, should be avoided.

    Models that consist of small portions of objects should be tested to verify that they are not ambiguous in scale. For example, models that consist of isolated corners are ambiguous in terms of scale.
    This is an ambiguous candidate for searching through a range of scale. The lack of distinguishing geometric characteristics produces superfluous results.

    Symmetric models are often ambiguous in angle due to their similarity in features. For example, circles are completely ambiguous in terms of angle. Other simple symmetric models such as squares and triangles are ambiguous for certain angles.

  • Nearly ambiguous models
    When the major part of a model contains ambiguous features, false matches can occur because the percentage of the occurrence’s edges involved in the ambiguous features is great enough to be considered a match. To avoid this, make sure that your models have enough distinct features to distinguish the model from other target image features. This will ensure that only correct matches are returned as occurrences. For example, the model below can produce false matches since the greater proportion of active edges in the model is composed of parallel straight lines rather than distinguishing curves.

Basic Searching Concepts
Searching locates features by finding the area of the search window to which the model is the most similar. The figure below shows a model and a search window, and the areas within the search window that are most similar to the model. A model similar to the one shown below might be used to search for a feature such as a fiducial mark on a printed circuit board. A robot could then use the position data returned by the search function to find the location of the board for placing components or for positioning the board itself.

Setting the Accept and Confusion Property Thresholds
The Accept and Confusion properties are the main search parameters for Geometric objects. The Accept property influences searching speed by providing a hint as to when to pursue the search in a given region of the scene. When the Accept property is set high, features must be very similar to the model, so many regions can be ruled out by a cursory examination and not pursued further. If the Accept property is set to a low value, features that are only slightly similar to the model may exceed the Accept property threshold, so that a detailed examination of more regions in the scene is needed. Thus increasing the Accept property tends to increase speed. (i.e. higher Accept property values can make Geometric objects run faster.)
The Confusion property interacts with the number of results expected to influence searching speed. Together, the Confusion property and the number of results expected allow the system to quit the search before exploring all possible regions of the image.

Set the Accept property so that it will allow the system to find features that are examples of the “worst case degradation” you are willing to accept. The degradation may be caused by defects, scale, rotation or video noise. For the Accept property Vision Guide 8.0 sets the default value to 700. This is usually a good starting point for many applications. However, experimentation and correction will help you home in on the best value for your situation. Note that you do not always have to get perfect or nearly perfect scores for an application to function well. Shape scores of 200 may provide good positional information for some applications, depending on distortion of a feature.

Set the Confusion property based on the highest value you expect the “wrong thing” to get (plus a margin for error). The confusion threshold should be greater than or equal to the Accept property threshold. Setting the Confusion property to a high value will increase the time of the search, but may be necessary to ensure that the right features are found. The Confusion property default value is 800, but it should be adjusted depending upon the specific application requirements.

The figure below shows a scene where there is little confusion: the round pad is not very similar to the fiducial (cross). The Confusion property can therefore be set to a fairly low value (around 700). The Accept property is normally set less than or equal to the Confusion property, depending upon the amount of degradation you are willing to accept. Assuming this scene has little degradation, a shape score of 920 could be expected.

The figure below shows a scene where there is a lot of confusion; both the feedthrough and the IC pad are similar to the round pad. The Confusion property should therefore be set to a fairly high value (around 820).

A search window that has a region of constant gray value will always get a 0 shape score in that region. If a scene has a primarily uniform background (e.g. a white piece of paper), so that in most places no geometric features will be found, you can set the Confusion property to a low value, because if the Geometric object finds anything, it must be the feature for which you are searching.

The Accept and Confusion properties can be thought of as hints that you provide to the system to enable it to locate features more quickly. In general, these properties should be set conservatively, but need not be set precisely. The most conservative settings are a low Accept property and a high Confusion property. Use very conservative settings when you know very little about a scene in which you are searching; the search will be careful but slower. (This is especially important when using Geometric object positional results to guide robot motion.)

Use more liberal settings when you know a lot about a scene in which you are searching. For example, if you know that you are looking for one feature, and the rest of the scene is blank, a careful search is unnecessary; use more liberal settings and the search will be faster.
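
The sketch below illustrates these guidelines from a SPEL+ program; the names and values are hypothetical, with the more liberal settings shown as comments:

    ' Hedged sketch; names and values are illustrative only.
    Function SetThresholds
        ' Unknown scene: careful but slower search
        VSet partSeq.Geom01.Accept, 500
        VSet partSeq.Geom01.Confusion, 900
        ' Well-known scene with little confusion: faster search
        ' VSet partSeq.Geom01.Accept, 700
        ' VSet partSeq.Geom01.Confusion, 700
        VRun partSeq
    Fend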

Additional Geometric search parameters

DetailLevel

DetailLevel determines what is considered an edge during the search.

Edges are defined by the transition in grayscale value between adjacent pixels. The default setting of Medium offers a robust detection of active edges from images with contrast variation, noise, and non-uniform illumination. In some cases where objects of interest have a very low contrast compared to high contrast areas in the image, some low contrast edges can be missed. If your images contain low-contrast objects, a DetailLevel setting of High should be used to ensure the detection of all important edges in the image. The VeryHigh setting performs an exhaustive edge extraction, including very low contrast edges. However, it should be noted that this mode is very sensitive to noise.

ScaleTargetPriority

Specifies whether to detect objects near the ScaleTarget preferentially. Set this property to True when detecting objects with small differences in size such as factory products.

Default: True

SearchReducedImage

Reduces the size of the input image when performing rough object detection. This property may reduce search time when the input image has numerous features (e.g. edges).

Default: False

Smoothness

The Smoothness property allows you to control the smoothing level of the edge extraction filter. The smoothing operation evens out rough edges and removes noise. The range of this control varies from 0 (no smoothing) to 100 (very strong smoothing).

Default: 50

SharedEdges You can choose to allow multiple results to share edges, by setting SharedEdges to True. Otherwise, edges that can be part of more than one result are considered part of the result with the greatest score.
Timeout

In time critical applications, you can set a time limit in milliseconds for the Geometric object to find occurrences of the specified model. If the required number of occurrences is not found before the time limit is up, the search will stop. Results are still returned for those occurrences found. However, it is not possible to predict which occurrences will be found before the time limit is reached.

Default: 2000 milliseconds

Using Multiple Results Dialog Box to Debug Searching Problems

Sometimes the parts that you are working with vary considerably (even within the same production lot) and sometimes there are 2 or more features on a part which are similar. This can make it very difficult to determine a good Accept property value. Just when you think you have set the Accept property to a good value, another part will come in which fools the system. In these cases it can be very difficult to see what is going on.
The [Show All Results] dialog box was created to help solve these and other problems. While you may only be interested in 1 feature on a part, requesting multiple results can help you see why a secondary feature is sometimes being returned by Vision Guide 8.0 as the primary feature you are interested in. This generally happens a few different ways:

  • When 2 or more features within the search window are very similar and as such have very close Score results.
  • When the Confusion or Accept properties are not set high enough, allowing other features with lower scores than the feature you are interested in to meet the Accept property setting.

Both of the situations above can be quite confusing for the beginning Vision Guide 8.0 user when searching for a single feature within a search window.
If you have a situation where sometimes the feature you are searching for is found and sometimes another feature is found instead, use the [Show All Results] dialog box to home in on the problem. Follow these steps to get a better view of what is happening (a SPEL+ sketch of the same inspection appears at the end of this discussion):

  1. Set your NumberToFind property to 3 or more.
  2. Run the vision object from the Vision Guide 8.0 Development Environment.
  3. Click the [ShowAllResults] property button to bring up the [Show All Results] dialog box.
  4. Examine the scores of the top 3 or more features which were found.

Once you examine the scores of the top 3 or more features which were found as described above, it should become clear to you what is happening. In most cases you will see one of these two situations.

  • Each of the features that were found has a score greater than the Accept property setting. If this is the case, simply adjust your Confusion property value higher to force the best feature to always be found, rather than allowing other features to be returned because they meet the Accept threshold. You may also want to adjust the Accept property setting.
  • Each of the features is very close in score. If this is the case, then you will need to do something to differentiate the feature you are primarily interested in from the others, such as:
    • Readjust the search window so that the features which are randomly returning as the found feature are not contained inside.
    • Teach the Model again for the feature which you are most interested in.
    • Adjust the lighting for your application so that the feature which you are most interested in gets a much higher score than the other features which are currently fooling the system.
      For details on using multiple results, refer to the section Working with Multiple Results from a Single Object.
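
The same inspection can be sketched in a SPEL+ program by stepping through results with the CurrentResult property; the names below are hypothetical:

    ' Hedged sketch; names are hypothetical. Mirrors the steps above:
    ' find up to 3 features, then print the Score of each.
    Function ShowTopScores
        Integer i, num, score
        VSet partSeq.Geom01.NumberToFind, 3
        VRun partSeq
        VGet partSeq.Geom01.NumberFound, num
        For i = 1 To num
            VSet partSeq.Geom01.CurrentResult, i
            VGet partSeq.Geom01.Score, score
            Print "Result ", i, " score: ", score
        Next i
    Fend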

Geometric Objects and Separation
You can specify the minimum amount of separation from other occurrences (of the same model) necessary for an occurrence to be considered distinct (a match). In essence, this determines what amount of overlap by the same model occurrence is possible.
You can set the minimum separation for four criteria, which are: the X position, Y position, angle, and scale. For an occurrence to be considered distinct from another, only one of the minimum separation conditions needs to be met. For example, if the minimum separation in terms of angle is met, then the occurrence is considered distinct, regardless of the separation in position or scale. However, each of these separation criteria can be disabled so that it is not considered when determining a valid occurrence.
The minimum positional separation properties SeparationMinX and SeparationMinY determine how far apart the found positions of two occurrences of the same model must be. This separation is specified as a percentage of the model size at the nominal scale (ScaleTarget).
The default value is 10%. A value of 0% disables the property. For example, if your model is 100 pixels wide at the ScaleTarget, setting SeparationMinX to 10% would require that occurrences be at least 10 pixels apart in the X direction to be considered distinct and separate occurrences.
The minimum angular separation (SeparationAngle) determines the minimum difference in angle between occurrences. This value is specified as an absolute angle value. The default value is 10.0°. A value of 0° disables the property.
The minimum scale separation (SeparationScale) determines the minimum difference in scale between occurrences, as a scale factor. The default value is 1.1. A value of 1.0 disables the property.
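
A brief sketch of these settings from a SPEL+ program (hypothetical names; values chosen only for illustration):

    ' Hedged sketch. Occurrences count as distinct if ANY enabled
    ' criterion is met.
    Function SetSeparation
        VSet partSeq.Geom01.SeparationMinX, 20   ' percent of model size
        VSet partSeq.Geom01.SeparationMinY, 0    ' 0% disables this criterion
        VSet partSeq.Geom01.SeparationAngle, 15  ' degrees
    Fend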

Geometric Objects and Scale
The scale of the model establishes the size of the model that you expect to find in the target image. If the expected occurrence is smaller or larger than that of the model, you can set the scale according to the supported scale factors. The expected scale is set using the ScaleTarget property (range: 0.5 to 2.0).
By default, searching through a scale range is disabled. If necessary, you can also enable a search through a range of scales by setting ScaleEnable to True. This allows you to find models in the target image through a range of different sizes from the specified ScaleTarget, either smaller or larger. To specify the range of scales, use ScaleFactorMin and ScaleFactorMax. The ScaleFactorMin (0.5 to 1.0) and ScaleFactorMax (1.0 to 2.0) together determine the scale range from the nominal ScaleTarget.
These maximum and minimum factors are applied to the ScaleTarget setting as follows:

max scale = ScaleTarget x ScaleFactorMax
min scale = ScaleTarget x ScaleFactorMin

Note that the range is defined as factors so that if you change the expected scale (ScaleTarget), you do not have to modify the range. A search through a range of scales is performed in parallel, meaning that the actual scale of an occurrence has no bearing on which occurrence will be found first. However, the greater the range of scale, the slower the search becomes.
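
For example, to search from 0.8 to 1.2 times the expected size (hypothetical names; with ScaleTarget = 1.0, the formulas above give a min scale of 0.8 and a max scale of 1.2):

    ' Hedged sketch; names and values are illustrative only.
    Function SetScaleRange
        VSet partSeq.Geom01.ScaleEnable, True
        VSet partSeq.Geom01.ScaleTarget, 1.0
        VSet partSeq.Geom01.ScaleFactorMin, 0.8  ' allowed range: 0.5 to 1.0
        VSet partSeq.Geom01.ScaleFactorMax, 1.2  ' allowed range: 1.0 to 2.0
    Fend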

Geometric Objects and Rotation
Geometric objects are ideal for finding rotated parts. Since a geometric pattern of edges is being searched for, rotated parts can generally be found much more reliably than with other vision tools.
To use the search with angle capabilities of the Geometric object, the Model for the Geometric object must be taught with the AngleEnable property set to True. The AngleStart and AngleRange properties must also be set.

Model Training for Angle Searching
To search with angle measurement, you must first configure the Geometric object for angle search. This is done by setting the AngleEnable property to True and using the AngleRange property to specify the range of angles over which models will be taught. You can also change the center of the angle search by setting the AngleStart property. This is the angle that the AngleRange is based on. For example, if AngleStart is 45 and AngleRange is 10, then the search will occur for models from 35° to 55°.
Keep in mind that when training models with angle, the search window must be large enough to allow the model to be rotated without any part of the model going outside of the search window.
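
The 35° to 55° example above would be configured as follows (hypothetical names; set these properties before teaching the model):

    ' Hedged sketch; names are hypothetical.
    Function SetAngleSearch
        VSet partSeq.Geom01.AngleEnable, True
        VSet partSeq.Geom01.AngleStart, 45  ' center of the angle search
        VSet partSeq.Geom01.AngleRange, 10  ' searches 35 to 55 degrees
        VTeach partSeq.Geom01               ' teach the model with angle
    Fend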

Searching Repeatability and Accuracy
Searching repeatability and accuracy is affected by the size and details of the model (shape, coarseness of features, and symmetry of the model), and the degradation of the features as seen in the search window (noise, defects, and rotation and scale effects).
To measure the effect of noise on position, you can perform a search in a specific search window that contains a non-degraded feature, then perform the exact same search again (acquiring a second image into the frame buffer) without changing the position of the object, and then compare the measured positions. This can easily be done with the following steps:

  1. Click the [Run] button on the execution panel two or more times.
  2. Click the [Statistics] button.
  3. Use the Statistics dialog box to see the difference in position between the two object searches.

For a large model (30×30) on a non-degraded feature, the reported position can be repeatable to 1/20 of a pixel. However, in most cases it is more realistic to achieve results of just below a pixel (1/2, 1/3, or 1/4 of a pixel).
Searching accuracy can be measured by performing a search in a specific search window that contains a non-degraded feature, moving the object an exact distance, and then comparing the reported position difference with the actual difference. If you have a large model (30×30 or greater), no degradation, no rotation or scale errors, and sufficient edges in both the X and Y directions, searching can be accurate to 1/4 pixel. (Keep in mind that this searching accuracy is for the vision system only and does not take into account the inaccuracies inherent in all robots. So if you move the part with the robot, you must also consider the inaccuracies of the robot mechanism itself.)

Calibrating the Camera to Subject Distance
For optimal searching results, the size of the features in an image should be the same at search time as it was when the model was taught. Assuming the same camera and lens are used, if the camera to subject distance changes between the time the model is trained and the time the search is performed, the features in the search window will have a different apparent size. That is, if the camera is closer to the features they will appear larger; if the camera is farther away they will appear smaller.
If the camera to subject distance changes, you should retrain the Model.

Using Geometric Objects
We have reviewed how geometric searching works and set the foundation for understanding how to use Vision Guide Geometric objects.
This section will describe the steps required to use Geometric objects as listed below:

  • Create a new Geometric object
  • Position and size the search window
  • Position and size the model window
  • Positioning the model origin
  • Configure properties associated with the Geometric object
  • Teach the Model
  • Test the Geometric object and examine the results
  • Make adjustments to properties and test again
  • Working with Multiple Results from a Single Geometric object

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button on the Vision Guide toolbar.

You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
See the following for more details on how to create a new vision sequence or select one which was previously defined.
Vision Sequences

Step 1: Create a new Geometric object

  1. Click the [All Tools] button on the Vision Guide toolbar, then click the [Geometric] button.
  2. You will see a Geometric icon appear above the [Geometric] button.
  3. Click the Geometric icon and drag to the image display of the Vision Guide window.
  4. A name for the object is automatically created. In the example, it is called “Geom01” because this is the first Geometric object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a Geometric object similar to the one shown below:

New Geometric Object Layout

  1. Click the name label of the Geometric object and while holding the mouse down, drag the Geometric object to the position where you would like the top left position of the search window to reside.
  2. Resize the Geometric object search window as required using the search window size handles. (This means click a size handle and drag the mouse.) (The search window is the area within which we will search.)

CAUTION


Ambient lighting and external equipment noise may affect vision sequence images and results. A corrupt image may be acquired, and the detected position could be any position within an object's search area. Be sure to create image processing sequences with objects that use search areas no larger than necessary.

Step 3: Position and size the model window

  1. The search window for the Geometric object you want to work on should be highlighted, with the size handles visible on the search window. If you cannot see the size handles, click the name field of the Geometric object. If you can already see the size handles, skip this item and move on to the next one.
  2. Click one of the lines of the box that forms the model window. This will cause the model window to be highlighted. (You should see the size handles on the model window now.)
  3. Click one of the lines of the box that forms the model window and while holding the mouse down, drag the model window to the position where you would like the top left position of the model window to reside.
  4. Resize the model window as required using the model window size handles. This means click a size handle and drag the mouse.

The model window should now be outlining the feature that you want to teach as the model for this Geometric object. The Geometric object layout should now look like the example shown below, where the search window covers the area to be searched and the model window outlines the feature you want to search for. Your actual search window and model window may differ, but this gives an idea of what is expected so far.
Geometric object after search and model window positioning and resizing

KEY POINTS


Tips for Setting Proper Size and Position of the Model Window:
The size and position of the model window is very important since it defines the feature to be searched for. When creating a model window for a Geometric object, there are 2 primary items you must pay attention to:

  • The search time can be shortened by making the search window, which is the area to be searched, as small as possible. Also, especially when the rotation of the part is expected to be large, setting the search window small and the model small as well (for example, to a portion of the part) reduces the effect of the part's rotation.
  • Execution time can be reduced by making the model window as close in size to the search window as possible.

When two objects are positioned right next to each other and are almost touching, make the model window just a bit larger than the actual feature. This makes it easier to distinguish the object from the others.
Note that the best model window size varies from application to application.

Step 4: Position the model origin
The model origin defines the position on the model that will be returned as the position of the feature when you run the Geometric object. In other words, if the position data is important, the model origin should be placed at a position of significance on the model.

For example, when using a Geometric object to find parts for a robot to pick up or place, it is important that the position of the model origin is in a location where the robot can easily grip the part because that is the position the robot will move to based on the RobotX, RobotY, RobotU or RobotXYU result.

When a new Geometric object is created the ModelOrgAutoCenter property is set to True (default value). This means that the model origin is set to the center of the model window automatically and cannot be moved manually. However if you want to move the model origin manually, you must first set the ModelOrgAutoCenter property to False. The steps to do this and also actually position the model origin are shown below.

  1. Click the Geometric object on the flow chart of the Vision Guide window. Find the ModelOrgAutoCenter property in the properties list of the Object window and click its value field.
  2. You will see a drop down list with 2 choices: True and False. Click the False choice. Now the ModelOrgAutoCenter property is set to False and the model origin can be moved with the mouse.
  3. Click the model window to highlight the model window.
  4. Click the model origin and, while holding the mouse button down, drag the model origin to a new position. The model origin can only be positioned within the bounds of the model window.

Step 5: Configure the Geometric object properties
Using the Object window displayed in Step 4, set the Geometric object properties. To set a property, click the associated property's value field and enter a new value, or click one of the items if a drop-down list is displayed.
The following list shows some of the more commonly used properties for the Geometric object. Descriptions of other properties, such as AbortSeqOnFail and Graphics, which are used by many of the different vision objects, are given in the Geometric object properties list. It is not necessary to set these properties just to test the Geometric object. However, if you are working with Geometric objects for the first time, this section can be a good reference.

CAUTION


Ambient lighting and external equipment noise may affect vision sequence image and results. A corrupt image may be acquired and the detected position could be any position in an object’s search area. Properly configure Accept, RejectOnEdge and other properties to reduce the risk of detection errors.

Item Description
Name property

The default name given to a newly created Geometric object is “Geomxx” where xx is a number which is used to distinguish between multiple Geometric objects within the same vision sequence.

If this is the first Geometric object for this vision sequence, the default name will be “Geom01”.

To change the name, click the Value field of the Name property, type a new name and press the return key.

Once the name property is changed, everywhere the Geometric object's name is displayed is updated to reflect the new name.

Accept property

The Accept property sets the shape score that a feature must meet or exceed to be considered Found.

The value returned to the Score result is compared against this Accept property Value.

Default: 700

Confusion property

If there are many features within the search window which look similar, the Confusion property is useful to help “home in” on the exact feature you want to find.

Default: 800

ModelOrgAutoCenter property

If you want to change the position of the model origin you must first set the ModelOrgAutoCenter property to False.

Default: True

Frame property

The Frame property allows you to select a previously defined Frame object as a reference frame for the Geometric object.

The details for Frames are described in Frame object.

NumberToFind property You can set the NumberToFind property larger than 1 depending on the number of features you want to find. This will allow the Geometric object to find multiple features within one search window.
AngleEnable property Set this property to True if you want to use a Geometric model to search with angle. To search multiple models with angle, set the property to True before teaching the model for the Geometric object.
AngleRange property Used together with the AngleEnable property; defines the range of angles within which the Geometric model is searched.
RejectOnEdge property

Allows you to exclude the parts touching the boundary of the search window.

Normally, this should be set to True.

It is possible to leave the properties as default and go on to the next step. The properties can be set later as necessary.

Step 6: Teach the model for the Geometric object
The Geometric object needs a model to search for, and this is accomplished through a process called teaching the model. You should have already positioned the model window for the Geometric object to outline the feature which you want to use as a model. Teaching the model is then accomplished by the following steps:

  1. Make sure that the Geometric object is the currently displayed object. Check the flow chart or the object tree to confirm which object you are currently working on. You can also check the image display to see which object is highlighted in magenta.
  2. Click the [Teach] button on the execution panel. The model will be registered. In most cases it takes only a few seconds for the model to be taught. However, if you teach a model with the AngleEnable property set to True, it takes more time, since the system teaches many models, each with a slight angle offset.

Step 7: Test the Geometric Object/Examine the Results
To run the Geometric object, click the [Run] button of the object on the execution panel.
Results for the Geometric object will be displayed. The primary results to examine at this time are:

Item Description
Found result

Returns whether the Geometric object was found.

If the feature you are searching for is found, this result returns True. If the feature is not found, the Found result returns False and is highlighted in red. If the feature was not found, refer to Step 8 for the most common reasons why a Geometric object is not found.

FoundOnEdge result

This result will return True if the feature was found where a part of the feature is touching the boundary of the search window.

In this case the Found result will return False.

Score result

Shows how closely the feature that best matches the model actually matches it.

The Score result ranges from 0 to 1000 with 1000 being the best match possible. Examine the Score result after running a Geometric object as this is your primary measure of how well the feature was found.

Time result

The amount of time it took for the Geometric object to execute.

Remember that small search windows and small Models help speed up the search time.

NumberFound result When searching for more than one Geometric object, the NumberFound result returns the number of features which matched the Geometric object's Model.
Angle result

The angle at which the found feature is oriented.

This is computed based on the original angle of the model. However, this value can be coarse and is not always reliable.

We strongly recommend using the Polar object for finding angles, especially for robot guidance.

PixelX result

PixelY result

The XY position (in pixels) of the feature.

Remember that this is the position of the model origin with respect to the found feature. If you want to return a different position, you must first reposition the model origin and then re-teach the Model.

CameraX result

CameraY result

These are the XY position of the found feature in the Camera's Coordinate system.

The CameraX and CameraY results will only return a value if the camera has been calibrated. If it has not, then [No Cal] will be returned.

RobotX result

RobotY result

These are the XY position of the found feature in the Robot's Coordinate system.

The robot can be told to move to this XY position. (No other transformation or other steps are required.)

This value is the position of the model origin with respect to the found feature. If you want to return a different position, you must first reposition the model origin and then re-teach the Model. The RobotX and RobotY results will only return a value if the camera has been calibrated. If it has not, then [No Cal] will be returned.

RobotU result

This is the angle returned for the found feature translated into the Robot's Coordinate system.

The RobotU result will only return a value if the camera has been calibrated. If it has not then [No Cal] will be returned.

ShowAllResults If you are working with multiple results, you may want to click the button in the ShowAllResults value field. This will bring up a dialog box to allow you to examine all the results for the current vision object.

KEY POINTS


The RobotXYU, RobotX, RobotY, RobotU, CameraX, CameraY, and CameraXYU results will return [No Cal] because calibration was not performed in the example steps above. Without a calibration, the vision system cannot compute coordinate results with respect to the robot coordinate system or the camera coordinate system. Refer to Calibration for details.

Step 8: Make Adjustments to properties and test again
After running the Geometric object a few times, you may encounter problems getting the Geometric object to find a feature, or you may just want to fine-tune some of the property settings. Some common problems and fine-tuning techniques are described in the next section, Geometric Object Problems.

Geometric Object Problems
If the Geometric object returns a Found result of False:

  • Try lowering the Accept property (for example, below the current Score result) and run the Geometric object again.
  • Check whether the FoundOnEdge result returns True. If it is True, the feature was found, but part of the feature is touching the search window boundary, which causes the Found result to be returned as False. To correct this, make the search window larger or, if that is impossible, try changing the position of the camera or resizing the model window.

If the Geometric object finds the wrong feature:

  • Check whether the Accept property is set high enough. If it is set rather low, this could allow another feature to be found in place of the feature you are interested in.
  • Was the Confusion property set high enough? Is it higher than the Accept property? The Confusion property should normally be set to a value equal to or higher than the Accept property. If there are features within the search window which are similar to the feature you are interested in, the Confusion property must be set to a higher value to make sure that your feature is found instead of one of the others.
  • Adjust the search window so that it more closely isolates the feature which you are interested in.

Geometric Object Fine Tuning
Fine-tuning of the Geometric object is normally required to get the object working just right. The following describes the primary properties associated with fine-tuning a Geometric object, and with adding models:

Item Description
Accept property When you set the Accept property lower, the Geometric object can run faster. However, lower Accept property values can also cause features to be found which are not what you want to find. Run the Geometric object several times; once you have a feel for the shape scores returned in the Score result, adjust the Accept property accordingly. Find an appropriate value through several trial runs so that features are detected reliably at the best execution speed.
Confusion property If there are multiple features within the search window which look similar, you need to set the Confusion property relatively high. This will guarantee that the feature you are interested in is found rather than one of the confusing features. However, a higher Confusion value costs execution speed. If you don't have multiple similar-looking features within the search window, you can set the Confusion property lower to help reduce execution time.
Add another sample “Add another sample” can be selected when teaching is performed with a model window whose size is the same as the current model's model window. When the model changes slightly (the shape or pattern is slightly different, shadows fall differently, etc.), adding the changed model as a new model may stabilize the score at object execution. The original model is kept if the angle is largely out of position, or if the model cannot be added because the difference is too significant.

Once you have completed adjusting and have tested the Geometric object until you are satisfied with the results, you are finished with creating this vision object. Go on to creating other vision objects or configuring and testing an entire vision sequence.

Other utilities for Geometric Objects
At this point you may want to consider examining the Histogram feature of Vision Guide 8.0. Histograms are useful because they graphically represent the distribution of grayscale values within the search window. Details on Vision Guide histogram usage are described in Histogram. You may also want to examine the Geometric object's results statistically.

Correlation Object

Correlation Object Description
The Correlation object is the most commonly used tool in Vision Guide 8.0. Once a Correlation object is trained it can find and measure the quality of previously trained features very quickly and reliably. The Correlation object is normally used for the following types of applications:

Item Description
Alignment For determining the position and orientation of a known object by locating features (e.g. registration marks) on the object. This is commonly used to find part positions to help guide the robot to pickup and placement positions.
Gauging Finding features on a part such as diameters, lengths, angles and other critical dimensions for part inspection.
Inspection Looking for simple flaws such as missing parts or illegible printing.

Correlation Object Layout
The Correlation object has a search window and a model window, as shown below.

Correlation Object Properties
The following list is a summary of properties for the Correlation object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found.

Objects whose scores exceed the set value are detected. If the value is too small, false detections may result.

Default: 700

AngleAccuracy

Specifies the desired accuracy for angle search in degrees.

Default: 1

AngleEnable

Specifies whether a correlation search will do a search with angle (rotation).

Specified prior to teaching a Model of the Correlation object.

Default: False

AngleMaxIncrement

Maximum angle increment amount for teaching a correlation model for searching with angle.

The maximum value is 10.

Default: 10

AngleOffset

Specifies the offset value for rotation.

Default: 0.000

AngleRange

Specifies the range within which to train a series of rotated models.

The maximum value is 45.

Default: 10

AngleStart

Specifies the center of the angle search.

Default: 0

CalRobotPlacePos Calibrates RobotPlacePos when designing and executing the program.
Caption

Used to assign a caption to the Correlation object.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be placed at an arbitrary position. However, when another vision object is specified, the center point is set to that object's PixelX and PixelY results.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the Correlation object will be applied to all (NumberFound) of the specified vision object's results.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

If SearchWinType is set to RotatedRectangle, the search window rotates based on the Angle result.

Default: False

CheckClearanceFor Sets the object for which clearance is confirmed.
ClearanceCondition Specifies how the clearance decision is made.
Confusion

Indicates the amount of confusion expected in the image to be searched.

This is the highest shape score a feature can get that is not an instance of the feature for which you are searching.

Default: 800

CoordObject Specifies the Coordinates object to which results are copied. The copy is executed when the object executes; if the object does not execute because of a Decision branch, the copy is not executed. Default: None
CurrentResult

Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.

Default: 1

Description

Sets a user description

Default: Blank

EditWindow Defines the don’t care pixels of the area to be searched.
Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color for an object when it fails.

Default: Red

Frame

Defines the current object searching position with respect to the specified frame.

It allows the Correlation object to be positioned with respect to a frame.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies a graphic to be displayed.

Default: 1 - All

LabelBackColor

Sets the background color for an object label.

Default: Transparent

ModelObject

Determines which model to use for searching.

Default: Self

ModelOrgAutoCenter

A model has a fixed reference point by which we describe its location in a model window. This point is referred to as the model’s Origin. The ModelOrgAutoCenter property causes the model origin to be placed at the center of the model window.

Default: True

ModelOrgX Contains the X coordinate value of the model origin. (subpixels can be used)
ModelOrgY Contains the Y coordinate value of the model origin. (subpixels can be used)
ModelWin

Runtime only.

Sets or returns the model window left, top, height, width parameters in one call.

ModelWinAngle Defines the angle of the model window.
ModelWinCenterX Defines the X coordinate value of the center of the model window.
ModelWinCenterY Defines the Y coordinate value of the center of the model window.
ModelWinLeft Defines the left most position of the model window.
ModelWinHeight

Defines the height of the model window.

Default: 50

ModelWinTop Defines the top most position of the model window.
ModelWinType Defines the model window type.
ModelWinWidth Defines the width of the model window.
Name

Used to assign a unique name to the Correlation object.

Default: Corr01

NumberToFind

Defines the number of objects to find in the current search window.

Default: 1

PassColor

Selects the color for passed objects.

Default: LightGreen

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

RejectOnEdge

Determines whether the part will be rejected if found on the edge of the search window. Normally, this should be set to True; doing so avoids false detections caused by parts that are not completely within the search window.

Default: False

SaveTeachImage Sets whether the camera image should be saved to a file when the model is taught.
ScoreMode Sets or returns the threshold for displaying the result when the object fails.
SearchWin

Runtime only.

Sets or returns the following search window parameters in one call: left, top, height, width, center X coordinate, center Y coordinate, inner radius, and outer radius.

SearchWinAngle Defines the angle of the area to be searched.
SearchWinAngleEnd Defines the end angle of the area to be searched.
SearchWinAngleStart Defines the start angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight

Defines the height of the area to be searched (unit: pixel).

Default: 100

SearchWinLeft Defines the left most position of the area to be searched (unit: pixel).
SearchWinPolygonPointX1 Defines the X coordinate value of the first vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY1 Defines the Y coordinate value of the first vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX2 Defines the X coordinate value of the second vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY2 Defines the Y coordinate value of the second vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX3 Defines the X coordinate value of the third vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY3 Defines the Y coordinate value of the third vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX4 Defines the X coordinate value of the fourth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY4 Defines the Y coordinate value of the fourth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX5 Defines the X coordinate value of the fifth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY5 Defines the Y coordinate value of the fifth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX6 Defines the X coordinate value of the sixth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY6 Defines the Y coordinate value of the sixth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX7 Defines the X coordinate value of the seventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY7 Defines the Y coordinate value of the seventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX8 Defines the X coordinate value of the eighth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY8 Defines the Y coordinate value of the eighth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX9 Defines the X coordinate value of the ninth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY9 Defines the Y coordinate value of the ninth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX10 Defines the X coordinate value of the tenth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY10 Defines the Y coordinate value of the tenth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX11 Defines the X coordinate value of the eleventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY11 Defines the Y coordinate value of the eleventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX12 Defines the X coordinate value of the twelfth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY12 Defines the Y coordinate value of the twelfth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinRadiusInner Defines the circle inner radius of the area to be searched.
SearchWinRadiusOuter Defines the circle outer radius of the area to be searched.
SearchWinTop Defines the upper most position of the area to be searched (unit: pixel).
SearchWinType Defines the type of the area to be searched (i.e. Rectangle, RotatedRectangle, Circle, Arc, Polygon).
SearchWinWidth

Defines the width of the area to be searched (unit: pixel).

Default: 100

ShowModel

Displays a previously taught model at various zoom settings.

Can be used to change the model origin and don’t care pixels.

SkewFitEnable

Specifies whether to adopt skew on the model.

Default: False

Sort

Selects the sort order used for the results of an object.

Default: 0 - None

Timeout Sets or returns the maximum search time for a Correlation object.

Correlation Object Results
The following list is a summary of the Correlation object results with brief descriptions. The details for each result are explained in the Vision Guide 8.0 Properties and Results Reference Manual.

Results Description
Angle Returns the amount of found part rotation in degrees.
CameraX Returns the X coordinate position of the found part’s position (referenced by model origin) in the camera coordinate system. Values are in millimeters.
CameraY Returns the Y coordinate position of the found part’s position (referenced by model origin) in the camera coordinate system. Values are in millimeters.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the found part's position in the camera coordinate system.

ClearanceOK Returns the clearance decision result.
Found Returns whether the object was found. (i.e., whether the feature or part you are looking for has a shape score above the Accept property's current setting.)
FoundOnEdge

Returns True when the Correlation object is found too close to the edge of the search window.

If RejectOnEdge is True, then the Found result is set to False.

NumberFound

Returns the number of Correlation features found.

(The detected number can be from 0 up to the number set with the NumberToFind property.)

Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate position of the found part's position (referenced by model origin) in pixels.
PixelY Returns the Y coordinate position of the found part's position (referenced by model origin) in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part's position in pixels.

RobotX Returns the X coordinate position of the found part's position (referenced by model origin) with respect to the Robot's Coordinate System.
RobotY Returns the Y coordinate position of the found part's position (referenced by model origin) with respect to the Robot's Coordinate System.
RobotU Returns the U coordinate position of the found part's position with respect to the Robot's Coordinate System.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the found part's position with respect to the Robot’s Coordinate System.

Scale Returns the scaling value of the object detected during execution.
Score Returns an integer value from 0 to 1000 that represents the level at which the feature found at runtime matches the model for which Correlation is searching.
ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

SkewDirection Returns the direction of skew on the object detected during execution.
SkewRatio Returns the skew rate of the object detected during execution.
Time Returns the amount of time required to process the object (units: millisecond).
TimedOut Returns whether object execution was terminated by the timeout.

Understanding Normalized Correlation
The purpose of Correlation is to locate and measure the quality of one or more previously trained features in a search window. Correlation objects can be used to find parts, detect presence and absence of parts or features, and detect flaws and a wide variety of other things.
While Vision Guide 8.0 has a wide variety of vision object tools, the Correlation object is the most commonly used tool due to its speed and overall reliability.
For example, in many applications Edge Tools can be used to find the edges of an object. However, if there are many potential edges within the same area that may confuse an Edge object, a Correlation object may be used instead to find the position of the edge.
There are also instances where Correlation objects are used over Blob objects (when a model can be taught) because they are more reliable.
Over the next few pages we will explain the basics behind the search tools as they apply to the Correlation object. This information will include the following sections:

  • Features and Models: Description
  • Basic Searching Concepts
  • Normalized Correlation
  • Normalized Correlation Shape Score
  • Normalized Correlation Optimization (Accept and Confusion)
  • Setting the Accept and Confusion property values
  • Additional Hints Regarding Accept and Confusion Properties
  • Correlation objects and rotation
  • Model Training for angle searching
  • Searching repeatability and accuracy
  • Calibrating the camera to subject distance

Correlation object models and features
It is important to understand the difference between features and models when using Correlation objects.
A feature is any specific pattern of gray levels in a search window. It can be anything from a simple edge a few pixels in area to a complex pattern of tens of thousands of pixels in area.
The correlation operation measures the extent to which a feature in a search window matches a previously taught model of that feature.
A feature is something within an actual search window as opposed to a model or template that is an idealized representation of a feature.
A model is a pattern of gray levels used to represent a feature. It is equivalent to a template in a template matching system.
If the model has two gray levels, it is a binary model. If it has more than two gray levels, it is a gray model.
All models used with Vision Guide 8.0 are gray models because gray models are more powerful in that they more closely represent the true feature than a binary model can. This helps produce more reliable results.
It is common to train a representative model from one search window to be used to search for similar features in that search window. The figure below shows a search window containing a feature of interest: a cross.
To train a model of the cross, you define the model window and click the [Teach] button on the execution panel.
(For details on teaching models, refer to Using Correlation Objects later in this chapter.)
As a result, the model is created as shown in the figure on the right and can be used to search for other crosses within the search window.

Search window containing several features of interest (left) and model trained from image (right)

Basic Searching Concepts
Searching locates features by finding the area of the search window to which the model is the most similar. The figure below shows a model and a search window, and the areas within the search window that are most similar to the model (peaks). A model similar to that shown in the figure might be used to search for a feature such as a fiducial mark on a printed circuit board. A robot could then use the position data returned by the search function to find the location of the board for placing components or for positioning the board itself.

There are a number of strategies that can be used to search for a model in a search window. The most commonly used method is the exhaustive search method for finding a match for the model. In the exhaustive search method, the model is evaluated at every possible location in the search window, and the location with the greatest similarity is returned. The exhaustive search method returns reliable results, but it takes a long time to process. Assume, for example, that the search window is 36 pixels square (36 x 36) and the model is 6 pixels square. To locate a match for the model, similarity would have to be assessed at all (36 - 6 + 1) x (36 - 6 + 1) = 961 possible locations.
The Vision Guide 8.0 searching method is a vast improvement over exhaustive searching. First, the search window is scanned to locate positions where a match is likely. Similarity is then assessed only for those candidate locations, and the candidate with the best match is returned. This technique results in execution times that can be several thousand times faster than those achieved by an exhaustive search.
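
To make the arithmetic above concrete, here is a minimal, illustrative sketch of the exhaustive method in Python with NumPy. None of the names below are Vision Guide API; the sketch simply evaluates a normalized correlation score at every candidate position, which is exactly the work the directed search described in the following sections avoids.

```python
# Illustrative exhaustive search: score the model at every position.
# This is NOT the Vision Guide API, just the textbook procedure.
import numpy as np

def ncc(region, model):
    """Normalized correlation coefficient of two equal-size patches."""
    r = region.astype(float).ravel() - region.mean()
    m = model.astype(float).ravel() - model.mean()
    denom = np.sqrt((r * r).sum() * (m * m).sum())
    return float((r * m).sum() / denom) if denom else 0.0

def exhaustive_search(window, model):
    """Return (row, col, score) of the best match over all positions."""
    wh, ww = window.shape
    mh, mw = model.shape
    best = (0, 0, -1.0)
    # For a 36x36 window and 6x6 model: (36-6+1)**2 = 961 positions.
    for y in range(wh - mh + 1):
        for x in range(ww - mw + 1):
            s = ncc(window[y:y + mh, x:x + mw], model)
            if s > best[2]:
                best = (y, x, s)
    return best

window = np.random.default_rng(0).random((36, 36))
model = window[10:16, 20:26].copy()   # plant a perfect match at (10, 20)
y, x, s = exhaustive_search(window, model)
print(y, x, round(s * 1000))          # -> 10 20 1000 (manual's 0-1000 scale)
```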

Normalized Correlation
Normalized correlation is a measure of the geometric similarity between an image and a model, independent of any linear differences in search window or model brightness.
Normalized correlation is used as the searching algorithm for Vision Guide 8.0 Correlation objects because it is powerful and robust. The normalized correlation value does not change in any of the following situations:

  • If all search window or model pixels are multiplied by some constant.
  • If a constant is added to all search window or model pixels.

One of the most important characteristics of normalized correlation is that the correlation value is independent of linear brightness changes in either the search window or the model.
This is important because the overall brightness and contrast of an image are determined by factors such as illumination intensity, scene reflectivity, camera aperture, sensor gain and offset (including possible automatic gain control circuitry), and the gain and offset of the image digitizer. All of these are hard to control in most production situations. For example, bulbs age, ambient light levels can change with the time of day, cameras and digitizers may be defective and need to be replaced, and the reflectivity of the objects being inspected can vary.
Another important characteristic of normalized correlation is that the shape score value (see the Normalized Correlation Shape Score subheading below) has absolute significance; that is, you can define a perfect match that is the same for all models and search windows.
The shape score is independent not only of image brightness and contrast but also of model brightness, contrast, and size. It is this property that allows shape scores to be used as a measure of feature quality for inspection applications.
If the shape score calculated for a search window and model is 900, they are very similar; if the value is 100, they are not. These statements are valid without knowing anything about the model or the search window area. This means the correlation coefficient has absolute meaning.
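
For reference, the textbook definition of the normalized correlation coefficient between an N-pixel model M and an equal-size image region I is shown below. (Vision Guide's internal score computation is not spelled out in this manual; the formula is given only because it exhibits exactly the invariances just described.)

$$
r \;=\; \frac{N\sum_i I_i M_i \;-\; \Bigl(\sum_i I_i\Bigr)\Bigl(\sum_i M_i\Bigr)}
{\sqrt{\Bigl[\,N\sum_i I_i^2-\bigl(\textstyle\sum_i I_i\bigr)^2\Bigr]\Bigl[\,N\sum_i M_i^2-\bigl(\textstyle\sum_i M_i\bigr)^2\Bigr]}}
$$

Replacing every $I_i$ with $aI_i + b$ (for any $a > 0$) leaves $r$ unchanged, which is precisely the independence from linear brightness and contrast changes described above.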

Normalized Correlation Shape Score
For the normalized correlation, the shape score of a feature (the extent to which it is similar to the model) is defined as a value between 0 and 1000. The larger the shape score, the greater the similarity between the feature and the model. (A shape score of 1000 represents a perfect match.)
The shape score is the value returned by the Correlation object as the Score result. (Additional information for the Score result is available in the Vision Guide 8.0 Properties and Results Reference Manual.)

Normalized Correlation Optimization (Accept and Confusion)
Traditional template matching is much too slow for most practical searching applications. This is because it is based on exhaustive correlation at every possible search position.
The solution to this problem lies in the use of a suitable directed search. In a directed search, data derived during the search are used to direct the search toward those locations that are promising and away from those that are not.
Searching uses an adaptation of a type of directed search called hill climbing. The idea of hill climbing is that the peak of a function of one or more variables can be found entirely from local data, by always moving in the direction of steepest ascent. The peak is reached when each of the neighboring points is lower than that point.
Hill climbing alone can fail under certain circumstances:

  • A false maximum can be mistaken for the peak of the hill.
  • If a plateau is reached there is no way to determine the direction in which the search should resume.

Another problem with hill climbing is determining the starting points: too few expeditions might miss significant hills; too many will eliminate the speed advantage that hill climbing provides by causing the same hill to be climbed from different directions.

Certain estimates can be made about the hills based on the search window area and the Model which help to overcome these problems.
Correlation search is mathematically equivalent to a filtering operation: the correlation function is the output of a specific, known filter (the model), a filter that amplifies certain spatial frequencies and attenuates others. It follows that if the frequency content of a given portion of a search window is not similar to that of the model, there is no need to start a hill climbing expedition in that region.

By examining the transfer function of the model, the system can estimate the spatial frequency content of the correlation function and, in turn, the minimum spacing and size of the hills. Knowing the minimum spacing allows the system to plan where to start the hill climbing expeditions. Knowing the minimum size allows the system to avoid getting trapped by false peaks.
In addition to information obtained from the model, a pair of Vision Guide properties (i.e. the Accept property and the Confusion property) is specified to control the search.
The Accept property specifies the shape score that a feature must equal or exceed to be considered “Found” (i.e. Found result returns True) by the searching software. If a rough estimate of the height of a hill does not exceed the Accept property value in a given region of a search window, hill climbing is terminated for that region.
The Confusion property indicates the amount of confusion expected in the search window. Specifically, it is the highest shape score a feature can get that is not an instance of the feature for which you are searching. The Confusion property gives the system an important hint about the scene to be searched; namely, that if a feature gets a shape score above the confusion threshold, it must be an instance of the feature for which you are searching.
The system uses the Confusion property and the number of results (specified by the NumberToFind property) to determine which hills need to be climbed and which do not. Specifically, the search can terminate as soon as the expected number of features are found whose shape score is above the Confusion property threshold and the Accept property threshold.
To start the search, a number of hill climbing expeditions are conducted in parallel, starting at positions determined from the transfer function of the Model. As hill climbing progresses, a more and more accurate estimate of each hill's height emerges until the actual peak is found.

The hills are climbed in parallel, one step at a time, as long as no hill has an estimated height that exceeds the threshold set by the Confusion property. If an expedition reaches the Confusion property threshold, the hill is immediately climbed to its peak.
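
The following Python sketch illustrates this strategy in greatly simplified form, assuming the score surface (on the manual's 0-1000 scale) has already been computed as a 2-D array. The real search estimates scores lazily and plans its starting points from the model's transfer function; the fixed grid of start points and the rough skip heuristic here are illustrative stand-ins for that.

```python
# Simplified directed search with hill climbing. The grid of start
# points and the "rough estimate" skip rule are illustrative stand-ins
# for what the real system derives from the model's transfer function.
import numpy as np

def hill_climb(surface, start):
    """Step to the highest 8-neighbor until no neighbor is higher."""
    y, x = start
    h, w = surface.shape
    while True:
        neighbors = [(yy, xx)
                     for yy in range(max(0, y - 1), min(h, y + 2))
                     for xx in range(max(0, x - 1), min(w, x + 2))]
        best = max(neighbors, key=lambda p: surface[p])
        if surface[best] <= surface[y, x]:
            return y, x                       # local peak reached
        y, x = best

def directed_search(surface, accept, confusion, number_to_find=1, grid=4):
    h, w = surface.shape
    found = []
    for y0 in range(0, h, grid):              # parallel "expeditions"
        for x0 in range(0, w, grid):
            if surface[y0, x0] < accept / 2:  # rough estimate too low: skip
                continue
            peak = hill_climb(surface, (y0, x0))
            score = float(surface[peak])
            if score >= accept and peak not in [p for p, _ in found]:
                found.append((peak, score))
            # Quit as soon as enough peaks clear the Confusion threshold.
            if sum(s >= confusion for _, s in found) >= number_to_find:
                return found
    return found

# One smooth "hill" peaking at 900 near (12, 18):
yy, xx = np.mgrid[0:32, 0:32]
surface = 900 * np.exp(-((yy - 12) ** 2 + (xx - 18) ** 2) / 60.0)
print(directed_search(surface, accept=700, confusion=800))
# -> [((12, 18), 900.0)]
```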

Setting the Accept and Confusion Property Thresholds
Both the Accept and Confusion properties affect searching speed for Correlation objects.
The Accept property influences searching speed by providing a hint as to when to pursue the search in a given region of the scene.
When the Accept property is set high, features must be very similar to the model. Therefore, many regions can be ruled out by a cursory examination and not pursued further.
If the Accept property is set to a low value, features that are only slightly similar to the model may exceed the Accept property threshold, so that a detailed examination of more regions in the scene is needed.

Thus increasing the Accept property tends to increase speed. (i.e. higher Accept property values can make Correlation objects run faster.)
The Confusion property interacts with the number of results expected to influence searching speed. Together, the Confusion property and the number of results expected allow the system to quit the search before exploring all possible regions of the image.
Set the Accept property so that it will allow the system to find features that are examples of the “worst case degradation” you are willing to accept. The degradation may be caused by defects, scale, rotation or video noise.
For the Accept property, the default value in Vision Guide 8.0 is 700. This is usually a good starting point for many applications. However, experimentation will help you home in on the best value for your situation.
Keep in mind that you do not always have to get perfect or nearly perfect scores for an application to function well. Even shape scores of 200 can provide good positional information for some applications, depending on the type of degradation to which a feature is subject. However, for most applications it is normally recommended to use an Accept property value above 500.
Set the Confusion property based on the highest value you expect the “wrong thing” to get (plus a margin for error).
The confusion threshold should be greater than or equal to the Accept property threshold. Setting the Confusion property to a high value will increase the time of the search, but may be necessary to ensure that the right features are found.
The Confusion property default value is 800 but should be adjusted depending upon the specific application requirements.

The figure below shows a scene where there is little confusion: the round pad is not very similar to the fiducial (cross). The Confusion property can therefore be set to a fairly low value (around 500).
The Accept property is normally set less than or equal to the Confusion property, depending upon the amount of degradation you are willing to accept. Assuming this scene has little degradation, a shape score of 920 could be expected.

A scene with little confusion

The figure below shows a scene where there is a lot of confusion; both the feed through hole and the IC pad are similar to the round pad. The Confusion property should therefore be set to a fairly high value (around 820).

A scene with a high degree of confusion

Additional Hints Regarding Accept and Confusion Properties
A search window that has a region of constant gray value will always get a 0 correlation value in that region, since a constant region has no gray-level variation for the model to correlate with. If a scene basically has a uniform background (e.g. a white piece of paper), there will be no correlation in most areas. Therefore, if the Correlation object finds anything, you can set the Confusion property to a low value, since whatever is found should be the feature you are searching for.
The Accept and Confusion properties can be thought of as hints that you provide to the system to enable it to locate features more quickly.
In general, these properties should be set conservatively, but need not be set precisely. The most conservative settings are a low Accept property and high Confusion property.
Use very conservative settings when you know very little about a scene in which you are searching; the search will be careful but slower.
(This is especially important when passing Correlation positional results to the robot to move to.)
Use more liberal settings when you know a lot about a scene in which you are searching. For example, if you know that you are looking for one feature, and the rest of the scene is blank, a careful search is unnecessary; use more liberal settings and the search will be faster.

Using the Multiple Results Dialog Box to Debug Searching Problems
Sometimes the parts that you are working with vary considerably (even within the same production lot) and sometimes there are 2 or more features on a part which are similar. This can make it very difficult to determine a good Accept property value. Just when you think you have set the Accept property to a good value, another part will come in which fools the system. In these cases it can be very difficult to see what is going on.
The ShowAllResults dialog box was created to help solve these and other problems.
While you may only be interested in one feature on a part, requesting multiple results can help you see why a secondary feature is sometimes returned by Vision Guide 8.0 as the primary feature you are interested in. This generally happens in a few different ways:

  • When two or more features within the search window are very similar and as such have very close Score results.
  • When the Confusion or Accept properties are not set high enough, which allows features with lower scores than the feature you are interested in to meet the Accept property setting.

Both of the situations above can be quite confusing for the beginning Vision Guide 8.0 user when searching for a single feature within a search window.
If you have a situation where sometimes the feature you are searching for is found and sometimes another feature is found instead, use the Show All Results dialog box to home in on the problem. Follow these steps to get a better view of what is happening:

  1. Set your NumberToFind property to 3 or more.
  2. Run the vision object from the Vision Guide 8.0 Development Environment.
  3. Click the [ShowAllResults] property button to bring up the Show All Results dialog box.
  4. Examine the scores of the top 3 or more features that were found.
  5. If only one or two features were found (Vision Guide 8.0 will only set scores for those features that are considered found), reduce your Accept property so that more than one feature will be found, and run the vision object again. (You can change the Accept level back after examining the Show All Results dialog box.)
  6. Click the [ShowAllResults] property button to bring up the Show All Results dialog box.
  7. Examine the scores of the top three or more features that were found.

Once you examine the scores of the top three or more features that were found as described above, it should become clear to you what is happening. In most cases you will see one of these two situations.

  • Each of the features that were found has a score greater than the Accept property setting. If this is the case, simply adjust your Confusion property value higher to force the best feature to always be found, rather than allowing other features to be returned just because they meet the Accept threshold. You may also want to adjust the Accept property setting.
  • The features are all very close in score. If this is the case, you will need to do something to differentiate the feature you are primarily interested in from the others (see the sketch after this list), such as:
    • Readjust the search window so that the features that are randomly returning as the found feature are not contained inside.
    • Teach the Model again for the feature that you are most interested in.
    • Adjust the lighting for your application so that the feature that you are most interested in gets a much higher score than the other features that are currently fooling the system.
      See the section Working with Multiple Results from a Single Object later in this chapter for more information on using multiple results.
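
As a sketch of the reasoning above, once you have the score list from the Show All Results dialog box you can classify which of the two situations you are in. The 100-point "very close" margin below is an illustrative choice, not a Vision Guide value.

```python
# Classify multiple-result scores against Accept, mirroring the two
# situations described above. The 100-point margin is illustrative.
def diagnose(scores, accept):
    scores = sorted(scores, reverse=True)
    above = [s for s in scores if s >= accept]
    if len(above) > 1 and scores[0] - scores[1] > 100:
        return ("Several features clear Accept but are well separated: "
                "raise Confusion (and possibly Accept) so only the best "
                "feature is returned.")
    if len(above) > 1:
        return ("Top scores are very close: re-teach the model, adjust "
                "the search window, or improve the lighting to separate "
                "the features.")
    return "Only one feature clears Accept: thresholds look reasonable."

print(diagnose([910, 640, 420], accept=600))   # well-separated case
print(diagnose([910, 880, 420], accept=700))   # confusable case
```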

Correlation Objects and Rotation
As with any template matching procedure, shape score and location accuracy degenerate if the size or the angle of the feature differs from that of the model. If the differences are large, the shape score will be very low, or the search operation will not find the feature.
The exact tolerance to angle and size changes depends on the Model, but typically falls between 3 and 10° for angle and 2 to 5 percent for size.
Exceptions include rotationally symmetric models, such as circles, which have no angle dependence; and simple edge and corner models, which have no size dependence.
Visualize two different scenes within a search window.
The first scene is the picture of a human face where the nose was taught as the model. The nose is not XY symmetric, so rotation will dramatically affect the positional accuracy for this feature. The second scene is a printed circuit board where a fiducial (similar to the one in the figure) is the model. In this case the fiducial mark (a cross) is XY symmetric, so rotation does not have an immediate devastating effect on positional accuracy for this feature.
Basically, models that have a predominance of fine features (such as a nose, picture of a flower, tree or others) will be less tolerant to rotation. Features that are more symmetric, like the cross, will be more tolerant of rotation.
However, with this in mind we strongly recommend that you use Polar objects to determine rotation angles. The Correlation object can be used to find the XY position and then the Polar object can be associated with the Correlation object such that it can use the XY position as a center to work from to find the angle of the feature. See the Polar Object in this chapter for more information on how to associate Polar objects with Correlation and other vision objects.

There are a variety of techniques that can be used in situations where significant angle or scale changes are expected. The primary techniques are:

  • Break up complex features into smaller, simpler ones. In general, small simple models are much less sensitive to scale and angle changes than larger, more complex models.
  • Use the angle related properties (AngleEnable, AngleRange, and AngleMaxIncrement) to help find rotational angles when working with Correlation objects.
    This is appropriate for locating complex features in complex scenes, when those features cannot be broken up into simpler ones. This capability is implemented as a series of models at various angles. It can be many times slower than a normal search.

KEY POINTS


To use the search with angle capabilities of the Correlation object, the Model for the Correlation object must be taught with the AngleEnable property set to True. This causes the Correlation object to be taught at a variety of angles as defined by the AngleRange and AngleMaxIncrement properties.

Use Polar objects in conjunction with Correlation objects to determine the angle of rotation of a part. (See Polar Object in this chapter.)

Model Training for Angle Searching
To search with angle measurement, you must first direct the system to teach a series of rotated models.
This is done by setting the AngleEnable property to True and using the AngleRange property to specify the range of angles over which models will be taught. When you teach rotated models this way, searching automatically creates a set of models at various rotations, in equal angular increments over that range.
You can also specify a maximum angle increment at which the models will be taught within the angular range. This is accomplished by setting an increment value in the AngleMaxIncrement property of a Correlation object.
However, keep in mind the following regarding the AngleMaxIncrement property:

  • If you provide a maximum angle increment, the model training function selects an angular increment automatically and uses the smaller of the automatically selected increment and the maximum angle increment you provide.
  • If you set the AngleMaxIncrement property to 0, the model teaching function selects an angle increment automatically and uses it. In this case the system typically sets an angle increment of between 2 and 5°.
    This results in the smallest model storage requirements and the fastest search times, but may produce results that are more coarse than desired.

If you wish to measure angle precisely, you should set the AngleMaxIncrement property to an increment corresponding to the degree of precision you desire.
Keep in mind though, that the smaller the angle increment, the more storage will be required for the model and the slower the search time will be.
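
The following illustrative sketch shows how the set of taught angles falls out of these rules, assuming models are generated at equal increments over ±AngleRange around AngleStart. The automatically selected increment is represented by a stand-in value (the system typically chooses 2 to 5°, as noted above); the real selection algorithm is not described here.

```python
# Illustrative enumeration of the rotated-model angles taught when
# AngleEnable is True. `auto_increment` is a stand-in for the increment
# the system selects automatically (typically 2-5 degrees).
def model_angles(angle_start=0.0, angle_range=10.0,
                 angle_max_increment=10.0, auto_increment=3.0):
    if angle_max_increment == 0:
        step = auto_increment            # 0 means: system chooses freely
    else:
        step = min(auto_increment, angle_max_increment)
    n = int(angle_range // step)
    return [angle_start + k * step for k in range(-n, n + 1)]

print(model_angles())
# -> [-9.0, -6.0, -3.0, 0.0, 3.0, 6.0, 9.0]  (7 models over +/-10 deg)
print(len(model_angles(angle_max_increment=1, auto_increment=1)))
# -> 21 models: finer increments mean more storage and slower searches
```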

KEY POINTS


We recommend using the Polar object to determine angle whenever possible. This provides the more reliable and accurate results that are required when using vision for robot guidance.

Keep in mind that when training models with angle, the search window must be large enough to allow the model to be rotated without any part of the model going outside of the search window.

Searching Repeatability and Accuracy
Searching repeatability and accuracy is a function of the size and details of the Model (shape, coarseness of features, and symmetry of the model), and the degradation of the features as seen in the search window (noise, defects, and rotation and scale effects).
To measure the effect of uncorrelated noise on position, you can perform a search in a specific search window that contains a non-degraded feature, then perform the exact same search again (acquiring a second image into the frame buffer) without changing the position of the object, and then compare the measured positions.
This can easily be done by the following steps (a minimal computation sketch follows the list):

  1. Click the [Run] button of the object on the execution panel two or more times.
  2. Click the [Statistics] button.
  3. The Statistics dialog box can then be used to see the difference in position between the 2 object searches.
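
As a minimal sketch, the position spread that those steps reveal can be computed like this from the PixelX / PixelY results of repeated runs (the position values below are made up for illustration):

```python
# Repeatability of an unmoved part: the spread of reported positions
# across repeated runs. The values are illustrative.
import statistics

xs = [123.42, 123.45, 123.40, 123.44, 123.43]   # PixelX over 5 runs
ys = [87.11, 87.09, 87.12, 87.10, 87.11]        # PixelY over 5 runs

print("X range:", round(max(xs) - min(xs), 3), "pixel")
print("X stdev:", round(statistics.stdev(xs), 4))
print("Y stdev:", round(statistics.stdev(ys), 4))
```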

For a large model (30x30) on a non-degraded feature, the reported position can be repeatable to 1/20 of a pixel. However, in most cases it is more realistic to achieve results of just below a pixel (1/2, 1/3, or 1/4 pixel).
Searching accuracy can be measured by performing a search in a specific search window that contains a degraded feature, moving the object an exact distance and then comparing the reported position difference with the actual difference.
If you have a large model (30x30 or greater), no degradation, no rotation or scale errors, and sufficient edges in both the X and Y directions, searching can be accurate to 1/4 pixel. (Keep in mind that this searching accuracy is for the vision system only and does not take into account the inaccuracies that are inherent in all robots. If you move the part with the robot, you must also consider the inaccuracies of the robot mechanism itself.)
The effects of rotation and scale on searching accuracy depend on the Model:

  • Models that are rotationally symmetric do well.
  • Models that have fine features and no symmetry do not do well.

Calibrating the Camera to Subject Distance
For optimal searching results, the size of the features in an image should be the same at search time as it was when the model was taught.
Assuming the same camera and lens are used, if the camera to subject distance changes between the time the model is trained and the time the search is performed, the features in the search window will have a different apparent size. That is, if the camera is closer to the features they will appear larger; if the camera is farther away they will appear smaller.
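
Under a simple pinhole camera model (an idealization; lens effects are ignored), the apparent size of a feature scales inversely with the camera-to-subject distance $d$:

$$
\frac{s_{\text{search}}}{s_{\text{teach}}} \;=\; \frac{d_{\text{teach}}}{d_{\text{search}}}
$$

Moving the camera about 5% closer therefore makes the feature appear about 5% larger, which already reaches the 2 to 5 percent size tolerance quoted earlier in this section.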

KEY POINTS


If the camera to subject distance changes, you must re-train the model.

Using Correlation Objects
Now that we've reviewed how normalized correlation and searching works we have set the foundation for understanding how to use Vision Guide 8.0 Correlation objects.
This section will describe the steps required to use Correlation objects as listed below:

  • Create a new Correlation object
  • Position and Size the search window
  • Position and size the model window
  • Position the model origin
  • Configure properties associated with the Correlation object
  • Teach the Model
  • Test the Correlation object & examine the results
  • Make adjustments to properties and test again
  • Working with Multiple Results from a Single Correlation object

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a new Correlation object

  1. Click the [All Tools] - [New Correlation] button on the Vision Guide toolbar.
  2. The mouse cursor will change to a Correlation icon.
  3. Move the mouse cursor over the image display of the Vision Guide window and click the left mouse button to place the Correlation object on the image display.
  4. Notice that a name for the object is automatically created. In the example, it is called "Corr01" because this is the first Correlation object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a Correlation object similar to the one shown below:

New Correlation object layout

  1. Click the name label (or one of the sides) of the Correlation object and, while holding the mouse button down, drag the Correlation object to the position where you would like the top left position of the search window to reside.
  2. Resize the Correlation object search window as required using the search window size handles. (This means click a size handle and drag the mouse.) The search window is the area within which we will search.

CAUTION


Ambient lighting and external equipment noise may affect vision sequence image and results. A corrupt image may be acquired and the detected position could be any position in an object’s search area. Be sure to create image processing sequences with objects that use search areas that are no larger than necessary.

Step 3: Position and size the model window

  1. The search window for the Correlation object you want to work on should be magenta in color, and the sizing handles should be visible on each corner and along the middle of each side of the search window. If you cannot see the size handles, click the name field of the Correlation object. Once the Correlation object you want to work on is magenta in color and its sizing handles are visible, go to step 2.
  2. Click one of the lines of the box that forms the model window. This will cause the model window to be highlighted. (You should see the size handles on the model window now.)
  3. Enclose the feature to register as the Correlation object model with the model window. Move the mouse pointer over one of the lines of the box which forms the model window and, while holding the mouse button down, drag the model window to the position where you would like the top left position of the model window to reside.
  4. Resize the model window as required using the model window size handles. (This means click a size handle and drag the mouse.) The model window should now outline the feature that you want to teach as the model for this Correlation object.

Your Correlation object layout should now look something like the example in the figure below, where the search window covers the area to be searched and the model window outlines the feature you want to search for. Of course, your search window and model will be different, but this should give you an idea of what is expected so far.

Correlation object after search and model window positioning and resizing

KEY POINTS


Tips on Setting Proper Size and Position for the model window:
The size and position of the model window is very important since it defines the feature to be searched for. When creating a model window for a Correlation object, there are 2 primary items you must pay attention to:

  • The search time can be shortened by making the search window, which is the area to be searched, as small as possible. In particular, if you expect the part to rotate significantly, keep both the search window and the model small to minimize the effect of part rotation.
  • Making the model window as close in size to the search window as possible can reduce execution time.

It is also sometimes a good idea to make the model window just a bit larger than the actual feature you are interested in. This gives the feature some border that may prove useful in distinguishing this object from others, especially when two objects are positioned right next to each other and are touching. However, this additional border should only be a few pixels wide.
Keep in mind that each vision application is different so the best model window sizing technique will vary from application to application.

Step 4: Position the model origin
The model origin defines the position on the model that will be returned as the position of the feature when you run the Correlation object. This means that the model origin should be placed at a location of significance if the position data is important.
For example, when using a Correlation object to find parts for a robot to pick up or place, it is important that the position of the model origin is in a location where the robot can easily grip the part.
This is because that is the position the robot will move to, based on the RobotX, RobotY, RobotU, and RobotXYU results.
When a new Correlation object is created, the ModelOrgAutoCenter property is set to True. (True is the default value for the ModelOrgAutoCenter property.) This means that the model origin is set to the center of the model window automatically and cannot be moved manually.
If you want to move the model origin manually, you must first set the ModelOrgAutoCenter property to False. The steps to do this and also actually position the model origin are shown below.

  1. Click the Correlation object on the flow chart in the Vision Guide window. Find the ModelOrgAutoCenter property in the property list on the Object window and click in the value field.
  2. You will see a drop down list with 2 choices: True and False. Click the False choice. You have now set the ModelOrgAutoCenter property to False and can move the model origin with the mouse.
  3. Click the model window to highlight the model window.
  4. Click the model origin and keep the mouse button held down while dragging the model origin to a new position. It should be noted that the model origin can only be positioned within the bounds of the model window.

Step 5: Configure the Correlation object properties
We can now set property values for the Correlation object. To set any of the properties simply click the associated property’s value field and then either enter a new value or if a drop down list is displayed click one of the items in the list.
Shown below are some of the more commonly used properties for the Correlation object.
When testing the Correlation object, it is not necessary to set these properties. However, if you are working with Correlation objects for the first time, this section could be a good reference.
Descriptions of other properties such as AbortSeqOnFail and Graphics, which are used on many of the different vision objects, can be seen in the following.

  • Correlation Object Properties List
  • "Vision Guide 8.0 Properties & Result Reference"

CAUTION


Ambient lighting and external equipment noise may affect vision sequence image and results. A corrupt image may be acquired and the detected position could be any position in an object’s search area. Properly configure Accept, RejectOnEdge and other properties to reduce the risk of detection errors.

Item Description
Name property

The default name given to a newly created Correlation object is “Corrxx” where xx is a number which is used to distinguish between multiple Correlation objects within the same vision sequence.

If this is the first Correlation object for this vision sequence then the default name will be “Corr01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the Correlation object's name is displayed is updated to reflect the new name.

Accept property

The Accept property sets the shape score that a feature must meet or exceed to be considered Found.

The value returned in the Score result is compared against this Accept property value. The default value is 700, which will probably be fine for the first run of the Correlation object.

Confusion property If there are many features within the search window which look similar, the Confusion property is useful to help “home in” on the exact feature you want to find. The default value is 800, which will probably be fine for the first run of the Correlation object.
ModelOrgAutoCenter property

If you want to change the position of the model origin you must first set the ModelOrgAutoCenter property to False.

Default: True

Frame property Allows you to select a previously defined Frame object as a reference frame for this Correlation object. The details for Frames are defined in Frame Object in this chapter.
NumberToFind property Depending upon the number of features you want to find, you may want to set the NumberToFind property larger than 1. This will allow one Correlation object to find multiple features within one search window.
AngleEnable property You must set this property to True if you want to use a Correlation model to search with angle. It must be set to True at the time you teach the Model so that the multiple rotated models can be created.
AngleMaxIncrement and AngleRange properties These properties are used along with the AngleEnable property for using the Correlation model to search with angle.
RejectOnEdge property Allows you to exclude the parts touching the boundary of the search window. Normally, this should be set to True.

It is possible to leave the properties as default and go on to the next step. The properties can be set later as necessary.
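
If you prefer to set these properties from a program instead of the property grid, the SPEL+ VSet statement can be used. The following is a minimal sketch; the sequence name "seq01" and the values shown are assumptions for illustration only.

    ' Configure Corr01 from a SPEL+ program (sequence name is an example)
    VSet seq01.Corr01.Accept, 700         ' Minimum score to be considered Found
    VSet seq01.Corr01.Confusion, 800      ' Guard against similar-looking features
    VSet seq01.Corr01.NumberToFind, 1     ' Find a single feature
    VSet seq01.Corr01.RejectOnEdge, True  ' Exclude features touching the window boundary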

Step 6: Teach the model for the Correlation object
The Correlation object needs a model to search for, and this is accomplished through a process called teaching the model. You should have already positioned the model window for the Correlation object to outline the feature that you want to use as a model. Teaching the model is accomplished as follows:

  1. Make sure that the Correlation object is the one currently selected. See the flow chart or the object tree to check which object you are currently working on. You can also check the image display to see which object is highlighted in magenta.
  2. Click the [Teach] button on the execution panel. The model will be registered. It will take only a few seconds for the Model to be taught in most cases. However, if you are teaching a model when the AngleEnable property is set to True it can take quite a few seconds to teach the model because the system is actually teaching many models each at a slight angle offset from the previous.
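
Models are normally taught with the [Teach] button as described above. If your application must re-teach the model at runtime, SPEL+ also provides the VTeach statement. A minimal sketch, assuming a sequence named "seq01" (the name is an example):

    ' Re-teach the model for Corr01 at runtime; the model window must
    ' already enclose the feature to be taught
    VTeach seq01.Corr01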

Step 7: Test the Correlation object / examine the results
To run the Correlation object, click the [Run] button of the object on the execution panel.
Results for the Correlation object will now be displayed. The primary results to examine at this time are:

Item Description
Found result

Returns whether the Correlation was found.

If the feature you are searching for is found, this result returns True. If the feature is not found, the Found result returns False and is highlighted in red. If the feature was not found, see Step 8 for some of the more common reasons why a Correlation object is not found.

FoundOnEdge result

This result will return as True if the feature was found where a part of the feature is touching the boundary of the search window.

In this case the Found result will return as “False”.

Score result

This tells us how well we matched the model with the feature that most closely resembles the model.

Score results range from 0 to 1000 with 1000 being the best match possible. Examine the Score result after running a Correlation object as this is your primary measure of how well the feature was found.

Time result

The amount of time it took for the Correlation object to execute.

Remember that small search windows and small Models help speed up the search time.

NumberFound result When searching for more than 1 Correlation object, the NumberFound result returns the number of features that matched the Correlation object’s Model.
Angle result

The angle at which the Correlation is oriented.

This is computed based on the original angle of the model. However, this value can be coarse and not very reliable. We strongly recommend using the Polar object for finding angles, especially for robot guidance.

PixelX result

PixelY result

The XY position (in pixels) of the feature.

Remember that this is the position of the model origin with respect to the found feature. If you want to return a different position, you must first reposition the model origin and then re-teach the Model.

CameraX result

CameraY result

These define the XY position of the found feature in the Camera's Coordinate system.

The CameraX and CameraY results will only return a value if the camera has been calibrated. If it has not, then “No Cal” will be returned.

RobotX result

RobotY result

These define the XY position of the found feature in the Robot's Coordinate system.

The robot can be told to move to this XY position. (No other transformation or other steps are required.)

Remember that this is the position of the model origin with respect to the found feature. If you want to return a different position, you must first reposition the model origin and then re-teach the Model. The RobotX and RobotY results will only return a value if the camera has been calibrated. If it has not then “No Cal” will be returned.

RobotU result

This is the angle returned for the found feature translated into the Robot's Coordinate system.

The RobotU result will only return a value if the camera has been calibrated. If it has not, then “No Cal” will be returned.

ShowAllResults If you are working with multiple results, you may want to click the button in the ShowAllResults value field. This will bring up a dialog box to allow you to examine all the results for the current vision object.

KEY POINTS


The RobotX, RobotY, RobotU, RobotXYU and CameraX, CameraY, CameraU, CameraXYU results will return “No Cal” at this time since no calibration was performed in the example steps described above. Without a calibration, it is impossible for the vision system to calculate the coordinate results with respect to the Robot coordinate system or Camera coordinate system. Refer to the following for details.

Vision Calibration
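
These results can also be read from a SPEL+ program with VGet. Below is a minimal sketch, assuming a calibrated camera and a sequence named "seq01" (the names are examples):

    Boolean found
    Long score
    Real x, y, u

    VRun seq01                          ' Run the vision sequence
    VGet seq01.Corr01.Found, found      ' Was the feature found?
    If found Then
        VGet seq01.Corr01.Score, score  ' How closely the feature matched the model
        ' Position in robot coordinates (requires a calibrated camera)
        VGet seq01.Corr01.RobotXYU, found, x, y, u
    EndIf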

Step 8: Make adjustments to properties and test again
After running the Correlation object a few times, you may have encountered problems with finding a Correlation or just want to fine-tune some of the property settings.
Some common problems and fine-tuning techniques are described in the next section called “Correlation object Problems”.

Correlation Object Problems
If the Correlation object returns a Found result of False:

  • Look at the Score result which was returned. Is the Score result lower than the Accept property setting? If the Score result is lower, try setting the Accept property a little lower (for example, below the current Score result) and run the Correlation object again.
  • Look at the FoundOnEdge result. Does it have a return value of True? If it is True, this means that the feature was found but part of it was touching the boundary of the search window. This causes the Found result to be returned as False. To correct this situation, make the search window larger or, if this is impossible, try changing the position of the camera or resizing the model window.

If the Correlation object finds the wrong feature

  • Was the Accept property set high enough? If it is set rather low this could allow another feature to be found in place of the feature you are interested in.
  • Was the Confusion property set high enough? Is it higher than the Accept property? The Confusion property should normally be set to a value equal to or higher than the Accept property. But if there are features within the search window which are similar to the feature you are interested in, then the Confusion property must be moved to a higher value to make sure that your feature is found instead of one of the others.
  • Adjust the search window so that it more closely isolates the feature that you are interested in.

Correlation Object Fine Tuning
Fine-tuning of the Correlation object is normally required to get the object working just right.
Following is a description for the primary properties associated with fine-tuning of a Correlation object and model addition:

Item Description
Accept property The lower the Accept property, the faster the Correlation object can run. However, lower Accept property values can also cause features to be found which are not what you want to find. After you have run the Correlation object a few times, you will become familiar with the shape score which is returned in the Score result. Use these values when determining new values to enter for the Accept property. A happy medium can usually be found with a little experimentation that results in reliable feature finds that are fast to execute.
Confusion property If there are multiple features within the search window which look similar, you will need to set the Confusion property relatively high. This will guarantee that the feature you are interested in is found rather than one of the confusing features. However, higher Confusion values cost execution speed. If you don't have multiple features within the search window that look similar, you can set the Confusion property lower to help reduce execution time.
Add another sample “Add another sample” can be selected when teaching is performed with a model window whose size is the same as the current model's model window. When the model varies slightly (the shape or pattern is slightly different, shadows appear differently, etc.), adding the changed model as a new sample may stabilize the score at object execution. If the angle is largely out of position, or the new sample differs too much from the original, the model cannot be added and the original model is kept.

Once you have completed adjusting and testing the Correlation object and are satisfied with the results, you are finished with creating this vision object.
Go on to creating other vision objects or configuring and testing an entire vision sequence.

Other Useful Utilities for Use with Correlation Objects
At this point you may want to consider examining the histogram feature of Vision Guide 8.0.
Histograms are useful because they graphically represent the distribution of gray-scale values within the search window. The details regarding Vision Guide histogram usage are described in the following.
Overview

You may also want to use the statistics feature of Vision Guide to examine the Correlation object's results statistically.
The Vision Guide statistics features are explained in the following.
Using Vision Guide Statistics

Blob Object

Blob Object Description
Blob objects compute geometric, topological and other features of images.
Blob objects are useful for determining the presence/absence, size, and orientation of features in an image. For example, Blob objects can be used to detect the presence, size, and location of ink dots on silicon wafers, to determine the orientation of a component, or even for robot guidance. (However, it is recommended to use the Polar object for determining rotational orientation.)
Some of the features computed by Blob objects are:

  • Area and perimeter
  • Center of mass
  • Principal axes and moments
  • Connectivity
  • Extrema
  • Coordinate positions of the center of mass in pixel, camera coordinate system, and robot coordinate system
  • Holes, roughness, and compactness of blobs

Blob Object Layout
The Blob object layout is rectangular, just like the Correlation object layout. However, Blob objects do not have models, so there is no need for a model window or model origin in the Blob object layout. As shown below, Blob objects have only an object name and a search window. The search window defines the area within which to search for a blob.

Blob Object Properties
The following list is a summary of properties for the Blob object. The details for each property are explained in the Vision Guide 8.0 Properties and Results Reference Manual.

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

CalRobotPlacePos Calibrates the RobotPlacePos result when designing and running the program.
Caption

Assigns a caption to the Blob object.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be positioned anywhere. When another vision object is specified, the center point is set to that object's PixelX and PixelY results.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the Blob object will be applied to all (NumberFound) results of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

If SearchWinType is set to RotatedRectangle, the search window rotates based on the Angle result.

Default: False

CheckClearanceFor Sets the object to confirm a clearance.
ClearanceCondition Specifies the way of decision for a clearance.
CoordObject

Specifies the Coordinates object to which the result is copied. The copy is executed when the object is executed; if the object does not execute because of the branch function of a Decision object, the copy will not be executed.

Default: None

CurrentResult

Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.

Default: 1

Description

Sets a user description

Default: Blank

EditWindow Defines the don’t care pixels of the area to be searched.
Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

FillHoles

Specifies whether to fill the holes in a binary image.

Default: False

Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics Specifies a graphic to be displayed.
LabelBackColor

Selects the background color for an object label.

Default: Transparent

MaxArea

Defines the upper Area limit for the Blob object.

For a Blob to be found, it must have an Area result below the value set for the MaxArea property.

Default: 100,000

MinArea

Defines the lower Area limit for the Blob object.

For a Blob to be found, it must have an Area result above the value set for the MinArea property.

Default: 25

MinMaxArea Runtime only. Sets or returns both MinArea and MaxArea in one statement.
Name

Used to assign a unique name to the Blob object.

Default: Blob01

NumberToFind

Defines the number of blobs to find in the search window.

Default: 1

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Polarity

Defines the differentiation between objects and background. (Either “Dark Object on Light Background” or “Light Object on Dark Background”.)

Default: 1 - DarkOnLight

RejectOnEdge

Determines whether the part will be rejected if found on the edge of the search window.

Default: False

SearchWin

Runtime only.

Sets or returns the following parameters in one call: search window left, top, height, width, X coordinate of the center, Y coordinate of the center, inner circumference radius, and outer circumference radius.

SearchWinAngle Defines the angle of the area to be searched.
SearchWinAngleEnd Defines the end angle of the area to be searched.
SearchWinAngleStart Defines the start angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight

Defines the height of the area to be searched (Unit: pixel).

Default: 100

SearchWinLeft Defines the left most position of the area to be searched (Unit: pixel).
SearchWinPolygonPointX1 Defines the X coordinate value of the first vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY1 Defines the Y coordinate value of the first vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX2 Defines the X coordinate value of the second vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY2 Defines the Y coordinate value of the second vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX3 Defines the X coordinate value of the third vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY3 Defines the Y coordinate value of the third vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX4 Defines the X coordinate value of the fourth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY4 Defines the Y coordinate value of the fourth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX5 Defines the X coordinate value of the fifth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY5 Defines the Y coordinate value of the fifth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX6 Defines the X coordinate value of the sixth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY6 Defines the Y coordinate value of the sixth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX7 Defines the X coordinate value of the seventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY7 Defines the Y coordinate value of the seventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX8 Defines the X coordinate value of the eighth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY8 Defines the Y coordinate value of the eighth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX9 Defines the X coordinate value of the ninth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY9 Defines the Y coordinate value of the ninth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX10 Defines the X coordinate value of the tenth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY10 Defines the Y coordinate value of the tenth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX11 Defines the X coordinate value of the eleventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY11 Defines the Y coordinate value of the eleventh vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointX12 Defines the X coordinate value of the twelfth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinPolygonPointY12 Defines the Y coordinate value of the twelfth vertex of the area to be searched when SearchWinType is set to “Polygon”.
SearchWinRadiusInner Defines the circle inner radius of the area to be searched.
SearchWinRadiusOuter Defines the circle outer radius of the area to be searched.
SearchWinTop Defines the upper most position of the area to be searched (unit: pixel).
SearchWinType Defines the type of the area to be searched (i.e. Rectangle, RotatedRectangle, Circle, Arc, Polygon).
SearchWinWidth

Defines the width of the area to be searched (Unit: pixel).

Default: 100

SizeToFind

Selects which size of blobs to find.

Default: 1 - Largest

Sort

Selects the sort order used for the results of an object.

Default: 0 - None

ThresholdAuto

Specifies whether to automatically set the threshold value of the gray level that represents the feature (or object), the background, and the edges of the image.

Default: Disabled

ThresholdBlockSize

Defines the neighborhood range referenced to set the threshold when the ThresholdMethod property is set to LocalAdaptive.

Default: 1/16ROI

ThresholdColor

Defines the color assigned to pixels within the thresholds.

Default: Black

ThresholdHigh

Works with the ThresholdLow property to define the gray level regions that represent the feature (or object), the background, and the edges of the image.

The ThresholdHigh property defines the upper bound of the gray level region for the feature area of the image.

Any part of the image that falls within the gray level region defined between ThresholdLow and ThresholdHigh will be assigned a pixel weight of 1. (i.e. it is part of the feature.)

If the ThresholdAuto property is “True” and the ThresholdColor property is “White”, this property value will be set to 255 and cannot be changed.

Default: 128

ThresholdLevel

Defines the ratio of the luminance difference relative to the neighborhood area, used when the ThresholdMethod property is set to LocalAdaptive.

Default: 15%

ThresholdLow

Works with the ThresholdHigh property to define the gray level regions that represent the feature (or object), the background, and the edges of the image.

The ThresholdLow property defines the lower bound of the gray level region for the feature area of the image.

Any part of the image that falls within the gray level region defined between ThresholdLow and ThresholdHigh will be assigned a pixel weight of 1. (i.e. it is part of the feature.)

If the ThresholdAuto property is “True” and the ThresholdColor property is “Black”, this property value will be set to 0 and cannot be changed.

Default: 0

ThresholdMethod Sets the processing method for binarization.

Blob Object Results
The following list is a summary of the Blob object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the amount of found part rotation in degrees.
Area Returns the area of the blob in pixels.
CameraX Returns the X coordinate position of the found part's position in the camera coordinate system.
CameraY Returns the Y coordinate position of the found part's position in the camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the found part's position in the camera coordinate system.

ClearanceOK Returns the result of decision for a clearance.
Compactness Returns the compactness of a blob.
Extrema

Runtime only.

Returns MinX, MaxX, MinY, MaxY pixel coordinates of the blob Extrema.

Found

Returns whether the object was found.

(i.e. was a Connected Blob found which has an Area result that falls between the MinArea and MaxArea properties.)

FoundOnEdge Returns True when a Blob object is found too close to the edge of the search window.
Holes Returns the number of holes found in the blob.
MajorDiameter Returns the major diameter of the ellipse approximating the found blob.
MaxFeretDiameter Returns the maximum feret diameter of the found blob.
MaxX Returns the maximum X pixel coordinate of the blob Extrema in pixels.
MaxY Returns the maximum Y pixel coordinate of the blob Extrema in pixels.
MinorDiameter Returns the minor diameter of the ellipse approximating the found blob.
MinX Returns the minimum X pixel coordinate of the blob Extrema in pixels.
MinY Returns the minimum Y pixel coordinate of the blob Extrema in pixels.
NumberFound

Returns the number of blobs found within the search window.

(This number can be anywhere from 0 up to the number of blobs you requested the Blob object to find with the NumberToFind property.)

Passed Returns whether the object detection result was accepted.
Perimeter Returns the number of pixels along the outer edge of the found blob.
PixelX Returns the X coordinate position of the found part's position in pixels.
PixelY Returns the Y coordinate position of the found part's position in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part's position in pixels.

RobotX Returns the X coordinate of the detected object in the robot coordinate system.
RobotY Returns the Y coordinate of the detected object in the robot coordinate system.
RobotU Returns the U coordinate of the detected object in the robot coordinate system.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the detected object in the robot coordinate system.

Roughness Returns the roughness of a blob.
ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Time Returns the amount of time required to process the object (unit: millisecond).
TotalArea Returns the sum of areas of all results found.

How Blob Analysis Works
Blob analysis processing takes place in the following steps:

  1. Segmentation, which consists of
    • Thresholding
    • Connectivity analysis
  2. Blob results computation

Segmentation
In order to make measurements of an object's features, blob analysis must first determine where in the image an object is: that is, it must separate the object from everything else in the image. The process of splitting an image into objects and background is called segmentation.
The Blob object is intended for use on an image that can be segmented based on the grayscale value of each of its pixels. A simple example of this kind of segmentation is thresholding.
While the Blob object described in this chapter produces well defined results even on arbitrary gray scale images that cannot be segmented by gray-scale value, such cases are typically of limited utility because the results are influenced by the size of the window defining the image.

For whole image blob analysis, the feature of interest must be the only object in the image having a particular gray level. If another object in the image has the same gray-scale value, the image cannot be segmented successfully. Figure A through Figure D illustrate scenes that can and cannot be segmented by gray-scale value for whole image blob analysis.


Figure A: Scene that can be segmented by gray-scale value

Figure A shows the camera’s field of view (left). The scene to be processed using a Blob object falls within the search window labeled “Blob01”. After segmentation by gray-scale value, the object and the background are easily distinguishable, as shown on the right side of Figure A.


Figure B: Scene from Figure A with larger search window

Changing the size of the search window as shown in Figure B changes only the size of the background. It has no effect on the features of the blob.
Figure C and Figure D show a similar field of view for an image. However, this scene cannot be segmented by gray-scale value because there are two objects in the image which have the same gray-scale value.
For this scene, both the area of the background and the measured features of the blob vary depending on the size of the search window and the portion of the image it encloses. While the image in Figure C could be segmented by gray-scale value, enlarging the search window as shown in Figure D completely changes the segmented image. Only eliminating one object or the other from within the search window can separate these objects.


Figure C: Scene that cannot be segmented by gray-scale value


Figure D: Enlarged search window encloses two objects with the same gray scale value

You should also watch out for situations as shown in Figure E. In this example, the inner blob and part of the background have become connected. This forms one large blob that has very different features from the original center blob that we are trying to segment.


Figure E: Object and background have been connected

Thresholding
The Blob object uses thresholding to determine the weight of each pixel in an image.
Two user defined thresholds are used: ThresholdLow and ThresholdHigh. Pixels whose grayscale values are between the thresholds are assigned a pixel weight of 1 and all others 0.
The ThresholdColor property is used to define the color of the pixels for weight 1. This is the color (black or white) between the thresholds.
Based on these derived pixel weights, the Blob object segments the image into feature (pixels having weight 1) and background (pixels having weight 0). The Polarity property is used to configure the Blob object to find blobs containing either black or white pixels. When Polarity is DarkOnLight, then blobs contain black pixels. When Polarity is LightOnDark, then blobs contain white pixels.
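
For example, a Blob object can be configured for this kind of segmentation from a SPEL+ program with VSet. The sketch below is an illustration only; the sequence name "seq01" and the threshold values are assumptions:

    ' Treat gray values 0 through 128 as the feature
    VSet seq01.Blob01.ThresholdLow, 0
    VSet seq01.Blob01.ThresholdHigh, 128
    VSet seq01.Blob01.Polarity, 1    ' 1 = DarkOnLight: blobs contain dark pixels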

Using Histograms to Determine Thresholds
By using the Vision Guide 8.0 Histogram tool, the user can determine which values to use for ThresholdLow and ThresholdHigh.
As an example, consider an ideal binary image of a black blob on a white background. The figure below illustrates such an image and its histogram.


Ideal binary image and its histogram

Note that only two gray-scale values have non-zero contents in this histogram: the gray-scale value of the blob and the gray-scale value of the background.

Real images never have histograms such as this. The effects of noise from various sources (uneven printing, irregular lighting and electrical noise, for example) combine to spread out the peaks. A more realistic histogram is shown in the figure below.

Setting the threshold values using a histogram
In the histogram in the figure above, each of the peaks is clearly evident. The peak areas are in the same proportion as those in the previous ideal histogram, but each has now spread to involve more than one gray-scale value.
The less populated grayscale values between the two principal peaks represent the edges of the blob, which are neither wholly dark nor wholly light.
You should adjust the threshold values so that the blob feature will have pixel weights of 1.

Connectivity (Connected Blob Analysis)
Connectivity can be defined as analysis based on connected pixels having non-zero weight. More simply stated, connectivity is used to find groups of connected pixels, which are referred to as blobs.
Connectivity is performed automatically by the Blob object: connectivity analysis is run first, and measurements are then computed for the blob(s) that are found. Connectivity for Blob objects returns the number of blobs found based upon the NumberToFind property that is set before running the Blob object.
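
As a sketch of how multiple blob results can be retrieved from a SPEL+ program (the sequence name "seq01" and the count of 5 are assumptions for illustration):

    Integer i, count
    Real area

    VSet seq01.Blob01.NumberToFind, 5      ' Search for up to 5 blobs
    VRun seq01
    VGet seq01.Blob01.NumberFound, count   ' Number of blobs actually found
    For i = 1 To count
        VGet seq01.Blob01.Area(i), area    ' Area of the i-th blob result
        Print "Blob ", i, " area: ", area
    Next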

Blob Results Computations
Once all the other steps for blob analysis are complete, results can be computed for the blob that was found. The list of all results returned for the Blob object is shown in the section Blob Object Results previously described in this section.
A detailed explanation of every result for all vision objects is given in the following.
"Vision Guide 8.0 Properties & Result Reference"

Some of the results described in the Vision Guide 8.0 Properties and Results Reference Manual apply to many different vision objects, such as the Found, Time, or PixelX results. While the Found and Time results are fairly generic and apply in the same way across all vision objects, some results, such as the position-related results, have special meaning when applied to Blob objects. These are described below:
MinX, MinY, MaxX, MaxY Results
The MinX, MinY, MaxX and MaxY results when combined together create what is called the Extrema of the blob. The Extrema is the coordinates of the blob’s minimum enclosing rectangle. The best way to understand this is to examine the figure below.


Principle axes, center of mass, and Extrema

Robot Coordinate System, Camera Coordinate System, and Pixel Position Data
Coordinate position results for the Blob object return the position of the center of mass. Keep in mind that the center of mass is not necessarily the center of the part.
This can cause trouble for some parts if you try to use the center of mass to pick up the part. If you use the RobotX, RobotY and RobotU coordinate position results from a Blob object as a pick up position make sure that picking up the part at the center of mass is possible.
If you don’t want to pick up the part at the center of mass you will either have to calculate an offset or use another vision object such as a Correlation object to find the part and return a more useful position.

TotalArea Result
The TotalArea result is the sum of the areas for all results found. This is useful for pixel counting. By setting NumberToFind to 0, the Blob object will find all blobs with areas between MinArea and MaxArea. TotalArea will then show the total area of all results.

Angle Result Limitations for the Blob Object
Just a reminder that the Angle result for the Blob object is limited in its range.
The Angle result for a Blob object returns angle values that range from +90° to -90°. The Blob object is not able to return an angular result that ranges through an entire 360°.

KEY POINTS


It should be noted that a Blob object does not always return results as reliably as a Polar object. Because the range of the Blob object Angle result is limited and in some cases not reliable, we DO NOT recommend using a Blob object Angle result for robot guidance. Instead we strongly recommend using a Polar object to compute angular orientation of a part.

The Polar object can use the X, Y position found as the center of mass from a Blob object and then compute an angle based from the center of mass of the Blob object. This is explained in detail later in Polar Object.

Using Blob Objects
Now that we've reviewed how blob analysis works, we have set the foundation for understanding how to use Vision Guide 8.0 Blob objects. This next section will describe the steps required to use Blob objects, as listed below:

  • Create a new Blob object
  • Position and Size the search window
  • Configure the properties associated with the Blob object
  • Test the Blob object & examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button on the Vision Guide toolbar.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a New Blob Object

  1. Click the [All Tools] - [New Blob] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the Blob icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called "Blob01" because this is the first Blob object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a Blob object similar to the one shown below:

New Blob Object Layout

  1. Click the name label of the Blob object and, while holding the mouse down, drag the Blob object to the position where you would like the top left position of the search window to reside.
  2. Resize the Blob object search window as required using the search window size handles. (This means click a size handle and drag the mouse.) The search window is the area within which we will search for blobs.

CAUTION


Ambient lighting and external equipment noise may affect vision sequence image and results. A corrupt image may be acquired and the detected position could be any position in an object’s search area. Be sure to create image processing sequences with objects that use search areas that are no larger than necessary.

Step 3: Configure the Blob Object Properties
We can now set property values for the Blob object. Shown below are some of the more commonly used properties that are specific to the Blob object.
Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.

  • "Vision Guide 8.0 Properties & Result Reference"
  • Blob Object Properties List

KEY POINTS


Ambient lighting and external equipment noise may affect vision sequence image and results. A corrupt image may be acquired and the detected position could be any position in an object’s search area. Properly configure MaxArea, MinArea, RejectOnEdge and other properties to reduce the risk of detection errors.

Item Description
Name property

The default name given to a newly created Blob object is “Blobxx” where xx is a number which is used to distinguish between multiple Blob objects within the same vision sequence.

If this is the first Blob object for this vision sequence then the default name will be “Blob01”.

To change the name, click the Value field of the Name property and type a new name and press the return key. You will notice that every place where the Blob object’s name is displayed is changed to reflect the new name.

Polarity property

Select one of the following with the Polarity property:
- Detect a dark object on a light background (DarkOnLight)
- Detect a light object on a dark background (LightOnDark)

The default setting is DarkOnLight (a dark object on a light background).

If you want to change it, click the Value field of the Polarity property and you will see a drop down list with 2 choices: DarkOnLight or LightOnDark. Click the choice you want to use.

MinArea, MaxArea

These properties define the area limit for a Blob object to be considered “Found”. (i.e. the Found result returned as True)

The default range is set as 25 to 100,000 (MinArea to MaxArea) which is a very broad range. This means that most blobs will be reported as Found when you first run a new Blob object before adjusting the MinArea and MaxArea properties. Normally, you will want to modify these properties to reflect a reasonable range for the blob you are trying to find. This way if you find a blob which is outside of the range you will know it isn't the blob you wanted to find.

RejectOnEdge property Allows you to exclude the parts touching the boundary of the search window. Normally, this should be set to True.

You can test the Blob object now and then come back and set any other properties as required later.
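
If you prefer to configure these properties from a program, the SPEL+ VSet statement can be used. A minimal sketch; the sequence name "seq01" and the area limits are assumptions for illustration:

    ' Configure Blob01 from a SPEL+ program
    VSet seq01.Blob01.Polarity, 1          ' 1 = DarkOnLight
    VSet seq01.Blob01.MinArea, 200         ' Ignore blobs smaller than 200 pixels
    VSet seq01.Blob01.MaxArea, 8000        ' Ignore blobs larger than 8000 pixels
    VSet seq01.Blob01.RejectOnEdge, True   ' Exclude blobs touching the window boundary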

Step 4: Test the Blob Object and Examine the Results
To run the Blob object, click the [Run] button of the object on the execution panel. Results for the Blob object will now be displayed. The primary results to examine at this time are shown below. There are others that you will find useful in the future as well though.

Item Description
Found result

Returns whether the blob was found.

If the blob that was found does not meet the area constraints defined by the MinArea and MaxArea properties then the Found result will return as False.

Area result The area of the blob found. (unit: pixels)
Angle result

The angle at which the Blob is oriented.

This is computed from the angle of the minor axis and will be a value between +/- 90°.

Time result The amount of time it took for the Blob object to execute.
PixelX, PixelY The XY position of the center of mass of the found blob. (unit: pixels)
MinX, MinY, MaxX, MaxY Combined, these 4 values define the Extrema of the blob. (A rectangle which is formed by touching the outermost points of the blob.)

KEY POINTS


The RobotXYU, RobotX, RobotY, RobotU and CameraX, CameraY, CameraXYU results will return “No Cal” at this time. This means that no calibration was performed, so it is impossible for the vision system to calculate the coordinate results with respect to the robot coordinate system or camera coordinate system. Refer to Vision Calibration for more information.
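
The same results can be read from a SPEL+ program with VGet. A minimal sketch, assuming a sequence named "seq01" (the name is an example):

    Boolean found
    Real area, x, y

    VRun seq01
    VGet seq01.Blob01.Found, found       ' Was a blob found?
    If found Then
        VGet seq01.Blob01.Area, area     ' Blob area in pixels
        VGet seq01.Blob01.PixelX, x      ' Center of mass X (pixels)
        VGet seq01.Blob01.PixelY, y      ' Center of mass Y (pixels)
    EndIf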

Step 5: Make Adjustments to Properties and Test Again
After running the Blob object a few times, you may have encountered problems with finding a blob or just want to fine-tune some of the property settings. Some common problems and fine tuning techniques are described below:
Problems: If the Blob object returns a Found result of False, there are a few places to immediately examine.

  • The value of the Polarity property may not match the actual image. Check the value of the Polarity property and make sure that it matches the light/dark relationship between the object you want to detect and its background, as displayed in the search window.
  • Look at the Area result and compare this area with the values defined in the MinArea and MaxArea properties. If the Area result does not fall between the limits defined by the MinArea and MaxArea properties, then you may want to adjust these properties and run the Blob object again.
  • Use Histograms to examine the distribution of gray-scale values in an image. The Histogram tool is excellent for setting the ThresholdHigh and ThresholdLow properties. Histograms are described in detail in the following.
    Histogram Tool

Fine Tuning: Fine-tuning of the Blob object may be required for some applications. The primary properties associated with fine-tuning of a Blob object are described below:

  • MinArea, MaxArea - After you have run the Blob object a few times, you will become familiar with the approximate values returned for the Area result. Use these values when determining new values to enter to the MinArea and MaxArea properties. It is generally a good idea to have MinArea and MaxArea properties set to values which constrain the Found result such that only blobs which you are interested in are returned with the Found result equal to True. (This helps eliminate unwanted blobs that are different in area from the desired blob.)
  • ThresholdHigh, ThresholdLow - These properties adjust parameters for the setting the gray levels thresholds for distinguishing between what is background and what is part of the blob. These properties are best set through using the Histogram tool. See below.
    "Vision Properties and Results Reference - ThresholdHigh and ThresholdLow properties"
    Histograms are described in detail in the following.
    Histogram Tool

Once you have completed adjusting and testing the Blob object and are satisfied with the results, you are finished with creating this vision object.
Go on to creating other vision objects or configuring and testing an entire vision sequence.

Other Useful Utilities for Use with Blob Objects
At this point you may want to consider examining the [Histogram] button on the Vision Guide toolbar. Histograms are useful because they graphically represent the distribution of gray-scale values within the search window. The Vision Guide Histogram tool provides a useful mechanism for setting the gray levels for the ThresholdLow and ThresholdHigh properties which then define what is considered a part of the blob and what is considered part of the background. When you are having problems with finding a blob the Histogram feature is invaluable. The details regarding the Vision Guide Histogram usage are described in the following.
Overview

You may also want to use the [Statistics] button on the Vision Guide toolbar to examine the Blob object's results statistically. An explanation of the Vision Guide Statistics features are explained in the following.
Using Vision Guide Statistics

Using Blob Objects as a Pixel Counter
The Blob object can be used as a pixel counter. A pixel counter counts all of the pixels in an image that fall within the blob thresholds.
Follow these steps (a SPEL+ sketch of the same setup appears after the list):

  1. Create a Blob object.
  2. Set desired polarity.
  3. Set High and Low thresholds.
  4. Set NumberToFind to 0. This will cause the Blob object to find all blobs in the image.
  5. Set MinArea to 1 and MaxArea to 999999. Blobs of one pixel or more will be counted.
  6. Run the sequence.
    Use the TotalArea result to read the total number of pixels that fall within the blob thresholds.
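
In SPEL+, the same pixel-counting setup might look like the sketch below; the sequence name "seq01" is an assumption for illustration:

    Real totalArea

    VSet seq01.Blob01.Polarity, 1          ' 1 = DarkOnLight: count dark pixels
    VSet seq01.Blob01.NumberToFind, 0      ' 0 = find all blobs in the image
    VSet seq01.Blob01.MinArea, 1           ' Count blobs of one pixel or more
    VSet seq01.Blob01.MaxArea, 999999
    VRun seq01
    VGet seq01.Blob01.TotalArea, totalArea ' Total pixels within the blob thresholds
    Print "Pixel count: ", totalArea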

Edge Object

Edge Object Description
The Edge object is used to locate edges in an image.
The edge of an object in an image is a change in gray value from dark to light or light to dark. This change may span several pixels.
The Edge object finds the transition from Light to Dark or Dark to Light as defined by the Polarity property and defines that position as the edge position for a single edge. You can also search for edge pairs by changing the EdgeType property. With edge pairs, two opposing edges are searched for, and the midpoint is returned as the result. The Edge object supports multiple results, so you can specify how many single edges or edge pairs you want to find.
An Edge object can be configured to search along a line or along an arc using the SearchType property.
Edge objects with SearchType = Line are similar in shape to Line objects, with a search length. The search length of the Edge object is the length of the Edge object. One of the powerful features of the Edge object with SearchType = Line is its ability to be positioned at any angle. This allows a user to keep the Edge object vector perpendicular to the region where you want to find an edge by changing the angle of the Edge object. Normally this is done by making the Edge object relative to a Frame that moves with the region you are interested in.
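
As a rough SPEL+ sketch of running an Edge object and reading the found position (the sequence and object names are examples, and the result names follow the conventions used by the other objects in this chapter):

    Boolean found
    Real x, y

    VRun seq01
    VGet seq01.Edge01.Found, found   ' Was an edge transition found?
    If found Then
        VGet seq01.Edge01.PixelX, x  ' Edge position X (pixels)
        VGet seq01.Edge01.PixelY, y  ' Edge position Y (pixels)
    EndIf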

Edge Object Layouts
The Edge object has two different layouts.

  • Layout when SearchType is Line
    When SearchType = Line, the Edge object's search window is the line along which the Edge object searches. The Edge object searches for a transition (light to dark or dark to light) somewhere along this line in the direction indicated by the Direction of Search Indicator.

    Edge Object Line Layout

    Symbol Description
    a Step Number in Sequence
    b Object Name
    c Size & Direction Handle
    d Direction indicator (Direction of edge search)

    The Edge object can be positioned to search in any direction (not just along the vertical and horizontal directions). This is done by using the size and direction handles of the Edge object to move either end of the Edge object along the direction and distance required to find the edge you are interested in. To move the entire object, drag the label or line.

  • Layout when SearchType is Arc
    When SearchType is Arc, the Edge object's search window is the arc along which the Edge object searches. The Edge object searches for a transition (light to dark or dark to light) somewhere along this arc in the direction indicated by the Direction of Search Indicator.

    Edge Object Arc Layout

    Symbol Description
    a Step Number in Sequence
    b Object Name
    c Direction indicator (Direction of edge search)
    d Size Handle

    To change the size of the arc, drag one of the size handles on either end of the arc. To change the radius, drag the middle size handle. To move the entire object, drag the label or center point.

Edge Object Properties
The following list is a summary of properties for the Edge object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Note that when you change the SearchType property, the property set changes according to the specified type. When SearchType is Line, then the following properties are not visible in the Vision Guide window property grid:

  • AngleEnd
  • AngleStart
  • CenterPointObject
  • CenterPntObjResult
  • CenterPntOffsetX
  • CenterPntOffsetY
  • CenterPntRotOffset

When SearchType is Arc, then the following properties are not visible:

  • X1
  • Y1
  • X2
  • Y2
Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found.

If the value is small, it may result in false detection.

Default: 100

AngleEnd

Specifies the end angle of the range to perform a circular search

Default: 135

AngleStart

Specifies the start angle of the range to perform a circular search

Default: 45

CalRobotPlacePos Calibrates RobotPlacePos when designing and executing the program.
Caption

Used to assign a caption to the Edge object.

Default: Blank

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be positioned anywhere on the screen. When it is set to another vision object, the center point is set to that object's PixelX and PixelY results.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the Edge object will be applied to all (NumberFound) results of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

If SearchWinType is set to RotatedRectangle, the search window rotates based on the Angle result.

Default: False

CheckClearanceFor Specifies the object for which a clearance check is performed.
ClearanceCondition Specifies how the clearance decision is made.
ContrastTarget

Sets the desired contrast for the edge search.

Default: 0 (best contrast)

ContrastVariation

Selects the allowed contrast variation for ContrastTarget.

Default: 0

CoordObject

Specifies the Coordinates object to which the result is copied. The copy is performed when this object is executed; if the object is skipped by the branch function of a Decision object, the copy is not performed.

Default: None

CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

EdgeSort Sets the method of sorting detected edge results
EdgeThreshold

Sets the threshold below which edges are ignored.

Default: 2

EdgeType

Selects the type of edge to search for: single or pair.

Default: 1 - Single

Enabled

Specifies whether to execute the object.

Default: True

EndPntObjResult Specifies which result to use from the EndPointObject.
EndPointObject Specifies which vision object to use to define the end point of the line to be inspected.
EndPointType Specifies the type of end point used to define the end point of a line.
FailColor

Selects the color for an object when it fails.

Default: Red

Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Selects the background color for an object label.

Default: Transparent

Name

Used to assign a unique name to the Edge object.

Default: Edge01

NumberToFind

Defines the number of edges to find.

Default: 1

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Polarity

Defines whether the Edge object should search for a LightToDark or DarkToLight transition.

Default: 1 - LightToDark

Radius Defines the distance from the CenterPoint of the object to the outer most search ring of the object.
SearchType

Sets whether to use Line or Arc search.

Default: Line

ScoreWeightContrast

Sets the percentage of the score that depends on contrast.

Default: 50

ScoreWeightStrength

Sets the percentage of the score that depends on edge strength.

Default: 50

SearchWidth

Defines the width of the edge search.

Range is from 3 to 99.

Default: 3

StartPntObjResult Specifies which result to use from the StartPointObject.
StartPointObject Specifies which vision object to use to define the start point of the Line.
StartPointType Specifies the type of start point used to define the start point of a line.
StrengthTarget

Sets the desired edge strength to search for.

Default: 0

StrengthVariation

Sets the amount of variation for StrengthTarget.

Default: 0

X1 The X coordinate position of the start point of the edge.
X2 The X coordinate position of the end point of the edge.
Y1 The Y coordinate position of the start point of the edge.
Y2 The Y coordinate position of the end point of the edge.

Edge Object Results
The following list is a summary of the Edge object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
CameraX Returns the X coordinate position of the found Edge’s position in the camera coordinate system.
CameraY Returns the Y coordinate position of the found Edge’s position in the camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the found part’s position in the camera coordinate system.

ClearanceOK Returns the result of the clearance decision.
Contrast Returns the contrast of the found Edge.
Found Returns whether the object was found. (i.e. did the feature or part you are looking at have a shape score that is above the Accept property’s current setting.)
NumberFound

Returns the number of Edges found.

(The detected number can be from 0 up to the number set with the NumberToFind property.)

Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate position of the found Edge’s position in pixels.
PixelY Returns the Y coordinate position of the found Edge’s position in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found Edge position in pixels.

RobotX Returns the X coordinate position of the found Edge's position with respect to the Robot's Coordinate System.
RobotY Returns the Y coordinate position of the found Edge's position with respect to the Robot's Coordinate System.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the found Edge's position with respect to the robot's coordinate system.

Score Returns an integer value that represents the overall score of the found edge.
Strength Returns the strength of the found edge.
Time Returns the amount of time required to process the object (unit: millisecond).

Using Edge Objects
The next few sections guide you through how to create and use an Edge object.

  • How to create a new Edge object
  • Position and Size the search window
  • Configure the properties associated with the Edge object
  • Test the Edge object & examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a New Edge Object

  1. Click the [All Tools] - the [New Edge] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the Edge object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called “Edge01” because this is the first Edge object created for this sequence. (We will explain how to change the name later.)

Step 2: Positioning the Edge Object
You should now see an Edge object (a) similar to the one shown below:

New Edge Object

After creating the new Edge object, you can change whether to search along a line or an arc by setting the SearchType property. When the SearchType is Line (default), you can change the search length and rotation by clicking down on either size handle, and then dragging that end of the line to a new position. When the SearchType is Arc, you can change the arc by dragging either of the handles on each end of the arc. To change the radius, drag the middle handle.
You can also click the name label of the Edge object or anywhere along the edge line and while holding the mouse down drag the entire Edge object to a new location on the screen. When you find the position you like, release the mouse and the Edge object will stay in this new position on the screen.

Step 3: Configuring Properties for the Edge Object
We can now set property values for the Edge object. To set any of the properties simply click the associated property’s value field and then either enter a new value or if a drop down list is displayed click one of the items in the list.
Shown below are some of the more commonly used properties for the Edge object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
EdgeType (Single)

Selects the type of edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

Name property ("Edgexx") The default name given to a newly created Edge object is “Edgexx” where xx is a number which is used to distinguish between multiple Edge objects within the same vision sequence. If this is the first Edge object for this vision sequence then the default name will be “Edge01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the Edge object's name is displayed is updated to reflect the new name.
NumberToFind (1) You can search for one or more edges along the search line.
Polarity (LightToDark) If you are looking for a DarkToLight edge, change polarity.
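
These properties can also be set from a SPEL+ program before the sequence runs. The following is a minimal sketch; the sequence name "EdgeSeq" is an example:

    ' Configure Edge01 at runtime (names and values are examples)
    VSet EdgeSeq.Edge01.NumberToFind, 3       ' search for up to 3 edges along the line
    VSet EdgeSeq.Edge01.SearchWidth, 5        ' widen the edge search (range: 3 to 99)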

Step 4: Running the Edge Object and Examining the Results
To run the Edge object, simply do the following:
Click the [Run] button of the Object on the execution panel. Results for the Edge object will now be displayed. The primary results to examine at this time are:

Results Description

PixelX result

PixelY result

The XY position of the edge found along the edge search line.

(unit: pixel)

CameraX result

CameraY result

These define the XY position of the Edge object in the camera's coordinate system.

The CameraX and CameraY results will only return a value if the camera has been calibrated. If it has not then “no cal” will be returned.

RobotX result

RobotY result

These define the XY position of the Edge object in robot coordinates.

The robot can be told to move to this XY position. (No other transformation or other steps are required.) The RobotX and RobotY results will only return a value if the camera has been calibrated. If it has not then “no cal” will be displayed.
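The same results can be retrieved from a SPEL+ program. The following is a minimal sketch; it assumes a calibrated camera and a sequence named "EdgeSeq" (an example name):

    Function FindEdge
        Boolean found
        Real x, y, u

        VRun EdgeSeq                              ' acquire an image and run the sequence
        VGet EdgeSeq.Edge01.Found, found
        If found Then
            ' RobotXYU returns values only if the camera has been calibrated
            VGet EdgeSeq.Edge01.RobotXYU, found, x, y, u
            Print "Edge found at X: ", x, " Y: ", y
        EndIf
    Fend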

Polar Object

Polar Object Description
The Polar object provides a fast method of transforming an image having Cartesian coordinates to a corresponding image having polar coordinates. Polar objects are an excellent tool for determining object rotation. Because of the reliability and speed of the Polar object, we highly recommend using it when you need to calculate the angular rotation of an object.

The major characteristic to remember about Polar objects is that they are highly insensitive to object rotation about their center, but intolerant of object translation in XY space. This means that Polar objects are very good at calculating the rotation of an object, but only if that object's center position does not move.

For this reason, Vision Guide 8.0 has a CenterPoint property for the Polar object. The CenterPoint property allows the user to lock the Polar object's center point onto the position result from another object such as the Correlation and Blob objects.
This means that if you first find the XY position with a Correlation or Blob object, the Polar object can then be centered on the XY position and then used to calculate the rotation of the object.

Polar Object Layout
The Polar object layout looks quite different from the other object layouts described so far. However, its usage is quite similar. The Polar object has a circular basic layout and as such has a center and radius. The position of the Polar object can be moved by clicking on the name of the Polar object (or anywhere on the circle that makes up the outer perimeter) and then dragging the object to a new position.

Polar Object Layout

Symbol Description
a Object Name
b Search Window Size Handle
c Search Ring

KEY POINTS


The Polar object center position (defined by the CenterPoint property) can also be based upon the position of another object. This means that even though you may reposition a Polar object, once you run the object or Sequence the Polar object center position may change. For more details, see the rest of this Polar object section.

The search window for a Polar object is circular. Its outer boundary is the outer ring shown below. Its inner ring is a circle that is smaller than the outer ring and is located n pixels inside it, where n is defined by the Thickness property. As you change the Thickness property you will notice that the “thickness”, or distance between the inner and outer rings, changes. This provides a visual indicator for the area in which you are searching.
To resize the search window outer boundary for the Polar object, click one of the 4 search window size handles and drag the ring inward or outward as desired.

Polar Object Properties
The following list is a summary of properties for the Polar object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (i.e. does not satisfy PassType), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found.

Only features whose scores equal or exceed this value are considered found. If the value is small, it may result in false detection.

Default: 700

AngleOffset

Defines an offset angle which is applied to the Polar object after teaching to adjust the angle indicator graphic with the feature you are searching for.

Default: 0

CalRobotPlacePos Calibrates RobotPlacePos when designing and executing the program.
Caption

Assigns a caption to the Polar object.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be positioned anywhere on the screen. When it is set to another vision object, the center point is set to that object's PixelX and PixelY results.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the Polar object will be applied to all (NumberFound) results of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset of the center of the search window after it is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset of the center of the search window after it is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

Default: False

CenterX

Specifies the X coordinate position to be used as the center point for the Polar Search Tool.

This property is filled in automatically when the CenterPointObject property is set to another vision object.

CenterY

Specifies the Y coordinate position to be used as the center point for the Polar Search Tool.

This property is filled in automatically when the CenterPointObject property is set to another vision object.

CheckClearanceFor Specifies the object for which a clearance check is performed.
ClearanceCondition Specifies how the clearance decision is made.
Confusion

Indicates the amount of confusion expected in the image to be searched.

This is the highest shape score a feature can get that is not an instance of the feature for which you are searching.

Default: 800

CoordObject

Specifies the Coordinates object to which the result is copied. The copy is performed when this object is executed; if the object is skipped by the branch function of a Decision object, the copy is not performed.

Default: None

CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color for an object when it fails.

Default: Red

Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Selects the background color for an object label.

Default: Transparent

ModelObject

Determines which model to use for searching.

Default: Self

Name

Used to assign a unique name to the Polar object.

Default: Polar01

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Radius

Defines the distance from the CenterPoint of the object to the outer most search ring of the object.

Default: 50

SaveTeachImage Sets whether the camera image should be saved to a file when the model is taught.
ScoreMode Sets or returns the threshold for displaying the result when the object fails.
ShowModel

Allows the user to see the internal grayscale representation of a taught model.

Can be used to set don’t care pixels.

Thickness

Defines the thickness of the search ring which is the area searched for the Polar object.

The thickness is measured in pixels starting from the outer ring defined by the Radius of the Polar object.

Default: 5

Polar Object Results
The following list is a summary of the Polar object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the amount of found part rotation in degrees.
CameraX Returns the X coordinate position of the Polar object's CenterPoint position in the camera coordinate system.
CameraY Returns the Y coordinate position of the Polar object's CenterPoint position in the camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the found part's position in the camera coordinate system.

ClearanceOK Returns the result of the clearance decision.
Found Returns whether the object was found. (i.e. did the feature or part you are looking at have a shape score that is above the Accept property’s current setting.)
Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate position of the found part's position (referenced by model origin) in pixels.
PixelY Returns the Y coordinate position of the found part's position (referenced by model origin) in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part's position in pixels.

RobotX Returns the X coordinate position of the Polar object's CenterPoint position with respect to the Robot's Coordinate System.
RobotY Returns the Y coordinate position of the Polar object's CenterPoint position with respect to the Robot's Coordinate System.
RobotU Returns the U coordinate position of the Polar object's Angle result with respect to the Robot's Coordinate System.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the found part's position with respect to the robot's coordinate system.

Score Returns an integer value that represents the level at which the feature found at runtime matches the model for which the Polar object is searching.
Time Returns the amount of time required to process the object (unit: millisecond).

KEY POINTS


All results for the Polar object which return X and Y position coordinates (CameraX, CameraY, PixelX, PixelY, RobotX, RobotY, ...) receive those coordinate values from the CenterPoint property. Whatever the CenterPoint property is set to before running the Polar object is passed through as results for the Polar object. The Polar object does not calculate an X and Y center position. However, the X and Y position results are provided so that you can get the X, Y, and Angle results all from the Polar object, rather than getting the X and Y results from one vision object and the angle from another.

Understanding Polar Objects
The purpose of Polar objects is to find the amount of rotation of a specific object or pattern. This is done by first teaching a polar model, which is basically a circular ring with a specified thickness. After teaching, we can run the Polar object: the circular ring is compared against the current rotation of the part we are interested in, and an angular displacement is calculated and returned as one of the results for Polar objects.
Let's take a look at an example to make this easier to see. Consider a part that rotates around a center point and needs to be picked up and placed on a pallet by a robot.

Since this part is always oriented in a different position, it is difficult to pick up unless we can determine the rotation of the object. If this object looked something like the part shown in black & gray in Figure A, we could define a Polar Model which intersects the outer part of the object.
Notice the area within the inner and outer rings. This area defines the model of the Polar object that was taught for this part. You should be able to see that some areas within the ring are very distinctive because they have strong feature differences from the rest of the ring. These areas will be a key in finding the amount of rotation the part will have.


Figure A: Example part for use with Polar Object

Symbol Description
a Thickness
b Outer Ring
c Inner Ring

Figure B shows what the taught model looks like. It is just a ring that is mostly white with one section of black and gray.
When the Polar object is run it will search the search window for this type of pattern where one section is black and gray while the rest of the ring is white.

Figure B: The Polar Model representation of the Part from Figure A

Symbol Description
a Thickness
b Outer Ring
c Inner Ring

In Figure C we show the original part which has rotated about 90° as compared to the model. The Polar object will calculate the angular displacement between the model and the new position of the object.


Figure C: Part rotated 90° from original taught model

Symbol Description
a Thickness
b Outer Ring
c Inner Ring

Figure D shows the original Model taught at 0° on the left and the part that was rotated 90° on the right.


Figure D: Original Polar Model and rotated part's Polar representation

Symbol Description
a Thickness
b Outer Ring
c Inner Ring

The Primary Parameters Associated with Polar Objects
As you can probably derive from looking at the figures in the last section, there are four primary parameters associated with Polar objects, each of which is important for achieving the best results possible. These parameters are defined by the following properties:

  • CenterPoint property
  • Radius property
  • Thickness property
  • AngleOffset property

The CenterPoint property defines the center position of the Polar object.
As mentioned earlier, the center point of the Polar object must be aligned properly so that the center point of the taught model and the center point of the part you are searching for an angular shift within are perfectly aligned. Otherwise, the translation in XY space will cause the angular results to be inaccurate.
The Radius property defines the distance from the center of the Polar object to the outermost ring of the Polar object.
This defines the outer Search Area boundary.

The Thickness property defines the distance (in pixel units) from the outer ring to the imaginary inner ring.
This is in effect the thickness of the Search Area.
The AngleOffset property provides a mechanism for the user to set the angular value for the graphics line that is used to display the rotational position for Polar objects.
This line can be seen at 3 O’clock (the 0° position) after running a Polar object. But since you may want a graphical indicator as to the rotational position of the feature you are most interested in, the Angle Offset property allows you to reposition this graphical indicator according to your needs. (Remember that this AngleOffset property is normally set after Teaching a Model and running the Polar object.)

Determining Object Rotation
A typical application requiring object rotation determination involves an integrated circuit die to be picked up by a robot that must know the die's XY position and its rotational orientation. The die is a different shade of gray from the background, and the surface of the die contains gray level information that indicates its rotation from 0 to 360°.
For this application, a Blob object (called “Blob01”) is used to find the center of the die. (Alternatively, you could also have used a Correlation object.)
A Polar object is created using the XY position result from the Blob object as the center of the Polar object. (This is done by setting the Polar object's CenterPoint property to “Blob01”.)
The proper Radius and Thickness property values are then set for the Polar object. A gray level Polar Model is then trained providing a model of the die at 0°.
When searching for a new die at a different orientation, a Polar search window is constructed from the XY position result from the "Blob01" Blob object, and the Radius and Thickness properties.
This search window is then searched for the rotational equivalent of the model that was previously taught. The angle at which the model is found is then returned as the Angle result (in the image coordinate system) and the RobotU result (in robot coordinates).

KEY POINTS


Important note: While the RobotU result returned will be the actual rotation of the part in the Robot's coordinate system, remember that your gripper was probably not mounted with exactly 0° of rotation. It is probably off by at least a few degrees. So make sure to adjust for this in your program when moving to the part.
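
Putting this workflow together, the following is a minimal SPEL+ sketch. The sequence name "DieSeq", the object names, and the gripper offset value are examples; it assumes that Polar01's CenterPointObject is set to Blob01 and that the camera is calibrated:

    Function PickDie
        Boolean found
        Real x, y, u
        Real gripperOffset

        gripperOffset = 2.5                       ' example: measured gripper mounting rotation
        VRun DieSeq                               ' Blob01 runs first, then Polar01 centered on it
        VGet DieSeq.Polar01.Found, found
        If found Then
            VGet DieSeq.Polar01.RobotXYU, found, x, y, u
            u = u + gripperOffset                 ' compensate for the gripper mounting rotation
            Print "Part at X: ", x, " Y: ", y, " U: ", u
        EndIf
    Fend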

Adjusting the Taught Model's Angle Offset
When a Polar Model is taught, the original circular model is considered as 0° where the 0° position is at 3 o’clock. The figure below shows a model which is taught where the area of interest is at about 1 o’clock. When this model is taught, the model’s 0° position will be at 3 o’clock as with all Polar Models.
However, since we want to see a visual indication of the part's actual rotation we must use an Angle Offset to properly adjust the positioning of the polar angle indicator.
For example, our part which was taught pointing at about 1 o’clock requires an Angle Offset of about 60°. (The AngleOffset property for the Polar object should be set to 60°.)

Polar Model where AngleOffset property is set to 60°
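
The offset can also be set from a SPEL+ program. A one-line sketch; the sequence name "DieSeq" is an example:

    VSet DieSeq.Polar01.AngleOffset, 60       ' align the angle indicator with the taught feature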

Performance Issues for Polar Objects
When teaching Polar objects, the primary concern is that the Polar Model contain enough information to determine the object’s orientation.
Therefore, the desired accuracy and execution speed of the Polar object determine the size of the Polar Model used for the Polar object search.
If the Polar object's rotational result is to be accurate to 1/2 a degree, then the Polar object need be only 180 pixels wide (2° per pixel or 0.5° per Polar object resolution unit, assuming quarter pixel searching accuracy).
Accuracy of the Polar object is also dependent on the gray level information in the Model.
The thickness of the Polar Model (in pixels) is also chosen to contain enough information that the rotation signature yields reliable results even when there is minor variation in the location of the center point of the image you are searching.
Choosing a Thickness property setting of 1 pixel would mean that if the image were misplaced by one pixel, the polar transformation would send completely different pixels to the polar image.
To provide some tolerance in the source image location, a Thickness property of 5 is used so that if the source image location is off by one pixel, the corresponding polar image pixel would only be one fifth off. (This is why the minimum value for the Thickness property is 5.)
The choice of the Polar object's Radius and Thickness properties should be based on the amount of gray information in the Model, and on the desired searching speed.
Searching speed is proportional to the Radius and Thickness properties. The fastest Search times will result in a small Radius property setting with the Thickness property set to 5. However, in many cases a Thickness property set to 5 may not be enough to accurately find the Polar Model.

Using Polar Objects
The next few sections guide you through how to create and use a Polar object.

  • How to create a new Polar object
  • Position and Size the search window
  • Configure the properties associated with the Polar object
  • Test the Polar object & examine the results
  • Make adjustments to properties and test again
Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window. Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a New Polar Object

  1. Click the [All Tools] - the [Polar] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the Polar object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called “Polar01” because this is the first Polar object created for this sequence. (We will explain how to change the name later.)

Step 2: Positioning the Polar Object
You should now see a Polar object similar to the one shown below:

New Polar Object

Symbol Description
a Object Name
b Search Window Size Handle
c Search Ring

Polar objects have a circular search window. You can change the position of the entire object or change its radius.
To move the entire object, click the object name or anywhere along the outer circumference and while holding the left mouse button down drag the entire object to a new location on the screen. When you find the position you like, release the mouse and the Polar object will stay in this new position on the screen.
To change the radius, move the mouse pointer over one of the size handles, press the left mouse button down, then move the mouse to change the size of the radius.

Step 3: Configuring Properties for the Polar Object
We can now set property values for the Polar object. To set any of the properties simply click the associated property's value field and then either enter a new value or if a drop down list is displayed click one of the items in the list.
Shown below are some of the more commonly used properties for the Polar object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.

  • "Vision Guide 8.0 Properties & Result Reference"
  • Polar Object Properties in this section
Property Description
Name property The default name given to a newly created Polar object is “Polarxx” where xx is a number which is used to distinguish between multiple Polar objects within the same vision sequence. If this is the first Polar object for this vision sequence then the default name will be “Polar01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the Polar object’s name is displayed is updated to reflect the new name.
CenterPointObject property Typically you will set this property to one of the objects that occur previously in the sequence. This will determine the center point of the Polar object at runtime.
Thickness property Typically you will set this property to a value large enough to contain sufficient model information to locate the angle of the part.
AngleOffset property Typically you will set this property to position the final angle result at the desired position. For example, if you were looking for the minute hand of a watch, you would adjust AngleOffset so that the displayed angle matched the minute hand.

Step 4: Running the Polar Object and Examining the Results
To run the Polar object, simply do the following:
Click the [Run] button of the object on the execution panel. The CenterPointObject will be run first.
Results for the Polar object will now be displayed. The primary results to examine at this time are:

Results Description
Angle result The angle of the model found in degrees.

OCR Object

OCR Object Description
The OCR object (Optical Character Recognition) is used to recognize single line character strings in an image for a specific font and character size. The OCR object GUI includes a Font wizard that is used to create a font based on SEMI standards, or create a user defined font based on characters in an image or an ASCII description file. You can export the fonts you create to disk and import them into other OCR objects in the same project or other projects.

KEY POINTS


OCR objects will only work if the OCR option is installed and enabled.

OCR Object Layout
The OCR object has a search window and a model window, as shown below.

OCR Object Layout

Symbol Description
a Object Name
b Search Window
c Model Window
d Model Window Size Handles

KEY POINTS


Character strings arranged along an arc can be recognized for a specific font and character size by using an “Arc” search window.

OCR Object Properties
The following list is a summary of properties for the OCR object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference Manual"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Caption

Assigns a caption to the OCR object.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be positioned anywhere on the screen. When it is set to another vision object, the center point is set to that object's PixelX and PixelY results.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the OCR object will be applied to all (NumberFound) results of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

Default: False

CurrentResult

Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.

Grayed out when FindChar=False

Default: 1

Description

Sets a user description

Default: Blank

DictionaryMode

Specifies the dictionary mode.

Default: All

Direction

Sets the direction of the characters arranged along an arc.

Default: InsideOut

Enabled

Specifies whether to execute the object.

Default: True

ExportFont Exports the current font to disk.
FailColor

Selects the color of an object when it fails.

Default: Red

FindChar

Specifies whether to treat each character as an individual object

Default: False

Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 – All

ImportFont Runs a file dialog box from the Vision Guide GUI that allows you to import a font file.
InvalidChar

Sets or returns the character used in the Text result to represent an invalid character.

Default: “?”

LabelBackColor

Selects the background color for an object label.

Default: Transparent

ModelWin

Runtime only.

Sets or returns the model window left, top, height, width parameters in one call.

ModelWinAngle Defines the angle of the model window.
ModelWinCenterX Defines the X coordinate value of the center of the model window.
ModelWinCenterY Defines the Y coordinate value of the center of the model window.
ModelWinLeft Defines the left most position of the model window.
ModelWinHeight

Defines the height of the model window.

Default: 50

ModelWinTop Defines the top most position of the model window.
ModelWinType Defines the model window type.
ModelWinWidth

Defines the width of the model window.

Default: 50

Name

Used to assign a unique name to the OCR object.

Default: Ocr01

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Polarity

Defines the differentiation between objects and background. (Either “Dark Object on Light Background” or “Light Object on Dark Background”.)

Default: 1 - DarkOnLight

SearchWin

Runtime only.

Sets or returns the following parameters in one call: search window left, top, height, width, X coordinate of the center, Y coordinate of the center, inner circumference radius, and outer circumference radius.

SearchWinAngleEnd Defines the end angle of the area to be searched.
SearchWinAngleStart Defines the start angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight

Defines the height of the area to be searched (unit: pixel).

Default: 100

SearchWinLeft Defines the left most position of the area to be searched (unit: pixel).
SearchWinRadiusInner Defines the circle inner radius of the area to be searched.
SearchWinRadiusOuter Defines the circle outer radius of the area to be searched.
SearchWinTop Defines the upper most position of the area to be searched (unit: pixel).
SearchWinType Defines the type of the area to be searched (Rectangle, Arc)
SearchWinWidth

Defines the width of the area to be searched (unit: pixel).

Default: 100

OCR Object Results
The following list is a summary of the OCR object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference Manual"

Results Description
Angle Returns the amount of found character rotation in degrees.
Found Returns whether the object was found.
Passed Returns whether the object detection result was accepted.
CameraX Returns the X coordinate position of the found character in the camera coordinate system.
CameraY Returns the Y coordinate position of the found character in the camera coordinate system.
NumberFound Returns the number of characters found.
MaxX Returns the maximum X pixel coordinate of the character Extrema in pixels.
MaxY Returns the maximum Y pixel coordinate of the character Extrema in pixels.
MinX Returns the minimum X pixel coordinate of the character Extrema in pixels.
MinY Returns the minimum Y pixel coordinate of the character Extrema in pixels.
PixelX Returns the X coordinate position of the found character’s position in pixels.
PixelY Returns the Y coordinate position of the found character’s position in pixels.
RobotX Returns the X coordinate position of the found character in the robot coordinate system.
RobotY Returns the Y coordinate position of the found character in the robot coordinate system.
ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Text Returns the text found in a search operation.
Time Returns the amount of time required to process the object (unit: millisecond).

Using OCR Objects
This next section will describe the steps required to use OCR objects as listed below:

  • Creating a New OCR Object
  • Create or import a font for the new object.
  • Calibrate the font (if required).
  • Configure properties associated with the OCR object
  • Test the OCR object and examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button. You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window and clicking on the OCR object.

Creating a New OCR Object

  1. Click the [All Tools] - the [OCR] button on the Vision Guide toolbar.
  2. You will see an OCR icon appear above the OCR object button.
  3. Click the OCR icon and drag it to the image display of the Vision Guide window.
  4. Notice that a name for the object is automatically created. In the example, it is called "Ocr01" because this is the first OCR object created for this sequence. (We will explain how to change the name later.)

    OCR Object Layout
Symbol Description
a Step Number in Sequence
b Object Name
c Search Window
d Model Window

Create a font
To use OCR objects, a font needs to be created according to the image to be scanned. You can import a font from another project by using the ImportFont property; when you import a font, you do not need to create one.
Common Western and Japanese fonts are also embedded in the system. A font may still need to be created depending on the size of the input image.
A user-defined font can be created from an image file.

Follow these steps to create a font:

  1. Click the OCR object on the flow chart. Adjust the model window to the character.
  2. Click the [Teach] button. The [Teach OCR Font] dialog box will be displayed.
  3. To register a new character, click the [New Character] button. Then, click the [Teach] button. You can register multiple images to one character.
    You can register Japanese and alphanumeric characters, and symbols. To delete the registered character from the font, click the [Existing Character] button and select the character to be deleted from the dropdown list. Then, click the [Delete] button.
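
Once a font has been taught, the recognized string can be read from a SPEL+ program. The following is a minimal sketch; the sequence name "OcrSeq" is an example:

    Function ReadLabel
        Boolean found
        String text$

        VRun OcrSeq                               ' acquire an image and run the sequence
        VGet OcrSeq.Ocr01.Found, found
        If found Then
            VGet OcrSeq.Ocr01.Text, text$         ' the recognized character string
            Print "Recognized text: ", text$
        EndIf
    Fend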

CodeReader Object

CodeReader Object Description
The CodeReader object is used to read bar codes or 2 dimensional codes.
Codes supported are:

Code Specification
EAN13 Digits 0 to 9, fixed length.
Code39 0 to 9, A to Z, . / + - % $ and space, variable length.
DataMatrix 2 dimensional code *1
Interleaved 2 of 5 Digits 0 to 9, fixed length.
Code128 Full ASCII, variable length.
Codabar Digits 0 to 9, $, -, :, /, ., +.
PDF417 2 dimensional code
QR 2 dimensional code
EAN8 Digits 0 to 9, fixed length.
UPC A Digits 0 to 9, fixed length.
UPC E Digits 0 to 9, fixed length.

*1: Supports only ECC200 codes.

CodeReader Object Layout
The CodeReader object only has a search window, as shown below.

CodeReader Object Layout

Symbol Description
a Object Name
b Search Window
c Size Handles

CodeReader Object Properties
The following list is a summary of properties for the CodeReader object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference Manual"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Caption

Used to assign a caption to the CodeReader object.

Default: Blank

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be positioned anywhere on the screen. When it is set to another vision object, the center point is set to that object's PixelX and PixelY results.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject property. If All is specified, the CodeReader object will be applied to all (NumberFound) results of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

Default: False

CodeType

Sets or returns which type of bar code or two-dimensional code to search for with the CodeReader object.

Default: 0 - Auto

CoordObject

Specifies the Coordinates object to which the result is copied. The copy is performed when this object is executed; if the object is skipped by the branch function of a Decision object, the copy is not performed.

Default: None

CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

Frame

Defines the current object searching position with respect to the specified frame.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Sets the background color for the object label.

Default: Transparent

Name

Used to assign a unique name to the CodeReader object.

Default: Code01

NumberToFind

Defines the number of codes to find.

Max: 8

Default: 1

Orientation Defines the direction of a barcode
PassColor

Selects the color for an object when it is passed.

Default: LightGreen

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

SearchWin Runtime only. Sets or returns the search window left, top, height, width parameters in one call.
SearchWinHeight

Defines the height of the area to be searched (unit: pixel).

Default: 100

SearchWinLeft Defines the left most position of the area to be searched (unit: pixel).
SearchWinTop Defines the upper most position of the area to be searched (unit: pixel).
SearchWinWidth

Defines the width of the area to be searched (unit: pixel).

Default: 100

CodeReader Object Results
The following list is a summary of the CodeReader object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference Manual"

Results Description
Angle Returns the angle for inclination of the detected code.
CameraX Returns the X coordinate of the detected object in the camera coordinate system. (Unit: mm)
CameraY Returns the Y coordinate of the detected object in the camera coordinate system. (Unit: mm)
Found Returns whether the object was found.
FoundCodeType Returns the detected code type.
NumberFound

Returns the number of codes detected.

(The detected number can be from 0 up to the number set with the NumberToFind property.)

Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate of the detected object in pixels.
PixelY Returns the Y coordinate of the detected object in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part's position in pixels.

RobotX Returns the X coordinate of the detected object in the robot coordinate system.
RobotY Returns the Y coordinate of the detected object in the robot coordinate system.
RobotU Returns the U coordinate of the detected object in the robot coordinate system.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the detected object in the robot coordinate system.

ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Text Returns the text found in a search operation.
Time Returns the amount of time required to process the object (unit: millisecond).

Using CodeReader Objects
This next section will describe the steps required to use CodeReader objects as listed below:

  • Create a new CodeReader object
  • Position and Size the search window
  • Configure properties associated with the CodeReader object
  • Test the CodeReader object and examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button. You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window and clicking on the CodeReader object.

Step 1: Create a New CodeReader Object

  1. Click the [All Tools] - the [CodeReader] button on the Vision Guide toolbar.
  2. You will see a CodeReader icon appear above the CodeReader object button.
  3. Click the CodeReader icon and drag it to the image display of the Vision Guide window.
  4. Notice that a name for the object is automatically created. In the example, it is called “Code01” because this is the first CodeReader object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a CodeReader object similar to the one shown below:

New CodeReader Object

Symbol Description
a Step Number in Sequence
b Object Name
c Search Window
  1. Click the name label of the CodeReader object and while holding the mouse down drag the CodeReader object to the position where you would like the top left position of the search window to reside.
  2. Resize the CodeReader object search window as required using the search window size handles. (This means click a size handle and drag the mouse.) (The search window is the area within which we will search.)

KEY POINTS


It is important to allow blank space on both sides of a bar code (known as the quiet zone) or the search will fail. For a two dimensional code, blank space of at least one cell is necessary around the code.
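
Once the search window is positioned, the decoded string can be read from a SPEL+ program. The following is a minimal sketch; the sequence name "CodeSeq" is an example:

    Function ReadCode
        Boolean found
        String text$

        VRun CodeSeq                              ' acquire an image and run the sequence
        VGet CodeSeq.Code01.Found, found
        If found Then
            VGet CodeSeq.Code01.Text, text$       ' the decoded code contents
            Print "Decoded: ", text$
        EndIf
    Fend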

ColorMatch Object

ColorMatch Object Description
The ColorMatch object is used to detect colors that match one or more color models.
ColorMatch Object Layout
The ColorMatch object has a rectangle, rotated rectangle, or circular layout and as such has a center point and radius. The position of the ColorMatch object can be moved by clicking on the name of the object or anywhere on the outer perimeter of the search window and then dragging the object to a new position.

ColorMatch Object Layout

Symbol Description
a Object Name
b Window Size Handle

KEY POINTS


The ColorMatch object center position (defined by the CenterPoint property) can also be based upon the position of another object. This means that even though you may reposition a ColorMatch object, once you run the object or Sequence the object center position may change.

The search area for a ColorMatch object can be a rectangle, rotated rectangle, or circle.
To resize the search window outer boundary for the ColorMatch object where the SearchWinType is Circle, click one of the search window size handles and drag the ring inward or outward as desired to change the radius.

ColorMatch Properties
The following list is a summary of properties for the ColorMatch object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference Manual"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the score that a feature must equal or exceed to be considered found.

If the value is small, it may result in false detection.

Default: 700

Caption

Used to assign a caption to the ColorMatch object.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be positioned anywhere on the screen. When it is set to another vision object, the center point is set to that object's PixelX and PixelY results.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the ColorMatch object will be applied to all (NumberFound) results of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

If SearchWinType is set to RotatedRectangle, the search window rotates based on the Angle result.

Default: False

CenterX Specifies the X coordinate position to be used as the center point for the object. This property is filled in automatically when the CenterPoint property is set to another vision object.
CenterY Specifies the Y coordinate position to be used as the center point for the ColorMatch object. This property is filled in automatically when the CenterPoint property is set to another vision object.
ColorMode

Sets which color space (RGB/HSV) to use.

Default: RGB

CoordObject

Specifies the Coordinates object to which the result is copied. The copy is performed when this object is executed; if the object is skipped by the branch function of a Decision object, the copy is not performed.

Default: None

CurrentModel

Runtime only. This specifies which model to use for the ModelColor property and also for VTeach. CurrentModel values range from 1 to NumberOfModels.

Default: 1

CurrentResult

Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.

Default: 1

Description

Sets a user description

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

Frame

Defines the current object searching position with respect to the specified frame. (Allows the object to be positioned with respect to a frame.)

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Sets the background color for the object label.

Default: Transparent

ModelColor

Runtime only. This property is used at runtime to teach a model manually by setting the RGB color value directly.

Default: RGB(0, 0, 0)

ModelColorTol

Runtime only. This property is used at runtime to set the color tolerance for a model color.

Default: 10 (ColorMode = RGB); 0, 0, 50 (ColorMode = HSV)

ModelName Runtime only. This property is used at runtime to set the name of the current model.
ModelObject

Determines which model to use for searching.

Default: Self

Name

Used to assign a unique name to the ColorMatch object.

Default: ColorMatch01

NumberOfModels

Runtime only. This is the number of color models used. At runtime, you can set the NumberOfModels, then use CurrentModel and VTeach to teach each color model.

Default: 1

NumberToFind

Defines the number of features to find in the current search window.

Default: 1

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Radius

Defines the distance from the CenterPoint of the object to the outermost search ring of the object.

Default: 50

SearchWin

Runtime only.

Sets or returns the search window left, top, height, width parameters in one call.

SearchWinAngle Defines the angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight Defines the height of the area to be searched (unit: pixel).
SearchWinLeft Defines the left most position of the area to be searched (unit: pixel).
SearchWinTop Defines the upper most position of the area to be searched (unit: pixel).
SearchWinType Defines the type of the area to be searched (i.e. Rectangle, RotatedRectangle, Circle).
SearchWinWidth Defines the width of the area to be searched (unit: pixel).

ColorMatch Object Results
The following list is a summary of the ColorMatch object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference Manual"

Results Description
CameraX Returns the X coordinate position of the found part’s position (referenced by model origin) in the camera coordinate system. Values are in millimeters.
CameraY Returns the Y coordinate position of the found part’s position (referenced by model origin) in the camera coordinate system. Values are in millimeters.
CameraXYU Runtime only. Returns the CameraX, CameraY, and CameraU coordinates of the found part’s position in the camera coordinate system. (Units: mm)
ColorIndex Returns the index of the found color model.
ColorName Returns the name of the found color model.
ColorValue Returns the RGB or HSV value of the found color, depending on the ColorMode setting.
Found Returns whether a color was matched from one of the color models.
NumberFound

Returns the number of objects found.

(The detected number can be from 0 up to the number set with the NumberToFind property.)

Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate position of the found part’s position in pixels.
PixelY Returns the Y coordinate position of the found part’s position in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part's position in pixels.

RobotX Returns the X coordinate of the detected object in the robot coordinate system.
RobotY Returns the Y coordinate of the detected object in the robot coordinate system.
RobotXYU Runtime only. Returns the RobotX, RobotY, and RobotU coordinates of the detected object in the robot coordinate system.
Score

Represents the degree to which the color of the detected object matches the model color when the object is executed.

Returns an INTEGER value from 0 to 1000.

ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Time Returns the amount of time required to process the object (unit: millisecond).

Using ColorMatch Objects
Now that we have reviewed how the ColorMatch object works, we have set the foundation for understanding how to use Vision Guide ColorMatch objects. The next section describes the steps required to use ColorMatch objects, as listed below:

  • Create a new ColorMatch object
  • Position and size the search window
  • Configure properties associated with the ColorMatch object
  • Teach one or more color models
  • Test the ColorMatch object and examine the results
  • Make adjustments to properties and test again
  • Working with Multiple Results from a single ColorMatch object

Step 1: Create a new ColorMatch object
Click the [All Tools] - [ColorMatch] button on the Vision Guide toolbar.
The mouse cursor will change to a ColorMatch icon.
Move the mouse cursor over the image display of the Vision Guide window and click the left mouse button to place the ColorMatch object on the image display.
Notice that a name for the object is automatically created. In the example, it is called “ColorMatch01” because this is the first ColorMatch object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a ColorMatch object similar to the one shown below:

New ColorMatch object layout

Symbol Description
a Object Name
b Window Size Handle
  1. Click the name label of the ColorMatch object and, while holding the mouse down, drag the object to the position where you would like the search window to reside.
  2. Resize the ColorMatch object window as required using the window size handles. (This means click a size handle and drag the mouse.) The window is the area within which we will match colors.

Step 3: Teach Models for the ColorMatch Object
Before you can use the ColorMatch object to detect colors, you must first teach one or more color models. When a color model is taught, the average color is determined from all of the pixels inside the ColorMatch object window. You can name each model.

  1. Make sure that the ColorMatch object is the currently selected object. See the flow chart or the object tree to check which object you are currently working on, or look at the image display and see which object is highlighted in magenta.
  2. Click the [Teach] button on the execution panel. A window will be opened as shown below:
    While the teach window is displayed, you can change the position of the ColorMatch object as needed to teach each color model.
  3. Position the ColorMatch object over the color you want to teach. Try to fill the entire window with the color.
  4. Click the Add button to add a new model.
  5. Select the model you want to teach by clicking in any field of the desired model's row.
  6. Click the Teach button to teach the color.
  7. Enter a meaningful name for the color. This name is used as the ColorName result.
  8. The Tolerance default value when ColorMode is RGB is 10, and the tolerance default values when ColorMode is HSV are 0, 0, 50. You can change the tolerance values to aid in matching colors with small variations or where lighting is not consistent.
  9. To add more models, repeat steps 3 - 8.

KEY POINTS


Although in most cases you will teach the color model using the ColorMatch object window, you can also enter the RGB (or HSV) values manually in the teach window.
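
For reference, this runtime teaching flow can also be driven from a SPEL+ program using the NumberOfModels, CurrentModel, and VTeach features described above. The sketch below is a minimal example only; the sequence and object names (seq01, ColorMch01) and the search window positions are placeholder assumptions.

    Function TeachColors
        ' Minimal sketch: teach two color models at runtime, then run
        ' the sequence and read back which model matched.
        ' seq01, ColorMch01, and the window positions are placeholders.
        Boolean found
        Integer idx

        VSet seq01.ColorMch01.NumberOfModels, 2

        ' Fill the window with the first color sample and teach model 1
        VSet seq01.ColorMch01.SearchWinCenterX, 320
        VSet seq01.ColorMch01.SearchWinCenterY, 240
        VSet seq01.ColorMch01.CurrentModel, 1
        VTeach seq01.ColorMch01

        ' Move the window over the second color sample and teach model 2
        VSet seq01.ColorMch01.SearchWinCenterX, 480
        VSet seq01.ColorMch01.CurrentModel, 2
        VTeach seq01.ColorMch01

        ' Run the sequence and report which color model matched
        VRun seq01
        VGet seq01.ColorMch01.Found, found
        If found Then
            VGet seq01.ColorMch01.ColorIndex, idx
            Print "Matched color model: ", idx
        EndIf
    Fend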

LineFinder Object

LineFinder Object Description
LineFinder objects are used to identify the position of a line in the image.
LineFinder objects process multiple Edge objects automatically to identify the edge position and obtain the line identified from each edge position.
The edge of an object in an image is a change in gray value from dark to light or light to dark. This change may span several pixels.
The LineFinder object finds the transition from Light to Dark or Dark to Light as defined by the Polarity property and defines that position as the edge position for a single edge. You can also search for edge pairs by changing the EdgeType property. With edge pairs, two opposing edges are searched for, and the midpoint is returned as the result.

LineFinder Object Layout
The LineFinder object has a different look than the Correlation and Blob objects. The LineFinder object’s search window contains the edge search lines used to locate the line. Each edge search looks for a transition (light to dark or dark to light) somewhere along its search line in the direction indicated by the Direction Indicator.


LineFinder Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Size & Direction Handle
d Direction indicator (Direction of edge search)

The LineFinder object can be positioned to search in any direction (not just along the vertical and horizontal directions). As with SearchWinType=RotatedRectangle of the Blob object, this is done by using the size and direction handles of the LineFinder object to move it along the direction (and by the distance) required to find the edge you are interested in.

LineFinder Object Properties
The following list is a summary of properties for the LineFinder object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found. If the value is small, it may result in false detection.

Default: 100

AngleBase

Sets the reference angle.

Default: 0

AngleMode

Sets the angle output format.

Default: 1 - Default

AngleStart Specifies the center of angle search.
Caption

Used to assign a caption to the LineFinder object.

Default: Empty String

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the LineFinder object is applied to all of the results (NumberFound) of the specified vision object.

CenterPntOffsetX Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.
CenterPntOffsetY Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.
CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

If SearchWinType is set to RotatedRectangle, the search window rotates based on the Angle result.

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be positioned anywhere on the screen. When another vision object is specified, the center point is set to the PixelX and PixelY results of that object.

CheckClearanceFor Sets the object to confirm a clearance.
ClearanceCondition Specifies how the clearance decision is made.
ContrastTarget

Sets the desired contrast for the edge search.

Default: 0 (best contrast)

ContrastVariation

Selects the allowed contrast variation for ContrastTarget.

Default: 0

CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

Directed

Specifies whether to set the angle using the line direction.

Default: True

EdgeSort Sets the method of sorting detected edge results.
EdgeThreshold

Sets the threshold at which edges below this value are ignored.

Default: 2

EdgeType

Select the type of edge to search for: single or pair.

Default: 1 - Single

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color for an object when it is failed.

Default: Red

FittingThreshold Specifies the edge results to use for line fitting.
Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Selects the background color for an object label.

Default: Transparent

Name

Used to assign a unique name to the LineFinder object.

Default: LineFind01

NumberOfEdges

Defines the number of edges to find.

Default: 5

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Polarity

Defines whether the LineFinder object should search for a LightToDark or DarkToLight transition.

Default: 1 - LightToDark

ScoreWeightContrast

Sets the percentage of the score that depends on contrast.

Default: 50

ScoreWeightStrength

Sets the percentage of the score that depends on edge strength.

Default: 50

SearchWidth

Defines the width of the edge search. Range is from 3 to 99.

Default: 3

SearchWin

Runtime only.

Sets or returns the search window left, top, height, width parameters in one call.

SearchWinAngle Defines the angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight Defines the height of the area to be searched (unit: pixel).
SearchWinLeft Defines the left most position of the area to be searched (unit: pixel).
SearchWinTop Defines the upper most position of the area to be searched (unit: pixel).
SearchWinWidth Defines the width of the area to be searched (unit: pixel).
StrengthTarget

Sets the desired edge strength to search for.

Default: 0

StrengthVariation

Sets the amount of variation for StrengthTarget.

Default: 0

X1 The X coordinate position of the start point of the edge.
X2 The X coordinate position of the end point of the edge.
Y1 The Y coordinate position of the start point of the edge.
Y2 The Y coordinate position of the end point of the edge.

LineFinder Object Results
The following list is a summary of the LineFinder object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the angle of the detected edge line in the image coordinate system.
CameraX1 Returns the X coordinate for the start point of the detected edge line in the camera coordinate system.
CameraY1 Returns the Y coordinate for the start point of the detected edge line in the camera coordinate system.
CameraX2 Returns the X coordinate for the end point of the detected edge line in the camera coordinate system.
CameraY2 Returns the Y coordinate for the end point of the detected edge line in the camera coordinate system.
ClearanceOK Returns the result of decision for a clearance.
Contrast Returns the average contrast of the detected edges.
EdgeCameraXYU Returns the CameraX, CameraY, and Angle coordinate position of each edge found during the search.
EdgePixelXYU Returns the PixelX, PixelY, and Angle coordinate position of each edge found during the search.
EdgeRobotXYU Returns the RobotX, RobotY, and Angle coordinate position of each edge found during the search.
FitError Returns the distance between each edge point and the Line detected as the root mean square (RMS).
Found Returns whether the object was found. (i.e. did the feature or part you are looking at have a shape score that is above the Accept property’s current setting.)
Length Returns the length of the detected edge line in millimeters.
NumberFound

Returns the number of lines found.

(The detected number can be from 0 up to the number set with the NumberToFind property.)

Passed Returns whether the object detection result was accepted.
PixelLength Returns the length of the detected edge line in pixels.
PixelLine

Runtime only.

Returns the four line coordinates X1, Y1, X2, Y2 in pixels.

PixelX1 Returns the X coordinate position of the start point of the detected edge line in the image coordinate system.
PixelY1 Returns the Y coordinate position of the start point of the detected edge line in the image coordinate system.
PixelX2 Returns the X coordinate position of the end point of the detected edge line in the image coordinate system.
PixelY2 Returns the Y coordinate position of the end point of the detected edge line in the image coordinate system.
RobotX1 Returns the X coordinate position of the start point of the detected edge line in the Robot coordinate system.
RobotY1 Returns the Y coordinate position of the start point of the detected edge line in the Robot coordinate system.
RobotX2 Returns the X coordinate position of the end point of the detected edge line in the Robot coordinate system.
RobotY2 Returns the Y coordinate position of the end point of the detected edge line in the Robot coordinate system.
RobotU Returns the angle of the detected edge line in the Robot coordinate system.
Strength Returns the average strength of the detected edges.
Time Returns the amount of time required to process the object (unit: millisecond).

Using LineFinder Objects
The next few sections guide you through how to create and use a LineFinder object.

  • How to create a new LineFinder object
  • Position and Size the search window
  • Configure the properties associated with the LineFinder object
  • Test the LineFinder object & examine the results
  • Make adjustments to properties and test again
    Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
    You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
    Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
    Vision Sequences

Step 1: Create a New LineFinder Object

  1. Click the [All Tools] - [New LineFinder] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the LineFinder object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called “LineFind01” because this is the first LineFinder object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a LineFinder object similar to the one shown below:

New LineFinder Object

Symbol Description
a Step Number in Sequence
b Object Name
c Size & Direction Handle
d Direction indicator (Direction of edge search)
  1. Click the name label of the LineFinder object and, while holding the mouse down, drag the LineFinder object to the position where you would like the top left position of the search window to reside.
  2. Resize the LineFinder object search window as required using the search window size handles.

Step 3: Configuring Properties for the LineFinder Object
We can now set property values for the LineFinder object. To set any of the properties, simply click the associated property's value field and then either enter a new value or, if a drop-down list is displayed, click one of the items in the list.
Shown below are some of the more commonly used properties for the LineFinder object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"
Use the AngleMode property to set the angle output format. For more details, refer to Line Object.

Property Description
EdgeType (Single)

Select the type of the edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

Name property (“LineFindxx”) The default name given to a newly created LineFinder object is “LineFindxx” where xx is a number which is used to distinguish between multiple LineFinder objects within the same vision sequence. If this is the first LineFinder object for this vision sequence then the default name will be “LineFind01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the LineFinder object's name is displayed is updated to reflect the new name.
NumberOfEdges (5) You can search for one or more edges along the search line.
Polarity (LightToDark) If you are looking for a DarkToLight edge, change polarity.

Step 4: Running the LineFinder Object and Examining the Results
To run the LineFinder object, simply do the following:
Click the [Run] button of the object on the execution panel. Results for the LineFinder object will now be displayed. The primary results to examine at this time are:

Results Description
Angle Result Returns the angle of the detected edge line in the Image coordinate system.
Length Result

Returns the length of the detected edge line in the Camera coordinate system.

Unit: mm

PixelLength Result

Returns the length of the detected edge line in the Image coordinate system.

Unit: pixel

PixelX1 Result

PixelY1 Result

Returns the XY coordinate position for the start point of the detected edge line in the Image coordinate system.

PixelX2 Result

PixelY2 Result

Returns the XY coordinate position for the end point of the detected edge line in the Image coordinate system.

CameraX1 Result

CameraY1 Result

Returns the XY coordinate position for the start point of the detected edge line in the Camera coordinate system.

If the calibration is not performed, “no cal” will be returned.

CameraX2 Result

CameraY2 Result

Returns the XY coordinate position for the end point of the detected edge line in the Camera coordinate system.

If the calibration is not performed, “no cal” will be returned.

RobotX1 Result

RobotY1 Result

Returns the XY coordinate position for the start point of the detected edge line in the Robot coordinate system.

If the calibration is not performed, “no cal” will be returned.

RobotX2 Result

RobotY2 Result

Returns the XY coordinate position for the end point of the detected edge line in the Robot coordinate system.

If the calibration is not performed, “no cal” will be returned.

RobotU Result Returns the angle of the detected edge line in the Robot coordinate system.
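
For reference, the same run-and-examine cycle can be performed from a SPEL+ program. This is a minimal sketch only; the sequence and object names (seq01, LineFind01) are placeholder assumptions.

    Function CheckLine
        ' Minimal sketch: run the sequence and read the primary
        ' LineFinder results. seq01 and LineFind01 are placeholders.
        Boolean found
        Real ang, lenPix

        VRun seq01
        VGet seq01.LineFind01.Found, found
        If found Then
            VGet seq01.LineFind01.Angle, ang
            VGet seq01.LineFind01.PixelLength, lenPix
            Print "Line angle: ", ang, ", length: ", lenPix, " pixels"
        EndIf
    Fend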

LineInspector Object

LineInspector Object Description
LineInspector objects are used to inspect a line in the image.
LineInspector objects process multiple Edge objects automatically to identify the defects in the line being inspected.
The edge of an object in an image is a change in gray value from dark to light or light to dark. This change may span several pixels.
The LineInspector object finds the transition from Light to Dark or Dark to Light as defined by the Polarity property and defines that position as the edge position for a single edge. You can also search for edge pairs by changing the EdgeType property. With edge pairs, two opposing edges are searched for, and the midpoint is returned as the result.

LineInspector Object Layout
The LineInspector object looks similar to the LineFinder tool. The LineInspector object’s search window contains several edge search lines. The LineInspector object searches for a transition (light to dark or dark to light) somewhere along each search line in the direction indicated by the direction indicators. The data from the edge searches is used to determine defects along the line.

LineInspector Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Edge search lines
d Line position, size & offset handles
e Line being inspected
f Direction indicator (Direction of edge search)

The LineInspector object can be positioned and rotated to search in any direction (not just along the vertical and horizontal directions).

To rotate, drag one of the outer size handles in a clockwise or counter-clockwise direction. The LineFinder result can be used as the line to be inspected. In this case, specify the LineFinder object in the LineObject property.

Symbol Description
a Outer size handle (X1, Y1)
b Outer size handle (X2, Y2)

To change the width of the line to be inspected, drag one of the outer size handles away from or toward the center of the line.
To change the size of the edge searches, drag one of the inner size handles.

LineInspector Search
The image below shows part of an object with a defect.

The LineInspector finds the defect as shown below. Note that the distance of each edge search position from the line being inspected must exceed either the DefectLevelThreshPos or DefectLevelThreshNeg property value for a defect to be found. Also, the defect area must be greater than MinArea and less than MaxArea.

Symbol Description
a Defect
b DefectLevelThreshNeg
c Line being inspected
d DefectLevelThreshPos
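
For reference, these defect criteria can also be set from a SPEL+ program with VSet. This is a minimal sketch only; the sequence and object names (seq01, LineInsp01) and the threshold values are placeholder assumptions chosen to illustrate the properties involved.

    Function SetupLineInsp
        ' Minimal sketch: configure the defect criteria described above.
        ' seq01, LineInsp01, and all values are placeholders.
        ' A deviation must exceed DefectLevelThreshPos (above the line)
        ' or DefectLevelThreshNeg (below the line), and the defect area
        ' must lie between MinArea and MaxArea, for a defect to be found.
        VSet seq01.LineInsp01.DefectLevelThreshPos, 3
        VSet seq01.LineInsp01.DefectLevelThreshNeg, 3
        VSet seq01.LineInsp01.MinArea, 25
        VSet seq01.LineInsp01.MaxArea, 100000
    Fend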

LineInspector Object Properties
The following list is a summary of properties for the LineInspector object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found. If the value is small, it may result in false detection.

Default: 100

CalRobotPlacePos Calibrates RobotPlacePos when designing and running the program.
Caption

Used to assign a caption to the LineInspector object.

Default: Empty String

ContrastTarget

Sets the desired contrast for the edge search.

Default: 0 (best contrast)

ContrastVariation

Selects the allowed contrast variation for ContrastTarget.

Default: 0

CoordObject

Specifies the Coordinates object to which the result is copied. The copy is executed when the object is executed; if the object is not executed because of the branch function of a Decision object, the copy is not executed.

Default: None

CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
DefectAreaExtended

Sets whether to extend the defect area interpolation.

Default: False

DefectLevelThreshNeg

Sets defect threshold below the line.

Default: 2

DefectLevelThreshPos

Sets defect threshold above the line.

Default: 2

Description

Sets a user description

Default: Blank

EdgeSort Sets the method of sorting detected edge results.
EdgeThreshold

Sets the threshold at which edges below this value are ignored.

Default: 2

EdgeType

Select the type of edge to search for: single or pair.

Default: 1 - Single

Enabled

Specifies whether to execute the object.

Default: True

EndPntObjResult

Specifies which result to use from the EndPointObject.

Default: 1

EndPointObject

Specifies which vision object to use to define the end point of the line to be inspected.

Default: Screen

EndPointType

Defines the type of end point used to define the end point of a line.

Default: 0 - Point

FailColor

Selects the color of an object when it is not accepted.

Default: Red

Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

InspectEndOffset

Sets the offset from the end of the line where the inspection should stop.

Default: 15

InspectStartOffset

Sets the offset from the start of the line where the inspection should start.

Default: 15

LabelBackColor

Selects the background color for an object label.

Default: Transparent

LineObject

Defines an object that will search for the line before it is inspected.

Default: None

LineObjResult

Defines which result to use from the LineObject.

Default: 1

MaxArea

Defines the upper Area limit for a defect.

For a defect to be found it must have an Area result below the value set for the MaxArea property.

Default: 100,000

MinArea

Defines the lower Area limit for a defect.

For a defect to be found it must have an Area result above the value set for the MinArea property.

Default: 25

MissingEdgeType

Defines how to handle a missing edge.

Default: Interpolate

Name

Used to assign a unique name to the LineInspector object.

Default: LineInsp01

NumberOfEdges

Defines the number of edges to use for the inspection.

Default: 20

NumberToFind

Defines the number of defects to find in the search area.

Default: 1

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: AllNotFound

Polarity

Defines whether the LineInspector object should search for a LightToDark or DarkToLight transition.

Default: 1 - LightToDark

ScoreWeightContrast

Sets the percentage of the score that depends on contrast.

Default: 50

ScoreWeightStrength

Sets the percentage of the score that depends on edge strength.

Default: 50

SearchWidth

Defines the width of the edge search. Range is from 3 to 99.

Default: 3

SearchWinHeight Defines the height of the area to be searched (unit: pixel).
SearchWinLeft Defines the left most position of the area to be searched (unit: pixel).
SearchWinTop Defines the upper most position of the area to be searched (unit: pixel).
SearchWinWidth Defines the width of the area to be searched (unit: pixel).
SizeToFind

Selects which size of defects to find.

Default: 1 - Largest

StartPntObjResult

Specifies which result to use from the StartPointObject.

Default: 1

StartPointObject

Specifies which vision object to use to define the start point of the Line.

Default: Screen

StartPointType

Defines the type of start point used to define the start point of a line.

Default: 0 - Point

StrengthTarget

Sets the desired edge strength to search for.

Default: 0

StrengthVariation

Sets the amount of variation for StrengthTarget.

Default: 0

X1 The X coordinate position of the start point of the edge.
X2 The X coordinate position of the end point of the edge.
Y1 The Y coordinate position of the start point of the edge.
Y2 The Y coordinate position of the end point of the edge.

LineInspector Object Results
The following list is a summary of the LineInspector object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Area Returns the area of the defect in pixels.
CameraX Returns the X coordinate of the defect in the Camera coordinate system.
CameraY Returns the Y coordinate of the defect in the Camera coordinate system.
Contrast Returns the average contrast of the detected edges.
DefectLevel Returns the level of the defect.
Found Returns whether the object was found. (i.e. did the feature or part you are looking at have a shape score that is above the Accept property’s current setting.)
Length Returns the length of the defect in millimeters.
Passed Returns whether the object detection result was accepted.
PixelLength Returns the length of the defect in pixels.
NumberFound

Returns the number of defects found.

(The detected number can be from 0 up to the number set with the NumberToFind property.)

PixelX Returns the X coordinate position of the defect.
PixelY Returns the Y coordinate position of the defect.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part’s position in pixels.

RobotX Returns the X coordinate position of the defect in the Robot coordinate system.
RobotY Returns the Y coordinate position of the defect in the Robot coordinate system.
RobotU Returns the U coordinate position of the defect in the Robot coordinate system.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the detected object in the robot coordinate system.

Strength Returns the average strength of the detected edges.
Time Returns the amount of time required to process the object (unit: millisecond).
TotalArea Returns the total area of all defects found in pixels.

Using LineInspector Objects
The next few sections guide you through how to create and use a LineInspector object.

  • How to create a new LineInspector object
  • Position and Size the search window
  • Configure the properties associated with the LineInspector object
  • Test the LineInspector object & examine the results
  • Make adjustments to properties and test again
    Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
    You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
    Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
    Vision Sequences

Step 1: Create a New LineInspector Object

  1. Click the [All Tools] - [LineInspector] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the LineInspector object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called “LineInsp01” because this is the first LineInspector object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a LineInspector object similar to the one shown below:

New LineInspector Object

  1. Click the name label of the LineInspector object and, while holding the mouse down, drag the LineInspector object to the position where you would like the top left position of the search window to reside.
  2. Resize the LineInspector object search window as required using the search window size handles.

Step 3: Configuring Properties for the LineInspector Object
We can now set property values for the LineInspector object. To set any of the properties, simply click the associated property’s value field and then either enter a new value or, if a drop-down list is displayed, click one of the items in the list.
Shown below are some of the more commonly used properties for the LineInspector object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
EdgeType (Single)

Select the type of the edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

Name property (“LineInspxx”) The default name given to a newly created LineInspector object is “LineInspxx” where xx is a number which is used to distinguish between multiple LineInspector objects within the same vision sequence. If this is the first LineInspector object for this vision sequence then the default name will be “LineInsp01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the LineInspector object’s name is displayed is updated to reflect the new name.
NumberToFind (1) You can search for 1 or more defects along the search line.
Polarity (LightToDark) If you are looking for a DarkToLight edge, change polarity.

Step 4: Running the LineInspector Object and Examining the Results
To run the LineInspector object, simply do the following:
Click the [Run] button of the object on the execution panel. Results for the LineInspector object will now be displayed. The primary results to examine at this time are:

Results Description
Area result The area (in pixels) of the defect found.

PixelX result

PixelY result

Returns the XY coordinates of the defect in the image coordinate system.

CameraX result

CameraY result

Returns the XY coordinates of the defect in the Camera coordinate system.

If the calibration is not performed, “no cal” will be returned.

RobotX result

RobotY result

Returns the XY coordinates of the defect in the Robot coordinate system.
If the calibration is not performed, “no cal” will be returned.
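
For reference, multiple defect results can be read back from a SPEL+ program by stepping the CurrentResult property. This is a minimal sketch only; seq01 and LineInsp01 are placeholder names.

    Function InspectLine
        ' Minimal sketch: run the sequence, then loop over every defect
        ' found and print its area and pixel position.
        Integer i, numFound
        Real area, px, py

        VRun seq01
        VGet seq01.LineInsp01.NumberFound, numFound
        For i = 1 To numFound
            ' Select which defect result to read back
            VSet seq01.LineInsp01.CurrentResult, i
            VGet seq01.LineInsp01.Area, area
            VGet seq01.LineInsp01.PixelX, px
            VGet seq01.LineInsp01.PixelY, py
            Print "Defect ", i, ": area ", area, " at (", px, ", ", py, ")"
        Next i
    Fend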

ArcFinder Object

ArcFinder Object Description
ArcFinder objects are used to identify the position of an arc of a circle/ellipse in the image.
To find the arc of a circle/ellipse, a series of edge searches are executed to determine the radius and center point of the arc, and the major/minor axes and angle of the ellipse.
The edge of an object in an image is a change in gray value from dark to light or light to dark. This change may span several pixels.
Each edge search of the ArcFinder object finds the transition from Light to Dark or Dark to Light as defined by the Polarity property and defines that position as the edge position for a single edge. You can also search for edge pairs by changing the EdgeType property. With edge pairs, two opposing edges are searched for, and the midpoint is returned as the result.
The type of edges (circle/ellipse) can be specified by the ArcSearchType property.

ArcFinder Object Layout
The ArcFinder object has a different look than the Correlation and Blob objects. The ArcFinder object’s search window is circular and is defined by a start angle, end angle, outer radius, and inner radius. The edge search lines are evenly spanned between the start angle and end angle. The number of edge search lines is specified with the NumberOfEdges property. The Direction property can specify search from the inner radius to the outer radius, or vice versa.

ArcFinder Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Direction indicator (Direction of edge search)
d Size & Direction Handle

The ArcFinder is positioned to search for an arc by moving the center point to the approximate center of the arc to be found, and then adjusting the RadiusInner and RadiusOuter properties so that the arc to be found is somewhere within the search area.
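
For reference, the same positioning can be done from a SPEL+ program. This is a minimal sketch only; the sequence and object names (seq01, ArcFind01) and all values are placeholder assumptions that bracket a hypothetical arc.

    Function SetupArcFind
        ' Minimal sketch: center the search area near the expected arc,
        ' then bracket the expected radius with RadiusInner/RadiusOuter.
        ' seq01, ArcFind01, and all values are placeholders.
        VSet seq01.ArcFind01.CenterX, 320
        VSet seq01.ArcFind01.CenterY, 240
        VSet seq01.ArcFind01.RadiusInner, 80
        VSet seq01.ArcFind01.RadiusOuter, 140
        VSet seq01.ArcFind01.AngleStart, 45
        VSet seq01.ArcFind01.AngleEnd, 135
    Fend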

ArcFinder Object Properties
The following list is a summary of properties for the ArcFinder object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found.

Default: 100

AngleEnd

Specifies the end angle of the range to perform a circular/elliptic search.

Default: 135

AngleStart

Specifies the start angle of the range to perform a circular/elliptic search.

Default: 45

ArcSearchType Specifies the type of edges (circle/ellipse) to be searched for.
CalRobotPlacePos Calibrates RobotPlacePos when designing and running the program.
Caption

Used to assign a caption to the ArcFinder object.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

If the property is set to “Screen”, the object can be placed anywhere on the screen. If another vision object is specified, the center point will be set to the PixelX and PixelY results of that object.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the ArcFinder object is applied to all of the results (NumberFound) of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

Default: False

CenterX

Specifies the X coordinate position to be used as the center point for the object.

This property is filled in automatically when the CenterPointObject property is set to another vision object.

CenterY

Specifies the Y coordinate position to be used as the center point for the object.

This property is filled in automatically when the CenterPointObject property is set to another vision object.

CheckClearanceFor Sets the object to confirm a clearance.
ClearanceCondition Specifies how the clearance decision is made.
ContrastTarget

Sets the desired contrast for the edge search.

Default: 0 (best contrast)

ContrastVariation

Selects the allowed contrast variation for ContrastTarget.

Default: 0

CoordObject

Specifies the Coordinates object to which the result is copied. The copy is executed when the object is executed; if the object is not executed because of the branch function of a Decision object, the copy is not executed.

Default: None

CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

Direction

Sets the direction for the edge search.

Default: InsideOut

EdgeSort Sets the method of sorting detected edge results.
EdgeThreshold

Sets the threshold at which edges below this value are ignored.

Default: 2

EdgeType

Select the type of edge to search for: single or pair.

Default: 1 - Single

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

FittingThreshold Specifies the edge results to use for the circle/ellipse fitting.
Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Selects the background color for an object label.

Default: Transparent

Name

Used to assign a unique name to the ArcFinder object.

Default: ArcFind01

NumberOfEdges

Specifies the number of edges to be detected.

Default: 5

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Polarity

Defines whether the ArcFinder object should search for a LightToDark or DarkToLight transition.

Default: 1 - LightToDark

RadiusInner Specifies the inner radius of the detection range.
RadiusOuter Specifies the outer radius of the detection range.
ScoreWeightContrast

Sets the percentage of the score that depends on contrast.

Default: 50

ScoreWeightStrength

Sets the percentage of the score that depends on edge strength.

Default: 50

SearchWidth

Defines the width of the edge search.

Range is from 3 to 99.

Default: 3

ShowExtension Defines whether to display the edge line with both ends extended.
StrengthTarget

Sets the desired edge strength to search for.

Default: 0

StrengthVariation

Sets the amount of variation for StrengthTarget.

Default: 0

ArcFinder Object Results
The following list is a summary of the ArcFinder object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the angle of the detected ellipse.
Angle1 Returns the start point angle of the detected edge in the Image coordinate system.
Angle2 Returns the end point angle of the detected edge in the Image coordinate system.
CameraX Returns the central X coordinate of the detected circular/elliptic edge in the Camera coordinate system.
CameraY Returns the central Y coordinate of the detected circular/elliptic edge in the Camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the found part’s position in the camera coordinate system.

ClearanceOK Returns the result of decision for a clearance.
Contrast Returns the contrast of the detected circular/elliptic edge.
EdgeCameraXYU Returns the CameraX, CameraY, and Angle coordinate position of each edge found during the search.
EdgePixelXYU Returns the PixelX, PixelY, and Angle coordinate position of each edge found during the search.
EdgeRobotXYU Returns the RobotX, RobotY, and Angle coordinate position of each edge found during the search.
FitError Returns the distance between each edge point and the detected circle/ellipse as the root mean square (RMS).
Found Returns whether the object was found. (i.e. did the feature or part you are looking at have a shape score that is above the Accept property’s current setting.)
FoundMajorDiam Returns the length of the major axis of the detected elliptic edge.
FoundMinorDiam Returns the length of the minor axis of the detected elliptic edge.
FoundRadius Returns the radius of the detected circular edge.
MaxError Returns the maximum difference from the detected circular/elliptic edge in pixel length.
NumberFound Returns the number of circular arcs detected.
Passed Returns whether the object detection result was accepted.
PixelMajorDiam Returns the major diameter of the elliptic arc detected by ArcFinder (unit: pixel).
PixelMinorDiam Returns the minor diameter of the elliptic arc detected by ArcFinder (unit: pixel).
PixelRadius Returns the radius of detected circular arc (unit: pixel).
PixelX Returns the X coordinate at the center of the detected circular/elliptic edge in the image coordinate system.
PixelY Returns the Y coordinate at the center of the detected circular/elliptic edge in the image coordinate system.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the detected circular/elliptic edge position in pixels.

RobotX Returns the X coordinate at the center of the detected circular/elliptic edge in the Robot coordinate system.
RobotY Returns the Y coordinate at the center of the detected circular/elliptic edge in the Robot coordinate system.
RobotXYU Returns the RobotX, RobotY, and RobotU coordinates at the center of the detected circular/elliptic edge position in the robot coordinate system.
Strength Returns the strength of the detected edge.
Time Returns the amount of time required to process the object (unit: millisecond).

Using ArcFinder Objects
The next few sections guide you through how to create and use an ArcFinder object.

  • How to create a new ArcFinder object
  • Position and size the search window
  • Configure the properties associated with the ArcFinder object
  • Test the ArcFinder object & examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to Vision Sequences for details on how to create a new vision sequence or select one which was previously defined.

Step 1: Create a New ArcFinder Object

  1. Click the [All Tools] - [ArcFinder] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the ArcFinder object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called “ArcFind01” because this is the first ArcFinder object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see an ArcFinder object similar to the one shown below:

New ArcFinder Object

Symbol Description
a Step Number in Sequence
b Object Name
c Direction indicator (Direction of edge search)
d Size & Direction Handle
  1. Click the name label of the ArcFinder object and, while holding the mouse down, drag the ArcFinder object to the position where you would like the top left position of the search window to reside.
  2. Resize the ArcFinder object search window as required using the search window size handles.

Step 3: Configuring Properties for the ArcFinder Object
We can now set property values for the ArcFinder object. To set any of the properties, simply click the associated property’s value field and then either enter a new value or, if a drop-down list is displayed, click one of the items in the list.
Shown below are some of the more commonly used properties for the ArcFinder object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
EdgeType (Single)

Select the type of the edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

Name property (“ArcFindxx”) The default name given to a newly created ArcFinder object is “ArcFindxx” where xx is a number which is used to distinguish between multiple ArcFinder objects within the same vision sequence. If this is the first ArcFinder object for this vision sequence then the default name will be “ArcFind01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the ArcFinder object’s name is displayed is updated to reflect the new name.
NumberOfEdges(5) You can search for five edges to find circular edges.
Polarity (LightToDark)

Search for edges using “LightToDark” polarity.

If you are looking for a DarkToLight edge, change polarity.

Step 4: Running the ArcFinder Object and Examining the Results
To run the ArcFinder object, simply do the following:
Click the [Run] button of the object on the execution panel. Results for the ArcFinder object will now be displayed. The primary results to examine at this time are:

Results Description
Angle1 Result Returns the start point angle of the detected circular edge in the Image coordinate system.
Angle2 Result Returns the end point angle of the detected circular edge in the Image coordinate system.
FoundRadius Result

Returns the radius of the detected circular edge in the pixel length.

Unit: pixel

MaxError Result

Returns the maximum difference from the detected circular edge in pixel length.

Unit: pixel

PixelX result

PixelY result

Returns the XY coordinates at the center of the detected circular edge in the image coordinate system.

CameraX result

CameraY result

Returns the central XY coordinates of the detected circular edge in the Camera coordinate system.

If the calibration is not performed, “no cal” will be returned.

RobotX result

RobotY result

Returns the central XY coordinates of the detected circular edge in the Robot coordinate system.

If the calibration is not performed, “no cal” will be returned.
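
For reference, these results can be read from a SPEL+ program in the same way as for the other finder objects. This is a minimal sketch only; seq01 and ArcFind01 are placeholder names.

    Function FindArc
        ' Minimal sketch: run the sequence and read the fitted arc.
        Boolean found
        Real radius, cx, cy

        VRun seq01
        VGet seq01.ArcFind01.Found, found
        If found Then
            VGet seq01.ArcFind01.FoundRadius, radius
            VGet seq01.ArcFind01.PixelX, cx
            VGet seq01.ArcFind01.PixelY, cy
            Print "Arc center (", cx, ", ", cy, "), radius ", radius, " pixels"
        EndIf
    Fend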

ArcInspector Object

ArcInspector Object Description
ArcInspector objects are used to search for defects along an arc of a circle/ellipse.
To find defects, a series of edge searches are executed to determine abnormalities in the arc of the circle/ellipse being inspected.
The edge of an object in an image is a change in gray value from dark to light or light to dark. This change may span several pixels.
Each edge search of the ArcInspector object finds the transition from Light to Dark or Dark to Light as defined by the Polarity property and defines that position as the edge position for a single edge. You can also search for edge pairs by changing the EdgeType property. With edge pairs, two opposing edges are searched for, and the midpoint is returned as the result.

ArcInspector Object Layout
The ArcInspector object looks similar to the ArcFinder object. It searches for edges along lines from the center of the arc of circle/ellipse to the outer radius of the inspection area. Each edge search line searches for a transition (light to dark or dark to light) somewhere along this line in the direction indicated by the Direction of Search Indicator. The number of edges used for inspection is set by the NumberOfEdges property. The AngleStart property specifies the start angle of the arc to be inspected. The AngleEnd property specifies the end angle of the arc to be inspected. The edge searches are spanned evenly between AngleStart + InspectStartOffset and AngleEnd - InspectEndOffset.


ArcInspector Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Direction indicator (Direction of edge search)
d RadiusOuter Handles
e Size & Offset Handles
f RadiusInner Handles

The ArcInspector is positioned to search for defects on an arc by aligning CenterX and CenterY to the center of the arc, and then adjusting RadiusInner and RadiusOuter to position the search area so that the radius of the arc to be inspected is within the search area. The ArcInspector object can search for defects from the inner radius to the outer radius (default), or vice versa, depending on the Direction setting.
You can also use an ArcFinder object to first find an arc, and then inspect that arc with ArcInspector. Use the ArcObject property to specify the ArcFinder to use.

ArcInspector Search
The image below shows part of a round object with a defect.

The ArcInspector finds the defect as shown below. Note that the distance of each edge search position from the arc being inspected must exceed either the DefectLevelThreshPos or DefectLevelThreshNeg property value for a defect to be found. Also, the defect area must be greater than MinArea and less than MaxArea.

Symbol Description
a Defect
b DefectLevelThreshPos
c Arc to be inspected
d DefectLevelThreshNeg
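
For reference, these defect criteria can also be set from a SPEL+ program, as with the LineInspector. This is a minimal sketch only; seq01 and ArcInsp01 are placeholder names, the values are illustrative, and the ArcObject link to an ArcFinder is assumed to have been set in the Vision Guide window.

    Function SetupArcInsp
        ' Minimal sketch: configure the defect criteria described above,
        ' then run the sequence and report the defect count.
        ' seq01, ArcInsp01, and all values are placeholders.
        Integer numFound

        VSet seq01.ArcInsp01.DefectLevelThreshPos, 3
        VSet seq01.ArcInsp01.DefectLevelThreshNeg, 3
        VSet seq01.ArcInsp01.MinArea, 25
        VSet seq01.ArcInsp01.MaxArea, 100000

        VRun seq01
        VGet seq01.ArcInsp01.NumberFound, numFound
        Print "Defects found: ", numFound
    Fend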

ArcInspector Object Properties
The following list is a summary of properties for the ArcInspector object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found.

Default: 100

AngleEnd

Specifies the end angle of the range to perform a circular/elliptic search.

Default: 135

AngleStart

Specifies the start angle of the range to perform a circular/elliptic search.

Default: 45

ArcObject

Specifies an ArcFinder object that will search for the arc of circle/ellipse to be inspected.

Default: None

ArcObjResult

Defines which result to use from the ArcObject.

Default: 1

ArcSearchType

Specifies the type of edges (circle/ellipse) to be searched for.

When specifying the ArcObject property, match this property with the ArcSearchType of the specified ArcFinder.

CalRobotPlacePos Calibrates RobotPlacePos when designing and running the program.
Caption

Used to assign a caption to use for the ArcInspector object label.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

If the property is set to “Screen”, the object can be placed anywhere on the screen. If another vision object is specified, the center point will be set to the PixelX and PixelY results of the specified object.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject.

If All is specified, the ArcInspector object will be applied to all of the results (NumberFound) of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

Default: False

CenterX

Specifies the X coordinate position to be used as the center point for the object.

This property is filled in automatically when the CenterPointObject property is set to another vision object.

CenterY

Specifies the Y coordinate position to be used as the center point for the object.

This property is filled in automatically when the CenterPointObject property is set to another vision object.

ContrastTarget

Sets the desired contrast for the edge search.

Default: 0 (best contrast)

ContrastVariation

Selects the allowed contrast variation for ContrastTarget.

Default: 0

CoordObject Specifies the Coordinates object to copy the result to. The copy is executed when the object is executed; if the object is not executed due to the branching function of a Decision object, the copy is not executed. Default: None
CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

DefectAreaExtended

Sets whether to extend the defect area interpolation.

Default: False

DefectLevelThreshNeg

Sets the defect threshold below the arc being inspected.

Default: 2

DefectLevelThreshPos

Sets the defect threshold above the arc being inspected.

Default: 2

Direction

Sets the direction for the edge search.

Default: InsideOut

EdgeSort Sets the method of sorting detected edge results.
EdgeThreshold

Sets the threshold below which edges are ignored.

Default: 2

EdgeType

Select the type of edge to search for: single or pair.

Default: 1 - Single

EllipseAngle Specifies the angle of the elliptic arc used as the detection baseline of the ArcInspector.
EllipseMajorDiam Specifies the major diameter of the elliptic arc used as the detection baseline of the ArcInspector.
EllipseMinorDiam Specifies the minor diameter of the elliptic arc used as the detection baseline of the ArcInspector.
Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

InspectStartOffset

Sets the offset from the start of the arc where the inspection should start.

Default: 5

InspectEndOffset

Sets the offset from the end of the arc where the inspection should stop.

Default: 5

LabelBackColor

Selects the background color for an object label.

Default: Transparent

MaxArea

Defines the upper Area limit for a defect.

For a defect to be found it must have an Area result below the value set for MaxArea property.

Default: 100,000

MinArea

Defines the lower Area limit for a defect.

For a defect to be found it must have an Area result above the value set for MinArea property.

Default: 25

MissingEdgeType

Defines how to handle a missing edge.

Default: Interpolate

Name

Used to assign a unique name to the ArcInspector object.

Default: ArcInsp01

NumberOfEdges

Specifies the number of edges to be detected.

Default: 20

NumberToFind

Defines the number of defects to find in the search area.

Default: 1

PassColor

Selects the color for an object when it is passed.

Default: LightGreen

PassType

Selects the rule that determines if the object passed.

Default: AllNotFound

Polarity

Defines whether the ArcInspector object should search for a LightToDark or DarkToLight transition.

Default: 1 - LightToDark

Radius Defines the distance from the CenterPoint of the object to the outermost search ring of the object.
RadiusInner Specifies the inner diameter of the detection range.
RadiusOuter Specifies the outer diameter of the detection range.
ScoreWeightContrast

Sets the percentage of the score that depends on contrast.

Default: 50

ScoreWeightStrength

Sets the percentage of the score that depends on edge strength.

Default: 50

SearchWidth

Defines the width of the edge search.

Range is from 3 to 99.

Default: 3

SizeToFind

Selects which size of defects to find.

Default: 1 - Largest

StrengthTarget

Sets the desired edge strength to search for.

Default: 0

StrengthVariation

Sets the amount of variation for StrengthTarget.

Default: 0

ArcInspector Object Results
The following list is a summary of the ArcInspector object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Area Returns the area of the defect in pixels.
CameraX Returns the X coordinate of the defect in the Camera coordinate system.
CameraY Returns the Y coordinate of the defect in the Camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the found defect position in the camera coordinate system.

Contrast Returns the average contrast of the detected circular edges.
DefectLevel Returns the level of the defect.
Length Returns the length of the defect in millimeters.
PixelLength Returns the length of the defect in pixels.
NumberFound

Returns the number of defects found.

(The detected number can be from 0 up to the number set with the NumberToFind property.)

Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate of the defect in the image coordinate system.
PixelY Returns the Y coordinate of the defect in the image coordinate system.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the detected defect position in pixels.

RobotX Returns the X coordinate of the defect in the Robot coordinate system.
RobotY Returns the Y coordinate of the defect in the Robot coordinate system.
RobotU Returns the U coordinate of the defect in the Robot coordinate system.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the detected defect position in the robot coordinate system.

Strength Returns the average strength of the detected edges.
Time Returns the amount of time in milliseconds required to process the object.
TotalArea Returns the sum of areas of all defects found in pixels.

Using ArcInspector Objects
The next few sections guide you through how to create and use an ArcInspector object.

  • How to create a new ArcInspector object
  • Position and Size the search window
  • Configure the properties associated with the ArcInspector object
  • Test the ArcInspector object & examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a new ArcInspector object

  1. Click the [All Tools] - [New ArcInspector] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the ArcInspector object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called "ArcInsp01" because this is the first ArcInspector object created for this sequence. (We will explain how to change the name later.)

Step 2: Positioning the ArcInspector Object
You should now see an ArcInspector object similar to the one shown below:


New ArcInspector Object

Symbol Description
a Arc to be inspected
b RadiusOuter Handles
c AngleStart handle
d RadiusInner handles
e Center point
f AngleEnd handle
  1. Click the name label of the ArcInspector object and, while holding the mouse down, drag the ArcInspector object to position the center point to be near the center of the arc to be inspected.
  2. Resize the ArcInspector object search window as required using the RadiusOuter, RadiusInner, AngleStart, and AngleEnd size handles.

Step 3: Configuring Properties for the ArcInspector Object
We can now set property values for the ArcInspector object. To set any of the properties simply click the associated property’s value field and then either enter a new value or if a drop down list is displayed click one of the items in the list.
Shown below are some of the more commonly used properties for the ArcInspector object. Explanations for other properties such as AbortSeqOnFail and Graphics, which are used on many of the different vision objects, can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
EdgeType (Single)

Select the type of the edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

Name property ("ArcInspxx")

The default name given to a newly created ArcInspector object is “ArcInspxx” where xx is a number which is used to distinguish between multiple ArcInspector objects within the same vision sequence. If this is the first ArcInspector object for this vision sequence then the default name will be “ArcInsp01”. To change the name, click the Value field of the Name property, type a new name and press the return key.

You will notice that once the name property is modified, every place where the ArcInspector object's name is displayed is updated to reflect the new name.

NumberOfEdges

Specifies how many edge searches are used to find defects.

The default is 20, and the maximum is 99.

Polarity

The default edge search polarity is LightToDark.

If you are looking for DarkToLight edges, change the polarity.

Direction Specifies whether the edge searches should be InsideOut (from RadiusInner to RadiusOuter) or OutsideIn (from RadiusOuter to RadiusInner).
DefectLevelThreshPos Specifies the minimum distance above the arc being inspected that a found edge must deviate for it to count as a defect.
DefectLevelThreshNeg Specifies the minimum distance below the arc being inspected that a found edge must deviate for it to count as a defect.
MinArea Specifies the minimum area of a defect in pixels.
MaxArea Specifies the maximum area of a defect in pixels.
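
These properties can also be set from a SPEL+ program before the sequence runs. A minimal sketch, assuming a sequence "InspSeq" with an ArcInspector "ArcInsp01" (hypothetical names; the values are examples only):

    VSet InspSeq.ArcInsp01.NumberOfEdges, 30
    VSet InspSeq.ArcInsp01.Polarity, 1       ' 1 - LightToDark
    VSet InspSeq.ArcInsp01.MinArea, 25       ' ignore specks smaller than 25 pixels
    VSet InspSeq.ArcInsp01.MaxArea, 10000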

Step 4: Running the ArcInspector object and Examining the results
To run the ArcInspector object, simply do the following:
Click the [Run] button of the object on the execution panel.
Results for the ArcInspector object will now be displayed. The primary results to examine at this time are:

Results Description
Area result The area (in pixels) of the defect found.

PixelX result

PixelY result

Returns the XY coordinates of the defect in the image coordinate system.

CameraX result

CameraY result

Returns the XY coordinates of the defect in the Camera coordinate system.

If calibration has not been performed, “no cal” will be returned.

RobotX result

RobotY result

Returns the XY coordinates of the defect in the Robot coordinate system.

If calibration has not been performed, “no cal” will be returned.
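
The same results can be retrieved from a SPEL+ program with VRun and VGet. A sketch under the same naming assumptions as above ("InspSeq" and "ArcInsp01" are hypothetical):

    Function CheckArc
        Integer numFound, i
        Real area, x, y

        VRun InspSeq
        VGet InspSeq.ArcInsp01.NumberFound, numFound
        For i = 1 To numFound
            ' Select result i, then read its values
            VSet InspSeq.ArcInsp01.CurrentResult, i
            VGet InspSeq.ArcInsp01.Area, area
            VGet InspSeq.ArcInsp01.PixelX, x
            VGet InspSeq.ArcInsp01.PixelY, y
            Print "Defect ", i, ": area ", area, " at (", x, ", ", y, ")"
        Next i
    Fend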

DefectFinder Object

DefectFinder Object Description
DefectFinder objects are used to identify differences between a template image and an input image.
During the search for defects, first the absolute difference image between the search area and the template is computed. Next, blob analysis is performed on the difference image to find the defects.
The following defect features are obtained:

  • Area and perimeter
  • Center of mass
  • Principal axes and moments
  • Connectivity
  • Extrema
  • Coordinate positions of the center of mass in pixel, camera and robot coordinate systems
  • Holes, roughness, and compactness of defects (blobs)

DefectFinder Object Layout
The DefectFinder object layout is rectangular, similar to the Blob object. However, the DefectFinder object requires teaching a model (the template). For model teaching, the entire area defined by the search window is used. There is no separate model window as used for the Correlation object.
The search window defines the area within which DefectFinder searches for defects (image differences). Also, it defines the area of the template image. An example of the DefectFinder object is shown below:


DefectFinder Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Search Window

DefectFinder Object Properties
The following list is a summary of properties for the DefectFinder object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

CalRobotPlacePos Calibrates RobotPlacePos at program design time and runtime.
Caption

Assigns a caption to the DefectFinder object.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be placed anywhere on the screen. When another vision object is specified, the center point is set to the PixelX and PixelY results of that object.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject.

If All is specified, the DefectFinder object will be applied to all of the results (NumberFound) of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

If SearchWinType is set to RotatedRectangle, the search window rotates based on the Angle result.

Default: False

CheckClearanceFor Sets the object to confirm a clearance.
ClearanceCondition Specifies the way of decision for a clearance.
CoordObject

Specifies Coordinates object to copy the result. The copy is executed when the object is executed, and if it didn’t execute because of branch function of Decision, the copy will not be executed.

Default: None

CurrentResult

Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.

Default: 1

Description

Sets a user description

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

Frame

Specifies which positioning frame to use.

Default: None

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

KernelHeight Sets the allowable amount of pixel displacement in the vertical direction when computing the image difference from the registered image.
KernelWidth Sets the allowable amount of pixel displacement in the horizontal direction when computing the image difference from the registered image.
LabelBackColor

Selects the background color for an object label.

Default: Transparent

LuminanceCorrection

Sets the use of luminance correction preprocessing.

Default: None

MaxArea

Defines the upper Area limit for a defect.

For a defect to be found it must have an Area result below the value set for MaxArea property.

Default: 100,000

MinArea

Defines the lower Area limit for a defect.

For a defect to be found it must have an Area result above the value set for MinArea property.

Default: 25

MinMaxArea

Runtime only.

Sets or returns both MinArea and MaxArea in one statement.

Name

Used to assign a unique name to the DefectFinder object.

Default: DefFind01

NumberToFind

Defines the number of objects to find in the search window.

Default: 1

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: AllNotFound

Polarity

Sets the polarity of defects to detect.

Default: Both

RejectOnEdge

If the property is set to True, the system ignores defects detected on the edge of the search window.

Default: False

SaveTeachImage Sets whether the camera image should be saved to a file when the model is taught.
SearchWin

Runtime only.

Sets or returns the search window left, top, height, width parameters in one call.

SearchWinAngle Defines the angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight

Defines the height of the area to be searched (unit: pixel).

Default: 100

SearchWinLeft Defines the left most position of the area to be searched (unit: pixel).
SearchWinTop Defines the upper most position of the area to be searched (unit: pixel).
SearchWinType Defines the type of the area to be searched (i.e. Rectangle, RotatedRectangle, Circle).
SearchWinWidth

Defines the width of the area to be searched (unit: pixel).

Default: 100

ShowModel

Displays the registered image.

Can be used to set don’t care pixels.

SizeToFind

Selects which size of defects to find.

Default: 1 - Largest

Sort

Selects the sort order used for the results of an object.

Default: 0 - None

ThresholdHigh

Works with the ThresholdLow property to define the gray level regions that represent the feature (or object), the background and the edges of the image.

The ThresholdHigh property defines the upper bound of the gray level region for the feature area of the image. Any part of the image that falls within the gray level region defined between ThresholdLow and ThresholdHigh will be assigned a pixel weight of 1. (i.e. it is part of the feature.)

Default: 128

ThresholdLow

Works with the ThresholdHigh property to define the gray level regions that represent the feature (or object), the background and the edges of the image.

The ThresholdLow property defines the lower bound of the gray level region for the feature area of the image. Any part of the image that falls within the gray level region defined between ThresholdLow and ThresholdHigh will be assigned a pixel weight of 1. (i.e. it is part of the feature.)

Default: 0

DefectFinder Object Results
The following list is a summary of the DefectFinder object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the amount of detected defect rotation in degrees.
Area Returns the area of the defect in pixels.
CameraX Returns the X coordinate position of the found defect part’s position in the camera coordinate system.
CameraY Returns the Y coordinate position of the found defect part’s position in the camera coordinate system.
CameraXYU Runtime only. Returns the CameraX, CameraY, and CameraU coordinates of the found defect part's position in the camera coordinate system.
ClearanceOK Returns the result of decision for a clearance.
Compactness Returns the compactness of a defect.
Extrema Runtime only. Returns MinX, MaxX, MinY, MaxY pixel coordinates of the defect Extrema.
Found Returns whether the object was found.
FoundOnEdge Returns True when a defect object is found too close to the edge of the search window.
Holes Returns the number of holes found in the defect.
MajorDiameter Returns the major diameter of the ellipse equivalent to the found defect.
MaxFeretDiameter Returns the maximum Feret diameter of the found defect.
MaxX Returns the maximum X pixel coordinate of the defect Extrema in pixels.
MaxY Returns the maximum Y pixel coordinate of the defect Extrema in pixels.
MinorDiameter Returns the minor diameter of the ellipse equivalent to the found defect.
MinX Returns the minimum X pixel coordinate of the defect Extrema in pixels.
MinY Returns the minimum Y pixel coordinate of the defect Extrema in pixels.
NumberFound

Returns the number of defects found.

(This number can be anywhere from 0 up to the number of defects you requested the defect object to find with the NumberToFind property.)

Passed Returns whether the object detection result was accepted.
Perimeter Returns the number of pixels along the outer edge of the found defect.
PixelX Returns the X coordinate position of the found part’s position in pixels.
PixelY Returns the Y coordinate position of the found part’s position in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part’s position in pixels.

RobotX Returns the X coordinate of the detected object in the robot coordinate system.
RobotY Returns the Y coordinate of the detected object in the robot coordinate system.
RobotU Returns the U coordinate of the detected object in the robot coordinate system.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the detected object in the robot coordinate system.

Roughness Returns the roughness of a defect.
ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Time Returns the amount of time required to process the object (unit: millisecond).
TotalArea Returns the sum of defect areas of all results found.

DefectFinder objects perform blob processing on the difference image between the template image and the input image and return the results. For details on the results, refer to Blob Object.

Using DefectFinder Objects
Now that we've reviewed how blob analysis works, we have set the foundation for understanding how to use Vision Guide 8.0 DefectFinder objects. This next section describes the steps required to use DefectFinder objects, as listed below:

  • How to create a new DefectFinder object
  • Position and Size the search window
  • Configure the properties associated with the DefectFinder object
  • Register the template image
  • Test the DefectFinder object & examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a new DefectFinder object

  1. Click the [All Tools] - [Defect Finder] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the DefectFinder icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. A name for the object is automatically created. In the example, it is called “DefFind01” because this is the first DefectFinder object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a DefectFinder object similar to the one shown below:


New DefectFinder Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Search Window
  1. Click the name label of the DefectFinder object and, while holding the mouse down, drag the DefectFinder object to the position where you would like the top left position of the search window to reside.
  2. Resize the DefectFinder object search window as required using the search window size handles. (This means click a size handle and drag the mouse.) (The search window is the area within which we will search for defects.)

CAUTION


Ambient lighting and external equipment noise may affect vision sequence image and results. A corrupt image may be acquired and the detected position could be any position in an object’s search area. Be sure to create image processing sequences with objects that use search areas that are no larger than necessary.

Step 3: Configure the DefectFinder Object Properties
We can now set property values for the DefectFinder object. Shown below are some of the more commonly used properties that are specific to the DefectFinder object.
Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.

  • "Vision Guide 8.0 Properties & Result Reference"
  • DefectFinder Object Properties List

CAUTION


Ambient lighting and external equipment noise may affect vision sequence image and results. A corrupt image may be acquired and the detected position could be any position in an object’s search area. Properly configure MaxArea, MinArea, RejectOnEdge and other properties to reduce the risk of detection errors.

Property Description
Name property

The default name given to a newly created DefectFinder object is “DefFindxx” where xx is a number which is used to distinguish between multiple DefectFinder objects within the same vision sequence.

If this is the first DefectFinder object for this vision sequence then the default name will be “DefFind01”.

To change the name, click the Value field of the Name property, type a new name, and press the return key. You will notice that every place where the DefectFinder object's name is displayed is changed to reflect the new name.

KernelWidth,

KernelHeight properties

Specifies the allowable amount of pixel displacement when computing the image difference from the registered image.

A larger value makes the search more tolerant of ambient lighting changes and of misalignment between the input image and the template image. However, the search may then fail to find small defects. Set this value according to the defect size and the expected misalignment of the input image.

MinArea,

MaxArea properties

These properties define the area limit for a DefectFinder object to be considered “Found”.

(i.e. the Found result returned as True.) The default range is set as 25 to 100,000 (MinArea to MaxArea), which is a very broad range. This means that most defects will be reported as Found when you first run a new DefectFinder object, before the MinArea and MaxArea properties are adjusted. Normally, you will want to modify these properties to reflect a reasonable range for the defect you are trying to find. This way, if you find a defect which is outside of the range, you will know it isn’t the defect you wanted to find.

RejectOnEdge property Excludes the parts touching the boundary of the search window.
PassType property

Selects the acceptance condition for DefectFinder object detection.

Normally, no defects found means the object passed, so set the property to “AllNotFound”.

Now, the DefectFinder object can be tested. Other necessary properties will be set after the test.
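
If you prefer to configure these properties from a SPEL+ program, the following sketch shows the idea, assuming a sequence "DefSeq" containing a DefectFinder "DefFind01" (hypothetical names; the values are examples only):

    VSet DefSeq.DefFind01.MinArea, 25
    VSet DefSeq.DefFind01.MaxArea, 5000
    VSet DefSeq.DefFind01.RejectOnEdge, True   ' ignore defects touching the window edge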

Step 4: Register and Confirm the Template Image
To register the DefectFinder object template image, click the [Teach] button on the execution panel. To view the registered template image, click ShowModel property in the property list.
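
The template can also be registered at runtime with the VTeach statement, which is convenient when the reference part changes between jobs. A minimal sketch using the hypothetical names above:

    ' Teach the DefectFinder template from the current image
    VTeach DefSeq.DefFind01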

Step 5: Test the DefectFinder object and examine the results
To run the DefectFinder object, click the [Run] button of the object on the execution panel. Results for the DefectFinder object will now be displayed. The primary results to examine at this time are shown below. There are others that you will find useful in the future as well.

Results Description
Found result

Returns whether the defect was found.

If the defect that was found does not meet the area constraints defined by the MinArea and MaxArea properties then the Found result will return as False.

Passed result Returns whether the detection result of the DefectFinder object was accepted.
Area result The area of the defect found. (unit: pixel)
Angle result

The angle at which the defect is oriented.

This is computed from the angle of the minor axis and will be a value between +/- 90°.

Time result The amount of time it took for the DefectFinder object to execute.
PixelX, PixelY The XY position of the center of mass of the found defect. (unit: pixel)
MinX, MinY, MaxX, MaxY Combined, these 4 values define the circumscribed rectangle of the defect.
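
A sketch of checking these results from a SPEL+ program, again assuming the hypothetical "DefSeq" and "DefFind01" names:

    Function CheckDefects
        Boolean found
        Real area, px, py

        VRun DefSeq
        VGet DefSeq.DefFind01.Found, found
        If found Then
            VGet DefSeq.DefFind01.Area, area
            VGet DefSeq.DefFind01.PixelX, px
            VGet DefSeq.DefFind01.PixelY, py
            Print "Defect of ", area, " pixels at (", px, ", ", py, ")"
        Else
            Print "No defect found"
        EndIf
    Fend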

KEY POINTS


The RobotXYU, RobotX, RobotY, RobotU and CameraX, CameraY, CameraXYU results will return “no cal” at this time. This means that no calibration has been performed, so it is impossible for the vision system to calculate the coordinate results with respect to the robot coordinate system or camera coordinate system. Refer to the following for details.

Vision Calibration

Step 6: Make Adjustments to Properties and Test Again
After running the DefectFinder object a few times, you may have encountered problems with finding a defect or just want to fine-tune some of the property settings. Some common problems and fine tuning techniques are described below:

Problems: If the DefectFinder object returns a Found result of False, there are a few places to immediately examine.

  • Look at the value defined for the Polarity property. Are you looking for a light object on a dark background or a dark object on a light background? Make sure that the Polarity property coincides with what you are looking for and with what you see within the search window.
  • Look at the Area result and compare this area with the values defined in the MinArea and MaxArea properties. If the Area result does not fall between the limits defined by the MinArea and MaxArea properties, then you may want to adjust these properties and run the DefectFinder object again.
  • Adjust the KernelWidth and KernelHeight properties. Increasing these values can prevent false detection of small defects.
  • Use Histograms to examine the distribution of grayscale values in an image. The Histogram tool is excellent for setting the ThresholdHigh and ThresholdLow properties. Histograms are described in detail in Histogram Tools.

Fine Tuning: Fine-tuning of the DefectFinder object may be required for some applications. The primary properties associated with fine-tuning of a DefectFinder object are described below:

  • MinArea, MaxArea - After you have run the DefectFinder object a few times, you will become familiar with the approximate values returned for the Area result. Use these values when determining new values to enter for the MinArea and MaxArea properties. It is generally a good idea to set the MinArea and MaxArea properties to values which constrain the Found result such that only defects you are interested in are returned with the Found result equal to True. (This helps eliminate unwanted defects that differ in area from the desired defect.)
  • ThresholdHigh, ThresholdLow - These properties set the gray level thresholds for distinguishing between what is background and what is part of the defect. These properties are best set using the Histogram tool. Refer to the following.
    "Vision Guide 8.0 Properties & Result Reference - ThresholdHigh and ThresholdLow properties"

Once you have completed making adjustments to properties and have tested the DefectFinder object until you are satisfied with the results, you are finished with creating this vision object and can go on to creating other vision objects or to configuring and testing an entire vision sequence.

Frame Object

Frame Object Description
Frame objects provide a type of dynamic positioning reference for vision objects.
Once a Frame object is defined other vision objects can be positioned with respect to that Frame. This proves useful for a variety of situations.
It also helps reduce application cycle time because once a rough position is found and a Frame defined, the other vision objects that are based on that Frame object need not have large search windows. (Reducing the size of search windows helps reduce vision-processing time.)
Frame objects are best used when there is some type of common reference pattern on a part (such as fiducials on a printed circuit board) which can then be used as a base position on which other vision objects search window locations are based.

Frame Object Layout
The Frame object looks like 2 Line objects that intersect.
The user can adjust the position of the Frame object by clicking on the Frame object's name and then dragging the object to a new position.
However, in most cases the Frame object position and orientation will be based on other vision objects' positions.


Frame Object Layout

Symbol Description
a Object Name

Frame Object Properties
The following list is a summary of properties for the Frame object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Caption

Used to assign a caption to the Frame object.

Default: Empty String

CurrentResult

Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.

Default: 1

Description

Sets a user description

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Sets the background color for the object's label.

Default: Transparent

Name

Used to assign a unique name to the Frame object.

Default: Frame01

OriginAngleEnabled

Enables single point frame usage, which causes the frame to rotate with the angle of the origin object instead of with the rotation of the vector between the OriginPoint and the YAxisPoint, as in two point Frames.

Default: False

OriginPntObjResult

Specifies the result to use from the vision object specified in the OriginPoint property.

If “All” is specified, the Frame object will be applied to all results of the specified vision object.

Default: 1 (Use the first result)

OriginPoint

Defines the vision object to be used as the origin point for the Frame object.

Default: Screen

PassColor

Selects the color for passed objects.

Default: LightGreen

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

ShowExtensions

Specifies whether to display extensions of the frame.

Default: False

YAxisPoint

Defines the vision object to be used as the Y-axis point for the frame. (Defines the direction of the Frame object.)

Default: Screen

YAxisPntObjResult

Specifies the result to use from the vision object specified in the YAxisPoint property.

Default: 1 (Use the first result.)

Frame Object Results
We have provided a list of the Frame object results and a brief explanation of each below.
The details for each result used with Frame objects are explained in the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the amount of found part rotation in degrees.
Found Returns whether the object was found.

Two Point Frames
Two point Frame objects require an origin (defined by the OriginPoint property) and a Y-axis direction (defined by the YAxisPoint property). The combination of origin and Y-axis direction defines what can be thought of as a local coordinate system within which other vision objects can be based.
The power of the Frame object is that when the Frame object moves, all the vision objects which are defined within that frame move with it. (i.e. Their search windows adjust based on the XY change of the vision object defined as the OriginPoint and the rotation created by the movement of the YAxisPoint property.) This allows you to keep your search windows small, which in turn improves reliability and reduces processing time.

Defining a Frame Object
Once a new Frame object is created, it requires 2 vision objects to be used as reference positions for the OriginPoint and YAxisPoint of the Frame.
These are defined with the OriginPoint property and YAxisPoint property. Any vision object which has XY position results can be used to define a Frame Origin or YAxisPoint.
This means that Blob, Correlation, Edge, Polar, and Point objects can all be used to define the Origin or YAxisPoint property for a Frame object.
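
At design time these properties are normally set from the property grid, but they can also be assigned from a SPEL+ program. A sketch, assuming a sequence "FrameSeq" in which two Correlation objects "Corr01" and "Corr02" find fiducials before the frame runs (hypothetical names):

    ' Anchor the two point frame on the fiducial results
    VSet FrameSeq.Frame01.OriginPoint, "Corr01"
    VSet FrameSeq.Frame01.YAxisPoint, "Corr02"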

Single Point Frames
Single point frames are an optional usage method for the Frame object. With this usage the OriginPoint property specifies a vision object to be used as an XY positional origin reference.
When the OriginAngleEnabled property is set to False, the frame adjusts position based on the XY position change of the vision object used as the OriginPoint. No rotation is taken into account. This is useful for simple XY shifts where one object (such as a Blob or Correlation object) finds the XY position of a part, and then the rest of the objects in the frame adjust in X and Y accordingly.
In some cases, you may also need to account for rotation within your frame. Assume that a Blob object is used to find a part’s X, Y, and U position (XY coordinate + rotation). Then let’s assume that a variety of other vision objects are required to find features on the part. With the OriginAngleEnabled property set to True, the Blob object can be used to define a single point frame including rotation of the frame, and the other objects within the frame will then shift in X, Y and rotate based on the rotation returned by the Blob object. Hence, only one vision object is required to define both the XY shift and the rotation of the part. This shows why the YAxisPoint property is not required for single point frames.
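
A sketch of this single point usage, under the same kind of naming assumptions (a hypothetical Blob object "Blob01" that reports X, Y, and Angle):

    ' Frame follows Blob01 in X, Y, and rotation
    VSet FrameSeq.Frame01.OriginPoint, "Blob01"
    VSet FrameSeq.Frame01.OriginAngleEnabled, True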

Using Frame Objects
The next few sections guide you through how to create and use a Frame object.

  • How to create a new Frame object
  • Position and Size the search window
  • Configure the properties associated with the Frame object
  • Test the Frame object & examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button. You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a new Frame object

  1. Click the [All Tools] - [Frame] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the Frame object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called “Frame01” because this is the first Frame object created for this sequence. (We will explain how to change the name later.)

Step 2: Positioning the Frame Object
You should now see a Frame object similar to the one shown below:


New Frame Object

Symbol Description
a Object Name

Frame objects do not have a sizeable window. Click the name label of the Frame object or anywhere along one of its axes and, while holding the mouse down, drag the entire Frame object to a new location on the screen. When you find the position you like, release the mouse and the Frame object will stay in this new position on the screen.

Step 3: Configuring Properties for the Frame Object
We can now set property values for the Frame object. To set any of the properties simply click the associated property’s value field and then either enter a new value or if a drop down list is displayed click one of the items in the list.
Shown below are some of the more commonly used properties for the Frame object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.

  • "Vision Guide 8.0 Properties & Result Reference"
  • Frame Object Properties
Property Description
Name property The default name given to a newly created Frame object is “Framexx” where xx is a number which is used to distinguish between multiple Frame objects within the same vision sequence. If this is the first Frame object for this vision sequence then the default name will be “Frame01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the Frame object's name is displayed is updated to reflect the new name.
OriginPoint property

Typically you will set this property to one of the objects that occur previously in the sequence.

This will determine the origin of the frame at runtime.

YAxisPoint property

Typically you will set this property to one of the objects that occur previously in the sequence.

This will determine the direction of the Y axis of the Frame at runtime.

Step 4: Running the Frame Object and Examining the Results
To run the Frame object, simply do the following:
Click the [Run] button of the object on the execution panel. If either the OriginPoint or YAxisPoint properties are not Screen, then the respective objects will be run first. For example, if the OriginPoint is a Blob object, then the blob will be run first to determine the position of the origin of the frame.
Results for the Frame object will now be displayed. The primary results to examine at this time are:

Results Description
Angle The angle of the frame.
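
A sketch of reading the frame result from a SPEL+ program, assuming the hypothetical "FrameSeq" and "Frame01" names used above:

    Function CheckFrame
        Boolean found
        Real angle

        VRun FrameSeq
        VGet FrameSeq.Frame01.Found, found
        If found Then
            VGet FrameSeq.Frame01.Angle, angle
            Print "Frame angle: ", angle
        EndIf
    Fend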

Line Object

Line Object Description
Line objects are used to define a line between 2 points.
The points can be based either on screen positions or on other vision object positions. For example, shown below are just some of the situations where a Line object can be created:

  • between 2 Blob objects
  • between 2 Correlation objects
  • between 2 Point objects
  • between a Blob object and a Correlation object
  • between 2 results on a single Blob object
  • between result 1 on a Blob object and result 3 on a Correlation object
  • between a Point object and a Correlation object
  • any other variety of combinations between objects that have an XY position associated with them.

Line objects are useful for the following situations:

  • To measure the distance between two vision objects (or vision object results when multiple results are used). The distance can also be checked to make sure it falls between a minimum and maximum distance as per your application requirements.
  • To calculate the amount of rotation between 2 vision objects (use the angle of the line returned in Robot Coordinates called the RobotU result)
  • To create a building block for computing the midpoint of a line or intersection point between 2 lines

Line Object Layout
The Line object layout appears just as you would expect it to. It's just a line with a starting point, an ending point, and an object name. To reposition the Line object, simply click the name of the Line object (or anywhere on the line) and then drag the line to a new position. To resize a Line object, click either the starting or ending point (shown by the sizing handles) of the line and then drag it to a new position.


Line Object Layout

Symbol Description
a Object Name
b Sizing Handle

Line Object Properties
The following list is a summary of properties for the Line object.
The details for each property are explained in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

AngleBase

Sets the reference angle

Default: 0

AngleMode

Sets the angle output format.

Default: 1 - Default

LabelBackColor

Sets the background color for the object's label.

Default: Transparent

Caption

Used to assign a caption to the Line object.

Default: Empty String

CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

Directed

Specifies whether to set the angle using the line direction

Default: True

Enabled

Specifies whether to execute the object.

Default: True

EndPntObjResult

Specifies which result to use from the EndPointObject.

Default: 1

EndPointObject

Specifies which vision object to use to define the end point of the Line.

Default: Screen

EndPointType

Defines the type of end point used to define the end point of a line.

Default: 0 - Point

FailColor

Selects the color for an object when it is failed.

Default: Red

Frame

Specifies which positioning frame to use.

Default: None

Graphics

Specifies which graphics to display.

Default: 1 - All

MaxLength

Defines the upper length limit for the Line object.

For a Line to be found it must have a Length result below the value set for MaxLength property.

Default: 9999

MaxPixelLength

Defines the upper pixel length limit for the Line object.

For a Line to be found it must have a PixelLength result below the value set for MaxPixelLength property.

Default: 9999

MinLength

Defines the lower length limit for the Line object.

For a Line to be found it must have a Length result above the value set for MinLength property.

Default: 0

MinPixelLength

Defines the lower length limit for the Line object.

For a Line to be found it must have a PixelLength result above the value set for MinPixelLength property.

Default: 0

Name

Used to assign a unique name to the Line object.

Default: Line01

PassColor

Selects the color for passed objects.

Default: LightGreen

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

ShowExtensions

When set to True, this causes the graphics display of the line to be extended to the end of the image display.

Default: False

StartPntObjResult

Specifies which result to use from the StartPointObject.

Default: 1

StartPointObject

Specifies which vision object to use to define the start point of the Line.

Default: Screen

StartPointType

Defines the type of start point used to define the start point of a line.

Default: 0 - Point

X1 The X coordinate position of the start point of the line.
X2 The X coordinate position of the end point of the line.
Y1 The Y coordinate position of the start point of the line.
Y2 The Y coordinate position of the end point of the line.

Line Object Results
The following list is a summary of the Line object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the angle of the Line formed from StartPoint to EndPoint with respect to the 0° position at 3 o’clock.
CameraX1 Returns the X coordinate position of the starting point of the line in the camera coordinate system.
CameraX2 Returns the X coordinate position of the ending point of the line in the camera coordinate system.
CameraY1 Returns the Y coordinate position of the starting point of the line in the camera coordinate system.
CameraY2 Returns the Y coordinate position of the ending point of the line in the camera coordinate system.
Found

Returns whether the object was found.

(i.e. If the vision objects used to define the Line object are not found, then the Line object is not found.)

Length

Returns the length of the line in millimeter units.

(The camera must be calibrated or “no cal” will be returned as the length.)

If the Line object fails due to the MaxLength and MinLength constraints, the Length result is shown in red in the Results list.

NumberFound Returns the number of lines found.
PixelLength

Returns the length of the line in pixels.

If the Line object fails due to the MaxPixelLength and MinPixelLength constraints, the PixelLength result is shown in red in the Results list.

PixelLine

Runtime only.

Returns the four line coordinates X1, Y1, X2, Y2 in pixels.

PixelX1 Returns the X coordinate position of the starting point of the line in pixels.
PixelX2 Returns the X coordinate position of the ending point of the line in pixels.
PixelY1 Returns the Y coordinate position of the starting point of the line in pixels.
PixelY2 Returns the Y coordinate position of the ending point of the line in pixels.
RobotX1 Returns the X coordinate position of the start point of the detected edge line in the Robot coordinate system.
RobotX2 Returns the X coordinate position of the end point of the detected edge line in the Robot coordinate system.
RobotY1 Returns the Y coordinate position of the start point of the detected edge line in the Robot coordinate system.
RobotY2 Returns the Y coordinate position of the end point of the detected edge line in the Robot coordinate system.
RobotU Returns the angle of the Line formed from StartPoint to EndPoint with respect to the Robot Coordinate System.

Using Line Objects
The next few sections guide you through how to create and use a Line object.

  • How to create a new Line object
  • Position and Size the search window
  • Configure the properties associated with the Line object
  • Test the Line object & examine the results

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button. You can also select a sequence which was created previously by clicking on the sequence tree.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a new Line object

  1. Click the [All Tools] - [New Line] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the Line object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display, then click the left mouse button to create the object.
  4. Notice that a name for the object is automatically created. In the example, it is called “Line01” because this is the first Line object created for this sequence. (We will explain how to change the name later.)

Step 2: Positioning the Line Object
You should now see a Line object similar to the one shown below:


New Line Object

Symbol Description
a Object Name
b Sizing Handle

Line objects do not have a window. You can change the length and rotation by clicking on either sizing handle and dragging that end of the line to a new position. You can also click the name label of the Line object or anywhere along the line and, while holding the mouse down, drag the entire Line object to a new location on the screen. When you find the position you like, release the mouse and the Line object will stay in this new position on the screen.

Step 3: Configuring Properties for the Line Object
We can now set property values for the Line object. To set any of the properties simply click the associated property’s value field and then either enter a new value or if a drop down list is displayed click one of the items in the list.
Shown below are some of the more commonly used properties for the Line object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.

  • "Vision Guide 8.0 Properties & Result Reference"
  • Line Object Properties
Property Description
Name property The default name given to a newly created Line object is “Linexx” where xx is a number which is used to distinguish between multiple Line objects within the same vision sequence. If this is the first Line object for this vision sequence then the default name will be “Line01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the Line object’s name is displayed is updated to reflect the new name.
StartPointObject property

Typically you will set this property to one of the objects that occur previously in the sequence.

This will determine the starting point of the line at runtime.

EndPointObject property

Typically you will set this property to one of the objects that occur previously in the sequence.

This will determine the end point of the line at runtime.

Set the AngleMode property to “2 - UseAngleBase” to specify the angle output format using the Directed and AngleBase property settings. The Directed property specifies whether angles depend on the orientation of the Line object. Angles are output relative to the reference angle set with AngleBase.
For example, when performing measurements based on the angle set at teach time, the output is the angle measured relative to the angle set in the AngleBase property. Set the AngleMode property to “1 - Default” to output angles from 0 to 360°.
When doing so, the following configuration will result in the same behavior (a runtime sketch follows the list):

  • AngleMode property: 2-UseAngleBase
  • AngleBase: 0
  • Directed: True
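
These angle properties can also be set from a SPEL+ program. A sketch, assuming a sequence "LineSeq" with a Line object "Line01" (hypothetical names; the 45 degree reference is an example only):

    VSet LineSeq.Line01.AngleMode, 2      ' 2 - UseAngleBase
    VSet LineSeq.Line01.AngleBase, 45     ' output angles relative to 45 degrees
    VSet LineSeq.Line01.Directed, False   ' ignore the line direction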

Step 4: Running the Line Object and Examining the Results
To run the Line object, simply do the following:
Click the [Run] button of the object on the execution panel. If either the StartPointObject or EndPointObject properties are not Screen, then the respective objects will be run first. For example, if the StartPointObject is a Blob object, then the blob will be run first to determine the position of the starting point of the line.
Results for the Line object will now be displayed. The primary results to examine at this time are:

Results Description
Length results

The length of the line in millimeters.

There must be a calibration associated with the sequence that contains the line for Length to be determined.

PixelX1 Result

PixelY1 Result

PixelX2 Result

PixelY2 Result

The XY position for both ends of the line. (unit: pixel)
PixelLength results The length of the line in pixels.
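
A sketch of reading these results from a SPEL+ program, using the hypothetical "LineSeq" and "Line01" names:

    Function CheckLine
        Boolean found
        Real lineLen, pixLen

        VRun LineSeq
        VGet LineSeq.Line01.Found, found
        If found Then
            ' Length requires a calibrated sequence; PixelLength does not
            VGet LineSeq.Line01.Length, lineLen
            VGet LineSeq.Line01.PixelLength, pixLen
            Print "Length: ", lineLen, " mm (", pixLen, " pixels)"
        EndIf
    Fend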

Point Object

Point Object Description
The Point object is used for defining points. Point objects can be thought of as a type of utility object that is normally used with other vision objects.
Point objects are most useful for defining position references for Polar and Line objects, as follows:

Object Description
Polar Objects: Point objects can be used to define the CenterPoint property, which is used as the center of the Polar object (the CenterX and CenterY properties).
Line Objects: Point objects can be used to define the start point, midpoint, or end point of a single line or the intersection point of 2 lines.
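
For example, a Point object can be placed at the intersection of two Line objects by setting its PointType to the two-line intersection option and assigning the lines. A sketch, assuming a sequence "PntSeq" with Line objects "Line01" and "Line02" feeding "Point01" (hypothetical names):

    ' Position Point01 at the intersection of Line01 and Line02
    VSet PntSeq.Point01.LineObject1, "Line01"
    VSet PntSeq.Point01.LineObject2, "Line02"
    VRun PntSeq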

Point Object Layout
The Point object layout appears as a cross hair on the screen. The only real manipulation for a Point object is to change the position. This is done by clicking on the Point object and dragging it to a new position. In most cases Point objects are attached to other objects so the position is calculated based on the position of the associated object.


Point Object Layout:

Symbol Description
a Object Name

Point Object Properties
The following list is a summary of properties for the Point object. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

AngleObject

Specifies the object whose angle result is used as the output angle.

Default: Screen

AngleObjectResult Specifies the result for the AngleObject property to use.
CalRobotPlacePos Calibrates RobotPlacePos when designing and running the program.
Caption

Used to assign a caption to the Point object.

Default: Blank

CenterPntObjResult

Specifies which result to use from the CenterPointObject.

If All is specified, the Point object will be applied to all of the results (NumberFound) of the specified vision object.

CenterPntOffsetX Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.
CenterPntOffsetY Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.
CenterPntRotOffset Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.
CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be placed at an arbitrary position. However, when another vision object is specified, the center point is set to the PixelX and PixelY results of that object.

CoordObject Specifies the Coordinates object to copy the result to. The copy is performed when the object is executed; if the object does not execute because of the branch function of a Decision object, the copy is not performed. Default: None
CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: 1 - Red

Frame

Specifies which positioning frame to use.

Default: None

FrameResult Specifies which number of the Frame results to be used.
Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Sets the background color for the object's label.

Default: Transparent

LineObj1Result Sets which result of the object specified by the LineObject1 property is used.
LineObj2Result Sets which result of the object specified by the LineObject2 property is used.
LineObject1

Defines the Line object used when the Point object specifies the midpoint of a line.

Also, when defining the intersection of two lines, specifies the first Line object.

Default: none

LineObject2

Defines the 2nd Line object used to define the position of the point as the intersection of 2 lines.

Default: none

Name

Used to assign a unique name to the Point object.

Default: Point01

PassColor

Selects the color for passed objects.

Default: LightGreen

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

PointType

Defines the position type for the point.

Default: Screen

X The X coordinate position of the Point object in pixels.
Y The Y coordinate position of the Point object in pixels.

Point Object Results
The following list is a summary of the Point object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the angle of the detected point in degrees.
CameraX Returns the X coordinate position of the Point object’s position in the camera coordinate system.
CameraY Returns the Y coordinate position of the Point object’s position in the camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the Point object’s position in the camera coordinate system.

ColorValue Returns the grayscale value or color value of the pixel at the current location.
NumberFound Returns the number of points found.
Found

Returns whether the Point object was found.

(i.e. If the vision objects used to define the position of the Point object are not found, then the Point object is not found.)

Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate position of the Point object’s position in pixels.
PixelY Returns the Y coordinate position of the Point object’s position in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the found part’s position in pixels.

RobotX Returns the X coordinate position of the found part’s position with respect to the Robot’s Coordinate System.
RobotY Returns the Y coordinate position of the found part's position with respect to the Robot’s Coordinate System.
RobotU

Returns the amount of rotation of the found part's position with respect to the Robot’s Coordinate System.

(Keep in mind that the U coordinate value has no meaning since a Point object has no rotation.)

RobotXYU

Runtime only.

Returns the RobotX, RobotY coordinates and the amount of rotation of the found object’s position with respect to the Robot Coordinate System. (Keep in mind that the U coordinate value has no meaning since a Point object has no rotation.)

Understanding Point Objects
Point objects were created to give users a method to mark a position on the screen or with respect to Line objects so that these positions could be used by other vision objects as reference positions.
There are 2 fundamental parts to understanding Point objects:

  • Defining the Position of the Point Object
  • Using the Position of the Point Object as a Reference Position for Other Objects

Defining the Position of the Point Object
The position of a Point object can either be based upon the position on the screen where you place the Point object, the midpoint of a Line object, or the intersection between 2 Line objects.
The PointType property is used to define which item the Point object’s position is based upon. The PointType property can be set to Screen, MidPoint, or Intersection.

  • When the PointType property is Set to Screen
    When a Point object is first created the default PointType is Screen which means that the position of the Point object is based upon the position on the screen where the Point object was placed.
    As long as the PointType is set to Screen, the Point object’s position only changes when you move it manually with the mouse or change the values of the X property or Y property.

  • Setting the PointType property to MidPoint
    A Point object’s position can be defined as the midpoint of a line by setting the PointType property to MidPoint.
    However, it is important to note that you must first define the line to be used prior to setting PointType to MidPoint. Otherwise, Vision Guide 8.0 has no way of knowing which line to use for computing the midpoint. The LineObject1 property defines the Line object whose midpoint will be computed; the Point object’s position is then set to that midpoint.
    Any valid Line object that is ahead of the Point object in the vision sequence Step list can be selected from the dropdown list of the LineObject1 property.
    In fact, Vision Guide automatically checks which Line objects are ahead of the Point object in the sequence Step list and only displays these lines in the LineObject1 drop down list. This makes the system easier to use.
    If you try to set the PointType property to MidPoint without first specifying the Line object to be used with the LineObject1 property, then an error message will appear telling you that you must first select a line for the LineObject1 property.

  • Setting the PointType property to Intersection
    To define a Point object’s position as the intersection point between 2 lines requires that the 2 lines first be defined. This is done with the LineObject1 and LineObject2 properties.
    The LineObject1 and LineObject2 properties must each define a different Line object. Once a Line object is defined for each, the PointType property can be set to Intersection which indicates that the position of the Point object will be the intersection point of the 2 lines defined by the LineObject1 and LineObject2 properties.
    Any valid Line object which is ahead of the Point object in the vision sequence Step list can be selected from the drop down list of the LineObject1 property to be used as the 1st line required for the intersection.
    Then any valid remaining Line object which is ahead of the Point object in the vision sequence Step list can be selected from the drop down list of the LineObject2 property to be used as the 2nd line required for the intersection.
    Vision Guide automatically takes care of displaying only those Line objects that are valid Line objects for use as LineObject1 or LineObject2 in the associated drop down lists.
    If you try to set the PointType property to Intersection without first specifying the Line objects to use for both LineObject1 and LineObject2, then an error message will appear telling you that you must first define LineObject1 or LineObject2 (whichever is not yet defined) prior to setting the PointType property to Intersection.

KEY POINTS


Using the intersection of 2 lines to define a position is useful for computing things such as the center of an object. For example, consider a rectangular object. Once you find the 4 corners, you can create 2 diagonal lines that intersect at the center of the rectangle. A Point object positioned on this intersection point is then positioned at the center of the rectangle.
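A hedged SPEL+ sketch of this idea: suppose a sequence RectSeq has been configured in the Vision Guide GUI with four corner-finding objects, two diagonal Line objects, and a Point object Point01 whose PointType is Intersection (all names here are examples, not a prescribed setup).

    Function FindRectCenter
        Boolean found
        Real cx, cy

        VRun RectSeq                      ' corner and Line objects run first
        VGet RectSeq.Point01.Found, found
        If found Then
            ' Point01 sits at the intersection of the two diagonals,
            ' i.e. the center of the rectangle
            VGet RectSeq.Point01.PixelX, cx
            VGet RectSeq.Point01.PixelY, cy
            Print "Rectangle center (pixels): ", cx, ", ", cy
        EndIf
    Fend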

Using the Position of the Point Object as a Reference Position for Other Objects
The Point object’s primary purpose is to act as a reference position for other vision objects such as the Line and Polar objects.
This means that the position of the Point object is used as a base position for a Line or Polar object.
This is powerful because it allows you to create such situations as a Point object which is defined as the intersection of 2 lines to calculate the center of an object, which can then be used as an end point for a Line object which calculates the distance between the centers of 2 objects.

Point Objects used as a Reference Position for Polar Objects
The Polar object requires a CenterPoint position to base its Polar Search upon. When first using the Polar object, you may try using the screen as the reference point for the CenterPoint property, but you will soon find that as the feature you are searching for moves, the Polar object must also move with the center position of the feature.
This is where being able to apply the XY position from another vision object (such as the Point object) to the Polar object becomes very powerful.
There may be times where you want to apply the position of the Point object as the CenterPoint of a Polar object. There are 2 primary situations for which Point objects can be used as the CenterPoint of a Polar object.

  • CenterPoint defined as the midpoint of a line
  • CenterPoint defined as the intersection point of 2 lines

For example, after you find the midpoint of a line, you may want to do a Polar search from this midpoint. You may also want to base a Polar search CenterPoint on the intersection point of 2 lines. This is the more common use for Polar objects with Point objects.

Point Objects used as a Reference Position for Line Objects
A line requires a starting and ending position. Many times the starting and ending positions of lines are based on the XY position results from other vision objects such as the Blob or Correlation object.
However, you can also use a Point object position as the reference position of a line. A line can be defined with both its starting and ending positions being based on Point objects or it can have just one of the endpoints of the line be based on a Point object.
One of the more common uses of the Point object is to use it to reference the intersection point of 2 lines.
The figure below shows an example of 2 Line objects (Line1 and Line2) which intersect at various heights. Line3 is used to calculate the height of the intersection point. The Point object is shown at the intersection point between Line1 and Line2.


Point object defined as intersection point of Lines 1 and 2

Symbol Description
a Point Object
b Height

Using Point Objects
The next few pages take you through how to create and use a Point object. We will review the following items:

  • Create a new Point object
  • Positioning the Point object on the screen
  • Configuring properties associated with the Point object
  • Running the Point object and Examining the results

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use.
If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a new Point object

  1. Click the [All Tools] - [Point] button on the Vision Guide toolbar.
  2. You will see a point icon appear above the Point object button.
  3. Click the point icon and drag to the image display of the Vision Guide window.
  4. Notice that a name for the object is automatically created. In the example, it is called “Point01” because this is the first Point object created for this sequence. (We will explain how to change the name later.)

Step 2: Positioning the Point object
You should now see a Point object similar to the one shown below:


New Point Object Layout

Symbol Description
a Object Name

Point objects cannot be resized since they are just a point and have no height or thickness. However, they can be positioned with the mouse or by setting position values into the X and Y properties (see the sketch after the steps below).
Since Point objects are created with a PointType property setting of “0 - Screen”, we can move the Point object around a little with the mouse as described below:

  1. Click the name label of the Point object and while holding the mouse down drag the Point object to a new location on the screen.
  2. When you find the position you like, release the mouse and the Point object will stay in this new position on the screen.
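For completeness, a Screen-type Point object can also be repositioned programmatically by writing its X and Y properties; a minimal sketch with placeholder sequence and object names:

    ' Move Point01 to pixel (320, 240); applies while PointType is 0 - Screen
    VSet PointSeq.Point01.X, 320
    VSet PointSeq.Point01.Y, 240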

Step 3: Configuring Properties for the Point Object
We can now set property values for the Point object.
To set any of the properties simply click the associated property’s value field and then either enter a new value or if a drop down list is displayed click one of the items in the list.
Shown below are some of the more commonly used properties for the Point object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

We should not have to set any of these properties to test our Point object because the default values are fine since we are not positioning the Point object at the midpoint of a single line or intersection point of 2 lines. However, you may want to read through this information if you are working with Point objects for the first time.

Property Description
Name property (“Pointxx”) The default name given to a newly created Point object is “Pointxx” where xx is a number which is used to distinguish between multiple Point objects within the same vision sequence. If this is the first Point object for this vision sequence then the default name will be “Point01”. To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the Point object’s name is displayed is updated to reflect the new name.
LineObject1 (None)

If you set the Point object’s PointType property to MidPoint (specifying the midpoint of a line), then this property specifies which line to use.

It is also used to specify the 1st of 2 lines required if the PointType will be an intersection point. Default is set to “None”.

LineObject2 (None)

If you set the Point object's PointType property to Intersection (specifying the intersection point of 2 lines), then this property specifies the 2nd line to use.

The 1st line for the intersection is specified by the LineObject1 property. Default is set to “None”.

PointType (Screen)

This property is used to define the position of the Point object.

It can be based on the Screen position, midpoint of a line specified by the LineObject1 property, or the intersection point of 2 lines specified by the LineObject1 and LineObject2 properties.

Default: Screen

Since there are no Line objects specified in this example, the LineObject1, LineObject2, and PointType properties cannot be changed from their default state.
Normally, we would select a Line Object for the LineObject1 property if we want to make this Point the midpoint of a line.
Or we would select a Line object for the LineObject1 property and a 2nd Line object for the LineObject2 property if we want to make this Point an intersection point between two lines.
For more details, refer to Defining the Position of the Point Object earlier in the section.
Object angles specified using the AngleObject property can be set as the angle output from the Point object. By setting the angle of an object preceding the Point object as the Point object’s angle, the Point object can output not only the XY position but also the U angle together with it, for example in the form of RobotXYU.
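For instance, if a Blob object Blob01 precedes Point01 in the sequence and Point01's AngleObject property has been set to Blob01 in the Vision Guide GUI, the point's RobotXYU result carries the blob's angle as U. A minimal sketch with placeholder names:

    Function ShowPointAngle
        Boolean found
        Real x, y, u

        VRun PointSeq
        ' U reflects the Angle result of Blob01 via the AngleObject property
        VGet PointSeq.Point01.RobotXYU, found, x, y, u
        Print "X: ", x, " Y: ", y, " U: ", u
    Fend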

Step 4: Running the Point Object and Examining the Results
To run the Point object, simply do the following:

  1. Click the [Run] button of the object on the execution panel.

Results for the Point object will now be displayed. The primary results to examine at this time are:

Results Description

PixelX result

PixelY result

The XY position (in pixels) of the Point object.

If the PointType property for the Point object was set to MidPoint, then the PixelX and PixelY results would return the XY position (in pixels) of the midpoint of the Line object specified by LineObject1.

CameraX result

CameraY result

These define the XY position of the Point object in the camera’s coordinate system.

The CameraX and CameraY results will only return a value if the camera has been calibrated. If it has not then [No Cal] will be returned.

RobotX result

RobotY result

These define the XY position of the Point object in robot coordinates.

The robot can be told to move to this XY position. (No other transformation or other steps are required.) The RobotX and RobotY results will only return a value if the camera has been calibrated. If it has not then [No Cal] will be returned.
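A minimal SPEL+ sketch for reading these results (placeholder names; the Robot results assume a calibrated sequence, as noted above):

    Function ShowPointResults
        Real px, py, rx, ry

        VRun PointSeq
        VGet PointSeq.Point01.PixelX, px
        VGet PointSeq.Point01.PixelY, py
        ' RobotX/RobotY are only available with an associated calibration
        VGet PointSeq.Point01.RobotX, rx
        VGet PointSeq.Point01.RobotY, ry
        Print "Pixel: ", px, ", ", py, "  Robot: ", rx, ", ", ry
    Fend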

BoxFinder Object

BoxFinder Object Description
BoxFinder objects are used to identify the position of rectangle edges (including squares) in an image.
BoxFinder objects process multiple Edge objects automatically to identify the edge position and obtain the rectangle identified from each edge position.
The edge of an object in an image is a change in gray value from dark to light or light to dark. This change may span several pixels.
Each edge search of the BoxFinder object finds the transition from Light to Dark or Dark to Light as defined by the Polarity property and specifies the best line between the detected edge positions. You can also search for edge pairs by changing the EdgeType property. With edge pairs, two opposing edges are searched for, and the midpoint is returned as the result.

BoxFinder Object Layout
Similar to LineFinder, BoxFinder objects have a direction indicator showing the edge search direction within the search window. This differs from LineFinder in that a direction indicator covers each direction for all four sides of the search window. The number of edge search lines is specified with the NumberOfEdges property. You can specify the search direction using the Direction property.


BoxFinder Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Size & Direction Handle
d Direction indicator (Direction of edge search)

The BoxFinder object can be positioned to search in any direction (not just along the vertical and horizontal directions). As with SearchWinType=AngledRectangle for Blob objects, use the rotation handle of the BoxFinder object's search window to orient the object toward the intended edge detection direction.
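The search window pose can also be adjusted from a SPEL+ program through the SearchWin* properties listed below; a sketch with placeholder names, assuming these properties accept VSet at runtime:

    ' Center the search window at (400, 300), size 200 x 150 pixels,
    ' rotated 30 degrees to match the expected part orientation
    VSet BoxSeq.BoxFind01.SearchWinCenterX, 400
    VSet BoxSeq.BoxFind01.SearchWinCenterY, 300
    VSet BoxSeq.BoxFind01.SearchWinWidth, 200
    VSet BoxSeq.BoxFind01.SearchWinHeight, 150
    VSet BoxSeq.BoxFind01.SearchWinAngle, 30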

BoxFinder Object Properties
The following list is a summary of properties for the BoxFinder object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found.

If the value is small, it may result in false detection.

Default: 100

Caption

Used to assign a caption to the BoxFinder object.

Default: Blank

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be placed at an arbitrary position. However, when another vision object is specified, the center point is set to the PixelX and PixelY results of that object.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject.

If All is specified, the BoxFinder object will be applied to all of the results (NumberFound) of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

Default: False

ContrastTarget

Sets the desired contrast for the edge search.

Default: 0 (best contrast)

ContrastVariation

Selects the allowed contrast variation for ContrastTarget.

Default: 0

CoordObject
Specifies the Coordinates object to copy the result to. The copy is performed when the object is executed; if the object does not execute because of the branch function of a Decision object, the copy is not performed. Default: None

CurrentResult
Defines which result to display in the Results list on the object window or which result to return data for when the system is requested to find more than one of a like feature within a single search window.

Description

Sets a user description

Default: Blank

Direction

Sets the direction for the edge search.

Default: InsideOut

EdgeSort

Sets the method of sorting detected edge results

Default: Score

EdgeThreshold

Sets the threshold at which edges below this value are ignored.

Default: 2

EdgeType

Select the type of edge to search for: single or pair.

Default: 1 - Single

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

FittingThreshold

Specifies the edge results to use for linear fittings.

Default: 10

Frame

Specifies which positioning frame to use.

Default: none

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Selects the background color for an object label.

Default: Transparent

MissingEdgeType

Defines how to handle a missing edge.

Default: Interpolate

Name

Used to assign a unique name to the BoxFinder object.

Default: BoxFind01

NumberOfEdges

Specifies the number of edges to be detected.

Default: 5

PassColor

Selects the color for an object when it is passed.

Default: Light Green

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Polarity

Specifies whether the BoxFinder object should search for a LightToDark or DarkToLight transition.

Default: 1 - LightToDark

ScoreWeightContrast

Sets the percentage of the score that depends on contrast.

Default: 50

ScoreWeightStrength

Sets the percentage of the score that depends on edge strength.

Default: 50

SearchLength

Defines the length of the edge search range.

The following SearchLength1 to 4 values can be set together.

SearchLength1 Sets the SearchLength1 length in the figure “BoxFinder object properties, positional relationship of results”.
SearchLength2 Sets the SearchLength2 length in the figure “BoxFinder object properties, positional relationship of results”.
SearchLength3 Sets the SearchLength3 length in the figure “BoxFinder object properties, positional relationship of results”.
SearchLength4 Sets the SearchLength4 length in the figure “BoxFinder object properties, positional relationship of results”.
SearchWidth

Defines the width of the edge search.

Range is from 3 to 99.

Default: 3

SearchWin Runtime only. Sets or returns the search window left, top, height, width parameters in one call.
SearchWinAngle Defines the angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight Defines the height of the area to be searched. (unit: pixel)
SearchWinLeft Defines the left most position of the area to be searched. (unit: pixel)
SearchWinTop Defines the upper most position of the area to be searched. (unit: pixel)
SearchWinWidth Defines the width of the area to be searched. (unit: pixel)
StrengthTarget

Sets the desired edge strength to search for.

Default: 0

StrengthVariation

Sets the amount of variation for StrengthTarget.

Default: 0


BoxFinder object properties, positional relationship of results

BoxFinder Object Results
The following list is a summary of the BoxFinder object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the detected rectangle angle.
CameraX Returns the central X coordinate of the detected rectangular edge in the Camera coordinate system.
CameraY Returns the central Y coordinate of the detected rectangular edge in the Camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates at the center of the detected rectangular edge position in the camera coordinate system.

CameraX1 Returns the Corner1 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Camera coordinate system.
CameraY1 Returns the Corner1 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Camera coordinate system.
CameraX2 Returns the Corner2 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Camera coordinate system.
CameraY2 Returns the Corner2 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Camera coordinate system.
CameraX3 Returns the Corner3 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Camera coordinate system.
CameraY3 Returns the Corner3 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Camera coordinate system.
CameraX4 Returns the Corner4 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Camera coordinate system.
CameraY4 Returns the Corner4 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Camera coordinate system.
Contrast Returns the contrast of the detected rectangular edge.
FitError Returns the distance between each edge point and the Line detected as the root mean square (RMS).
Found Returns whether the object was found. (i.e. did the feature or part you are looking at have a shape score that is above the Accept property’s current setting.)
MaxError Returns the maximum difference from the detected rectangular edge in pixel length.
Passed Returns whether the object detection result was accepted.
Perimeter Returns the number of pixels of the perimeter of the detected rectangle.
PixelX Returns the central X coordinate of the detected rectangular edge in the Image coordinate system.
PixelY Returns the central Y coordinate of the detected rectangular edge in the Image coordinate system.
PixelX1 Returns the Corner1 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Image coordinate system.
PixelY1 Returns the Corner1 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Image coordinate system.
PixelX2 Returns the Corner2 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Image coordinate system.
PixelY2 Returns the Corner2 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Image coordinate system.
PixelX3 Returns the Corner3 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Image coordinate system.
PixelY3 Returns the Corner3 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Image coordinate system.
PixelX4 Returns the Corner4 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Image coordinate system.
PixelY4 Returns the Corner4 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Image coordinate system.
PixelXYU Returns the PixelX, PixelY, and PixelU coordinates at the center of the detected rectangular edge position in pixels.
RobotX Returns the central X coordinate of the detected rectangular edge in the Robot coordinate system.
RobotY Returns the central Y coordinate of the detected rectangular edge in the Robot coordinate system.
RobotU Returns the central U coordinate of the detected rectangular edge in the Robot coordinate system.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates at the center of the detected rectangular edge position with respect to the Robot Coordinate System.

RobotX1 Returns the Corner1 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Robot coordinate system.
RobotY1 Returns the Corner1 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Robot coordinate system.
RobotX2 Returns the Corner2 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Robot coordinate system.
RobotY2 Returns the Corner2 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Robot coordinate system.
RobotX3 Returns the Corner3 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Robot coordinate system.
RobotY3 Returns the Corner3 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Robot coordinate system.
RobotX4 Returns the Corner4 X coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Robot coordinate system.
RobotY4 Returns the Corner4 Y coordinate in the figure “BoxFinder object properties, positional relationship of results” in the Robot coordinate system.
ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Strength Returns the strength of the found edge.
Time Returns the amount of time required to process the object (unit: millisecond).

Using BoxFinder Objects
The next few sections guide you through how to create and use a BoxFinder object.

  • How to create a new BoxFinder object
  • Position and Size the search window
  • Configure the properties associated with the BoxFinder object
  • Test the BoxFinder object & examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a New BoxFinder Object

  1. Click the [All Tools] - [BoxFinder] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the BoxFinder object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display.
  4. Click the left mouse button to create the object.
  5. Notice that a name for the object is automatically created. In the example, it is called “BoxFind01” because this is the first BoxFinder object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a BoxFinder object similar to the one shown below:


New BoxFinder Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Size & Direction Handle
d Direction indicator (Direction of edge search)
  1. Click the name label of the BoxFinder object and, while holding the mouse down, drag the BoxFinder object to the position where you would like the search window to reside.
  2. Resize the BoxFinder object search window as required using the search window size handles. (This means click a size handle and drag the mouse.)

Step 3: Configuring Properties for the BoxFinder Object
We can now set property values for the BoxFinder object. To set any of the properties simply click the associated property’s value field and then either enter a new value or if a drop down list is displayed select one of the items in the list.
Shown below are some of the more commonly used properties for the BoxFinder object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
Name property

The default name given to a newly created BoxFinder object is “BoxFind**” where ** is a number which is used to distinguish between multiple BoxFinder objects within the same vision sequence.

If this is the first BoxFinder object for this vision sequence, the default name will be “BoxFind01”.

To change the name, click the Value field of the Name property, type a new name and press the return key. Once the name property is changed, everywhere the BoxFinder object's name is displayed is updated to reflect the new name.

EdgeType (Single)

Select the type of the edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

NumberOfEdges(5) Searches for five edges along each side of the search window.
Polarity (LightToDark)

Search for edges using “LightToDark” polarity.

If you are looking for a DarkToLight edge, change polarity.

Step 4: Running the BoxFinder Object and Examining the Results
To run the BoxFinder object, click [Run] of the object on the execution panel. Results for the BoxFinder object will now be displayed. The primary results to examine at this time are:

Results Description
Angle Result Returns the angle of the detected rectangular edge in the Image coordinate system.
MaxError Result Returns the maximum difference from the detected rectangular edge in pixel length.

PixelX result

PixelY result

Returns the XY coordinate positions at the center of the detected rectangular edge in the Image coordinate system.

CameraX result

CameraY result

Returns the XY coordinate positions at the center of the detected rectangular edge in the Camera coordinate system. If the calibration is not performed for the XY coordinate positions, “no cal” will be returned.

RobotX result

RobotY result

Returns the XY coordinate positions at the center of the detected rectangular edge in the Robot coordinate system. If the calibration is not performed for the XY coordinate positions, “no cal” will be returned.
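In SPEL+, the center and corner positions can be read as in the sketch below (placeholder names; Robot coordinates assume a calibrated sequence):

    Function ShowBoxResults
        Boolean found
        Real x, y, u, x1, y1

        VRun BoxSeq
        VGet BoxSeq.BoxFind01.Found, found
        If found Then
            ' Center of the detected rectangle
            VGet BoxSeq.BoxFind01.RobotXYU, found, x, y, u
            ' Corner1 position (Corner2 to Corner4 follow the same pattern)
            VGet BoxSeq.BoxFind01.RobotX1, x1
            VGet BoxSeq.BoxFind01.RobotY1, y1
        EndIf
    Fend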

CornerFinder Object

CornerFinder Object Description
CornerFinder objects are used to identify the position of the corner of two lines in the image.
CornerFinder objects process multiple Edge objects automatically to identify the edge position and obtain the corner of two lines identified from each edge position.
The edge of an object in an image is a change in gray value from dark to light or light to dark. This change may span several pixels.
Each edge search of the CornerFinder object finds the transition from Light to Dark or Dark to Light as defined by the Polarity property and specifies the best line between the detected edge positions. You can also search for edge pairs by changing the EdgeType property. With edge pairs, two opposing edges are searched for, and the midpoint is returned as the result.

CornerFinder Object Layout
Similar to LineFinder, CornerFinder objects have a direction indicator showing the edge search direction within the search window. This differs from LineFinder in that a direction indicator covers each direction for two sides of the search window. The number of edge search lines is specified with the NumberOfEdges property. The search direction can be specified by the Direction property.


CornerFinder Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Size & Direction Handle
d Direction indicator (Direction of edge search)

The CornerFinder object can be positioned to search in any direction (not just along the vertical and horizontal directions). As with SearchWinType=AngledRectangle for Blob objects, use the rotation handle of the CornerFinder object's search window to orient the object toward the intended edge detection direction.

CornerFinder Object Properties
The following list is a summary of properties for the CornerFinder object. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Accept

Specifies the shape score that a feature must equal or exceed to be considered found.

Default: 100

Caption

Used to assign a caption to the CornerFinder object.

Default: Empty String

CenterPointObject

Specifies the position to be used as the center point of the object.

If the property is set to “Screen”, the object can be placed to anywhere in the screen. If other vision objects are specified, the center point will be set to the PixelX and PixelY results of the object.

Default: Screen

CenterPntObjResult

Specifies which result to use from the CenterPointObject.

If All is specified, the CornerFinder object will be applied to all of the results (NumberFound) of the specified vision object.

Default: 1

CenterPntOffsetX

Sets or returns the X offset after the center of the search window is positioned with the CenterPointObject property.

Default: 0

CenterPntOffsetY

Sets or returns the Y offset after the center of the search window is positioned with the CenterPointObject property.

Default: 0

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

Default: False

ContrastTarget

Sets the desired contrast for the edge search.

Default: 0 (best contrast)

ContrastVariation

Selects the allowed contrast variation for ContrastTarget.

Default: 0

CoordObject

Specifies Coordinates object to copy the result. The copy is executed when the object is executed, and if it didn’t execute because of branch function of Decision, the copy will not be executed.

Default: None

CurrentResult Defines which result to display in the Results list on the object window or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

Direction

Sets the direction for the edge search.

Default: InsideOut

EdgeSort

Sets the method of sorting detected edge results.

Default: Score

EdgeThreshold

Sets the threshold at which edges below this value are ignored.

Default: 2

EdgeType

Select the type of edge to search for: single or pair.

Default: 1 - Single

Enabled

Specifies whether to execute the object.

Default: True

FailColor

Selects the color of an object when it is not accepted.

Default: Red

FittingThreshold

Specifies the edge results to use for linear fittings.

Default: 10

Frame

Specifies which positioning frame to use.

Default: none

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor

Selects the background color for an object label.

Default: Transparent

MissingEdgeType

Defines how to handle a missing edge.

Default: Interpolate

Name

Used to assign a unique name to the CornerFinder object.

Default: CornerFind01

NumberOfEdges

Specifies the number of edges to be detected.

Default: 5

PassColor

Selects the color for an object when it is passed.

Default: LightGreen

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Polarity

Specifies whether the CornerFinder object should search for a LightToDark or DarkToLight transition.

Default: 1 - LightToDark

ScoreWeightContrast

Sets the percentage of the score that depends on contrast.

Default: 50

ScoreWeightStrength

Sets the percentage of the score that depends on edge strength.

Default: 50

SearchLength

Defines the length of the edge search range.

The following SearchLength1, 2 values can be set together.

SearchLength1 Sets the SearchLength1 length in the figure “Positional relationship of CornerFinder object properties”.
SearchLength2 Sets the SearchLength2 length in the figure “Positional relationship of CornerFinder object properties”.
SearchWidth

Defines the width of the edge search.

Range is from 3 to 99.

Default: 3

SearchWin

Runtime only.

Sets or returns the search window left, top, height, width parameters in one call.

SearchWinAngle Defines the angle of the area to be searched.
SearchWinCenterX Defines the X coordinate value of the center of the area to be searched.
SearchWinCenterY Defines the Y coordinate value of the center of the area to be searched.
SearchWinHeight Defines the height of the area to be searched. (unit: pixel)
SearchWinLeft Defines the left most position of the area to be searched. (unit: pixel)
SearchWinTop Defines the upper most position of the area to be searched. (unit: pixel)
SearchWinWidth Defines the width of the area to be searched. (unit: pixel)
StrengthTarget

Sets the desired edge strength to search for.

Default: 0

StrengthVariation

Sets the amount of variation for StrengthTarget.

Default: 0


Positional relationship of CornerFinder object properties

CornerFinder Object Results
The following list is a summary of the CornerFinder object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the angle of the detected corner.
CameraX Returns the X coordinate of the detected corner in the Camera coordinate system.
CameraY Returns the Y coordinate of the detected corner in the Camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the detected corner position in the camera coordinate system.

Contrast Returns the contrast of the found Edge.
FitError Returns the distance between each edge point and the detected corner as the root mean square (RMS).
Found Returns whether the object was found. (i.e. did the feature or part you are looking at have a shape score that is above the Accept property’s current setting.)
MaxError Returns the maximum difference from the detected line edge in pixel length.
Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate of the detected corner in the Image coordinate system.
PixelY Returns the Y coordinate of the detected corner in the Image coordinate system.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the detected corner position in pixels.

RobotX Returns the X coordinate of the detected corner in the Robot coordinate system.
RobotY Returns the Y coordinate of the detected corner in the Robot coordinate system.
RobotU Returns the U coordinate of the detected corner in the Robot coordinate system.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the detected corner position with respect to the Robot Coordinate System.

ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Strength Returns the strength of the found edge.
Time Returns the amount of time required to process the object (unit: millisecond).

Using CornerFinder Objects
The next few sections guide you through how to create and use a CornerFinder object.

  • How to create a new CornerFinder object
  • Position and Size the search window
  • Configure the properties associated with the CornerFinder object
  • Test the CornerFinder object & examine the results
  • Make adjustments to properties and test again

Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use. If you have no vision sequence to work with, you can create a new vision sequence by clicking on the [New Sequence] button.
You can also select a sequence which was created previously by clicking on the sequence tree in the Vision Guide window.
Refer to the following for more details on how to create a new vision sequence or select one that was previously defined.
Vision Sequences

Step 1: Create a New CornerFinder Object

  1. Click the [All Tools] - [CornerFinder] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the CornerFinder object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display.
  4. Click the left mouse button to create the object.
  5. Notice that a name for the object is automatically created. In the example, it is called “CornerFind01” because this is the first CornerFinder object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a CornerFinder object similar to the one shown below:


New CornerFinder Object Layout

Symbol Description
a Step Number in Sequence
b Object Name
c Size & Direction Handle
d Direction indicator (Direction of edge search)
  1. Click the name label of the CornerFinder object and, while holding the mouse down, drag the CornerFinder object to the position where you would like the search window to reside.
  2. Resize the CornerFinder object search window as required using the search window size handles. (This means click a size handle and drag the mouse.)

Step 3: Configuring Properties for the CornerFinder Object
We can now set property values for the CornerFinder object. To set any of the properties simply click the associated property’s value field and then either enter a new value or if a drop down list is displayed select one of the items in the list.
Shown below are some of the more commonly used properties for the CornerFinder object. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
Name property

The default name given to a newly created CornerFinder object is “CornerFind**” where ** is a number which is used to distinguish between multiple CornerFinder objects within the same vision sequence.

If this is the first CornerFinder object for this vision sequence, the default name will be “CornerFind01”.

To change the name, click the Value field of the Name property, type a new name and press the return key. Once the name property is changed, everywhere the CornerFinder object's name is displayed is updated to reflect the new name.

EdgeType (Single)

Select the type of the edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

NumberOfEdges(5) Searches for five edges along each of the two sides of the search window.
Polarity (LightToDark)

Search for edges using “LightToDark” polarity.

If you are looking for a DarkToLight edge, change “Polarity”.

Step 4: Running the CornerFinder Object and Examining the Results
To run the CornerFinder object, click [Run] of the object on the execution panel. Results for the CornerFinder object will now be displayed. The primary results to examine at this time are:

Results Description
Angle Result Returns the angle of the detected corner in the Image coordinate system.
MaxError Result Returns the maximum difference from the detected line. (unit: pixel)

PixelX result

PixelY result

Returns the XY coordinate positions of the detected corner in the Image coordinate system.

CameraX result

CameraY result

Returns the XY coordinate positions of the detected corner in the Camera coordinate system. If the calibration is not performed for the XY coordinate positions, “no cal” will be returned.

RobotX result

RobotY result

Returns the XY coordinate positions of the detected corner in the Robot coordinate system. If the calibration is not performed for the XY coordinate positions, “no cal” will be returned.
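A corresponding SPEL+ sketch for reading the CornerFinder results (placeholder names; Robot coordinates assume a calibrated sequence):

    Function ShowCornerResults
        Boolean found
        Real ang, rx, ry

        VRun CornerSeq
        VGet CornerSeq.CornerFind01.Found, found
        If found Then
            VGet CornerSeq.CornerFind01.Angle, ang
            VGet CornerSeq.CornerFind01.RobotX, rx
            VGet CornerSeq.CornerFind01.RobotY, ry
            Print "Corner at ", rx, ", ", ry, " angle: ", ang
        EndIf
    Fend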

Contour Object

Contour Object Description
The Contour object outputs a trajectory that follows a workpiece's contour. You can easily obtain the travel route from image information when you want to move the robot hand along the workpiece contour (see the retrieval sketch after the mode descriptions below).
The trajectory of a Contour object can be obtained in three ways; use the one that suits the requirements of the application. The way the trajectory is acquired is selected with the ContourMode property, and each mode has its own GUI and properties.
The distinctive features of each mode are explained in brief below.

  • Blob format
    Detects target workpieces in the search window as a blob and acquires their contour. This is used to acquire the contour of complex-shaped workpieces.

    Symbol Description
    a Step Number in Sequence
    b Object Name
    c Search Window
  • Line format
    Obtains the contour using multiple edge search lines arranged horizontally. This is a convenient and easy way to acquire a contour with few undulations from part of the target workpiece.

    Symbol Description
    a Step Number in Sequence
    b Object Name
    c Size & Direction Handle
    d Direction indicator (Direction of edge search)
  • Arc format
    Obtains the contour using multiple edge search lines arranged radially. This is a convenient and easy way to acquire an arc-shaped contour with few undulations from part of the target workpiece.

    Symbol Description
    a Step Number in Sequence
    b Object Name
    c Direction indicator (Direction of edge search)
    d Size & Direction Handle
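Once a mode is configured, the contour points the object outputs can be read back one result at a time. The sketch below uses placeholder names and assumes that the Contour object reports NumberFound and supports the indexed-result form of VGet, as other multi-result objects do:

    Function ReadContour
        Integer i, count
        Boolean found
        Real x, y, u

        VRun ContourSeq
        VGet ContourSeq.Contour01.NumberFound, count
        For i = 1 To count
            ' Read each contour point as an indexed result
            VGet ContourSeq.Contour01.RobotXYU(i), found, x, y, u
        Next
    Fend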

Contour Object Properties
The following list is a summary of the Contour object properties with brief descriptions. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Further, the available properties vary based on the ContourMode value. The list shows which ContourMode formats support each property.

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

Format supported: Blob, Line, Arc

Accept

Specifies the shape score that a feature must equal or exceed to be considered found.

Default: 100

Format supported: Line, Arc

AngleEnd

Specifies the end angle of the range to perform a circular search.

Default: 135

Format supported: Arc

AngleStart

Specifies the start angle of the range to perform a circular search.

Default: 45

Format supported: Arc

Caption

Used to assign a caption to the Contour object.

Default: Empty String

Format supported: Blob, Line, Arc

CenterPointObject

Specifies the position to be used as the center point of the object. When this property is set to “Screen”, the object can be placed at an arbitrary position. However, when another vision object is specified, the center point is set to the PixelX and PixelY results of that object.

Format supported: Blob, Line, Arc

CenterPntObjResult

Specifies which result to use from the CenterPointObject.

If All is specified, the Contour object will be applied to all of the results (NumberFound) of the specified vision object.

Default: 1

Format supported: Blob, Line, Arc

CenterPntOffsetX

Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.

Format supported: Blob, Line, Arc

CenterPntOffsetY

Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.

Format supported: Blob, Line, Arc

CenterPntRotOffset

Specifies whether to rotate the XY offset value of the center (CenterPntOffsetX, CenterPntOffsetY) based on the Angle result of CenterPointObject.

If SearchWinType is set to RotatedRectangle, the search window rotates based on the Angle result.

CenterX

Specifies the X coordinate position to be used as the center point for the object. This property is filled in automatically when the CenterPointObject property is set to another vision object.

Format supported: Arc

CenterY

Specifies the Y coordinate of the position to be used as the center point for the object. This property is filled in automatically when the CenterPointObject property is set to another vision object.

Format supported: Arc

ContourMode Defines the edge detection format of the Contour object.
ContrastTarget

Sets the desired contrast for the edge search.

Default: 0 (best contrast)

Format supported: Line, Arc

ContrastVariation

Selects the allowed contrast variation for ContrastTarget. Default: 0

Format supported: Line, Arc

CoordObject

Specifies the Coordinates object to copy the result to. The copy is performed when the object is executed; if the object does not execute because of the branch function of a Decision object, the copy is not performed.

Default: None

CurrentResult

Defines which result to display in the Results list on the object window or which result to return data for when the system is requested to find more than one of a like feature within a single search window.

Format supported: Blob, Line, Arc

Description

Sets a user description

Default: Blank

Direction

Sets the direction for the edge search.

Default: InsideOut

Format supported: Arc

EdgeThreshold

Sets the threshold below which edges are ignored.

Default: 2

Format supported: Line, Arc

EdgeType

Select the type of edge to search for: single or pair.

Default: 1 - Single

Format supported: Line, Arc

Enabled

Specifies whether to execute the object.

Default: True

Format supported: Blob, Line, Arc

EndPntObjResult

Specifies which result to use from the EndPointObject.

Default: 1

Format supported: Blob, Line, Arc

EndPointObject

Specifies which vision object to use to define the end point of the line to be inspected.

Default: Screen

Format supported: Blob, Line, Arc

EndPointType

Specifies the type of end point used to define the end point of a line.

Default: 0 - Point

Format supported: Blob, Line, Arc

FailColor

Selects the color of an object when it is not accepted.

Default: Red

Format supported: Blob, Line, Arc

FillHoles

Specifies whether to fill the holes in a binary image.

Default: False

Format supported: Blob

FittingThreshold

Defines the fitting threshold for straight lines and circular arcs.

Format supported: Line, Arc

Frame

Specifies which positioning frame to use.

Default: None

Format supported: Blob, Line, Arc

FrameResult

Specifies which number of the Frame results to be used.

Default: 1

Format supported: Blob, Line, Arc

Graphics

Specifies which graphics to display.

Default: 1 - All

Format supported: Blob, Line, Arc

LabelBackColor

Selects the background color for an object label.

Default: Transparent

Format supported: Blob, Line, Arc

LineDirection

Defines the contour point output direction.

Default: LeftToRight

Format supported: Line

MaxArea

Defines the upper Area limit for a defect.

Default: 100,000

Format supported: Blob

MinArea

Defines the lower Area limit for a defect.

Default: 25

Format supported: Blob

MinMaxArea

Runtime only.

Sets or returns both MinArea and MaxArea in one statement.

Name

Used to assign a unique name to the Contour object.

Default: Contour01

Format supported: Blob, Line, Arc

NumberOfEdges

Specifies the number of edges to be detected.

Default: 20

Format supported: Line, Arc

NumberToFind

Defines the maximum number of contour points to output.

Default: 1

Format supported: Blob, Line, Arc

PassColor

Selects the color for an object when it is passed.

Default: LightGreen

Format supported: Blob, Line, Arc

PassType

Selects the rule that determines if the object passed.

Default: SomeFound

Format supported: Blob,Line,Arc

Polarity

Specifies whether the Contour object should search for a DarkOnLight, LightOnDark, LightToDark or DarkToLight transition.

Default_Blob: 1 - DarkOnLight

Default_Line /Arc: 1 - LightToDark

Format supported: Blob, Line, Arc

RadiusInner

Specifies the inner diameter of the detection range.

Format supported: Arc

RadiusOuter

Specifies the outer diameter of the detection range.

Format supported: Arc

RejectOnEdge

If the property is set to True, the system ignores blobs detected on the edge of the search window.

Default: False

Format supported: Blob

RotationDirection

Specifies the direction of rotation for contour points.

Default: 0 - CW

Format supported: Blob, Arc

RuntimeContour

Specifies whether to detect contour points at object runtime.

Default: True (Detected at runtime)

Format supported: Blob, Line, Arc

SamplingPitch

Sets the extent to which contour points are reduced.

Default: 0 (Not reduced)

Format supported: Blob, Line, Arc

SaveTeachImage Sets whether the camera image should be saved to a file when the model is taught.
ScoreWeightContrast

Sets the percentage of the score that depends on contrast.

Default: 50

Format supported: Line, Arc

ScoreWeightStrength

Sets the percentage of the score that depends on edge strength.

Default: 50

Format supported: Line, Arc

SearchWidth

Defines the width of the edge search.

Range is from 3 to 99.

Default: 3

Format supported: Line, Arc

SearchWin

Runtime only.

Sets or returns the search window left, top, height, width parameters in one call.

Format supported: Blob, Line

SearchWinAngle

Defines the angle of the area to be searched.

Format supported: Blob, Line

SearchWinCenterX

Defines the X coordinate value of the center of the area to be searched.

Format supported: Blob, Line

SearchWinCenterY

Defines the Y coordinate value of the center of the area to be searched.

Format supported: Blob, Line

SearchWinHeight

Defines the height of the area to be searched in pixels.

Default: 100

Format supported: Blob, Line

SearchWinLeft

Defines the left most position of the area to be searched in pixels.

Format supported: Blob, Line

SearchWinTop

Defines the upper most position of the area to be searched in pixels.

Format supported: Blob, Line

SearchWinType

Defines the type of the area to be searched (i.e. Rectangle, RotatedRectangle, Circle).

Format supported: Blob

SearchWinWidth

Defines the width of the area to be searched. (unit: pixel)

Default: 100

Format supported: Blob

SizeToFind

Selects which size of defects to find.

Default: 1 - Largest

Format supported: Blob

ShowModel

Displays the contour teaching model.

Format supported: Blob

Sort

Selects the sort order used for the results of an object.

Format supported: Blob

StartPntObjResult

Specifies which result to use from the StartPointObject.

Default: 1

Format supported: Blob, Line, Arc

StartPointObject

Specifies which vision object to use to define the start point of the line to be inspected.

Default: Screen

Format supported: Blob, Line, Arc

StartPointType

Defines the type of start point used to define the start point of a line.

Default: 0 - Point

Format supported: Blob, Line, Arc

StrengthTarget

Sets the desired edge strength to search for.

Default: 0

Format supported: Line, Arc

StrengthVariation

Sets the amount of variation for StrengthTarget.

Default: 0

Format supported: Line, Arc

ThresholdAuto

Specifies whether to automatically set the threshold value of the gray level that represents the feature (or object), the background, and the edges of the image.

Default: Disabled

Format supported: Blob

ThresholdBlockSize

Defines the size of the neighborhood area that is referenced to set the threshold when the ThresholdMethod property is set to LocalAdaptive.

Default: 1/16ROI

Format supported: Blob

ThresholdColor

Defines the color assigned to pixels within the thresholds.

Default: Black

Format supported: Blob

ThresholdHigh

Works with the ThresholdLow property to define the gray level regions that represent the feature (or object), the background, and the edges of the image.

The ThresholdHigh property defines the upper bound of the gray level region for the feature area of the image.

Any part of the image that falls within gray level region defined between ThresholdLow and ThresholdHigh will be assigned a pixel weight of 1. (i.e. it is part of the feature.)

If the ThresholdAuto property is “True” and the ThresholdColor property is “White”, this property value will be set to 255 and cannot be changed.

Default: 128

Format supported: Blob

ThresholdLevel

Defines the ratio of the luminance difference to the neighborhood area that is used when the ThresholdMethod property is set to LocalAdaptive.

Default: 15%

Format supported: Blob

ThresholdLow

Works with the ThresholdHigh property to define the gray level regions that represent the feature (or object), the background, and the edges of the image.

The ThresholdLow property defines the lower bound of the gray level region for the feature area of the image.

Any part of the image that falls within gray level region defined between ThresholdLow and ThresholdHigh will be assigned a pixel weight of 1. (i.e. it is part of the feature.)

If the ThresholdAuto property is “True” and the ThresholdColor property is “Black”, this property value will be set to 0 and cannot be changed.

Default: 0

Format supported: Blob

ThresholdMethod Sets the processing method for binarization.
ContourTolerance

Sets the tolerance when reducing contour points.

Default: 0

Format supported: Blob, Line, Arc

Contour Object Results
The following list is a summary of the Contour object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
CameraX Returns the X coordinate position of the contour point in the Camera coordinate system.
CameraY Returns the Y coordinate position of the contour point in the Camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of the contour point in the camera coordinate system.

Found Returns whether the object was found.
NumberFound Returns the total number of contour points.
Passed Returns whether the object detection result was accepted.
PixelX Returns the X coordinate of the contour point in pixels.
PixelY Returns the Y coordinate of the contour point in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of the contour point position in pixels.

RobotX Returns the X coordinate position of the contour point in the Robot coordinate system.
RobotY Returns the Y coordinate position of the contour point in the Robot coordinate system.
RobotXYU

Runtime only.

Returns the RobotX, RobotY, and RobotU coordinates of the contour point with respect to the Robot Coordinate System.

ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Time Returns the amount of time required to process the object (unit: millisecond).

Contour Extraction Principle
Broadly, contour extraction is performed in two stages. The details of each stage are described below.

  • Create initial contour points
    • Detect blob (Blob)
    • Trace contour (Blob)
    • Detect edges (Line/Arc)
  • Edit contour points
    • Reduce contour points
    • Acquire contour

Detect blob (Blob)
If ContourMode is set to “Blob”, blobs inside the search window will be detected using the same functionality as that provided by a Blob object. For more details on the principle behind blob detection, refer to the following.
Blob Object - How Blob Analysis Works
For Contour objects, all blobs within the search window will be detected before proceeding to the next step.

Trace contour (Blob)
If ContourMode is set to “Blob”, the trace contour step is performed once the ‘detect blob’ step ends. In this step, the contour of the first blob detected in the ‘detect blob’ step is traced to produce the initial contour points. As such, search window position alignment, and the Sort, MinArea, MaxArea and other property configurations, need to be set up to ensure that the workpiece whose contour you wish to acquire is detected as the first result.
The initial contour points are output as a continuous trajectory without gaps. Note that the initial contour points start from the closest point to the coordinate position specified as StartPointObject, and end at the closest point to the coordinate position specified as EndPointObject.

Detect edges (Line/Arc)
If ContourMode is set to “Line” or “Arc”, results of edge search line detection are set as the initial contour points. The initial contour points start from the closest point to the coordinate position specified as StartPointObject, and end at the closest point to the coordinate position specified as EndPointObject.

Reduce contour points
Once the initial contour points are created, unnecessary points are removed. The degree to which contour points are reduced is based on the SamplingPitch and ContourTolerance values set. The SamplingPitch setting is used to determine the number of initial contour points required to acquire a single contour point. For example, if this is set to “10”, a single contour point will be extracted from a maximum of 10 initial contour points. Further, the ContourTolerance setting is used to determine the permissible difference when reducing initial contour points. Increase this value to delete a greater number of contour points that do not conform to the outline of the workpiece.

Acquire contour
This step acquires the resulting contour point outline that has been refined in the previous step as result data. When acquiring this data, the RotationDirection / LineDirection settings are used to determine the trajectory direction.
Additionally, in Blob mode, setting RejectOnEdge to “False” means the workpiece may protrude outside the search window. If this happens, the border of the search window is not acquired as part of the contour; the trajectory ends at the point where it touches the search window border. If the search window splits the trajectory into multiple acquirable trajectories, the longest trajectory is acquired unless the StartPointObject and EndPointObject settings have been configured.
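For reference, these extraction settings can also be adjusted from a SPEL+ program before the sequence runs. The following is a minimal sketch, assuming a sequence named “seq” containing a Contour object named “Contour01” (both placeholder names) and example values:

VSet seq.Contour01.SamplingPitch, 10      'extract at most 1 contour point per 10 initial points
VSet seq.Contour01.ContourTolerance, 2    'permissible difference when reducing points (example value)
VSet seq.Contour01.RotationDirection, 0   '0 - CW (used in Blob and Arc modes)
VRun seq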

Step 1: Create a New Contour Object

  1. Click [All Tools] and then the [Contour] button on the Vision Guide toolbar.
  2. Move the mouse over the image display. You will see the mouse pointer change to the Contour object icon.
  3. Continue moving the mouse until the icon is at the desired position in the image display.
  4. Click the left mouse button to create the object.
  5. Notice that a name for the object is automatically created. In the example, it is called “Contour01” because this is the first Contour object created for this sequence. (We will explain how to change the name later.)
    Change ContourMode based on the type of trajectory you wish to acquire. For more information on types of ContourMode available for selection, refer to the table below.
    ContourMode setting Description
    Blob Used to acquire trajectories that completely circle the workpiece. Used to acquire the trajectory of complex shapes.
    Line Used to acquire the trajectory of part of a workpiece. Used to acquire simple line trajectories with little undulation.
    Arc Used to acquire the trajectory of part of a workpiece. Used to acquire circular arc trajectories with little undulation.

Step 2: Set the Contour Detection Position (If ContourMode:Blob)
If ContourMode is Blob, you should now see a Contour object similar to the one shown below:


New Contour Object Layout (ContourMode:Blob)

Symbol Description
a Step Number in Sequence
b Object Name
c Search Window
  1. Click the name label of the Contour object and while holding the mouse down drag the Contour object to the position where you would like the top left position of the search window to reside.
  2. Resize the Contour object search window as required using the search window size handles. (This means click a size handle and drag the mouse.) (The search window is the area within which we will search for Blobs.)

Step 2: Set the Contour Detection Position (If ContourMode: Line)
If ContourMode is Line, you should now see a Contour object similar to the one shown below:


New Contour Object Layout (ContourMode:Line)

Symbol Description
a Step Number in Sequence
b Object Name
c Size & Direction Handle
d Direction indicator (Direction of edge search)
  1. Click the name label of the Contour object and while holding the mouse down drag the Contour object to the position where you would like the top left position of the search window to reside.
  2. Resize the Contour object search window as required using the search window size handles. (This means click a size handle and drag the mouse.) The edge search lines will now be displayed.

Step 2: Set the Contour Detection Position (If ContourMode:Arc)
If ContourMode is Arc, you should now see the Contour object similar to the one shown below:


New Contour Object Layout (ContourMode:Arc)

Symbol Description
a Step Number in Sequence
b Object Name
c Direction indicator (Direction of edge search)
d Size & Direction Handle
  1. Click the name label of the Contour object and, while holding the mouse down, drag the Contour object to the position where you would like the search window to reside.
  2. Resize the Contour object search window as required using the search window size handles. (This means click a size handle and drag the mouse.)

Step 3: Configuring Properties for the Contour Object (ContourMode: Blob)
We can now set property values for the Contour object (ContourMode: Blob). Shown below are some frequently used properties when ContourMode is set to “Blob”.
Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.

  • "Vision Guide 8.0 Properties & Result Reference"
  • Contour Object Window Properties List

CAUTION


Ambient lighting and external equipment noise may affect vision sequence image and results. A corrupt image may be acquired and the detected position could be any position in an object’s search area. Properly configure MaxArea, MinArea, RejectOnEdge and other properties to reduce the risk of detection errors.

Property Description
Name property

The default name given to a newly created Contour object is “Contourxx” where xx is a number used to distinguish between multiple Contour objects within the same vision sequence.

If this is the first Contour object for this vision sequence, the default name will be “Contour01”.

To change the name, click the Value field of the Name property, type a new name, and press the return key. Once the Name property is modified, every place where the Contour object's name is displayed is updated to reflect the new name.

Polarity property

Select one of the following for the Polarity property:

- Detect a dark object on a light background (DarkOnLight)
- Detect a light object on a dark background (LightOnDark)

The default setting is DarkOnLight (a dark object on a light background).

To change it, click the Value field of the Polarity property; a drop-down list appears with two choices: “DarkOnLight” or “LightOnDark”. Click the choice you want to use.

MinArea, MaxArea

Defines the area of the blob covering the contour extraction area.

The default range is 25 to 100,000 (MinArea to MaxArea), which is very broad. This means that most blobs will be reported as Found when you first run the object, before the MinArea and MaxArea properties are adjusted. Normally, you will want to modify these properties to reflect a reasonable range for the blob you are trying to find. That way, if a blob outside the range is found, you will know it isn't the blob you wanted to find.

RejectOnEdge property Excludes the parts touching the boundary of the search window. Normally, this should be set to True.
RuntimeContour Defines whether to extract the contour at object runtime. Set this to “True” when the shape of the workpiece changes. If the shape does not change, set this to “False” to teach the contour information before object runtime. You can check the taught shape with the ShowModel property.

You can test the Contour object now and then come back and set any other properties as required later.
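These properties can also be set from a SPEL+ program before running the sequence. The following is a minimal sketch, assuming placeholder names “seq” and “Contour01” and example area limits:

VSet seq.Contour01.Polarity, 1           '1 - DarkOnLight
VSet seq.Contour01.MinArea, 500          'example lower limit for the target blob
VSet seq.Contour01.MaxArea, 20000        'example upper limit
VSet seq.Contour01.RejectOnEdge, True    'exclude blobs touching the window border
VRun seq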

Step 3: Configuring Properties for the Contour Object (ContourMode: Line)
We can now set property values for the Contour object (ContourMode: Line). Shown below are some frequently used properties when ContourMode is set to “Line”. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
EdgeType (Single)

Select the type of the edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

Name property (“Contourxx”)

The default name given to a newly created Contour object is “Contourxx” where xx is a number used to distinguish between multiple Contour objects within the same vision sequence.

If this is the first Contour object for this vision sequence then the default name will be “Contour01”.

To change the name, click the Value field of the Name property, type a new name and press the return key. You will notice that once the name property is modified, every place where the Contour object's name is displayed is updated to reflect the new name.

NumberOfEdges(1) You can search for 1 or more edges along the search line.
Polarity (LightToDark)

Search for edges using “LightToDark” polarity.

If you are looking for a DarkToLight edge, change polarity.

Step 3: Configuring Properties for the Contour Object (ContourMode: Arc)
We can now set property values for the Contour object (ContourMode: Arc). Shown below are some frequently used properties when ContourMode is set to “Arc”. Explanations for other properties such as AbortSeqOnFail, Graphics, etc. which are used on many of the different vision objects can be seen in the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
EdgeType (Single)

Select the type of the edge to be searched.

For edge pairs, an edge is found from each direction and the center of the pair is reported as the position.

Name property (“Contourxx”)

The default name given to a newly created Contour object is “Contourxx” where xx is a number used to distinguish between multiple Contour objects within the same vision sequence.

If this is the first Contour object for this vision sequence, the default name will be “Contour01”.

To change the name, click the Value field of the Name property, type a new name and press the return key. Once the Name property is modified, every place where the Contour object's name is displayed is updated to reflect the new name.
NumberOfEdges(5) You can search for five edges to find circular edges.
Polarity (LightToDark)

Search for edges using “LightToDark” polarity.

If you are looking for a DarkToLight edge, change polarity.
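As with Blob mode, these edge-related properties for Line and Arc modes can be set from a SPEL+ program. A minimal sketch, again using the placeholder names “seq” and “Contour01”:

VSet seq.Contour01.EdgeType, 1           '1 - Single
VSet seq.Contour01.Polarity, 1           '1 - LightToDark (Line/Arc default)
VSet seq.Contour01.NumberOfEdges, 5      'number of edges to detect along the search area
VRun seq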

Step 4: Configure Contour Point Properties
Configure properties to adjust trajectory accuracy, the number of contour points and other settings. Shown below are properties used. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
SamplingPitch

Defines the extent to which contour points are reduced.

When set to “5”, the number of contour points is, at most, reduced to around 1/5 of the number of contour points initially detected.

ContourTolerance

Defines the tolerance when reducing contour points.

Reduce this value to prevent contour points that deviate from the workpiece outline from being deleted. Check object runtime results and adjust this to the level of accuracy required.

StartPointObject, EndPointObject

Specifies the starting and ending position of a trajectory.

When these are set, create Point objects and adjust the vision sequence order so that they run before the Contour object. Place the Point objects near the points where you want the trajectory to start and end.

RotationDirection, LineDirection Specifies the trajectory direction. If ContourMode is set to Blob or Arc, this sets the direction of rotation of the trajectory (CW/CCW). If ContourMode is set to Line, this specifies the direction left or right (LeftToRight/RightToLeft) from the edge search line when facing vertically downwards.

Step 5: Test the Contour Object and Examine the Results
To test the Contour object, click the [Run] button of the object on the execution panel. Results for the Contour object will now be displayed. The primary results to examine at this time are:

Results Description
PixelX, PixelY The contour point position detected. (unit: pixel)
CameraX, CameraY The contour point position according to the camera coordinates. (unit: millimeter)
RobotX, RobotY The contour point position according to the robot coordinates. (unit: millimeter)
Time result The amount of time it took for the Contour object to execute.
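These results can also be retrieved from a SPEL+ program. The sketch below loops over the detected contour points using the pixel results, since the camera and robot results require calibration (see the key points below); “seq” and “Contour01” are placeholder names:

Function ShowContourPoints
  Boolean found
  Integer numfound, i
  Real x, y, u

  VRun seq
  VGet seq.Contour01.NumberFound, numfound     'total number of contour points
  For i = 1 To numfound
    VGet seq.Contour01.PixelXYU(i), found, x, y, u
    Print "Point ", i, ": ", x, ", ", y
  Next i
Fend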

KEY POINTS


The RobotX, RobotY and CameraX, CameraY results will return “no cal” at this time. This means the vision system cannot calculate coordinate results with respect to the Robot coordinate system or Camera coordinate system because calibration has not been executed. Refer to the following for details.

Vision Calibration

Text Object

Text Object Description
Text objects display vision object execution results in text form on the screen.

Text Object Layout
The Text object has an image processing window, as shown below.

Symbol Description
a Step Number in Sequence
b Object Name
c Processing Window

Text Object Properties
The following list is a summary of the Text object properties with brief descriptions. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
AbortSeqOnFail

Allows the user to specify that if the object fails (not passed), then the entire sequence is aborted at that point and no further objects in the sequence are processed.

Default: False

CenterPointObject

Specifies the position to be used as the center point of the object.

When this property is set to “Screen”, the object can be placed at an arbitrary position. When another vision object is specified, the center point is set to that object's PixelX, PixelY result.

CenterPntObjResult

Specifies which result to use from the CenterPointObject property.

If All is specified, the Text object is applied to all results (NumberFound) of the specified vision object.

Default: 1

CenterPntOffsetX Sets or returns the X offset after the center point of the search window is positioned with the CenterPointObject.
CenterPntOffsetY Sets or returns the Y offset after the center point of the search window is positioned with the CenterPointObject.
Caption

Used to assign a caption to the Text object.

Default: Empty String

CurrentResult Defines which result to display in the Results list on the object window or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

FailColor Selects the color of an object when it is not accepted.
Font Sets the formatting of the text displayed.
FontBold Displays the font in bold. (Only available from SPEL+ programs)
FontItalic Displays the font in italics. (Only available from SPEL+ programs)
FontName Sets the name of the font. (Only available from SPEL+ programs)
FontSize Specifies the font point size. (Only available from SPEL+ programs)
Graphics

Specifies which graphics to display.

Default: 1 - All

LabelBackColor Sets the background color for the object's label.
TextBackColor

Sets the background color for text.

Default: Transparent

Name

Used to assign a unique name to the Text object.

Default: Text01

PassColor

Selects the color of an object when it is accepted.

Default: LightGreen

ResultObject Specifies the vision object rendered.
ResultText1 Specifies the results rendered.
ResultText2 Specifies the results rendered.
ResultText3 Specifies the results rendered.
ShowLabel Specifies whether to add a label for character strings.
SearchWin

Runtime only.

Sets or returns the search window left, top, height, width parameters in one call.

SearchWinHeight Defines the height of the area to be searched. (unit: pixel)
SearchWinLeft Defines the left most position of the area to be searched. (unit: pixel)
SearchWinTop Defines the upper most position of the area to be searched. (unit: pixel)
SearchWinWidth Defines the width of the area to be searched. (unit: pixel)
UserText Sets a user-defined character string.

Text Object Results
The following list is a summary of the Text object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Passed Returns whether the object detection result was accepted.
Found Returns whether the result was acquired.
PixelX Returns the X coordinate position of the displayed text in pixels.
PixelY Returns the Y coordinate position of the displayed text in pixels.

Using Text Objects
Now we have set the foundation for understanding how to use Vision Guide Text objects.
This next section will describe the steps required to use Text objects as listed below:

  • How to create a new Text object
  • Position and size the search window
  • Configure character string settings
  • Run a text object and examine the results

Step 1: Create a New Text Object

  1. Click [All Tools] and then the [Text] button on the Vision Guide toolbar.
  2. You will see a Text icon appear above the Text object button.
  3. Click the Text icon and drag to the image display of the Vision Guide window.
  4. Notice that a name for the object is automatically created. In the example, it is called “Text01” because this is the first Text object created for this sequence. (We will explain how to change the name later.)

Step 2: Position and Size the Search Window
You should now see a Text object similar to the one shown below:


New Text Object

Symbol Description
a Step Number in Sequence
b Object Name
c Search Window
  1. Click the name label of the Text object and, while holding the mouse down, drag the Text object to the position where you would like the text to appear in relation to the top left corner of the search window.
  2. Resize the Text object search window as required using the search window size handles. (This means click a size handle and drag the mouse.) (The character string will appear in relation to the top left corner of the search window)

Step 3: Configure character string settings

  1. Specify the vision object containing the result you wish to render in ResultObject. For a vision object to be available for selection in ResultObject, the vision object must be executed before the Text object in the vision sequence.
  2. Select the results you wish to render from ResultText1 to ResultText3.
  3. Edit the UserText property to display character text other than result strings.
  4. Edit the Font property to adjust the font. Open the Font property to display the following settings window. Here you can configure the formatting, size and style of the font.

Step 4: Running The Text object and Examining the Results
Click [Run] of the object on the execution panel. Character strings set as Text objects will appear on the screen. Adjust the property settings if there is a problem with the text shown.

Character Strings Displayed as Text Objects
The formatting of character strings displayed as Text objects is as follows.

Symbol Description
a Step Number in Sequence
b Object Name
c Search Window
d UserText
e ResultText1
f ResultText2
g ResultText3

Character strings appear in the order UserText, ResultText1, ResultText2, ResultText3 from the top down. Nothing appears if the UserText field is left blank and the ResultText1 to ResultText3 values are set to “None”. If the text cannot fit within the search window, the overlapping text is not shown.
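Several Text object properties can also be set from a SPEL+ program; the Font-related properties (FontBold, FontItalic, FontName, FontSize) are only available there. A minimal sketch, assuming a sequence named “seq” containing a Text object named “Text01” (placeholder names and example values):

VSet seq.Text01.UserText, "Station 2"    'example user-defined string
VSet seq.Text01.FontSize, 12             'only available from SPEL+ programs
VSet seq.Text01.FontBold, True           'only available from SPEL+ programs
VRun seq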

Decision Object

Decision Object Description
The Decision object is used to control the flow of sequence execution based on the success or failure of a specified vision object.

Decision Object Properties
The following list is a summary of the Decision object properties with brief descriptions. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
ConditionObject

The vision object in the same sequence prior to the Decision object whose results are used to determine sequence flow in the True branch or the False branch of the Decision object.

Description

Sets a user description

Default: Blank

Enabled

Specifies whether to execute the object.

Default: True

Name

Used to assign a unique name to the Decision object.

Default: Decision01

TrueCond

Specifies how the results for the ConditionObject are used to determine if the decision is True.

TargetPassed: If the ConditionObject Passed result is True, then the decision is True.

TargetFailed: If the ConditionObject Passed result is False, then the decision is True.

TargetNoExec: If the ConditionObject did not execute, then the decision is True.

Default: 0-TargetPassed

Decision Object Results
The Decision object has no results.

Using Decision Objects
Now we have set the foundation for understanding how to use Vision Guide Decision objects.
This next section will describe the steps required to use Decision objects as listed below:

  • How to create a new Decision object
  • Conditional branch settings
  • Adding objects and executing
  • Using the Coordinates object

Step 1: Create a New Decision Object

  1. Add the object to be used for the Decision object ConditionObject property in advance. In this example, we add a Geometric object.
    Decision objects cannot be added to the beginning of a sequence: they cannot be used without setting ConditionObject, which must execute before the Decision object.
  2. Click the [All Tools] menu, then click the [Decision] button in the menu.
  3. Drop the Decision object on the image display of the Vision Guide window or on the flow chart.
    The name for the object is automatically created. In the example, it is called “Decision01” because this is the first Decision object created for this sequence. (We will explain how to change the name later.) The Decision object does not have a search window. You can check the step position from the flowchart or sequence tree.

Step 2: Conditional branch settings

  1. Select the object for which you want to check the result from the dropdown list of the ConditionObject property of the Decision object. ConditionObject can only be set to an object that executes in the sequence before the Decision object. In this example, we set ConditionObject to Geom01.
  2. Specify the condition on the Passed result of the object selected in (1) by setting the TrueCond property of the Decision object.
    If TargetPassed (default) is specified, then the True branch objects will be executed when the Passed result of the object in (1) is True.
    If you want to execute a True branch when the Passed result is False, set the TrueCond property to TargetFailed.

Step 3: Adding objects and executing

  1. Add object(s) to execute in the Decision object True and False branches.
    After selecting an object you want to add from the Vision Guide toolbar, drag it to the flowchart to place it in a branch of the Decision object. You can place as many objects as you like in each branch. You cannot place a Decision object in a branch.
  2. Add object(s) after the Decision object branches.
    You can add more objects to the sequence after the Decision object branches, if desired.
  3. Check vision sequence operation
    The combination of the result of the object specified in the ConditionObject of the Decision object and the setting value of the TrueCond property changes the branch to be executed. You can run the entire sequence, or step through the sequence to verify operation.

Step 4: Using Coordinates objects
In some cases, you need pixel, camera, or robot coordinates based on an object processing in a True or False branch of a Decision object. You can store results from objects that provide coordinate results in a Coordinates object, then in your program, you can access the coordinates from the Coordinates object results.

  1. Add a Coordinates object to the sequence.
  2. Set the CoordObject property for the objects from which you want to store coordinates. Any number of objects can store coordinates in the same Coordinates object. When each object using CoordObject executes, it overwrites the previous coordinates. In this example, CoordObject is set to Coords01 for both Geom01 and Corr01.
  3. In your program, use VGet to retrieve coordinates from the Coordinates object. In this example, if Geom01 is Passed, then the Geom01 coordinate results are copied to Coords01. If Geom01 is not Passed, then Corr01 executes. If Corr01 is Passed, then the Corr01 coordinate results are copied to Coords01. In the SPEL program, we can get the robot coordinates from Coords01:
    VGet test.Coords01.RobotXYU, found, x, y, u  
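    Building on the line above, a short sketch of how the stored result might be checked before use (found, x, y, and u are Boolean and Real variables; “test” is the sequence name from this example):

    VRun test
    VGet test.Coords01.Found, found
    If found = True Then
      VGet test.Coords01.RobotXYU, found, x, y, u
      'Add robot motion using x, y, u here
    EndIf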
    

Coordinates Object

Coordinates Object Description
The Coordinates object is used as storage for the coordinates results of other objects. It is primarily used for vision sequences that include Decision objects.
For sequences that have a Decision object, the object for which the desired results are to be obtained changes depending on branch execution. By setting the Coordinates object to store the coordinates results of an object in each branch, you can then retrieve the desired results from the Coordinates object.
The Coordinates object is specified from the object from which you want to store the results. Select the Coordinates object from the properties below.

Property Description
CoordObject

Specifies a Coordinates object to store the results of the object that sets this property. The storage is performed when the object is executed; if the object does not execute because of a Decision branch, the copy is not performed.

Default: None

The only vision objects for which you can specify the CoordObject property are those with pixel, camera, and robot X, Y, U results. The Coordinates object specified in the CoordObject property can be placed at any step before or after the object that sets it. The same Coordinates object can be specified in the CoordObject property of multiple objects; in this case, the store process is performed each time each object executes, and previously stored results are overwritten.

Coordinates Object Properties
The following list is a summary of the Coordinates object properties with brief descriptions. For details on each property, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Property Description
CurrentResult Defines which result to display in the Results list (on the Object window) or which result to return data for when the system is requested to find more than one of a like feature within a single search window.
Description

Sets a user description

Default: Empty

Enabled

Specifies whether to execute the object.

Default: True

Name

Used to assign a unique name to the Coordinates object.

Default: Coords01

Coordinates Object Results
The following list is a summary of the Coordinates object results with brief descriptions. For details on each result, refer to the following.
"Vision Guide 8.0 Properties & Result Reference"

Results Description
Angle Returns the angle of the detected point in degrees.
CameraX Returns the X coordinate position of an object’s position in the camera coordinate system.
CameraY Returns the Y coordinate position of an object’s position in the camera coordinate system.
CameraXYU

Runtime only.

Returns the CameraX, CameraY, and CameraU coordinates of an object’s position in the camera coordinate system.

NumberFound Returns the number of points found.
Found Returns whether an object was found.
Passed Returns whether an object detection result was accepted.
PixelX Returns the X coordinate position of an object’s position in pixels.
PixelY Returns the Y coordinate position of an object’s position in pixels.
PixelXYU

Runtime only.

Returns the PixelX, PixelY, and PixelU coordinates of a found object’s position in pixels.

RobotX Returns the X coordinate position of a found object’s position with respect to the Robot’s Coordinate System.
RobotY Returns the Y coordinate position of the found part's position with respect to the Robot’s Coordinate System.
RobotU Returns the amount of rotation of a found object's position with respect to the Robot’s Coordinate System.
RobotXYU

Runtime only.

Returns the RobotX, RobotY coordinates and the amount of rotation of a found object’s position with respect to the Robot Coordinate System.

ShowAllResults

Displays a dialog box which allows you to see all results for a specified vision object in a table form.

This makes it easy to compare results.

Using Coordinates Objects
The next few sections guide you through how to create and use a Coordinates object.

  • How to create a new Coordinates object
  • Configure the properties associated with the Coordinates object
  • Execution of Sequence
    Prior to starting the steps shown below, you should have already created a new vision sequence or selected a vision sequence to use.

Step 1: Create a New Coordinates Object

  1. Click [All Tools] and then the [Coordinates] button on the Vision Guide toolbar.
  2. You will see a Coordinates icon appear above the Coordinates object button.
  3. Click the Coordinates icon and drag to the image display of the Vision Guide window.
  4. Notice that a name for the object is automatically created. In the example, it is called “Coords01” because this is the first Coordinates object created for this sequence. (We will explain how to change the name later.)

Step 2: Configuring Properties for the Coordinates Object
Add the object(s) whose coordinate results you want to store in the Coordinates object. There is no restriction on the relative step positions of those objects and the Coordinates object; switch the step positions according to your situation.
After adding the object(s), select and specify the Coordinates object from the CoordObject property.

Step 3: Execution of Sequence
Execute the vision sequence.
After executing the sequence, check the result of the Coordinates object. The results of the object that set the Coordinates object to CoordObject are stored.
In the example flowchart shown below, the CoordObject property for both Geom01 and Corr01 is set to Coords01. If Blob01 is passed, then Geom01 is executed and its results are stored in Coords01. If Blob01 is not passed, then Corr01 is executed and its results are stored in Coords01. In the SPEL program the coordinate results are retrieved from Coords01.

VGet test.Coords01.RobotXYU, found, x, y, u  

Working with Multiple Results from a Single Object

Blob, Geometric, Edge, Correlation, and DefectFinder objects can be used to find more than one feature within a single search window.
The properties and results that you will need to understand for working with multi-result vision objects are listed below:

  • CurrentResult property
  • NumberToFind property
  • NumberFound result
  • ShowAllResults result

The Default and Maximum Number of Features a Vision Object Can Find
The default property configurations for a Blob, Geometric, Edge, Correlation, or DefectFinder object cause it to find only 1 feature within a search window. This is because the NumberToFind property is set to 1.
However, when you set the NumberToFind property to a number larger than 1, the vision object will attempt to find as many features as you specify.
If the NumberToFind property is set to All, the search continues until the maximum number of detections for the object (100) is reached.

Set the NumberToFind property and run your vision object. You will see that several of the features in the current image that meet the acceptance criteria will be shown as found (green box with crosshair indicating position returned for found object). You can also see that the feature found which is considered the CurrentResult is highlighted in a lighter shade of green than the other features.
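A minimal sketch of this from a SPEL+ program, assuming placeholder names “seq” and “Blob01” (numfound is an Integer variable):

VSet seq.Blob01.NumberToFind, 10        'attempt to find up to 10 features
VRun seq
VGet seq.Blob01.NumberFound, numfound   'how many actually met the acceptance criteria
Print "Found ", numfound, " feature(s)"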

The Sorting Order of the Features that are Found
When a multi-result object is first run, the CurrentResult is automatically set to 1 indicating that the 1st result for the multi-result vision object should have its results displayed in the Results list on the object window.
For a Correlation or Geometric object, the first result is the one with the highest score (as compared to all the features found for that Correlation object). The second result is the one with the second highest score and so on.
For Blob and DefectFinder objects, the first result is the one returned based on the values of the SizeToFind and Sort properties. (For example, if SizeToFind is set to “Largest”, then the first result will be the largest blob found.) See the SizeToFind and Sort properties in the Vision Guide 8.0 Properties and Results Manual for more information on sorting results.

Examining the Vision Object’s Multiple Results
If you look closely at the Results list heading you can see where it says something like “Result 1 of 10”. (Assume for the sake of this discussion that we had set the NumberToFind property to 10 for a Blob object.)
This means the CurrentResult is result 1 of the 10 features that were searched for (as defined by the NumberToFind property).
It is important to remember that the 2nd number in the Results list heading is the value of the NumberToFind property and not the number of features that were actually found.
For example, when trying to find 10 features, only 5 may actually be found. In that case the heading still reads “Result x of 10”, but only 5 results contain data; the remaining 5 simply did not meet the acceptance criteria and are not displayed.
You can examine the results for any of the multiple blobs you tried to find by changing the CurrentResult property to reflect the blob you would like to see.
This can be done by manually typing a number into the value field of the CurrentResult property or by moving the cursor to the value field of the CurrentResult property and then using the SHIFT+DnArrow or SHIFT+UpArrow keys to move through the results.
You will see the results displayed in the Results list. You can always tell which result is displayed in the Results list by either looking at the CurrentResult property or by just looking at the heading of the Results list. It will display “Result 1 of 10” for the 1st result when NumberToFind is set to 10 and “Result 2 of 10” for the 2nd result, and “Result 3 of 10” for the 3rd result, etc.
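The same selection can be made from a SPEL+ program by setting CurrentResult before calling VGet (the caution below describes this behavior in detail). A minimal sketch with placeholder names “seq” and “Blob01”:

VSet seq.Blob01.CurrentResult, 3   'select the 3rd result
VGet seq.Blob01.Area, area         'returns the Area of result 3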

CAUTION


When the CurrentResult property is changed, this also changes the result that VGet returns in a program. For example, if a SPEL+ program uses VGet to get a result from a vision object and CurrentResult is set to 3, then the VGet instruction will return the 3rd RobotXYU result. Be careful with this, as you may end up getting the wrong results because CurrentResult was set incorrectly for your SPEL+ program.

Tip: To acquire a specific one of multiple results, specify the result number explicitly as a subscript to the result name in the VGet command.

VGet seqname.objname.RobotXYU(1), found, X, Y, U  

Using the NumberFound Result
The NumberFound result is useful because it displays how many blobs or matching features for a Correlation Model were actually found. This result is also available from the SPEL+ language so you can use a SPEL+ program to make sure that the proper number of results were found, or to count how many were found, or a whole host of other things. See the examples below:
This shows a small section of code to check if the number of results found is less than 5.

VGet seqname.objname.NumberFound, numfound  
If numfound < 5 Then  
    'put code to handle this case here  
EndIf  

Consider a situation where vision is used to find as many parts in one VRun as it can. The robot will then get each part until all the found parts are moved onto a conveyor. This example shows a small section of code to have the robot pick each of the parts found and then drop them off one by one onto the same position on a moving conveyor.

VRun seqname  
VGet seqname.objname.NumberFound, numfound  
For i = 1 To numfound  
  VGet seqname.objname.RobotXYU(i), found, X, Y, U  
  If found = True Then  
    'Set pick coordinates found from vision  
    VPick = XY(X, Y, -100.000, U)  
    Jump VPick          'Jump to vision pickup position  
    On Gripper          'Turn on vacuum  
    Wait .1  
    Jump Vconvey        'Jump to conveyor position to drop part  
    Off Gripper         'Turn off vacuum  
    Wait .1  
  Else  
    Print "Vision Error: part not found"  
  EndIf  
Next i  

Examining All the Multiple Results at Once
One of the nicest features built into multiple results for Blob and Correlation objects is the ShowAllResults result.
There may be cases where you want to compare the different results to see the top Score vs. the 2nd Score and so on. For example, maybe there is a big drop-off in score after the 3rd feature is found. Using the ShowAllResults result makes it easy to see all results at once.
Clicking on the ShowAllResults result's value field will cause a button to appear. Click the button and a dialog box will be displayed which shows all the results for the current vision object.

A sample ShowAllResults dialog box is shown in the figure below

Using the Multiple Results Dialog Box to Debug Searching Problems
Sometimes the parts which you are working with vary considerably (even within the same production lot) and sometimes there are 2 or more features on a part which are similar.
This can make it very difficult to determine a good Accept property value. Just when you think you have set the Accept property to a good value, another part will come in which fools the system. In these cases it can be very difficult to see what is going on.
The Show All Results dialog box was created to help solve these and other problems.
While you may only be interested in 1 feature on a part, requesting multiple results can help you see why a secondary feature is sometimes returned by Vision Guide in place of the primary feature you are interested in. This generally happens in a few different ways:

  1. When 2 or more features within the search window are very similar and as such have very close Score results.
  2. When the Confusion or Accept properties are not set high enough, allowing other features with lower scores than the feature you are interested in to meet the Accept property setting.

Both of the situations above can be quite confusing for the beginning Vision Guide user when searching for a single feature within a search window.
If you have a situation where sometimes the feature you are searching for is found and sometimes another feature is found instead, use the Show All Results dialog box to home in on the problem. Follow these steps to get a better view of what is happening:

  1. Set your NumberToFind property to 3 or more.
  2. Run the vision object from the Vision Guide 8.0 Development Environment.
  3. Click the ShowAllResults property button to bring up the Show All Results dialog box.
  4. Examine the scores of the top 3 or more features that were found.
  5. If only 1 or 2 features were found (Vision Guide will only set scores for those features that are considered found) reduce your Accept property so that more than 1 feature will be found and Run the vision object again. (You can change the Accept level back after examining the Show All Results dialog box)
  6. Click the [ShowAllResults property] button to bring up the Show All Results dialog box.
  7. Examine the scores of the top 3 or more features that were found.

Once you examine the scores of the top 3 or more features that were found as described above, it should become clear to you what is happening. In most cases you will see one of these two situations.

  • Each of the features that were found has a score greater than the Accept property setting.
    If this is the case, simply adjust your Confusion property value higher to force the best feature to always be found, rather than allowing other features to be returned because they meet the Accept threshold. You may also want to adjust the Accept property setting.
  • Each of the features is very close in score. If this is the case, you will need to differentiate the feature you are primarily interested in, for example:
    • Readjust the search window so that the features that are randomly returning as the found feature are not contained inside.
    • Teach the Model again for the feature that you are most interested in.
    • Adjust the lighting for your application so the feature that you are most interested in gets a much higher score than the other features that are currently fooling the system.
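A minimal SPEL+ sketch of the debugging procedure above: temporarily lower Accept, request several results, and print their scores for comparison. “seq” and “Corr01” are placeholder names, and 30 is an example threshold; restore your original Accept value afterwards.

VSet seq.Corr01.NumberToFind, 3
VSet seq.Corr01.Accept, 30              'temporarily lowered for debugging
VRun seq
VGet seq.Corr01.NumberFound, numfound
For i = 1 To numfound
  VGet seq.Corr01.Score(i), score
  Print "Score ", i, ": ", score
Next i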

Accessing Multiple Results from the SPEL+ Language
We already explained that the CurrentResult property is used to set which result will be displayed in the Results list.
It is also used to determine which result number VGet returns data for. In other words, if we want to get the Area result for the 3rd result returned from a Blob object, CurrentResult must be set to 3.
You have already seen how this can be done from the Properties list of the object window. Now let’s take a look at how to access multiple results from SPEL+.
Multiple result access from SPEL+ treats the result like an array, where the result number is given as a subscript next to the result name. The first example below shows how to get the third Area result and put it in a variable called area from the SPEL+ language.

VGet seqname.objname.Area(3), area  

The 2nd example below shows how to get the same 3rd Area result, but this time assign it as the value of the 3rd element in an array called area().

VGet seqname.objname.Area(3), area(3)  

A variable can also be used as the subscript rather than a fixed number as in example #2 above. Notice that the variable called “var” is used as the subscript for the Area result.

VGet seqname.objname.Area(var), area(var)  

The 4th example assumes that you have used a single vision object to find multiple like parts (let's say up to 10). You would now like to pick these parts (let's say they are pens) with a robot, so you need to store the X, Y, and U coordinates in variables representing the coordinates of each part found. The following code pulls these coordinates out of the RobotXYU result and puts them into X, Y, and U arrays that can later be used for robot motion.

Function test  
  Boolean found(10)  
  Integer numfound, i  
  Real X(10), Y(10), U(10)  
  
  Jump camshot   'move camera into position for the snapshot  
  
  VRun seq01     'run the vision sequence to find the pens  
  
  VGet seq01.blob01.NumberFound, numfound     'how many found  
  
  For i = 1 To numfound    'get robot coords  
    VGet seq01.blob01.RobotXYU(i), found(i), X(i), Y(i), U(i)  
  Next i  
  'Add code for robot motion here……  
Fend  

You can use automatic multiple object search for several vision objects. Objects for searching are created automatically when you specify to use All results from another object. This allows you to configure one or more objects to search for a feature, and at runtime, the objects are created and run automatically for all results of the parent object.
Automatic multiple object search can be used with CenterPointObject and Frame.

Example: CenterPointObject

  1. Create a sequence and add a Blob object. Set NumberToFind to 10.

  2. Create a Polar object. Set CenterPointObject to the Blob object, and set CenterPntObjResult to All.

  3. Teach the Polar object.

  4. Run the sequence. For each blob found, an instance of the Polar object is created and run. If 10 blobs were found, then there will be 10 Polar objects, each centered on a result from the Blob object. In the picture below, you can see seven Polar objects, one for each blob found.

When using automatic multiple object search, if a child object can find multiple results, only one result can be found for each instance of the child object.
You can also use Line, Edge, and LineInspector objects with automatic multiple search.
You must specify both the StartPointObject and EndPointObject, along with StartPntObjResult = All and EndPntObjResult = All.

For example:

  1. Create a new sequence and set ImageFile to an image with two horizontal rows of blobs (see image below).

  2. Create Blob01, set NumberToFind = 3, and Sort = PixelX. Size and position it to find the first row of blobs.

  3. Copy Blob01 and paste to create Blob02. Size and position it to find the second row of blobs.

  4. Create a Line object. Set StartPointObject = Blob01, and set StartPntObjResult = All. Set EndPointObject = Blob02, and set EndPntObjResult = All.

  5. Run the sequence. An instance of the Line object will be created for each pair of results from StartPointObject and EndPointObject. (The sketch below shows reading the results from SPEL+.)

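The results of the automatically created Line instances can be read the same way as in the Polar sketch above. Again, this is a sketch only: the sequence name seq03 and the object name Line01 are assumptions, and Length and Angle are used as representative Line results.

Function ReadLineResults
  Boolean found
  Integer numfound, i
  Real length, angle

  VRun seq03                                 'Run the sequence; Line instances are created automatically
  VGet seq03.Line01.NumberFound, numfound    'Line instances created

  For i = 1 To numfound
    VGet seq03.Line01.Found(i), found
    If found Then
      VGet seq03.Line01.Length(i), length    'Length of line instance i
      VGet seq03.Line01.Angle(i), angle      'Angle of line instance i
      Print "Line ", i, ": length ", length, ", angle ", angle
    EndIf
  Next i
Fend
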
Example: CheckClearanceFor
The following describes the CheckClearanceFor function, one form of automatic multiple object search, using a sequence from an application in which a two-point gripper grasps both sides of a workpiece. The sequence consists of two objects: “Geometric”, which detects a workpiece, and “Blob”, which determines whether there is enough clearance to insert the gripper without interference.

A workpiece cannot be grasped when another workpiece lies within the gripper’s motion range, since the gripper would interfere with it. Workpiece (a) can be grasped: no other workpiece is in the gripper’s motion range, so there is clearance to insert the gripper without interference.

Example:

  1. Create Geometric (Geom01) as a parent object to detect a workpiece and set the position and size of the model window. (See below)
    Set the related properties as well. In this example, set NumberToFind to “All” to detect multiple workpieces.
    For the parent objects that can be used with the CheckClearanceFor function, refer to Vision Guide 8.0 Properties & Results Reference.

  2. Create Blob (Blob01, Blob02) as child objects to determine interference, and place them on both sides of the Geometric model window. (See below)
    Set the related properties as well.

    KEY POINTS


    If the workpiece detected by the parent object rotates, set SearchWinType of the child object to “RotatedRectangle”. If it is set to “Rectangle”, the search window cannot rotate and therefore cannot follow the workpiece’s angle of rotation.

    Set the properties of Blob01 and Blob02 (child objects).

    1. Set CheckClearanceFor to “Geom01”. (See below)
      With this setting, child objects are automatically created and executed for all results of the parent object when the sequence is executed.

      For the child objects that can be used with the CheckClearanceFor function, refer to Vision Guide 8.0 Properties & Results Reference.

      KEY POINTS


      A child object that is already assigned to CheckClearanceFor cannot be selected by other objects. (In this example, once CheckClearanceFor of Blob01 is set to “Geom01”, Blob01 is no longer displayed in the CheckClearanceFor drop-down list of Blob02.)

    2. Set ClearanceCondition.
      ClearanceCondition sets how the grasping determination is made from the detection result of a child object. If ClearanceCondition is set to “NotFound”, then when the child object’s Found result is “False”, its ClearanceOK result is set to “True” and grasping is determined to be possible. If ClearanceCondition is set to “Found”, the determination is reversed.
      In this example, set ClearanceCondition to “NotFound” to confirm that nothing is on either side of the workpiece and there is clearance to insert the gripper. (See below)

    3. Execute the sequence.
      The detection results of the parent object “Geom01” are displayed surrounded by either a green solid line or a red dashed line, as shown below. A green solid line indicates a workpiece that can be grasped (the child objects determined that grasping is possible). A red dashed line indicates a workpiece that was determined to be impossible to grasp.

      To check the grasping determination as a value, refer to the ClearanceOK result of the parent object “Geom01”.

    KEY POINTS


    When using the CheckClearanceFor function to determine whether grasping is possible, be sure to execute the sequence and then refer to ClearanceOK of the parent object; ClearanceOK of the parent object is set only after the child objects have been executed. A SPEL+ sketch of this is shown below.

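The following is a minimal sketch of that flow. The sequence name seq04 is an assumption, and it assumes that ClearanceOK of the parent object is read per result index like other results; verify the result names against Vision Guide 8.0 Properties & Results Reference.

Function CheckGrasp
  Boolean clearOK, found
  Integer numfound, i
  Real X, Y, U

  VRun seq04                                   'Run the sequence; child objects execute for every parent result
  VGet seq04.Geom01.NumberFound, numfound      'Workpieces detected by the parent object

  For i = 1 To numfound
    VGet seq04.Geom01.ClearanceOK(i), clearOK  'Set only after the child objects have run
    If clearOK Then
      VGet seq04.Geom01.RobotXYU(i), found, X, Y, U   'Robot coordinates of workpiece i
      Print "Workpiece ", i, " can be grasped"
      'Add robot motion to grasp the workpiece here
    EndIf
  Next i
Fend
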
Turning All Vision Object Labels On and Off

Force All Labels Off (Vision Toolbar Only)

The [Force All Labels Off] button provides a useful way to reduce screen clutter when working with many vision objects in one sequence.
The [Force All Labels Off] button on the Vision Guide toolbar is a two-position button. When pressed in, labels for all vision objects except the selected object are turned off, making the displayed objects easier to see.
When the [Force All Labels Off] is in the out position (not pressed in), then labels are shown for each vision object that is displayed in the image display.

KEY POINTS


  • The [Force All Labels Off] button is sometimes used in conjunction with the [Force All Graphics On] button. When the [Force All Graphics On] button is pressed, you can still use [Force All Labels Off]. This means that even though the [Force All Graphics On] button is pressed in, the labels will still not be displayed because the [Force All Labels Off] button is also pressed in.
  • If you are working in a vision sequence where you have just released the [Force All Labels Off] button and you still cannot see a specific vision object, chances are that the Graphics property for that vision object is set to None, which means no graphics at all are displayed for that object.

The [Force All Labels Off] button is dimmed (made light gray in color and inaccessible) if there are no vision sequences for the current project.

Turning All Vision Object Graphics On

Force All Graphics On (Vision Toolbar Only)

The [Force All Graphics On] button provides a quick method to turn all graphics (search window, model origin, model window, Lines, and Labels) for all vision objects in the current vision sequence On with one button click.
This button overrides the setting of the Graphics property for each individual vision object making it easy to quickly see all vision objects rather than modifying the Graphics property for each vision object individually.

KEY POINTS


The [Force All Labels Off] button is sometimes used in conjunction with the [Force All Graphics On] button. In this case, the [Force All Labels Off] button has precedence. This means that even though the [Force All Graphics On] button is pressed in, the labels will still not be displayed because the [Force All Labels Off] button is also pressed in.

The [Force All Graphics On] button is dimmed (made light gray in color and inaccessible) if there are no vision sequences for the current project.

Showing Only the Current Object

Show Only Current Object (Vision Toolbar Only)

When there are many objects in a sequence, it is sometimes difficult to select and work with the desired object. Clicking the [Show Only Current Object] button displays only the currently active object. To display all objects again, click [Show Only Current Object] again. When only the current object is displayed, you can choose which object to display by selecting it in the Objects list.