SimpleCV module

Image

class SimpleCV.Image(source=None)

The Image class is the heart of SimpleCV and allows you to convert to and from a number of source types with ease. It also has intelligent buffer management, so that modified copies of the Image required for algorithms such as edge detection can be cached and reused when appropriate.

Images are converted into 8-bit, 3-channel images in RGB colorspace. The class will automatically handle conversion from other representations into this standard format.
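
For example, a minimal sketch of loading an image from disk and saving a copy (the file names are hypothetical):

from SimpleCV import Image

img = Image("sample.png")     # hypothetical file; the constructor accepts several source types
img.save("sample_copy.png")   # write the image back out to disk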

applyHLSCurve(hCurve, lCurve, sCurve)

Apply three ColorCurve corrections in HSL space. Parameters are: * Hue ColorCurve * Lightness (brightness/value) ColorCurve * Saturation ColorCurve

Returns: IMAGE

applyIntensityCurve(curve)

Intensity applied to all three color channels

Returns: Image

applyRGBCurve(rCurve, gCurve, bCurve)

Apply three ColorCurve corrections in the RGB channels. Parameters are: * Red ColorCurve * Green ColorCurve * Blue ColorCurve

Returns: IMAGE
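
For example, a minimal sketch that brightens only the red channel; the input file is hypothetical and the curves use the four point-pair form described for the ColorCurve class later in this document:

from SimpleCV import Image, ColorCurve

img = Image("sample.png")                                          # hypothetical input file
boost = ColorCurve([[0, 0], [100, 140], [180, 220], [255, 255]])   # lift the midtones
flat = ColorCurve([[0, 0], [85, 85], [170, 170], [255, 255]])      # identity curve
corrected = img.applyRGBCurve(boost, flat, flat)                   # only the red channel changes
corrected.save("sample_red_boost.png")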

binarize(thresh=127)
Perform a binary threshold on the image, changing all values above thresh to white and all below to black. If a color tuple is provided, each color channel is thresholded separately.
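
A minimal sketch of thresholding, assuming a hypothetical input file:

from SimpleCV import Image

img = Image("sample.png")       # hypothetical input file
bw = img.binarize(thresh=100)   # values above 100 become white, the rest black
bw.save("sample_binary.png")
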
copy()

Return a full copy of the Image’s bitmap. Note that this is different from using python’s implicit copy function in that only the bitmap itself is copied.

Returns: IMAGE

dilate(iterations=1)

Apply a morphological dilation. A dilation has the effect of smoothing blobs while intensifying the amount of noise blobs. This implementation uses the default OpenCV 3x3 square kernel. Dilation is effectively a local maximum detector: the kernel moves over the image and takes the maximum value inside the kernel.

iterations - this parameter is the number of times to apply/reapply the operation

See: http://en.wikipedia.org/wiki/Dilation_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-dilate
Example Use: a part's blob needs to be smoother
Example Code: ./examples/MorphologyExample.py

Returns: IMAGE

drawCircle(ctr, rad, color=(0, 0, 0), thickness=1)

Draw a circle on the Image, parameters include: * the center of the circle * the radius in pixels * a color tuple (default black) * the thickness of the circle

Note that this modifies the image in-place and clears all buffers.

Returns: NONE - Inline Operation

drawLine(pt1, pt2, color=(0, 0, 0), thickness=1)

Draw a line on the Image, parameters include: * pt1 - the first point for the line (tuple) * pt2 - the second point on the line (tuple) * a color tuple (default black) * thickness of the line

Note that this modifies the image in-place and clears all buffers.

Returns: NONE - Inline Operation
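
A minimal sketch of both drawing calls; note that they modify the Image in place (coordinates and file names are illustrative):

from SimpleCV import Image

img = Image("sample.png")                                      # hypothetical input file
img.drawCircle((50, 50), 10, color=(255, 0, 0), thickness=2)   # red circle centered at (50, 50)
img.drawLine((0, 0), (100, 100), color=(0, 255, 0))            # green diagonal line
img.save("sample_annotated.png")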

edges(t1=50, t2=100)

Finds an edge map Image using the Canny edge detection method. Edges will be brighter than the surrounding area.

The t1 parameter is roughly the “strength” of the edge required, and the value between t1 and t2 is used for edge linking. For more information:

<http://opencv.willowgarage.com/documentation/python/imgproc_feature_detection.html> <http://en.wikipedia.org/wiki/Canny_edge_detector>

Returns: IMAGE
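
A minimal sketch, assuming a hypothetical input file:

from SimpleCV import Image

img = Image("sample.png")             # hypothetical input file
edge_map = img.edges(t1=50, t2=100)   # Canny edge map; edges appear brighter than the background
edge_map.save("sample_edges.png")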

erode(iterations=1)

Apply a morphological erosion. An erosion has the effect of removing small bits of noise and smoothing blobs. This implementation uses the default OpenCV 3x3 square kernel. Erosion is effectively a local minimum detector: the kernel moves over the image and takes the minimum value inside the kernel.

iterations - this parameter is the number of times to apply/reapply the operation

See: http://en.wikipedia.org/wiki/Erosion_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-erode
Example Use: a threshold/blob image has 'salt and pepper' noise
Example Code: ./examples/MorphologyExample.py

Returns: IMAGE
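
A minimal sketch that combines erode() and dilate() to clean up a thresholded image (file names are hypothetical; see ./examples/MorphologyExample.py for the full example):

from SimpleCV import Image

img = Image("parts_threshold.png")        # hypothetical thresholded/binary image
cleaned = img.erode(iterations=2)         # remove small 'salt and pepper' specks
smoothed = cleaned.dilate(iterations=2)   # grow the surviving blobs back out
smoothed.save("parts_cleaned.png")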

findBarcode(zxing_path='')

If you have the python-zxing library installed, you can find 2D and 1D barcodes in your image. These are returned as Barcode feature objects in a FeatureSet. The single parameter is the path to ZXing, needed only if you don't have the ZXING_LIBRARY environment variable set.

You can clone python-zxing at http://github.com/oostendo/python-zxing

Returns: BARCODE
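
A minimal sketch, assuming python-zxing is installed; the image and ZXing path are hypothetical:

from SimpleCV import Image

img = Image("barcode.png")                 # hypothetical image containing a barcode
code = img.findBarcode("/path/to/zxing")   # path only needed if ZXING_LIBRARY is not set
if code:
    print(code.data)                       # parsed contents of the code (see Barcode below)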

findBlobs(threshval=127, minsize=10, maxsize=0)

If you have the cvblob library installed, this will look for continuous light regions and return them as Blob features in a FeatureSet. Parameters specify the threshold value, and minimum and maximum size for blobs.

You can find the cv-blob python library at http://github.com/oostendo/cvblob-python

Returns: FEATURESET
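
A minimal sketch, assuming cvblob-python is installed and using a hypothetical input file:

from SimpleCV import Image

img = Image("coins.png")                           # hypothetical input file
blobs = img.findBlobs(threshval=127, minsize=10)
if blobs:
    blobs.draw(color=(0, 255, 0))                  # mark each blob on the image
    img.save("coins_blobs.png")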

findCorners(maxnum=50, minquality=0.04, mindistance=1.0)

This will find corner Feature objects and return them as a FeatureSet strongest corners first. The parameters give the number of corners to look for, the minimum quality of the corner feature, and the minimum distance between corners.

Returns: FEATURESET
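
A minimal sketch, assuming a hypothetical input file:

from SimpleCV import Image

img = Image("checkerboard.png")        # hypothetical input file
corners = img.findCorners(maxnum=30)
if corners:
    corners.draw()                     # mark each detected corner on the image
    img.save("checkerboard_corners.png")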

findHaarFeatures(cascadefile, scale_factor=1.2, min_neighbors=2, use_canny=1)

If you want to find Haar Features (useful for face detection among other purposes) this will return Haar feature objects in a FeatureSet. The parameters are: * the scaling factor for subsequent rounds of the Haar cascade (default 1.2) * the minimum number of rectangles that make up an object (default 2) * whether or not to use Canny pruning to reject areas with too many edges (default yes, set to 0 to disable)

For more information, consult the cv.HaarDetectObjects documentation

You will need to provide your own cascade file - these are usually found in /usr/local/share/opencv/haarcascades and specify a number of body parts.

Returns: FEATURESET
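
A minimal sketch of face detection; the cascade path below is an assumption, so substitute whichever cascade file you have installed:

from SimpleCV import Image

img = Image("group_photo.png")   # hypothetical input file
cascade = "/usr/local/share/opencv/haarcascades/haarcascade_frontalface_alt.xml"   # assumed location
faces = img.findHaarFeatures(cascade)
if faces:
    faces.draw(color=(0, 255, 0))    # outline each detected face
    img.save("group_photo_faces.png")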

findLines(threshold=80, minlinelength=30, maxlinegap=10, cannyth1=50, cannyth2=100)

findLines will find line segments in your image and return Line feature objects in a FeatureSet. The parameters are: * threshold, which determines the minimum "strength" of the line * min line length – how many pixels long the line must be to be returned * max line gap – how much gap is allowed between line segments to consider them the same line * cannyth1 and cannyth2 are thresholds used in the edge detection step; refer to _getEdgeMap() for details

For more information, consult the cv.HoughLines2 documentation

Returns: FEATURESET
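
A minimal sketch, assuming a hypothetical input file:

from SimpleCV import Image

img = Image("hallway.png")                              # hypothetical input file
lines = img.findLines(threshold=80, minlinelength=30)
if lines:
    lines.draw(color=(0, 0, 255))                       # draw each detected segment in blue
    img.save("hallway_lines.png")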

flipHorizontal()

Horizontally mirror an image

Returns: IMAGE

flipVertical()

Vertically mirror an image

Returns: IMAGE

getBitmap()
Retrieve the bitmap (iplImage) of the Image. This is useful if you want to use functions from OpenCV with SimpleCV’s image class
getEmpty(channels=3)
Create a new, empty OpenCV bitmap with the specified number of channels (default 3)
getGrayscaleMatrix()
Returns the intensity grayscale matrix
getMatrix()
Get the matrix (cvMat) version of the image, required for some OpenCV algorithms
getPIL()
Get a PIL Image object for use with the Python Image Library
grayscale()

Return a grayscale version of the image

Returns: IMAGE

histogram(numbins=50)

Return a numpy array of the 1D histogram of intensity for pixels in the image. The single parameter is how many "bins" to use.

Returns: LIST

invert()

Invert (negative) the image. Note that this can also be done with the unary minus (-) operator.

Returns: IMAGE

max(other)

The maximum value of this image and the other image, in each channel. If other is a number, returns the maximum of each pixel value and that number.

Returns: IMAGE

meanColor()

Finds average color of all the pixels in the image.

Returns: TUPLE

min(other)

The minimum value of this image and the other image, in each channel. If other is a number, returns the minimum of each pixel value and that number.

Returns: IMAGE

morphClose()

morphologyClose applies a morphological close operation which is effectively a dilation operation followed by a morphological erosion. This operation helps to ‘bring together’ or ‘close’ binary regions which are close together.

See: http://en.wikipedia.org/wiki/Closing_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: use when a part that should be one blob is really two blobs
Example Code: ./examples/MorphologyExample.py

Returns: IMAGE

morphGradient()

The morphological gradient is the difference between the morphological dilation and the morphological erosion. This operation extracts the edges of the blobs in the image.

See: http://en.wikipedia.org/wiki/Morphological_Gradient
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: use when you have blobs but you really just want to know the blob edges
Example Code: ./examples/MorphologyExample.py

Returns: IMAGE

morphOpen()

morphologyOpen applies a morphological open operation which is effectively an erosion operation followed by a morphological dilation. This operation helps to ‘break apart’ or ‘open’ binary regions which are close together.

See: http://en.wikipedia.org/wiki/Opening_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: two-part blobs are 'sticking' together
Example Code: ./examples/MorphologyExample.py

Returns: IMAGE

rotate(angle, mode='fixed', point=[-1, -1], scale=1.0)

This rotates an image around a specific point by the given angle. By default, in "fixed" mode, the returned Image has the same dimensions as the original Image and the contents are scaled to fit. In "full" mode the contents retain their original size and the returned Image is scaled to contain them. By default the point is the center of the image; you can also specify a scaling parameter.

Returns: IMAGE

save(filehandle_or_filename='', mode='')
Save the image to the specified filename. If no filename is provided it will use the filename the Image was loaded from or the last place it was saved to.
scale(width, height)

Scale the image to a new width and height.

Returns: IMAGE
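
A minimal sketch combining rotate() and scale(), assuming a hypothetical input file:

from SimpleCV import Image

img = Image("sample.png")               # hypothetical input file
rotated = img.rotate(45, mode="full")   # "full" mode keeps the whole contents in the result
thumb = img.scale(160, 120)             # resize to 160x120 pixels
rotated.save("sample_rotated.png")
thumb.save("sample_thumb.png")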

shear(cornerpoints)

Given a set of new corner points in clockwise order, return a sheared Image that transforms the Image contents. The returned Image has the same dimensions.

cornerpoints is a 2x4 array of point tuples

Returns: IMAGE

size()

Gets width and height

Returns: TUPLE

smooth(algorithm_name='gaussian', aperature='', sigma=0, spatial_sigma=0)

Smooth the image, by default with a Gaussian blur. If desired, additional algorithms and apertures can be specified. Optional parameters are passed directly to OpenCV's cv.Smooth() function.

Returns: IMAGE

splitChannels(grayscale=True)

Split the channels of an image into RGB (not the default BGR). The single parameter is whether to return the channels as grayscale images (default) or to return them as tinted color images.

Returns: TUPLE - of 3 image objects
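
A minimal sketch, assuming a hypothetical input file:

from SimpleCV import Image

img = Image("sample.png")         # hypothetical input file
(r, g, b) = img.splitChannels()   # grayscale images, one per channel
r.save("sample_red_channel.png")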

stretch(thresh_low=0, thresh_high=255)

The stretch filter works on a grayscale image; if the image is color, it returns a grayscale image. The filter takes a lower and upper threshold: anything below the lower threshold is pushed to black (0) and anything above the upper threshold is pushed to white (255).

Returns: IMAGE

transformAffine(rotMatrix)

This helper function for shear performs an affine rotation using the supplied matrix. The matrix can be either an OpenCV mat or an np.ndarray type, and should be 2x3.

Returns: IMAGE

transformPerspective(rotMatrix)

This helper function for warp performs a perspective transform using the supplied matrix. The matrix can be either an OpenCV mat or an np.ndarray type, and should be 3x3.

Returns: IMAGE

warp(cornerpoints)

Given a new set of corner points in clockwise order, return an Image with the image's contents warped to the new coordinates. The returned Image will be the same size as the original.

Returns: IMAGE
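
A minimal sketch of warp(), using illustrative corner coordinates and a hypothetical input file:

from SimpleCV import Image

img = Image("document.png")   # hypothetical input file
(w, h) = img.size()
# new corner positions, clockwise from the upper left (values are illustrative)
corners = ((40, 0), (w - 1, 20), (w - 50, h - 1), (0, h - 30))
warped = img.warp(corners)
warped.save("document_warped.png")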

class SimpleCV.ColorCurve(curve_vals)

ColorCurve is a color spline class for performing color correction. It can take as parameters a SciPy Univariate spline, or an array with at least 4 point pairs. Either of these must map in a 255x255 space. The curve can then be used in the applyRGBCurve, applyHLSCurve, and applyIntensityCurve functions:

clr = ColorCurve([[0,0], [100, 120], [180, 230], [255, 255]])
image.applyIntensityCurve(clr)

The only property, mCurve, is a linear array with 256 elements ranging from 0 to 255.

Features

class SimpleCV.FeatureSet

FeatureSet is a class extended from Python’s list which has special functions so that it is useful for handling feature metadata on an image.

In general, functions dealing with attributes will return numpy arrays, and functions dealing with sorting or filtering will return new FeatureSets.

angle()
Return a numpy array of the angles (theta) of each feature. Note that theta is given in radians, with 0 being horizontal.
area()
Returns a numpy array of the area of each feature in pixels.
colorDistance(color=(0, 0, 0))
Return a numpy array of the distance each feature's average color is from a given color tuple (default black, so colorDistance() returns intensity)
coordinates()
Returns a 2d numpy array of the x,y coordinates of each feature. This is particularly useful if you want to use Scipy’s Spatial Distance module
distanceFrom(point=(-1, -1))
Returns a numpy array of the distance each Feature is from a given coordinate. Default is the center of the image.
draw(color=(255, 0, 0))
Call draw() on each feature in the FeatureSet.
filter(filterarray)

Return a FeatureSet which is filtered on a numpy boolean array. This will let you use the attribute functions to easily screen Features out of returned FeatureSets.

Some examples:

my_lines.filter(my_lines.length() < 200)  # returns all lines shorter than 200px
my_blobs.filter(my_blobs.area() > 0.9 * my_blobs.length()**2)  # returns blobs that are nearly square
my_lines.filter(abs(my_lines.angle()) < numpy.pi / 4)  # any lines within 45 degrees of horizontal
my_corners.filter(my_corners.x() - my_corners.y() > 0)  # only return corners in the upper diagonal of the image

length()
Return a numpy array of the length (longest dimension) of each feature.
meanColor()
Return a numpy array of the average color of the area covered by each Feature.
sortAngle(theta=0)
Return a sorted FeatureSet with the features closest to a given angle first. Note that theta is given in radians, with 0 being horizontal.
sortArea()
Returns a new FeatureSet, with the largest area features first.
sortColorDistance(color=(0, 0, 0))
Return a sorted FeatureSet with features closest to a given color first. Default is black, so sortColorDistance() will return darkest to brightest
sortDistance(point=(-1, -1))
Returns a sorted FeatureSet with the features closest to a given coordinate first. Default is from the center of the image.
sortLength()
Return a sorted FeatureSet with the longest features first.
x()
Returns a numpy array of the x (horizontal) coordinate of each feature.
y()
Returns a numpy array of the y (vertical) coordinate of each feature.
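
A minimal sketch tying FeatureSet sorting to findBlobs(), assuming a hypothetical input file:

from SimpleCV import Image

img = Image("coins.png")            # hypothetical input file
blobs = img.findBlobs()
if blobs:
    largest = blobs.sortArea()[0]   # sortArea() puts the largest blob first
    print(largest.area(), largest.coordinates())
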
class SimpleCV.Feature(i, at_x, at_y)

The Feature object is an abstract class which real features descend from. Each feature object has:

  • a draw() method,
  • an image property, referencing the originating Image object
  • x and y coordinates
  • default functions for determining angle, area, meanColor, etc for FeatureSets
  • in the Feature class, these functions assume the feature is 1px
angle()
Return the angle (theta) of the feature – default 0 (horizontal)
area()
Area covered by the feature – for a pixel, 1
colorDistance(color=(0, 0, 0))
Return the euclidean color distance of the color tuple at x,y from a given color (default black)
coordinates()
Return an array of x,y
distanceFrom(point=(-1, -1))
Given a point (default to center of the image), return the euclidean distance of x,y from this point
draw(color=(255.0, 0.0, 0.0))
With no dimension information, color the x,y point for the feature
length()
Longest dimension of the feature – for a pixel, 1
meanColor()
Return the color tuple from x,y
class SimpleCV.Corner(i, at_x, at_y)

Bases: SimpleCV.Features.Feature

The Corner feature is a point returned by the findCorners function

angle()
Return the angle (theta) of the feature – default 0 (horizontal)
area()
Area covered by the feature – for a pixel, 1
colorDistance(color=(0, 0, 0))
Return the euclidean color distance of the color tuple at x,y from a given color (default black)
coordinates()
Return an array of x,y
distanceFrom(point=(-1, -1))
Given a point (default to center of the image), return the euclidean distance of x,y from this point
draw(color=(255, 0, 0))
Draw a small circle around the corner. Color tuple is single parameter, default Red
length()
Longest dimension of the feature – for a pixel, 1
meanColor()
Return the color tuple from x,y
class SimpleCV.Line(i, line)

Bases: SimpleCV.Features.Feature

The Line class is returned by the findLines function, but can also be initialized with any two points:

l = Line(image, point1, point2)

where image is the source Image and point1 and point2 are coordinate tuples.

l.points will be a tuple of the two points

angle()
This is the angle of the line, from the leftmost point to the rightmost point. Returns the angle (theta) in radians, with 0 = horizontal, -pi/2 = vertical with positive slope, pi/2 = vertical with negative slope.
area()
Area covered by the feature – for a pixel, 1
colorDistance(color=(0, 0, 0))
Return the euclidean color distance of the color tuple at x,y from a given color (default black)
coordinates()
Return an array of x,y
distanceFrom(point=(-1, -1))
Given a point (default to center of the image), return the euclidean distance of x,y from this point
draw(color=(0, 0, 255))
Draw the line, default color is blue
length()
Compute the length of the line
meanColor()
Returns the mean color of pixels under the line. Note that when the line falls "between" pixels, each pixel's color contributes to the weighted average.
class SimpleCV.HaarFeature(i, haarobject, haarclassifier=None)

Bases: SimpleCV.Features.Feature

The HaarFeature is a rectangle returned by the findHaarFeatures() function.

  • The x,y coordinates are defined by the center of the bounding rectangle
  • the classifier property refers to the cascade file used for detection
  • points are the clockwise points of the bounding rectangle, starting in upper left
angle()
Returns the angle of the rectangle – horizontal if wide, vertical if tall
area()
Returns the area contained within the HaarFeature’s bounding rectangle
colorDistance(color=(0, 0, 0))
Return the euclidean color distance of the color tuple at x,y from a given color (default black)
coordinates()
Return an array of x,y
distanceFrom(point=(-1, -1))
Given a point (default to center of the image), return the euclidean distance of x,y from this point
draw(color=(0, 255, 0))
Draw the bounding rectangle, default color green
length()
Returns the longest dimension of the HaarFeature, either width or height
meanColor()
Find the mean color of the boundary rectangle
class SimpleCV.Blob(i, cb)

Bases: SimpleCV.Features.Feature

The Blob Feature is a wrapper for the cvblob-python library.

The findBlobs() function returns contiguous regions of light-colored area, given an intensity threshold. The Blob class helps you map the position, volume, and shape of these areas. The coordinates of the Blob are its centroid, and its area is defined by its total pixel count.

Blob implements all of the Feature properties, and its core data structure, cvblob, has the following properties (from cvblob.h):

CvLabel label; ///< Label assigned to the blob.

union
{
  unsigned int area; ///< Area (moment 00).
  unsigned int m00; ///< Moment 00 (area).
};

unsigned int minx; ///< X min.
unsigned int maxx; ///< X max.
unsigned int miny; ///< Y min.
unsigned int maxy; ///< y max.

CvPoint2D64f centroid; ///< Centroid.

double m10; ///< Moment 10.
double m01; ///< Moment 01.
double m11; ///< Moment 11.
double m20; ///< Moment 20.
double m02; ///< Moment 02.

double u11; ///< Central moment 11.
double u20; ///< Central moment 20.
double u02; ///< Central moment 02.

double n11; ///< Normalized central moment 11.
double n20; ///< Normalized central moment 20.
double n02; ///< Normalized central moment 02.

double p1; ///< Hu moment 1.
double p2; ///< Hu moment 2.

CvContourChainCode contour;           ///< Contour.
CvContoursChainCode internalContours; ///< Internal contours. 

For more information:

angle()
This angle function is defined as: 0.5 * atan2(2.0 * blob.cvblob.u11, (blob.cvblob.u20 - blob.cvblob.u02))
area()
colorDistance(color=(0, 0, 0))
Return the euclidean color distance of the color tuple at x,y from a given color (default black)
coordinates()
Return an array of x,y
distanceFrom(point=(-1, -1))
Given a point (default to center of the image), return the euclidean distance of x,y from this point
draw(color=(0, 255, 0))
Fill in the blob with the given color (default green), and flush buffers
length()
Length returns the longest dimension of the X/Y bounding box
meanColor()
Returns the color tuple of the entire area of the blob
class SimpleCV.Barcode(i, zxbc)

Bases: SimpleCV.Features.Feature

The Barcode Feature wraps the object returned by findBarcode(), a python-zxing object.

  • The x,y coordinate is the center of the code
  • points represents the four boundary points of the feature. Note: for QR codes, these points are the reference rectangles and are quadrangular, rather than rectangular as with other data matrix types.
  • data is the parsed data of the code
angle()
Return the angle (theta) of the feature – default 0 (horizontal)
area()
Returns the area defined by the quadrangle formed by the boundary points
colorDistance(color=(0, 0, 0))
Return the euclidean color distance of the color tuple at x,y from a given color (default black)
coordinates()
Return an array of x,y
distanceFrom(point=(-1, -1))
Given a point (default to center of the image), return the euclidean distance of x,y from this point
draw(color=(255, 0, 0))
Draws the bounding area of the barcode, given by points. Note that for QR codes, these points are the reference boxes, and so may “stray” into the actual code.
length()
Returns the longest side of the quadrangle formed by the boundary points
meanColor()
Return the color tuple from x,y

Cameras

class SimpleCV.Camera(camera_index=0, prop_set={}, threaded=True)

Bases: SimpleCV.Camera.FrameSource

The Camera class is the class for managing input from a basic camera. Note that once the camera is initialized, it will be locked from being used by other processes. On Linux, you can check manually whether you have compatible devices by looking for /dev/video* devices.

This class wraps OpenCV's cvCapture class and associated methods. Read up on OpenCV's CaptureFromCAM method for more details if you need finer control than just basic frame retrieval.

getAllProperties()
Return all properties from the camera
getImage()

Retrieve an Image object from the camera. If you experience problems with stale frames from the camera's hardware buffer, increase the flushcache number to dequeue multiple frames before retrieval.

We’re working on how to solve this problem.
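
A minimal sketch of grabbing a single frame, assuming a camera at index 0:

from SimpleCV import Camera

cam = Camera()         # first available camera
img = cam.getImage()   # grab a single frame as a SimpleCV Image
img.save("frame.png")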

getProperty(prop)
Retrieve the value of a given property, wrapper for cv.GetCaptureProperty
getPropery(p)
class SimpleCV.Kinect

Bases: SimpleCV.Camera.FrameSource

This is an experimental wrapper for the Freenect Python libraries. You can use getImage() and getDepth() for separate channel images.
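
A minimal sketch, assuming the freenect Python bindings and a connected Kinect:

from SimpleCV import Kinect

kin = Kinect()
rgb = kin.getImage()     # standard RGB frame
depth = kin.getDepth()   # depth channel rendered as an image
depth.save("kinect_depth.png")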

getAllProperties()
getDepth()
getDepthMatrix()
getImage()
getPropery(p)
class SimpleCV.VirtualCamera(s, st)

Bases: SimpleCV.Camera.FrameSource

The virtual camera lets you test algorithms or functions by providing a Camera object which is not a physically connected device.

Currently, VirtualCamera supports “image” and “video” source types.
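
A minimal sketch of both source types, using hypothetical file names:

from SimpleCV import VirtualCamera

stillcam = VirtualCamera("sample.png", "image")   # every getImage() returns the same picture
img = stillcam.getImage()

vidcam = VirtualCamera("sample.avi", "video")     # getImage() steps through the video's frames
frame = vidcam.getImage()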

getAllProperties()
getImage()
Retrieve the next frame of the video, or just a copy of the image
getPropery(p)
class SimpleCV.JpegStreamCamera(url)

Bases: SimpleCV.Camera.FrameSource

The JpegStreamCamera takes a URL of a JPEG stream and treats it like a camera. The current frame can always be accessed with getImage()

Requires the Python Imaging Library: http://www.pythonware.com/library/pil/handbook/index.htm
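
A minimal sketch; the stream URL is hypothetical:

from SimpleCV import JpegStreamCamera

cam = JpegStreamCamera("http://192.168.1.10/video.mjpg")   # hypothetical MJPEG stream URL
img = cam.getImage()                                       # most recent frame from the stream
img.save("stream_frame.png")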

getAllProperties()
getImage()
Return the current frame of the JpegStream being monitored
getPropery(p)

Streams

class SimpleCV.JpegStreamer(hostandport=8080, st=0.1)

The JpegStreamer class allows the user to stream a JPEG-encoded file to an HTTP port. Any updates to the jpg file will automatically be pushed to the browser via multipart/replace content type.

To initialize:

js = JpegStreamer()

To update:

img.save(js)

To open a browser and display:

import webbrowser
webbrowser.open(js.url())

Note the optional parameters on the constructor:
- port (default 8080), which sets the TCP port you need to connect to
- sleep time (default 0.1), how often to update. Above 1 second seems to cause dropped connections in Google Chrome

Once initialized, the buffer and sleeptime can be modified and will function properly – port will not.
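
Putting the pieces together, a minimal sketch of streaming a live camera to a browser (assumes a connected camera; the port and sleep interval are illustrative):

from SimpleCV import Camera, JpegStreamer
import time
import webbrowser

js = JpegStreamer(8080)       # serve the stream on TCP port 8080
cam = Camera()
webbrowser.open(js.url())     # open the viewer page in the default browser
while True:
    cam.getImage().save(js)   # push each new frame to connected browsers
    time.sleep(0.1)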

streamUrl()
Returns the URL of the MJPEG stream. If host and port are not set in the constructor, defaults to "http://localhost:8080/stream/".
url()
Returns the JpegStreamer's browser-appropriate URL; if not provided in the constructor, it defaults to "http://localhost:8080".
