The Image class is the heart of SimpleCV and allows you to convert to and from a number of source types with ease. It also has intelligent buffer management, so that modified copies of the Image required for algorithms such as edge detection can be cached and reused when appropriate.
Images are converted into 8-bit, 3-channel images in RGB colorspace. The class automatically handles conversion from other representations into this standard format.
Apply 3 ColorCurve corrections in HSL space. Parameters are:
* Hue ColorCurve
* Lightness (brightness/value) ColorCurve
* Saturation ColorCurve
Returns: IMAGE
Apply a single ColorCurve correction to the intensity of all three color channels.
Returns: Image
Apply 3 ColorCurve corrections to the RGB channels. Parameters are:
* Red ColorCurve
* Green ColorCurve
* Blue ColorCurve
Returns: IMAGE
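A minimal sketch of an RGB correction, following the ColorCurve documentation below; it assumes Image and ColorCurve are importable from the SimpleCV package, and the image path is a placeholder:
from SimpleCV import Image, ColorCurve
# boost the midtones of one channel; ColorCurve needs at least 4 point pairs
boost = ColorCurve([[0, 0], [100, 120], [180, 230], [255, 255]])
flat = ColorCurve([[0, 0], [85, 85], [170, 170], [255, 255]])  # identity curve
img = Image("parts.png")                       # placeholder image path
warmer = img.applyRGBCurve(boost, flat, flat)  # correct only the red channel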
Return a full copy of the Image’s bitmap. Note that this is different from using python’s implicit copy function in that only the bitmap itself is copied.
Returns: IMAGE
Apply a morphological dilation. A dilation has the effect of smoothing blobs while intensifying the amount of noise blobs. This implementation uses the default OpenCV 3x3 square kernel. Dilation is effectively a local maxima detector: the kernel moves over the image and takes the maximum value inside the kernel.
iterations - the number of times to apply/reapply the operation
See: http://en.wikipedia.org/wiki/Dilation_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-dilate
Example Use: A part's blob needs to be smoother.
Example Code: ./examples/MorphologyExample.py
Returns: IMAGE
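A brief sketch, assuming the operation is exposed as Image.dilate() as in the SimpleCV API and using a placeholder file name:
from SimpleCV import Image
img = Image("parts.png")   # placeholder path
smoother = img.dilate(2)   # apply the 3x3 dilation kernel twice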
Draw a circle on the Image. Parameters include:
* the center of the circle
* the radius in pixels
* a color tuple (default black)
* the thickness of the circle
Note that this modifies the image in-place and clears all buffers.
Returns: NONE - Inline Operation
Draw a line on the Image. Parameters include:
* pt1 - the first point of the line (tuple)
* pt2 - the second point of the line (tuple)
* a color tuple (default black)
* thickness of the line
Note that this modifies the image in-place and clears all buffers.
Returns: NONE - Inline Operation
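A short sketch covering both drawing calls; the method names drawCircle and drawLine are assumptions based on the standard SimpleCV API, and the file names are placeholders:
from SimpleCV import Image
img = Image("parts.png")                          # placeholder path
img.drawCircle((80, 60), 25, (0, 255, 0), 2)      # center, radius, color, thickness
img.drawLine((0, 0), (160, 120), (255, 0, 0), 1)  # pt1, pt2, color, thickness
img.save("annotated.png")                         # both calls modified img in place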
Finds an edge map Image using the Canny edge detection method. Edges will be brighter than the surrounding area.
The t1 parameter is roughly the “strength” of the edge required, and the value between t1 and t2 is used for edge linking. For more information:
<http://opencv.willowgarage.com/documentation/python/imgproc_feature_detection.html> <http://en.wikipedia.org/wiki/Canny_edge_detector>
Returns: IMAGE
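A minimal sketch, assuming the Canny operation is exposed as Image.edges() with the t1/t2 thresholds described above; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")        # placeholder path
edge_map = img.edges(50, 100)   # t1 and t2 thresholds; tune per image
edge_map.save("edges.png")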
Apply a morphological erosion. An erosion has the effect of removing small bits of noise and smoothing blobs. This implementation uses the default OpenCV 3x3 square kernel. Erosion is effectively a local minima detector: the kernel moves over the image and takes the minimum value inside the kernel.
iterations - the number of times to apply/reapply the operation
See: http://en.wikipedia.org/wiki/Erosion_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-erode
Example Use: A threshold/blob image has 'salt and pepper' noise.
Example Code: ./examples/MorphologyExample.py
Returns: IMAGE
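A minimal sketch, assuming the operation is exposed as Image.erode() as in the SimpleCV API; the file name is a placeholder:
from SimpleCV import Image
binary = Image("threshold.png")  # placeholder: a noisy thresholded image
cleaned = binary.erode(1)        # one pass of the 3x3 erosion kernel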
If you have the python-zxing library installed, you can find 2d and 1d barcodes in your image. These are returned as Barcode feature objects in a FeatureSet. The single parameter is the path to ZXing, used if you don't have the ZXING_LIBRARY environment variable set.
You can clone python-zxing at http://github.com/oostendo/python-zxing
Returns: BARCODE
If you have the cvblob library installed, this will look for continuous light regions and return them as Blob features in a FeatureSet. Parameters specify the threshold value, and minimum and maximum size for blobs.
You can find the cv-blob python library at http://github.com/oostendo/cvblob-python
Returns: FEATURESET
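A quick sketch using findBlobs() with its defaults and the FeatureSet filtering described later in this document; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")   # placeholder path
blobs = img.findBlobs()    # default threshold and size limits
if blobs:
    big = blobs.filter(blobs.area() > 100)  # keep only blobs over 100 px, see FeatureSet below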
This will find corner Feature objects and return them as a FeatureSet, strongest corners first. The parameters give the number of corners to look for, the minimum quality of the corner feature, and the minimum distance between corners.
Returns: FEATURESET
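A minimal sketch, assuming the method is exposed as findCorners() as in the SimpleCV API; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")     # placeholder path
corners = img.findCorners()  # strongest corners first
if corners:
    xs = corners.x()         # attribute accessors return numpy arrays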
If you want to find Haar Features (useful for face detection, among other purposes) this will return Haar feature objects in a FeatureSet. The parameters are:
* the scaling factor for subsequent rounds of the Haar cascade (default 1.2)
* the minimum number of rectangles that makes up an object (default 2)
* whether or not to use Canny pruning to reject areas with too many edges (default yes, set to 0 to disable)
For more information, consult the cv.HaarDetectObjects documentation
You will need to provide your own cascade file - these are usually found in /usr/local/share/opencv/haarcascades and specify a number of body parts.
Returns: FEATURESET
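A rough sketch, assuming the method is exposed as findHaarFeatures() and that FeatureSet.draw() is available; the image path and cascade path are placeholders for your own files:
from SimpleCV import Image
img = Image("group.jpg")  # placeholder path
# point the cascade path at a file from your OpenCV install
faces = img.findHaarFeatures("/usr/local/share/opencv/haarcascades/haarcascade_frontalface_alt.xml")
if faces:
    faces.draw()          # outline each detection on the image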
findLines will find line segments in your image and return Line feature objects in a FeatureSet. The parameters are:
* threshold, which determines the minimum "strength" of the line
* min line length - how many pixels long the line must be to be returned
* max line gap - how much gap is allowed between line segments to consider them the same line
* cannyth1 and cannyth2 - thresholds used in the edge detection step; refer to _getEdgeMap() for details
For more information, consult the cv.HoughLines2 documentation
Returns: FEATURESET
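A short sketch using findLines() with only the threshold parameter and the FeatureSet filtering described later; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")   # placeholder path
lines = img.findLines(80)  # higher threshold keeps only strong lines
if lines:
    long_lines = lines.filter(lines.length() > 50)  # lines longer than 50 px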
Horizontally mirror an image
Returns: IMAGE
Vertically mirror an image
Returns: IMAGE
Return a grayscale version of the image.
Returns: IMAGE
Return a numpy array of the 1D histogram of intensity for pixels in the image. The single parameter is how many "bins" to use.
Returns: LIST
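A minimal sketch, assuming the method is exposed as Image.histogram() with the bin count as its single argument; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")  # placeholder path
hist = img.histogram(20)  # 20 intensity bins
peak = max(hist)          # count in the most populated bin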
Invert (negative) the image. Note that this can also be done with the unary minus (-) operator.
Returns: IMAGE
The maximum value of this image and the other image, in each channel. If other is a number, returns the maximum of each pixel and that number.
Returns: IMAGE
Finds the average color of all the pixels in the image.
Returns: TUPLE
The minimum value of this image and the other image, in each channel. If other is a number, returns the minimum of each pixel and that number.
Returns: IMAGE
morphologyClose applies a morphological close operation which is effectively a dilation operation followed by a morphological erosion. This operation helps to ‘bring together’ or ‘close’ binary regions which are close together.
See: http://en.wikipedia.org/wiki/Closing_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: Use when a part that should be one blob is really two blobs.
Example Code: ./examples/MorphologyExample.py
Returns: IMAGE
The morphological gradient is the difference between the morphological dilation and the morphological erosion. This operation extracts the edges of blobs in the image.
See: http://en.wikipedia.org/wiki/Morphological_Gradient
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: Use when you have blobs but you really just want to know the blob edges.
Example Code: ./examples/MorphologyExample.py
Returns: IMAGE
morphologyOpen applies a morphological open operation which is effectively an erosion operation followed by a morphological dilation. This operation helps to ‘break apart’ or ‘open’ binary regions which are close together.
See: http://en.wikipedia.org/wiki/Opening_(morphology)
See: http://opencv.willowgarage.com/documentation/cpp/image_filtering.html#cv-morphologyex
Example Use: Two-part blobs are 'sticking' together.
Example Code: ./examples/MorphologyExample.py
Returns: IMAGE
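A rough sketch of the three morphology helpers described above. In SimpleCV these operations are typically exposed as morphOpen(), morphClose(), and morphGradient(); the exact method names may differ from the descriptions here, and the file name is a placeholder:
from SimpleCV import Image
binary = Image("blobs.png")       # placeholder: a thresholded blob image
joined = binary.morphClose()      # bridge blobs that should be one region
split = binary.morphOpen()        # break apart lightly connected blobs
outline = binary.morphGradient()  # keep only the blob edges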
This rotates an image around a specific point by the given angle. By default, in "fixed" mode, the returned Image has the same dimensions as the original Image, and the contents are scaled to fit. In "full" mode the contents retain their original size and the returned Image is enlarged to contain them. By default the rotation point is the center of the image; you can also specify a scaling parameter.
Returns: IMAGE
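A minimal sketch, assuming the method is exposed as rotate() with a mode keyword as described above; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")            # placeholder path
fixed = img.rotate(45)              # same dimensions, contents scaled to fit
full = img.rotate(45, mode="full")  # contents keep their original size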
Scale the image to a new width and height.
Returns: IMAGE
Given a set of new corner points in clockwise order, return a sheared Image that transforms the Image contents. The returned Image has the same dimensions.
cornerpoints is a 2x4 array of point tuples
Returns: IMAGE
Gets width and height
Returns: TUPLE
Smooth the image, by default with a Gaussian blur. If desired, additional algorithms and apertures can be specified. Optional parameters are passed directly to OpenCV's cv.Smooth() function.
Returns: IMAGE
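A minimal sketch, assuming the method is exposed as Image.smooth() with a Gaussian default; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")  # placeholder path
soft = img.smooth()       # Gaussian blur with the default aperture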
Split the channels of an image into RGB (not the default BGR). The single parameter is whether to return the channels as grayscale images (default) or to return them as tinted color images.
Returns: TUPLE - of 3 image objects
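A brief sketch, assuming the method is exposed as splitChannels() with a single boolean parameter as described above; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")                 # placeholder path
(r, g, b) = img.splitChannels()          # grayscale channel images by default
(rt, gt, bt) = img.splitChannels(False)  # tinted color versions instead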
The stretch filter works on a grayscale image; if the image is color, it returns a grayscale image. The filter takes a lower and an upper threshold: anything below the lower threshold is pushed to black (0) and anything above the upper threshold is pushed to white (255).
Returns: IMAGE
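A minimal sketch, assuming the method is exposed as stretch(lower, upper); the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")          # placeholder path
contrasty = img.stretch(50, 200)  # below 50 goes to black, above 200 goes to white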
This helper function for shear performs an affine rotation using the supplied matrix. The matrix can be either an OpenCV mat or an np.ndarray. The matrix should be 2x3.
Returns: IMAGE
This helper function for warp performs a perspective transform using the supplied matrix. The matrix can be either an OpenCV mat or an np.ndarray. The matrix should be 3x3.
Returns: IMAGE
Given a new set of corner points in clockwise order, return an Image with the image's contents warped to the new coordinates. The returned Image will be the same size as the original image.
Returns: IMAGE
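A short sketch, assuming the method is exposed as warp() taking four clockwise corner tuples and that size() returns (width, height) as documented above; the file name is a placeholder:
from SimpleCV import Image
img = Image("parts.png")  # placeholder path
w, h = img.size()
# pull the top-left corner inward; corners listed clockwise from top-left
corners = ((60, 40), (w - 1, 0), (w - 1, h - 1), (0, h - 1))
warped = img.warp(corners)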
ColorCurve is a color spline class for performing color correction. It can take as parameters a SciPy Univariate spline, or an array with at least 4 point pairs. Either of these must map in a 255x255 space. The curve can then be used in the applyRGBCurve, applyHSVCurve, and applyIntensityCurve functions:
clr = ColorCurve([[0,0], [100, 120], [180, 230], [255, 255]])
image.applyIntensityCurve(clr)
The only property, mCurve, is a linear array with 256 elements from 0 to 255.
FeatureSet is a class extended from Python’s list which has special functions so that it is useful for handling feature metadata on an image.
In general, functions dealing with attributes will return numpy arrays, and functions dealing with sorting or filtering will return new FeatureSets.
Return a FeatureSet which is filtered on a numpy boolean array. This will let you use the attribute functions to easily screen Features out of return FeatureSets.
Some examples:
my_lines.filter(my_lines.length() < 200) # returns all lines < 200px
my_blobs.filter(my_blobs.area() > 0.9 * my_blobs.length()**2) # returns blobs that are nearly square
my_lines.filter(abs(my_lines.angle()) < numpy.pi / 4) # any lines within 45 degrees of horizontal
my_corners.filter(my_corners.x() - my_corners.y() > 0) # only return corners in the upper diagonal of the image
The Feature object is an abstract class which real features descend from. Each feature object has:
Bases: SimpleCV.Features.Feature
The Corner feature is a point returned by the findCorners function.
Bases: SimpleCV.Features.Feature
The Line class is returned by the findLines function, but can also be initialized with any two points:
l = Line(Image, point1, point2)
where point1 and point2 are coordinate tuples
l.points will be a tuple of the two points
Bases: SimpleCV.Features.Feature
The HaarFeature is a rectangle returned by the findHaarFeatures() function.
Bases: SimpleCV.Features.Feature
The Blob Feature is a wrapper for the cvblob-python library.
The findBlobs() function returns contiguous regions of light-colored area, given an intensity threshold. The Blob class helps you map the position, volume, and shape of these areas. The coordinates of the Blob are its centroid, and its area is defined by its total pixel count.
Blob implements all of the Feature properties, and its core data structure, cvblob, has the following properties (from cvblob.h):
CvLabel label; ///< Label assigned to the blob.
union
{
unsigned int area; ///< Area (moment 00).
unsigned int m00; ///< Moment 00 (area).
};
unsigned int minx; ///< X min.
unsigned int maxx; ///< X max.
unsigned int miny; ///< Y min.
unsigned int maxy; ///< y max.
CvPoint2D64f centroid; ///< Centroid.
double m10; ///< Moment 10.
double m01; ///< Moment 01.
double m11; ///< Moment 11.
double m20; ///< Moment 20.
double m02; ///< Moment 02.
double u11; ///< Central moment 11.
double u20; ///< Central moment 20.
double u02; ///< Central moment 02.
double n11; ///< Normalized central moment 11.
double n20; ///< Normalized central moment 20.
double n02; ///< Normalized central moment 02.
double p1; ///< Hu moment 1.
double p2; ///< Hu moment 2.
CvContourChainCode contour; ///< Contour.
CvContoursChainCode internalContours; ///< Internal contours.
For more information:
Bases: SimpleCV.Features.Feature
The Barcode Feature wraps the object returned by findBarcode(), a python-zxing object.
Bases: SimpleCV.Camera.FrameSource
The Camera class is the class for managing input from a basic camera. Note that once the camera is initialized, it will be locked from being used by other processes. You can check manually whether you have compatible devices on Linux by looking for /dev/video* devices.
This class wraps OpenCV's cvCapture class and associated methods. Read up on OpenCV's CaptureFromCAM method for more details if you need finer control than just basic frame retrieval.
Retrieve an Image object from the camera. If you experience problems with stale frames from the camera's hardware buffer, increase the flushcache number to dequeue multiple frames before retrieval.
We’re working on how to solve this problem.
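A minimal sketch of basic frame retrieval with the Camera class; the output file name is a placeholder:
from SimpleCV import Camera
cam = Camera()          # first available capture device
frame = cam.getImage()  # grab the current frame as an Image
frame.save("snapshot.png")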
Bases: SimpleCV.Camera.FrameSource
This is an experimental wrapper for the Freenect Python libraries. You can call getImage() and getDepth() for separate channel images.
Bases: SimpleCV.Camera.FrameSource
The virtual camera lets you test algorithms or functions by providing a Camera object which is not a physically connected device.
Currently, VirtualCamera supports “image” and “video” source types.
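A brief sketch, assuming the constructor takes the source followed by the source type as described above; the file name is a placeholder:
from SimpleCV import VirtualCamera
vc = VirtualCamera("test.png", "image")  # placeholder file; use "video" for movie files
img = vc.getImage()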
Bases: SimpleCV.Camera.FrameSource
The JpegStreamCamera takes a URL of a JPEG stream and treats it like a camera. The current frame can always be accessed with getImage()
Requires the [Python Imaging Library](http://www.pythonware.com/library/pil/handbook/index.htm)
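A minimal sketch; the URL is a placeholder for any camera that serves a JPEG/MJPEG stream:
from SimpleCV import JpegStreamCamera
jc = JpegStreamCamera("http://192.168.1.10/video.mjpg")  # placeholder stream URL
frame = jc.getImage()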
The JpegStreamer class allows the user to stream a jpeg encoded file to an HTTP port. Any updates to the jpg file will automatically be pushed to the browser via the multipart/replace content type.
To initialize:
js = JpegStreamer()
To update:
img.save(js)
To open a browser and display:
import webbrowser
webbrowser.open(js.url)
Note the optional parameters on the constructor:
- port (default 8080), which sets the TCP port you need to connect to
- sleep time (default 0.1), how often to update; above 1 second seems to cause dropped connections in Google Chrome
Once initialized, the buffer and sleeptime can be modified and will function properly – port will not.