
Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images

Abstract

Background

The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology.

Methods

Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed and contour evolution was allowed to proceed until the rate of change of the enclosed area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute distance (MAD), root mean squared distance (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics.

Results and discussion

The mean MAD was 0.87 mm (σ = 0.36 mm), the mean RMSD was 1.1 mm (σ = 0.47 mm), and the mean HD was 3.4 mm (σ = 2.0 mm), indicating that boundaries were, on average, accurate to within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (σ = 0.171) and 0.990 (σ = 0.00786), respectively, indicating that CLs were consistently undersegmented, but the contour interior rarely included pixels that were judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early.

Conclusion

The hypothesis that level set segmentation can be accurate to within 1–2 mm on average was supported, although larger deviations did occur in places. The method was robust to boundary leakage, as evidenced by the high specificity. It was concluded that the technique is promising and that a suitable data set of human ovarian images should be obtained to conduct further studies.

Background

The ovaries of all mammalian species, including humans and cattle, contain follicles and corpora lutea (CL). The bovine model for studying human ovarian function is well established [1].

Objective and motivation

The objective of this research is to study the viability of level set image segmentation techniques [2] for automatic segmentation of CL (delineation of the boundary) in two-dimensional (2D) ultrasound images. Our study represents the first investigation of semi-automated segmentation of corpora lutea from ultrasound images. The level set method lends itself to image segmentation tasks because it requires minimal user input, accommodates arbitrary changes in region topology and offers a straightforward extension to higher dimensional data [3]. Figure 1(a) shows an image of a CL and Figure 1(b) shows the desired segmentation result as drawn by a human expert in ovarian ultrasound interpretation.

Figure 1

Corpus Luteum segmentation goal and contour initialization. (a) Unprocessed ultrasound image containing a corpus luteum; a small irregularly shaped cystic cavity is visualized within the CL. (b) Manual segmentation of the CL by a human expert. (c) A circular initial contour placed within the CL in the image in (a). (d) The surface (ψ) of the initial signed distance function in which the contour is embedded.

The presence (or absence) of a CL in the ovaries, its size and morphology provide information regarding the current state of the individual's reproductive cycle [4, 5]. The most common method of visualizing the CL in vivo is ultrasonography. Monitoring the development of ovarian follicles and CL over time is crucial to the understanding of human and bovine reproductive biology, fertility and timing of fertility therapy, the effects of contraception, and the diagnosis of ovarian diseases and dysfunction, such as ovarian cysts, cancers, and polycystic ovarian syndrome.

In practice, CL segmentation is performed manually. Manual delineation is time consuming and subject to variance in human interpretation of images. The domain of image processing offers the potential for detailed analysis of CL size and appearance, which would facilitate study of the aforementioned processes and diseases. Successful automation of CL segmentation would further the automation of existing analyses, such as correlating the value of image pixels within the CL region with various physiological attributes [6] and automatically determining CL diameter for use in higher-level classifiers [5], as well as facilitate new investigations.

Literature review

Research on prostate segmentation from 2D ultrasound images was reviewed for insight into CL segmentation due to the similarities of the two problems. The prostate has a similar echotexture appearance to that of corpora lutea in ultrasound images and a similar level of contrast with the rest of the image. Moreover, there is only one region of interest in both problems. However, there are differences which make the CL segmentation problem more challenging. Due to standardized imaging procedures, the prostate's position in an image is approximately known a priori, and strong assumptions may also be made about the prostate's shape in the image plane [7–10]. This is not possible when imaging corpora lutea, since a CL can be present at different locations within the ovary and is more variable in shape and size [11, 12]. CLs are assumed only to be approximately elliptical in the image plane with low to moderate eccentricity. Segmentation of the prostate from ultrasound images has been well studied [7]. A number of prostate segmentation methods used deformable contour models, which were first introduced as tools for image segmentation by Kass et al. [13]. Their active contour models or "snakes" formulated image segmentation as an energy minimization problem. An active contour was initially placed on an image, and solving the energy equations caused the curve to move or "evolve" until it minimized the energy function. The energy function was chosen so that the curve tended to follow edges in the image. Snakes were found to handle topological changes in contours, such as merging or splitting, poorly; these difficulties can be overcome with care, albeit at considerable computational overhead.

The success of prostate segmentation using deformable models such as snakes and so-called "discrete dynamic contours" was largely dependent on careful initialization of the contour in a position near the desired boundary [14–16].

Badiei et al. developed an algorithm in which a small number of marker points were placed on the prostate boundary, which was then warped, based on an a priori shape model, so that the prostate was elliptical in shape [8]. The marker points were then used to find the ellipse that best fit the warped prostate. The inverse of the warping transformation was applied to the ellipse to obtain the final segmentation. The success of the approach was dependent on the correctness of the elliptical shape assumption and on careful placement of the initial marker points.

Level set methods have proven effective both in general [17–20] and for ultrasound image segmentation [21, 22]. Recent applications of the level set method to prostate ultrasound image segmentation [3, 7, 23] show that the technique is accurate and flexible (it can handle contours of varying shape, size, and concavity). A level set method was chosen for the current study because of the prior success of level set methods in prostate segmentation and their ability to easily handle arbitrary changes in contour topology.

Fan et al. [3] used a level set method to perform three-dimensional (3D) surface detection from ultrasound images of the prostate. A fast discrimination method was used to roughly extract the prostate region. The prostate region information was incorporated into the level set method instead of the spatial image gradient. This addressed the issue of "boundary leaking", which occurs when the contour evolves across a very weak edge that is part of the desired boundary. The segmentations produced appeared qualitatively good, but a quantitative assessment of segmentation accuracy was lacking.

Herein, a new methodology was created to segment corpora lutea from 2D ultrasound images using this work as a starting point. Our algorithm consists of a multi-step preprocessing stage followed by segmentation using a level set method. It is semi-automated in that it requires an initial user-specified closed contour that is assumed to be completely contained within the CL, and to completely contain the CL's central cavity if one exists. It was hypothesized that this level set contour evolution method could be used to locate the boundaries of CLs with an average error of 1–2 mm. The CLs used in this study ranged from 14.3 mm to 21.5 mm in diameter.

Methods

Image data

The work herein is motivated by the need to study human reproduction, but images of bovine ovaries were used in this feasibility study because bovine ovaries are well established as a vehicle for studying human ovaries due to similarities in physiology and morphology. The images used in this study were obtained during a previous study by Singh et al. [6]. Left and right ovaries of heifers were surgically removed at defined stages of the estrous cycle and imaged ex situ in parallel planes at 0.5 mm increments using a broad-band (5–9 MHz), convex-array ultrasound transducer interfaced with an ATL Ultra Mark 9 HDI ultrasound machine (Advanced Technology Laboratories, Bothell, WA). At the time of ovariectomy, the number of days from ovulation was known. From this data set, ovaries with CLs (n = 8) were selected for the current study. From the set of parallel images of each ovary, the image which contained the CL at maximum diameter was selected. All images were 640 × 480 pixel 8-bit grayscale, but the pixel size varied across images, ranging from 0.057 mm × 0.057 mm to 0.087 mm × 0.087 mm as determined from the distance gradations on the right-hand side of each image. An experienced gynecologic ultrasonographer manually segmented the images, which provided a "ground truth" for validation purposes. The diameters of the CLs in the selected images ranged from 14.2 mm to 21.5 mm. Diameter was estimated by averaging the length and width of the smallest bounding box, aligned to the image coordinate axes, of the expertly segmented CL region. A sample image is shown in Figure 1(c) and is used as a running example throughout this paper.

Curve evolution with level sets

In this section, the level set method is presented in a similar fashion to that in [3] which is, in turn, derived from [2]. The level set method evolves an initial 2D contour according to an energy function derived from image pixel data by embedding the 2D contour in a 3D surface.

An initial 2D contour C(t = 0) is represented by a 3D function ψ that evolves over time. The value of ψ at a point p at time t = 0 is defined as the distance d from p to the nearest point on C at time t:

ψ(p, t = 0) = ±d     (1)

The sign of d is negative if the point lies in the interior of C and is positive otherwise. The function ψ(p, t = 0) is called a signed distance function. Figure 1(c) illustrates a circular initial contour that was used to segment the image on which the contour was superimposed. The corresponding signed distance function, ψ(p, t = 0) is in Figure 1(d); the 2D contour is drawn beneath the surface.

The level set method causes the contour C(t = 0) to move towards the desired boundary by evolving the surface ψ over time. At an arbitrary time t, the contour C(t) is represented by the set of points for which ψ(p, t) = 0, also called the zero-level set of ψ:

C(t) = {p : ψ(p, t) = 0}     (2)

The evolution of ψ is described by the partial differential equation

∂ψ/∂t + F |∇ψ| = 0     (3)

In this equation, the initial condition is ψ(p, t = 0), ∇ψ denotes the gradient of ψ, and F is the speed function. The speed function describes the rate at which the contour will move in the outward normal direction. The contour is encouraged to evolve towards the desired image boundary by designing an appropriate speed function F. Figure 2(a) is the contour from Figure 1(c) resulting from evolution using Equation 3 and an appropriate speed function; the corresponding final surface and its zero level set are depicted in 2(b).

Figure 2

Result of the level set method. (a) The contour in Figure 1(c) after evolution with Equation 3 and an appropriate speed function. (b) The corresponding surface.

In general, the speed function F depends on the curvature K of the evolving front and is typically separated into a constant term F0 and the remainder F1(K):

F = F0 + F1(K)     (4)

The constant term F0 provides a constant expansion or contraction force depending on its sign. The curvature-dependent component F1 controls the smoothness of the deforming shape. A common choice of F is given in the following equation:

F = 1 − ϵK     (5)

Equation 5 describes an outward unit normal force that is reduced by a factor proportional to the local curvature of the contour. Intuitively, this causes the non-smooth segments of the contour (which have high curvature) to move slowly while nearby portions of the contour "catch up", resulting in a smoother curve. The constant ϵ regulates the smoothness of the curve and must be greater than zero [3]. Larger values of ϵ result in smoother contours.

To cause the evolving contour to stop at the desired image boundary, F is typically multiplied by an image-dependent quantity k_I:

k_I = 1 / (1 + |∇G_σ * I|^p)     (6)

The term ∇G_σ denotes the gradient of a two-dimensional Gaussian function whose standard deviation is σ, I is the image function, and * denotes convolution. The convolution operation is the same as that described for the sticks filters (below); only the kernel differs. Since the gradient operator can be applied after the convolution without changing the result, the second term of the denominator's sum can be viewed as the magnitude of the gradient of a Gaussian-smoothed image, raised to the power p. A Gaussian filter has the effect of computing, for each pixel, a center-weighted average of the intensities in its local neighborhood, which smoothes or blurs the image. A smoothed image is desirable since it reduces the magnitude of small, unimportant local edges so that they have less effect on the speed function.

Pixels with large image gradient (corresponding to pixels that have a high probability of being edges of the corpus luteum) cause the value of k_I (Equation 6) to be close to zero. When k_I is multiplied with F, the speed at which the contour embedded in ψ propagates is reduced to nearly zero when it is near the desired boundary. The exponent p controls the severity of the penalty that the gradient magnitude applies to the speed function.
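To make the roles of Equations 3, 5 and 6 concrete, the following Python/NumPy sketch performs one explicit time step of the evolution. It is an illustration only, not the authors' implementation (which used the MATLAB toolbox cited below); the SciPy helper gaussian_gradient_magnitude, the simple central differences, and the parameter values are assumptions of this sketch, and a production implementation would use upwind differencing and narrow-band updates.

```python
import numpy as np
from scipy import ndimage

def level_set_step(psi, image, dt=0.5, eps=0.375, sigma=2.0, p=1.0):
    """One explicit Euler step of d(psi)/dt = -k_I * (1 - eps*K) * |grad(psi)|.

    Illustrative sketch only; sigma and the finite-difference scheme are
    assumptions, not the authors' implementation.
    """
    # Image-dependent weighting k_I (Equation 6): near zero at strong edges.
    grad_mag = ndimage.gaussian_gradient_magnitude(image.astype(float), sigma)
    k_I = 1.0 / (1.0 + grad_mag ** p)

    # Spatial derivatives of the embedding surface psi.
    gy, gx = np.gradient(psi)
    norm = np.sqrt(gx ** 2 + gy ** 2) + 1e-12

    # Curvature K = div(grad(psi) / |grad(psi)|) for the smoothing term.
    K = np.gradient(gx / norm, axis=1) + np.gradient(gy / norm, axis=0)

    # Speed function F = 1 - eps*K (Equation 5), weighted by k_I, applied
    # through the evolution equation (Equation 3).
    F = k_I * (1.0 - eps * K)
    return psi - dt * F * norm
```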

Speed function for CL segmentation

The low intensity contrast between CL regions and other regions of the ovary, and the noisy nature of ultrasound images, required a more sophisticated construction of the speed function than direct application of the process described by Equation 6. The speed function was constructed by filtering the image with a sticks filter [24, 25]. The filtered image was then processed with Sobel filters [26] to obtain a gradient magnitude image, which was then contrast enhanced. The mean image intensity was then subtracted from each pixel, and the resulting image was contrast enhanced a second time and inserted into Equation 6 to obtain the weighting term k_I for the speed function F, above. These steps are discussed in greater detail below.

Sticks filter

Sticks filtering [24, 25] is a process for reducing image speckle, which may obfuscate boundaries of structures in the image. A sticks filter is a bank of linear filters which evaluate the likelihood of a linear feature of length N, in one of 2N − 2 possible orientations, passing through each pixel. Each filter mask is an N × N matrix where each entry is either 0 or 1/N. Entries with value 1/N represent lines through the center of the matrix. A bank of filters for N = 5 is illustrated graphically in Figure 3. Linear filtering [26] (convolution) works by positioning the filter mask's center over each image pixel and computing the sum of products of each entry in the filter mask with the intensity of the underlying pixel. Each filter mask or "stick" will thus produce the greatest response (sum of products) when positioned over a bright linear feature of a specific orientation. A pixel's output intensity is the response of the filter from the sticks filter bank which responds maximally at that point; the magnitude of the maximal response corresponds to the mean intensity along the line segment orientation to which that filter is most sensitive (the filter response is the sum of the intensities along the stick divided by N). Figure 4(b) shows the effect of sticks filtering on the image in Figure 4(a) using sticks of length 17. The sticks filter suppresses much of the speckle while maintaining the visibility of thin linear features which would be seriously degraded by standard image smoothing techniques.
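A minimal sketch of a sticks filter bank is given below, assuming SciPy is available; the enumeration of stick endpoints and the function name sticks_filter are choices made for this illustration and are not taken from the cited references.

```python
import numpy as np
from scipy import ndimage

def sticks_filter(image, N=17):
    """Speckle-reducing sticks filter (illustrative sketch).

    The output at each pixel is the maximal response over a bank of 2N - 2
    line-shaped masks ("sticks") of length N; each mask has N entries of 1/N,
    so its response is the mean intensity along that stick.
    """
    image = image.astype(float)
    # Endpoints of the 2N - 2 sticks: top-to-bottom lines for every column of
    # the top edge, plus left-to-right lines for the interior rows of the left
    # edge (the two diagonals would otherwise be counted twice).
    endpoints = [((0, c), (N - 1, N - 1 - c)) for c in range(N)]
    endpoints += [((r, 0), (N - 1 - r, N - 1)) for r in range(1, N - 1)]

    best = np.full(image.shape, -np.inf)
    for (r0, c0), (r1, c1) in endpoints:
        rr = np.linspace(r0, r1, N).round().astype(int)
        cc = np.linspace(c0, c1, N).round().astype(int)
        mask = np.zeros((N, N))
        mask[rr, cc] = 1.0 / N
        response = ndimage.convolve(image, mask, mode='nearest')
        best = np.maximum(best, response)   # keep the maximal stick response
    return best
```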

Figure 3

Representation of a sticks filter bank for sticks of length 5.

Figure 4

Effect of the Sticks Filter. (a) An unprocessed ultrasound image of a corpus luteum. (b) The image in (a) processed with a sticks filter bank of size 17.

The first step in our algorithm was to filter the images with a sticks filter bank of size 17. This stick length was chosen because it is small enough that it correlates well with CL boundary segments of the same length but is large enough that it smoothes small-scale features and reduces the magnitude of edges with high curvature which are not of interest. Reducing the magnitude of high-curvature edges is important since the level set method relies heavily on edge information and segmentations are undesirably influenced by small-scale edges such as those caused by speckle.

Sobel magnitude filter

Sobel convolution filters are edge detectors which approximate the partial first derivatives of the image. The horizontal (S_h) and vertical (S_v) Sobel filter kernels are

S_h = [−1 0 1; −2 0 2; −1 0 1],   S_v = [−1 −2 −1; 0 0 0; 1 2 1]     (7)

These filters compute center-weighted average finite differences over a distance of 3 pixels in a 3 pixel-wide band in the horizontal and vertical directions, respectively.

An edge magnitude image M was computed by applying the horizontal and vertical Sobel edge detectors to the sticks-filtered image and combining the horizontal and vertical responses (R_h and R_v, respectively) in the usual way:

M = √(R_h² + R_v²)     (8)
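As an illustrative sketch of Equations 7 and 8 (an addition for this presentation, not part of the original work), the following Python fragment computes the edge magnitude image M using the 3 × 3 kernels above; the 17 × 17 kernels used in this work would be substituted in the same way.

```python
import numpy as np
from scipy import ndimage

def sobel_magnitude(image):
    """Edge magnitude M = sqrt(R_h^2 + R_v^2) using 3x3 Sobel kernels
    (Equations 7-8). Illustrative sketch; this work used 17x17 kernels."""
    S_h = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)   # horizontal differences
    S_v = S_h.T                                  # vertical differences
    R_h = ndimage.convolve(image.astype(float), S_h, mode='nearest')
    R_v = ndimage.convolve(image.astype(float), S_v, mode='nearest')
    return np.hypot(R_h, R_v)
```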

The Sobel filters may be generalized to larger sizes:

(9)

Larger Sobel filters compute the finite differences over longer distances and wider bands and are much less sensitive to very local changes. The effect is that larger filters respond to larger-scale changes in the image function, while small-scale changes are not captured.

The purpose of the Sobel filter at this point in the algorithm was to identify large-scale step edges in the image, which are characteristic of the CL-stroma boundary. A Sobel filter size of 17 × 17 was chosen empirically by testing filters of various sizes on sample images; preference was given to sizes whose output images exhibited strong edges along the true boundary of the CL. A size of 17 × 17 enhanced only the major, dominant edges, which are more likely to be true boundaries of the CL. Figure 5 shows the result of processing Figure 4(b) with a 17 × 17 Sobel magnitude filter.

Figure 5

Effect of edge detection. This figure shows the result of applying a 17 × 17 Sobel magnitude filter to Figure 4(b). The dominant edges identified at this stage are those more likely to be contours of the corpus luteum.

Intensity normalization

In ultrasonographic images, even the strongest edges may have fairly weak magnitude. In order to improve the contrast and further emphasize major edges, the pixel intensities of the edge magnitude image M were normalized to obtain the image M̄.

Normalization is a linear remapping of pixel intensities. If the pixel intensities in an image are within the interval [I_min, I_max], then a normalizing linear remapping causes intensity I_min to be mapped to intensity 0, and I_max to be mapped to the maximum possible intensity, while intensities between I_min and I_max are linearly interpolated. After normalization, images were converted from 8-bit grayscale (integer pixel intensities ranging from 0 to 255) to real-valued images with intensities ranging from 0.0 (black) to 1.0 (white) by dividing each pixel's intensity by 255.

Weak edge suppression

The normalized edge magnitude image obtained in the previous step exhibits weak edges within the interior of the CL region (Figure 6(a)) caused by the fine texture present there. These edges interfere with the level set segmentation process. Weak edges were suppressed by subtracting from each pixel in the normalized edge magnitude image the mean intensity of the normalized edge magnitude image and clipping all resulting negative intensities to zero. Formally, let μ be the mean intensity of the normalized edge magnitude image M̄. The mean-subtracted edge magnitude image N is defined as

Figure 6

Removal of weak edges within the CL. (a) The normalized image from Figure 5. (b) The image from (a) after subtraction of global average, truncation, and re-normalization; the weak edges within the CL have been largely suppressed.

N(x, y) = max(M̄(x, y) − μ, 0)     (10)

The resulting image N was re-normalized using the procedure described above to obtain the normalized mean-subtracted edge magnitude image N̄. Figure 6(b) illustrates the effect of mean subtraction from the normalized edge magnitude image in Figure 6(a); the weaker edges present in the CL region have been largely suppressed. The central fluid cavity in the pictured CL causes the high-magnitude edge that is retained in the renormalized image. The image N̄ was used to compute the weighting term k_I for the speed function F in the level set method.
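The following sketch combines the normalization, weak edge suppression, and weighting steps (Equation 10 above and Equation 11, defined in the next section). The function names are invented for this illustration, and the values of α and p anticipate those reported below.

```python
import numpy as np

def normalize(image):
    """Linear remapping of intensities onto [0.0, 1.0]."""
    image = image.astype(float)
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo) if hi > lo else np.zeros_like(image)

def speed_weighting(edge_magnitude, alpha=100.0, p=1.0):
    """Compute k_I (Equation 11) from a raw edge-magnitude image M.

    Steps: normalize M, subtract its mean and clip negatives (Equation 10),
    re-normalize, then k_I = 1 / (1 + alpha * N_bar**p).  Illustrative sketch.
    """
    M_bar = normalize(edge_magnitude)
    N = np.clip(M_bar - M_bar.mean(), 0.0, None)   # suppress weak edges
    N_bar = normalize(N)
    return 1.0 / (1.0 + alpha * N_bar ** p)
```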

Level set segmentation of corpora lutea

Placement of the initial contour

Placement of the initial 2D contour was done manually. All initial contours were circular. If a fluid cavity was present, the initial contour was chosen to enclose the fluid cavity. Kastelic et al. showed that 79% of bovine CLs exhibit central cavities [27]. If a fluid cavity was not present, the contour was placed near the center of the CL. The remainder of the algorithm is automatic.

The initial 2D contour was embedded as the zero level set of a 3D signed distance function ψ(p, t = 0). This function was evolved using Baris Sumengen's Matlab level set toolbox [28] according to Equation 3.
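For illustration, a circular initial contour can be encoded as the zero level set of a signed distance function as follows; the centre and radius values are placeholders standing in for the manual placement described above, and the helper name circular_sdf is not part of the toolbox.

```python
import numpy as np

def circular_sdf(shape, center, radius):
    """Signed distance function for a circular initial contour: negative
    inside the circle, positive outside, zero on the contour itself."""
    rows, cols = np.indices(shape)
    dist_to_center = np.hypot(rows - center[0], cols - center[1])
    return dist_to_center - radius

# Example: a circle of radius 30 pixels centred at (row, col) = (240, 320),
# assumed to lie inside the CL (placeholder values).
psi0 = circular_sdf((480, 640), center=(240, 320), radius=30.0)
```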

Curve evolution parameters

The weighting term k_I for the speed function F was computed from the preprocessed image N̄ of the previous section using a slightly altered version of Equation 6:

k_I = 1 / (1 + α N̄^p)     (11)

In order to achieve finer control over the rate at which k_I approaches zero as N̄ increases, a coefficient α was added to the second term of the denominator.

Values of α = 100 and p = 1 were chosen empirically. A surface plot of 1 − k_I (inverted for ease of interpretation), computed using the image in Figure 6(b) as N̄, is shown in Figure 7. Strong edges are red in colour, indicating that the evolution of ψ will slow and eventually stop near the true contour of the CL. The parameter ϵ in Equation 5 was experimentally chosen to be 0.375. A non-zero curvature-based force was used to discourage the "leaking" of the evolving contour through sections of the true CL boundary where the magnitude of the edge was weak.

Figure 7

Visualization of the speed function. A surface plot of 1 − k_I (inverted for ease of interpretation) computed by using the image in Figure 6(b) for N̄ in Equation 11. Areas close to the desired contour of the corpus luteum boundary are red in colour, indicating that the evolution of ψ will slow and eventually stop near the true contour of the corpus luteum.

The Matlab toolbox [28] which was used to evolve the boundary also required some additional parameters. The first-order accurate finite difference option was chosen for approximation of derivatives. The surface being evolved must be discretized on a grid; a conservative grid size of 1 × 1 was chosen. The time interval between steps of the evolution, which may range from 0.5 to 0.9, was chosen to be a conservative 0.5. A more aggressive selection of a larger timestep would improve the speed of the curve evolution algorithm, but could also cause numeric instability leading to poor or nonsensical results.

The surface in which the 2D contour is embedded will almost certainly lose the property of being a signed distance function as it evolves, since portions of the surface will move at different velocities. It is necessary to periodically re-initialize the surface to a signed distance function, for example, after every m time steps. This is achieved by extracting the zero-level set C(t) (defined in Equation 2) and constructing from it a new signed distance function ψ_new. The evolution process then continues using ψ_new for another m time steps. This is repeated until a stopping condition is met (see below). A value of m = 1 was selected, which caused the surface to be reinitialized as a signed distance function after every time step. This ensured that the surface was always a signed distance function and facilitated the test of the stopping condition.
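One common way to perform this re-initialization, sketched below under the sign convention used here (ψ < 0 inside the contour), is to rebuild ψ from two Euclidean distance transforms. This is an illustration of the idea, not the method used internally by the toolbox, and it ignores sub-pixel placement of the zero level set.

```python
import numpy as np
from scipy import ndimage

def reinitialize(psi):
    """Rebuild a signed distance function from the current zero level set.

    Illustrative sketch: distance to the contour measured from outside minus
    distance measured from inside, keeping psi < 0 inside the contour.
    """
    inside = psi < 0
    # distance_transform_edt gives, for each nonzero pixel of a mask, the
    # distance to the nearest zero pixel (i.e. to the region boundary).
    dist_outside = ndimage.distance_transform_edt(~inside)
    dist_inside = ndimage.distance_transform_edt(inside)
    return dist_outside - dist_inside
```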

Stopping condition

The stopping condition determined when to cease evolution of ψ. Since the zero level set contour will presumably stop when it has settled onto the desired boundary, a test is needed to determine when the motion of C(t) becomes sufficiently small. The criterion used was motivated by the work of [29]: "If the iteration count [total number of time steps] n exceeds a particular threshold N0 and the rate of change of contour length |dL_c(t)/dt| < γ over a fixed number of iterations Δn, STOP the iteration process".

Herein, L_c(t), the length of the contour C(t), was replaced with the area of the interior of C(t), denoted A_c. The quantity |dA_c(t)/dt| was computed at each time step and a record of these values for the previous Δm time steps was maintained. The evolution was halted when n > N0 and |dA_c(t)/dt| < γ was true for all of the previous Δm time steps. Values of N0 = 250, γ = 250, and Δm = 15 were used to evaluate the stopping condition.
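A sketch of this stopping test is shown below. The area A_c is approximated here by counting pixels with ψ < 0 and the rate of change is measured per iteration, both simplifying assumptions of this illustration; N0, γ, and Δm take the values stated above, and the max_steps safeguard is an addition of the sketch.

```python
import numpy as np
from collections import deque

def contour_area(psi):
    """Area enclosed by the zero level set, taken here as the pixel count of
    the region where psi < 0 (an assumption of this sketch)."""
    return float(np.count_nonzero(psi < 0))

def evolve_until_stable(psi, step, N0=250, gamma=250.0, delta_m=15, max_steps=2000):
    """Evolve psi with the one-step function `step` (one time step plus
    re-initialization) until n > N0 and |dA_c/dt| stayed below gamma for the
    previous delta_m steps, or max_steps is reached."""
    recent = deque(maxlen=delta_m)            # recent |dA_c/dt| values
    area = contour_area(psi)
    for n in range(1, max_steps + 1):
        psi = step(psi)
        new_area = contour_area(psi)
        recent.append(abs(new_area - area))   # area change per iteration
        area = new_area
        if n > N0 and len(recent) == delta_m and max(recent) < gamma:
            break
    return psi
```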

Figure 8 shows the contour detected by the proposed method using the image and initial contour from Figure 1(c) as input to the level set method. The green contour is the initial contour, the purple contour is the final (automatic) contour, and the yellow contour is the contour determined by an experienced human interpreter. Results of the method on the entire data set are discussed in the Results section, below.

Figure 8

Result of segmentation. The contour detected by the proposed method using the image and initial contour from Figure 1(c) as input to the level set method. The green contour is the initial contour, the purple contour is the final (automatic) contour, and the yellow contour is the contour determined by an experienced human interpreter.

Smoothing with Fourier descriptors

During development of the implementation of the above algorithm, it was observed that the region boundaries determined by the algorithm quite often meandered back and forth across the expertly traced boundary. It was hypothesized that if the boundaries were smoothed, they might conform more closely to the ground truth boundary. Boundaries were smoothed by truncating the Fourier descriptors of the boundary [26]. This section reviews the basics of smoothing using Fourier descriptors.

Let P = (p_1, ..., p_B) be a sequence of B points representing the pixel locations of a boundary in (anti-)clockwise order. For the i-th point, p_i, let x(p_i) and y(p_i) respectively denote the x and y coordinates of p_i. The boundary can then be represented as a sequence of complex numbers S = (s_1, s_2, ..., s_B) where s_i = x(p_i) + j·y(p_i) and j = √−1. The Fourier descriptors, a(u), of the boundary are the coefficients of the Fourier transform of S:

a(u) = Σ_{i=1}^{B} s_i exp(−j 2π u (i − 1) / B),   u = 0, 1, ..., B − 1     (12)

The original boundary may be recovered via the inverse Fourier transform. However, S (and, in turn, P) may be approximated by using only the first Z ≤ B Fourier descriptors in the computation of the inverse transform:

ŝ_i = (1/B) Σ_{u=0}^{Z−1} a(u) exp(j 2π u (i − 1) / B),   i = 1, 2, ..., B     (13)

This acts as a "lowpass filter" on the contour and reduces the magnitude of high frequency variations, which results in a smoother contour. The fewer descriptors that are retained, the greater the smoothing effect, as illustrated in Figure 9. The effects of smoothing on contour accuracy are discussed in the Results section.
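The sketch below illustrates this lowpass truncation. As a practical realization of retaining a small fraction of the descriptors, it keeps the lowest-frequency coefficients of both signs; the function name and the keep_fraction parameter are choices of this illustration.

```python
import numpy as np

def smooth_contour(points, keep_fraction=0.05):
    """Smooth a closed boundary by truncating its Fourier descriptors.

    `points` is a (B, 2) array of (x, y) boundary coordinates in order.  The
    boundary is encoded as complex numbers s_i = x_i + j*y_i, transformed with
    the FFT, and only the lowest-frequency fraction of the descriptors is
    retained before inverting.  Illustrative sketch.
    """
    s = points[:, 0] + 1j * points[:, 1]
    a = np.fft.fft(s)
    B = len(s)
    Z = max(2, int(round(keep_fraction * B)))
    # Zero out all but the Z lowest-frequency descriptors.
    freqs = np.fft.fftfreq(B)
    cutoff = np.sort(np.abs(freqs))[Z - 1]
    a[np.abs(freqs) > cutoff] = 0.0
    smoothed = np.fft.ifft(a)
    return np.column_stack([smoothed.real, smoothed.imag])
```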

Figure 9

Effect of smoothing a contour by truncating Fourier descriptors. (a) An unsmoothed meandering contour. (b) The contour in (a) smoothed by retaining only the first 5% of the Fourier descriptors; the smoothed contour is drawn in black and the original contour is superimposed in grey. (c) Retaining only the first 3% of descriptors. (d) Retaining only the first 2% of descriptors, the smoothed contour now follows the "average" path rather than meandering.

Validation

The manual segmentations obtained for this study were used as the basis for comparison when validating the semi-automatic segmentations. The metrics [30, 31] mean absolute distance (MAD), root mean squared distance (RMSD), Hausdorff distance (HD), sensitivity, and specificity were used to evaluate segmentation accuracy. These metrics are formally defined below.

The minimum distance d_min between a point p and a contour C is defined as

d_min(p, C) = min_{c ∈ C} ‖p − c‖     (14)

which is the smallest Euclidean distance between p and any point on the contour C.

Let C_M and C_A denote the sets of points on the manually segmented and semi-automatically segmented contours, respectively. The mean absolute distance between C_M and C_A is

MAD(C_M, C_A) = (1/|C_M|) Σ_{p ∈ C_M} d_min(p, C_A)     (15)

where |C_M| denotes the number of points in the set C_M. Smaller MAD indicates a better segmentation. The root mean squared distance between C_M and C_A is

RMSD(C_M, C_A) = √( (1/|C_M|) Σ_{p ∈ C_M} d_min(p, C_A)² )     (16)

This metric is similar to MAD, except that contour points lying further from the other contour contribute a disproportionately greater penalty to the metric.

Hausdorff distance is the greatest minimum distance between two contours. It characterizes the maximum deviation of one contour from another. Formally,

HD(C_M, C_A) = max( max_{p ∈ C_M} d_min(p, C_A),  max_{p ∈ C_A} d_min(p, C_M) )     (17)

A smaller HD is more desirable.

Let TP denote the set of true positive pixels, that is, those pixels that were correctly identified as belonging to a CL region. Let TN be the set of true negative pixels correctly identified as not belonging to a CL region. Similarly define FP and FN to be the set of false positive and false negative pixels. The sensitivity of a segmentation is thus defined as

sensitivity = |TP| / (|TP| + |FN|)     (18)

and represents the percentage of pixels that truly belong to the CL region which were correctly identified as such. The specificity of a segmentation is the ratio

specificity = |TN| / (|TN| + |FP|)     (19)

Specificity represents the percentage of pixels which are not part of the CL region which were correctly identified as such. A perfect segmentation with respect to the ground truth will have both a sensitivity and a specificity of 1.0.
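For illustration, the five metrics can be computed as follows, with contours given as point sets and regions as boolean masks. Distances here are in pixels and would be scaled by the pixel size to obtain millimetres; the MAD and RMSD sums are taken over the manual contour as in Equations 15 and 16, and the Hausdorff distance is the symmetric form of Equation 17.

```python
import numpy as np
from scipy.spatial.distance import cdist

def contour_metrics(C_M, C_A, mask_M, mask_A):
    """Validation metrics of Equations 14-19 (illustrative sketch).

    C_M, C_A : (n, 2) arrays of contour point coordinates (manual, automatic).
    mask_M, mask_A : boolean images of the corresponding region interiors.
    """
    D = cdist(C_M, C_A)                    # pairwise Euclidean distances
    d_M_to_A = D.min(axis=1)               # d_min from each manual point to C_A
    d_A_to_M = D.min(axis=0)               # d_min from each automatic point to C_M

    mad = d_M_to_A.mean()                                  # Equation 15
    rmsd = np.sqrt((d_M_to_A ** 2).mean())                 # Equation 16
    hd = max(d_M_to_A.max(), d_A_to_M.max())               # Equation 17

    TP = np.count_nonzero(mask_M & mask_A)
    TN = np.count_nonzero(~mask_M & ~mask_A)
    FP = np.count_nonzero(~mask_M & mask_A)
    FN = np.count_nonzero(mask_M & ~mask_A)
    sensitivity = TP / (TP + FN)                           # Equation 18
    specificity = TN / (TN + FP)                           # Equation 19
    return mad, rmsd, hd, sensitivity, specificity
```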

Results

The eight images in our testing set were segmented with the algorithm described above. The manual segmentations performed by a human expert were compared with the automated segmentations by computing the previously described metrics for each image.

Table 1 summarizes the validation metrics for each image. Overall, the mean MAD was 0.87 mm (σ = 0.36 mm), the mean RMSD was 1.1 mm (σ = 0.47 mm), the mean Hausdorff distance was 3.4 mm (σ = 2.0 mm), the mean sensitivity was 0.814 (σ = 0.171), and the mean specificity was 0.990 (σ = 0.00786).

Table 1 Quality of the CL segmentations, analyzed by computing the mean absolute distance (MAD), root mean squared distance (RMSD), and Hausdorff distance (HD) between the automatically and manually determined contours.

Figures 10 and 11 show the results of segmentation for all test images. The green contours are the initial contours, the purple contours are the final automatically determined contours, and the yellow contours are the contours manually determined by a human expert.

Figure 10

Segmentation results for test images 1–4. The green contours are the initial contours, the purple contours are the final (automatic) contours, and the yellow contours are the contours determined by a human expert. The image numbering corresponds to "Image ID" in Table 1.

Figure 11

Segmentation results for test images 5–8. The green contours are the initial contours, the purple contours are the final (automatic) contours, and the yellow contours are the contours determined by a human expert. The image numbering corresponds to "Image ID" in Table 1.

The mean runtime for the segmentation of one image was 5 min 54 s (σ = 1 min 41 s). All test images were segmented on a 2.80 GHz Pentium 4 processor with 2 GB of RAM running Mandriva Linux (kernel 2.6.17-14mdv). All segmentations were executed using MATLAB (The MathWorks Inc., Natick, MA, USA) version 7.4.0.336 (R2007a).

Smoothing with Fourier descriptors was performed on the automatically segmented contours. Table 2 shows the mean MAD, mean RMSD, mean HD, sensitivity, and specificity over all eight test images as the percentage of descriptors used to construct the smoothed contour was varied. Table 2 demonstrates that as the percentage of descriptors was decreased, all validation metrics improved. However, the improvements were negligible; the greatest improvement was in the RMSD metric using 1% of the Fourier descriptors, where the mean RMSD decreased by 0.2 mm.

Table 2 Effect of smoothing by truncation of Fourier descriptors on mean quality metrics as a function of percentage of Fourier descriptors retained.

Discussion

Table 1 and Figures 10 and 11 demonstrate quite clearly that the proposed algorithm typically under-segments the CL; specificity is generally excellent, while sensitivity varies from very good (0.938) to quite poor (0.457). The high specificity occurred because the contours were initialized inside the CL and grew outward. The sensitivity was generally lower because of the noisy nature of the images. The gradient information captured in k_I (Equation 11) did not perfectly match the true contour of the CL, which caused the curve evolution to stop too soon in many instances. There are no other CL segmentation algorithms with which this work could be compared. Given the mean Hausdorff distance of only 3.4 mm (σ = 2.0 mm) and the images in Figures 10 and 11, it is clear that the regions are being under-segmented, but the degree of under-segmentation is fairly uniform about the true contour. Smoothing with Fourier descriptors resulted in a minor improvement, but it is concluded that the additional computation is not worth the degree of improvement. The advantages of the algorithm are that it can handle arbitrary contours and requires no user intervention beyond placement of the initial contour. Moreover, no serious boundary leaking occurred, resulting in very high specificity in the segmentations. Our work is similar to that of Gooding et al. [32], who used a comparable gradient-based level set contour evolution method to segment ovarian follicles from 3D images; it differs in that a substantial amount of preprocessing is needed to enhance the gradient formed by CL boundaries. Follicles are easier to distinguish from their environment with gradient-based level set methods without aggressive preprocessing because the gradient at follicle boundaries is usually quite strong.

There are several aspects of this algorithm that could be improved with future work. A combination of texture and intensity properties might be able to locate a small area of the CL with high confidence in which the initial contour could be automatically placed. Recent work on distinguishing CL echotexture from that of ovarian stroma has shown promise [33], and this work will be continued by examining wavelet-based texture features. A challenge in automatic contour placement will be to distinguish CL texture not only from that of stroma, but also from that of a corpus albicans, which is non-functional and composed of denser tissue.

A more rigorous exploration of the algorithm's parameter space (which is of high dimension) could yield a set of parameters that produces superior results. The speed of the contour evolution is clearly an issue. A speedup by a factor of 2–3 is likely possible by re-implementing the methods herein in a programming language such as C or C++. Real-time speeds are not likely achievable using level-set-based contour evolution; thus, more computationally efficient segmentation algorithms will be investigated to determine whether they are suitable for CL segmentation.

The algorithm must be tested on CLs imaged in vivo. It is expected that the algorithm will give similar performance on such images. Since the algorithm was successful in locating boundaries between luteal tissue and stroma, we expect it to perform similarly at boundaries where the protruding CL is closely interfaced with organs and tissues surrounding the ovary. Moreover, our preprocessing causes significant blurring in order to "plug" holes in the CL boundary through which leaks into the background may occur, which limits the potential accuracy of boundary location. Images acquired in vivo may permit less aggressive preprocessing since the area of the image surrounding the ovary will be textured and will not have a near-zero gradient through which the contour can leak. In turn, less blurring would cause less damage to the shape of the boundary, permitting higher accuracy in boundary location.

The evidence that CL morphology is related to function is conflicting, particularly between different laboratories and species. A study by Tom et al. supported the hypothesis that quantitative changes in luteal echotexture in bovine corpora lutea are indicative of changes in physiologic status and the capacity to elaborate progesterone [34]. In mares, luteal area was positively correlated with circulating progesterone levels; however, the presence of a cystic cavity within the CL did not affect the luteal gland's ability to produce progesterone [27, 35]. In humans, mean luteal area was shown to be positively correlated with serum progesterone concentrations (r = +0.88) and serum estradiol concentrations (r = +0.62) [36]. The human study is particularly encouraging with regard to the prospect of non-invasive automated interpretation of physiologic information from images, which could eventually obviate blood tests. In this context, the current study and others of its kind will be crucial to automated, standardized analysis of CL images. The work presented in this study is also useful for automated recognition of CL in studies of CL morphology and has the potential to be applied to different imaging modalities such as histology, ultrasonography and magnetic resonance imaging [37, 38]. Indeed, any study which would benefit from automatic measurements of CL diameter, such as that of Maldonado-Castillo et al. [5], would be an application for automated CL segmentation algorithms.

Conclusion

This work is a first step towards a fully automated CL segmentation algorithm and is quite successful when the initial contour is manually placed. The hypothesis that our algorithm can locate CL boundaries within 1–2 mm on average is accepted. It is concluded that level set contour evolution methods are a viable option for offline (non-real-time) CL segmentation and warrant further study.

The CL segmentation algorithm herein is a prototype for the currently missing piece of a comprehensive segmentation algorithm for all ovarian structures (follicles, CL, stroma). The work has both research and clinical importance as noninvasive gynecologic imaging becomes more firmly integrated into standard clinical practice. Standardized descriptions and image interpretations are critical in developing standardized diagnostic algorithms and clinical practice guidelines.

Abbreviations

CL:

Corpus luteum

HD:

Hausdorff distance

MAD:

Mean absolute distance

RMSD:

Root mean squared distance.

References

  1. Adams GP, Pierson RA: Bovine model for study of ovarian follicular dynamics in humans. Theriogenology. 1995, 43: 113-120.

  2. Sethian JA: Level Set Methods and Fast Marching Methods – Evolving Interfaces in Geometry, Fluid Mechanics, Computer Vision and Materials Science. 1996, Cambridge University Press

  3. Fan S, Voon LK, Ng WS: 3D Prostate Surface Detection from Ultrasound Images Based on Level Set Method. Med Image Comput Comput Assist Interv Int Conf Med Image Comput Comput Assist Interv. 2002, London, UK: Springer-Verlag, 389-396.

  4. Baerwald AR, Adams GP, Pierson RA: A new model for ovarian follicular development during the human menstrual cycle. Fertil Steril. 2003, 80: 116-122.

  5. Maldonado-Castillo I, Eramian M, Pierson R, Singh J, Adams G: Classification of Reproductive Cycle Phase using Ultrasound-detected Features. Proceedings of the 4th Canadian Conference on Computer and Robot Vision, IEEE. 2007, 258-265.

  6. Singh J, Pierson RA, Adams GP: Ultrasound image attributes of the bovine corpus luteum: structural and functional correlates. J Reprod Fertil. 1997, 109: 35-44.

  7. Fan S, Voon LK, Ng WS, Wu RY: Prostate Boundary Detection From Ultrasonographic Images. J Ultrasound Med. 2003, 22 (6): 605-623.

  8. Badiei S, Salcudean SE, Varah J, Morris WJ: Prostate Segmentation in 2D Ultrasound Images Using Image Warping and Ellipse Fitting. Med Image Comput Comput Assist Interv Int Conf Med Image Comput Comput Assist Interv. 2006, 17-24.

  9. Pathak SD, Chalana V, Haynor DR, Kim Y: Edge-Guided Boundary Delineation in Prostate Ultrasound Images. IEEE Trans Med Imaging. 2000, 19 (12): 1211-1219.

  10. Gong L, Pathak SD, Haynor DR, Cho PS, Kim Y: Parametric shape modeling using deformable superellipses for prostate segmentation. IEEE Trans Med Imaging. 2004, 23 (3): 340-349.

  11. Pierson R, Ginther O: Reliability of Diagnostic Ultrasonography for Identification and Measurement of Follicles and Detecting the Corpus Luteum in Heifers. Theriogenology. 1987, 28 (6): 929-936.

  12. Pierson R, Ginther O: Ultrasonic Imaging of the Ovaries and Uterus in Cattle. Theriogenology. 1988, 29: 21-37.

  13. Kass M, Witkin A, Terzopoulos D: Snakes: Active Contour Models. International Journal of Computer Vision. 1988, 1 (4): 321-331.

  14. Pathak S, Aarnink RG, de La Rosette JJ, Chalana V, Wijkstra H, Haynor DR, Debruyne FM, Kim Y: Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging. Proc Soc Photo Opt Instrum Eng, Bellingham, WA. 1998, 3335: 83-92.

  15. Ladak H, Mao F, Wang Y, Downey D, Steinman D, Fenster A: Prostate segmentation from 2D ultrasound images. World Congress on Medical Physics and Biomedical Engineering. 2000, 27: 1777-1788.

  16. Wang Y, Cardinal HN, Downey DB, Fenster A: Semiautomatic three-dimensional segmentation of the prostate using two-dimensional ultrasound images. Med Phys. 2003, 30 (5): 887-897.

  17. Paragios N, Deriche R: Geodesic Active Regions and Level Set Methods for Supervised Texture Segmentation. International Journal of Computer Vision. 2002, 46 (3): 223-247.

  18. Malladi R, Sethian JA, Vemuri BC: Shape Modeling with Front Propagation: A Level Set Approach. IEEE Trans Pattern Anal Mach Intell. 1995, 17 (2): 158-175.

  19. Ranchin F, Dibos F: Variational level set methods: from continuous to discrete setting, applications in video segmentation and tracking. IEEE International Conference on Image Processing. 2005, 1: 273-276.

  20. Santis AD, Iacoviello D: Discrete level set approach to image segmentation. 2005, Tech. rep., Computer Science, University of Rome

  21. Lin P, Zheng C, Yang Y, Gu J: Medical Image Segmentation by Level Set Method Incorporating Region and Boundary Statistical Information. Progress in Pattern Recognition, Image Analysis and Applications. 2004, 3287/2004: 654-660.

  22. McInerney T, Terzopoulos D: Deformable models in medical image analysis: a survey. Med Image Anal. 1996, 1 (2): 91-108.

  23. Gong L, Ng L, Pathak SD, Tutar I, Cho PS, Haynor DR, Kim Y: Prostate ultrasound image segmentation using level set-based region flow with shape guidance. Proc Soc Photo Opt Instrum Eng. Edited by: Fitzpatrick MJ, Reinhardt JM. 2005, 5747: 1648-1657.

  24. Czerwinski RN, Jones DL, O'Brien WD: Line and Boundary Detection in Speckle Images. IEEE Trans Image Process. 1998, 7 (12): 1700-1714.

  25. Czerwinski RN, Jones DL, O'Brien WD: Detection of Lines and Boundaries in Speckle Images – Application to Medical Ultrasound. IEEE Trans Med Imaging. 1999, 18 (2): 126-136.

  26. Gonzalez RC, Woods RE: Digital Image Processing. 2008, Prentice-Hall, Inc, 3

  27. Kastelic JP, Pierson RA, Ginther OJ: Ultrasonic Morphology of Corpora Lutea and Central Luteal Cavities During the Estrous Cycle and Early Pregnancy in Heifers. Theriogenology. 1990, 34 (3): 487-498.

  28. Sumengen B: A Matlab toolbox implementing Level Set Methods. 2005, [http://barissumengen.com/level_set_methods/index.html]

  29. Chaudhury KN, Ramakrishnan KR: Stability and convergence of the level set method in computer vision. Pattern Recognition Letters. 2007, 28 (7): 884-893.

  30. Chalana V, Kim Y: A methodology for evaluation of boundary detection algorithms on medical images. IEEE Trans Med Imaging. 1997, 16: 642-652.

  31. Bowyer KW: Validation of Medical Image Analysis Techniques. Handbook of Medical Imaging: Medical Image Processing and Analysis. Edited by: Fitzpatrick JM, Sonka M. 2000, Bellingham, WA: SPIE Press, 2: 567-608.

  32. Gooding MJ, Kennedy S, Noble JA: Volume segmentation and reconstruction from freehand 3D ultrasound data with application to ovarian follicle measurement. Ultrasound Med Biol. 2008, 34 (2): 183-195.

  33. Eramian MG, Adams GP, Pierson RA: Enhancing Ultrasound Texture Differences for Developing an in vivo "Virtual Histology" Approach to Bovine Ovarian Imaging. Reprod Fert Develop. 2007, 19 (8): 910-924. [http://www.publish.csiro.au/nid/44/paper/RD06167.htm]

  34. Tom JW, Pierson RA, Adams GP: Quantitative echotexture analysis of bovine corpora lutea. Theriogenology. 1998, 49: 1345-1352.

  35. Townson DH, Pierson RA, Ginther OJ: Characterization of Plasma Progesterone Concentrations for Two Distinct Luteal Morphologies in Mares. Theriogenology. 1989, 32: 197-204.

  36. Baerwald AR, Adams GP, Pierson RA: Form and function of the corpus luteum during the human menstrual cycle. Ultrasound Obst Gyn. 2005, 25 (5): 498-507.

  37. Sarty GE, Adams GP, Pierson RA: Three-dimensional magnetic resonance imaging for the study of ovarian function in a bovine in vitro model. J Reprod Fertil. 2000, 119: 69-75.

  38. Hilton JL, Baerwald AR, Sarty GE, Adams GP, Pierson RA: Magnetic Resonance Image Attributes of the Bovine Corpus Luteum During Development and Regression. Anat Rec Part A. 2003, 272A (1): 454-459.


Acknowledgements

This research was supported by grants from the Natural Sciences and Engineering Research Council of Canada, the Saskatchewan Health Research Foundation, and the Canadian Institutes for Health Research.

Author information

Corresponding author

Correspondence to Mark G Eramian.

Additional information

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

BJR carried out the literature review and experiments, and drafted the initial manuscript. MGE directly supervised the research, is primarily responsible for the design of the study, and is largely responsible for the final draft of the manuscript. RAP participated in writing the introduction, discussion, and conclusion sections of the manuscript and provided "ground truth" for the CL segmentations to be evaluated. GPA and JS provided the image data for the study, advised on the design of the study, and participated in proofreading the manuscript. All authors have read and approved the final manuscript.

Rights and permissions

Open Access This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


Cite this article

Rusnell, B.J., Pierson, R.A., Singh, J. et al. Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images. Reprod Biol Endocrinol 6, 33 (2008). https://doi.org/10.1186/1477-7827-6-33
