Level set segmentation of bovine corpora lutea in ex situ ovarian ultrasound images
 Brennan J Rusnell^{1},
 Roger A Pierson^{2},
 Jaswant Singh^{3},
 Gregg P Adams^{3} and
 Mark G Eramian^{1}
DOI: 10.1186/14777827633
© Rusnell et al; licensee BioMed Central Ltd. 2008
Received: 27 May 2008
Accepted: 04 August 2008
Published: 04 August 2008
Abstract
Background
The objective of this study was to investigate the viability of level set image segmentation methods for the detection of corpora lutea (corpus luteum, CL) boundaries in ultrasonographic ovarian images. It was hypothesized that bovine CL boundaries could be located within 1–2 mm by a level set image segmentation methodology.
Methods
Level set methods embed a 2D contour in a 3D surface and evolve that surface over time according to an image-dependent speed function. A speed function suitable for segmentation of CLs in ovarian ultrasound images was developed. An initial contour was manually placed, and contour evolution was allowed to proceed until the rate of change of the contour's area was sufficiently small. The method was tested on ovarian ultrasonographic images (n = 8) obtained ex situ. An expert in ovarian ultrasound interpretation delineated CL boundaries manually to serve as a "ground truth". Accuracy of the level set segmentation algorithm was determined by comparing semi-automatically determined contours with ground truth contours using the mean absolute difference (MAD), root mean squared difference (RMSD), Hausdorff distance (HD), sensitivity, and specificity metrics.
Results and discussion
The mean MAD was 0.87 mm (σ = 0.36 mm), the mean RMSD was 1.1 mm (σ = 0.47 mm), and the mean HD was 3.4 mm (σ = 2.0 mm), indicating that boundaries were, on average, accurate to within 1–2 mm; however, deviations in excess of 3 mm from the ground truth were observed, indicating under- or over-expansion of the contour. Mean sensitivity and specificity were 0.814 (σ = 0.171) and 0.990 (σ = 0.00786), respectively, indicating that CLs were consistently under-segmented, but that the contour interior rarely included pixels judged by the human expert not to be part of the CL. It was observed that in localities where gradient magnitudes within the CL were strong due to high-contrast speckle, contour expansion stopped too early.
Conclusion
The hypothesis that level set segmentation can be accurate to within 1–2 mm on average was supported, although larger deviations do occur. The method was robust to boundary leakage, as evidenced by the high specificity. It was concluded that the technique is promising and that a suitable data set of human ovarian images should be obtained to conduct further studies.
Background
The ovaries of all mammalian species, including humans and cattle, contain follicles and corpora lutea (CL). The bovine model for studying human ovarian function is well established [1].
Objective and motivation
The presence (or absence) of a CL in the ovaries, its size and morphology provide information regarding the current state of the individual's reproductive cycle [4, 5]. The most common method of visualizing the CL in vivo is ultrasonography. Monitoring the development of ovarian follicles and CL over time is crucial to the understanding of human and bovine reproductive biology, fertility and timing of fertility therapy, the effects of contraception, and the diagnosis of ovarian diseases and dysfunction, such as ovarian cysts, cancers, and polycystic ovarian syndrome.
In practice, CL segmentation is performed manually. Manual delineation is time consuming and subject to variance in human interpretation of images. The domain of image processing offers the potential for detailed analysis of CL size and appearance, which will facilitate study of the aforementioned processes and diseases. Successful automation of CL segmentation would further the automation of existing analyses, such as correlating the value of image pixels within the CL region with various physiological attributes [6] and automatically determining CL diameter for use in higher-level classifiers [5], as well as facilitate new investigations.
Literature review
Research on prostate segmentation from 2D ultrasound images was reviewed for insight into CL segmentation owing to the similarities of the two problems. The prostate has an echotexture appearance similar to that of corpora lutea in ultrasound images and a similar level of contrast with the rest of the image. Moreover, there is only one region of interest in both problems. However, there are differences which make the CL segmentation problem more challenging. Owing to standardized imaging procedures, the prostate's position in an image is approximately known a priori, and strong assumptions may also be made about the prostate's shape in the image plane [7–10]. This is not possible when imaging corpora lutea, since a CL can be present at different locations within the ovary and is more variable in shape and size [11, 12]. CLs are assumed only to be approximately elliptical in the image plane with low to moderate eccentricity. Segmentation of the prostate from ultrasound images has been well studied [7]. A number of prostate segmentation methods used deformable contour models, which were first introduced as tools for image segmentation by Kass et al. [13]. Their active contour models, or "snakes", formulated image segmentation as an energy minimization problem: an active contour was initially placed on an image, and solving the energy equations caused the curve to move, or "evolve", until it minimized the energy function. The energy function was chosen so that the curve tended to follow edges in the image. Snakes were found to handle topological changes in contours, such as merging or splitting, poorly; these difficulties can be overcome with care, albeit at considerable computational overhead.
The success of prostate segmentation using deformable models such as snakes and so-called "discrete dynamic contours" was largely dependent on careful initialization of the contour in a position near the desired boundary [14–16].
Badiei et al. developed an algorithm in which a small number of marker points were placed on the prostate boundary, which was then warped, based on an a priori shape model, so that the prostate was elliptical in shape [8]. The marker points were then used to find the ellipse that best fit the warped prostate. The inverse of the warping transformation was applied to the ellipse to obtain the final segmentation. The success of the approach was dependent on the correctness of the assumption of elliptical shape and the careful placement of the initial marker points.
Level set methods have proven effective both in general [17–20] and for ultrasound image segmentation [21, 22]. Recent applications of the level set method to prostate ultrasound image segmentation [3, 7, 23] show that the technique is accurate and flexible (it can handle contours of varying shape, size, and concavity). A level set method was chosen for the current study because of this prior success in prostate segmentation and because of the method's ability to easily handle arbitrary changes in contour topology.
Fan et al. [3] used a level set method to perform three-dimensional (3D) surface detection from ultrasound images of the prostate. A fast discrimination method was used to roughly extract the prostate region, and the prostate region information was incorporated into the level set method instead of the spatial image gradient. This addressed the issue of "boundary leaking", which occurs when the contour evolves across a very weak edge that is part of the desired boundary. The segmentations produced appeared qualitatively good, but a quantitative assessment of segmentation accuracy was lacking.
Herein, a new methodology was created to segment corpora lutea from 2D ultrasound images, using this work as a starting point. Our algorithm consists of a multi-step preprocessing stage followed by segmentation using a level set method. It is semi-automated in that it requires an initial user-specified closed contour that is assumed to be completely contained within the CL, and to completely contain the CL's central cavity if one exists. It was hypothesized that this level set contour evolution method could be used to locate the boundaries of CLs with an average error of 1–2 mm. The CLs used in this study ranged from 14.3 mm to 21.5 mm in diameter.
Methods
Image data
The work herein is motivated by the need to study human reproduction, but images of bovine ovaries were used in this feasibility study, as the bovine ovary is well-established as a vehicle for studying human ovaries due to similarities in physiology and morphology. The images used in this study were obtained during a previous study by Singh et al. [6]. Left and right ovaries of heifers were surgically removed at defined stages of the estrous cycle and imaged ex situ in parallel planes at 0.5 mm increments using a broadband (5–9 MHz), convex-array ultrasound transducer interfaced with an ATL Ultra Mark 9 HDI ultrasound machine (Advanced Technology Laboratories, Bothell, WA). At the time of ovariectomy, the number of days from ovulation was known. From this data set, ovaries with CLs (n = 8) were selected for the current study. From the set of parallel images of each ovary, the image which contained the CL at maximum diameter was selected. All images were 640 × 480 pixel, 8-bit grayscale, but the pixel size varied across images, ranging from 0.057 mm × 0.057 mm to 0.087 mm × 0.087 mm, as determined from the distance gradations on the right-hand side of each image. An experienced gynecologic ultrasonographer manually segmented the images, which provided a "ground truth" for validation purposes. The diameters of the CLs in the selected images ranged from 14.2 mm to 21.5 mm. Diameter was estimated by averaging the length and width of the smallest bounding box, aligned to the image coordinate axes, of the expertly segmented CL region. A sample image is shown in Figure 1(c) and is used as a running example throughout this paper.
Curve evolution with level sets
In this section, the level set method is presented in a similar fashion to that in [3] which is, in turn, derived from [2]. The level set method evolves an initial 2D contour according to an energy function derived from image pixel data by embedding the 2D contour in a 3D surface.
The sign of d is negative if the point lies in the interior of C and is positive otherwise. The function ψ(p, t = 0) is called a signed distance function. Figure 1(c) illustrates a circular initial contour that was used to segment the image on which the contour was superimposed. The corresponding signed distance function, ψ(p, t = 0) is in Figure 1(d); the 2D contour is drawn beneath the surface.
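For a circular initial contour, the signed distance function can be computed directly; the following is a minimal sketch (the grid size, center, and radius are illustrative, not the paper's values):

```python
import numpy as np

def circle_sdf(shape, center, radius):
    """Signed distance function for a circular contour:
    negative inside the circle, positive outside, zero on the contour."""
    rows, cols = np.indices(shape)
    dist = np.sqrt((rows - center[0]) ** 2 + (cols - center[1]) ** 2)
    return dist - radius  # the zero level set is the circle itself

# psi(p, t = 0) for a radius-50 circle centered in a 480 x 640 image
psi0 = circle_sdf((480, 640), center=(240, 320), radius=50.0)
```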
Equation 5 describes an outward unit normal force that is reduced by a factor proportional to the local curvature of the contour. Intuitively, this causes the non-smooth segments of the contour (which have high curvature) to move slowly while nearby portions of the contour "catch up", resulting in a smoother curve. The constant ϵ regulates the smoothness of the curve and must be greater than zero [3]. Larger values of ϵ result in smoother contours.
The term ∇G_σ denotes the gradient of a two-dimensional Gaussian function whose standard deviation is σ, I is the image function, and * denotes convolution. The convolution operation is the same as that described for sticks filters (below); only the kernel differs. Since the gradient operator can be applied after the convolution without changing the result, the second term of the denominator's sum can be viewed as the magnitude of the gradient of a Gaussian-smoothed image, raised to the power p. A Gaussian filter computes, for each pixel, a center-weighted average of the intensities in its local neighborhood, which smoothes or blurs the image. A smoothed image is desirable since smoothing reduces the magnitude of small, unimportant local edges so that they have less effect on the speed function.
Pixels with large image gradient (corresponding to pixels that have a high probability of lying on edges of the corpus luteum) cause the value of k_I (Equation 6) to be close to zero. When k_I is multiplied with F, the speed at which the contour embedded in ψ propagates is reduced to nearly zero when the contour is near the desired boundary. The exponent p controls the severity of the penalty that gradient magnitude applies to the speed function.
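The basic form k_I = 1 / (1 + |∇(G_σ * I)|^p) can be sketched as follows (σ and p values are illustrative; the paper's final speed function uses the preprocessed edge image described below rather than the raw image gradient):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def edge_weight(image, sigma=2.0, p=1.0):
    """k_I = 1 / (1 + |grad(G_sigma * I)|^p): close to 1 in smooth
    regions, close to 0 where the smoothed gradient magnitude is large."""
    smoothed = gaussian_filter(image.astype(float), sigma)
    gy, gx = np.gradient(smoothed)
    grad_mag = np.hypot(gy, gx)
    return 1.0 / (1.0 + grad_mag ** p)
```

Wherever k_I falls toward zero, the contour's outward motion is effectively halted.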
Speed function for CL segmentation
The low intensity contrast between CL regions and other regions of the ovary and the noisy nature of ultrasound images required a more sophisticated construction of the speed function than direct application of the process described by Equation 6. The speed function was constructed by first filtering the image with a sticks filter [24, 25]. The filtered image was then processed with Sobel filters [26] to obtain a gradient magnitude image, which was then contrast enhanced. The mean image intensity was then subtracted from each pixel, and the resulting image was contrast enhanced a second time and inserted into Equation 6 to obtain the weighting term k_I for the speed function F, above. These steps are discussed in greater detail below.
Sticks filter
The first step in our algorithm was to filter the images with a sticks filter bank with stick length 17. This stick length was chosen because it is small enough to correlate well with CL boundary segments of the same length, but large enough to smooth small-scale features and reduce the magnitude of high-curvature edges which are not of interest. Reducing the magnitude of high-curvature edges is important since the level set method relies heavily on edge information, and segmentations are undesirably influenced by small-scale edges such as those caused by speckle.
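A sticks filter replaces each pixel with the maximum of the averages computed along short oriented line segments ("sticks") through that pixel, which preserves line-like structures while suppressing speckle [24, 25]. A minimal sketch follows; the number of orientations and the boundary handling here are our assumptions, not the paper's exact configuration:

```python
import numpy as np
from scipy.ndimage import convolve

def stick_kernels(length, n_angles=8):
    """Oriented line ('stick') kernels, each normalized to sum to 1."""
    half = length // 2
    kernels = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        k = np.zeros((length, length))
        for t in range(-half, half + 1):
            r = half + int(round(t * np.sin(theta)))
            c = half + int(round(t * np.cos(theta)))
            k[r, c] = 1.0
        kernels.append(k / k.sum())
    return kernels

def sticks_filter(image, length=17, n_angles=8):
    """Each output pixel is the maximum oriented-line average through it."""
    image = image.astype(float)
    responses = [convolve(image, k, mode='nearest')
                 for k in stick_kernels(length, n_angles)]
    return np.max(responses, axis=0)
```

Because each kernel is normalized, flat regions pass through unchanged while a stick aligned with a boundary segment produces a strong response.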
Sobel magnitude filter
These filters compute center-weighted average finite differences over a distance of 3 pixels in a 3-pixel-wide band in the horizontal and vertical directions, respectively.
Larger Sobel filters compute finite differences over longer distances and wider bands and are much less sensitive to very local changes. The effect is that larger filters respond to larger-scale changes in the image function, while small-scale changes are not captured.
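The 3 × 3 case can be sketched with SciPy's Sobel operator (a sketch; the paper's exact filter sizes and output scaling are not reproduced here):

```python
import numpy as np
from scipy.ndimage import sobel

def sobel_magnitude(image):
    """Edge magnitude as the norm of horizontal and vertical Sobel responses."""
    image = image.astype(float)
    gx = sobel(image, axis=1)  # horizontal finite differences
    gy = sobel(image, axis=0)  # vertical finite differences
    return np.hypot(gx, gy)
```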
Intensity normalization
In ultrasonographic images, even the strongest edges may have fairly weak magnitude. In order to improve the contrast and further emphasize major edges, the pixel intensities of the edge magnitude image M were normalized to obtain the image $\overline{M}$.
Normalization is a linear remapping of pixel intensities. If the pixel intensities in an image lie within the interval [I_min, I_max], then a normalizing linear remapping maps intensity I_min to 0 and I_max to the maximum possible intensity, while intensities between I_min and I_max are linearly interpolated. After normalization, images were converted from 8-bit grayscale (integer pixel intensities ranging from 0 to 255) to real-valued images with intensities ranging from 0.0 (black) to 1.0 (white) by dividing each pixel's intensity by 255.
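The remapping can be written in a few lines (a minimal sketch; the guard for a flat image is our addition):

```python
import numpy as np

def normalize(image):
    """Linearly remap intensities so that the minimum maps to 0.0
    and the maximum maps to 1.0."""
    image = image.astype(float)
    lo, hi = image.min(), image.max()
    if hi == lo:
        return np.zeros_like(image)  # flat image: nothing to stretch
    return (image - lo) / (hi - lo)
```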
Weak edge suppression
The resulting image N was renormalized using the procedure described above to obtain the normalized, mean-subtracted edge magnitude image $\overline{N}$. Figure 6(b) illustrates the effect of mean subtraction from the normalized edge magnitude image in Figure 6(a); the weaker edges present in the CL region have been largely suppressed. The central fluid cavity in the pictured CL causes the high-magnitude edge that is retained in the renormalized image. The image $\overline{N}$ was used to compute the weighting term k_I for the speed function F in the level set method.
Level set segmentation of corpora lutea
Placement of the initial contour
Placement of the initial 2D contour was done manually. All initial contours were circular. If a fluid cavity was present, the initial contour was chosen to enclose the fluid cavity; Kastelic et al. showed that 79% of bovine corpora lutea exhibit central cavities [27]. If a fluid cavity was not present, the contour was placed near the center of the CL. The remainder of the algorithm is automatic.
The initial 2D contour was embedded as the zero level set of a 3D signed distance function ψ(p, t = 0). This function was evolved using Baris Sumengen's Matlab level set toolbox [28] according to Equation 3.
Curve evolution parameters
In order to achieve finer control over the rate at which k_{ I }approaches zero as $\overline{N}$ increases, a coefficient α was added to the second term of the denominator.
The Matlab toolbox [28] which was used to evolve the boundary also required some additional parameters. The first-order accurate finite difference option was chosen for approximation of derivatives. The surface being evolved must be discretized on a grid; a conservative grid size of 1 × 1 was chosen. The time interval between steps of the evolution, which may range from 0.5 to 0.9, was chosen to be a conservative 0.5. A more aggressive choice of a larger time step would improve the speed of the curve evolution algorithm, but could also cause numerical instability, leading to poor or nonsensical results.
The surface in which the 2D contour is embedded will almost certainly lose the property of being a signed distance function as it evolves, since portions of the surface will move at different velocities. It is necessary to periodically reinitialize the surface to a signed distance function, for example, after every m time steps. This is achieved by extracting the zero level set C(t) (defined in Equation 2) and constructing from it a new signed distance function ψ_new. The evolution process then continues using ψ_new for another m time steps. This is repeated until a stopping condition is met (see below). A value of m = 1 was selected, which caused the surface to be reinitialized as a signed distance function after every time step. This ensured that the surface was always a signed distance function and facilitated the test of the stopping condition.
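The toolbox's own reinitialization routine [28] was used in our implementation; purely as an illustration, a signed distance function can be rebuilt from the current inside/outside partition using Euclidean distance transforms (a sketch, not the toolbox's method):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def reinitialize(psi):
    """Rebuild a signed distance function that preserves the zero level
    set (sign pattern) of psi: negative inside, positive outside."""
    inside = psi < 0
    d_out = distance_transform_edt(~inside)  # outside pixels: distance to the region
    d_in = distance_transform_edt(inside)    # inside pixels: distance to the exterior
    return d_out - d_in
```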
Stopping condition
The stopping condition determined when to cease evolution of ψ. Since the zero level set contour will presumably stop when it has settled onto the desired boundary, a test is needed to determine when the motion of C(t) becomes sufficiently small. The criterion used was motivated by the work of [29]: "If the iteration count [total number of time steps] n exceeds a particular threshold N_0 and the rate of change of contour length $\left|\frac{dL_c(t)}{dt}\right| < \gamma$ over a fixed number of iterations Δn, STOP the iteration process".
Herein, L_c(t), the length of the contour C(t), was replaced with the area of the interior of C(t), denoted A_c(t). The quantity $\left|\frac{dA_c(t)}{dt}\right|$ was computed at each time step, and a record of these values for the previous Δm time steps was maintained. The evolution was halted when n > N_0 and $\left|\frac{dA_c(t)}{dt}\right| < \gamma$ held for all of the previous Δm time steps. The values N_0 = 250, γ = 250, and Δm = 15 were used to evaluate the stopping condition.
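The stopping test can be sketched as a small stateful predicate (the parameter defaults follow the values above):

```python
from collections import deque

def make_stopping_test(n0=250, gamma=250.0, window=15):
    """Return a predicate that is True once the iteration count exceeds n0
    and |dA_c/dt| has stayed below gamma for the last `window` time steps."""
    recent = deque(maxlen=window)

    def should_stop(step, area_rate):
        recent.append(abs(area_rate))
        return (step > n0
                and len(recent) == window
                and all(r < gamma for r in recent))

    return should_stop
```

Each time step calls the predicate with the current iteration count and area rate; the bounded deque keeps only the last Δm rates.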
Smoothing with Fourier descriptors
During development of the implementation of the above algorithm, it was observed that the region boundaries determined by the algorithm quite often meandered back and forth across the expertly traced boundary. It was hypothesized that if the boundaries were smoothed, they might conform more closely to the ground truth boundary. Boundaries were smoothed by truncating the Fourier descriptors of the boundary [26]. This section reviews the basics of smoothing using Fourier descriptors.
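The truncation can be sketched as follows: boundary points are encoded as complex numbers, transformed with the FFT, high-frequency coefficients are zeroed, and the inverse transform yields the smoothed boundary. The mapping from "percentage retained" to a frequency cutoff below is our interpretation:

```python
import numpy as np

def smooth_boundary(points, keep_fraction):
    """Smooth a closed boundary (an (n, 2) array of ordered coordinates)
    by retaining only the lowest-frequency Fourier descriptors."""
    z = points[:, 0] + 1j * points[:, 1]          # encode (x, y) as complex samples
    descriptors = np.fft.fft(z)
    n = len(descriptors)
    cutoff = max(1, int(keep_fraction * n / 2))   # highest signed frequency kept
    freq_index = np.fft.fftfreq(n, d=1.0 / n)     # signed integer frequencies
    descriptors[np.abs(freq_index) > cutoff] = 0  # truncate high frequencies
    smoothed = np.fft.ifft(descriptors)
    return np.column_stack([smoothed.real, smoothed.imag])
```

Keeping only the low-frequency descriptors removes high-frequency "meander" while preserving the boundary's overall shape.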
Validation
The manual segmentations obtained for this study were used as the basis for comparison when validating the semiautomatic segmentations. The metrics [30, 31] mean absolute distance (MAD), root mean squared distance (RMSD), Hausdorff distance (HD), sensitivity, and specificity were used to evaluate segmentation accuracy. These metrics are formally defined below.
which is the smallest Euclidean distance between p and any point on the contour C.
This metric is similar to MAD, except that pixels in C_{ A }which are further away from C_{ M }contribute a greater penalty to the metric.
A smaller HD is more desirable.
Specificity represents the percentage of pixels which are not part of the CL region which were correctly identified as such. A perfect segmentation with respect to the ground truth will have both a sensitivity and a specificity of 1.0.
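The five metrics can be sketched directly from their definitions (a sketch; [30, 31] give the formal definitions, and conventions for symmetrizing MAD and RMSD over the two contours vary):

```python
import numpy as np

def contour_distances(c_a, c_m):
    """MAD, RMSD and Hausdorff distance between two contours,
    each given as an (n, 2) array of boundary points."""
    d = np.sqrt(((c_a[:, None, :] - c_m[None, :, :]) ** 2).sum(axis=-1))
    a_to_m = d.min(axis=1)   # each point of c_a to its nearest point of c_m
    m_to_a = d.min(axis=0)
    both = np.concatenate([a_to_m, m_to_a])
    mad = both.mean()
    rmsd = np.sqrt((both ** 2).mean())
    hd = max(a_to_m.max(), m_to_a.max())
    return mad, rmsd, hd

def sensitivity_specificity(auto_mask, truth_mask):
    """Pixel-wise sensitivity and specificity of a segmentation mask
    against the ground-truth mask (both boolean arrays)."""
    tp = np.sum(auto_mask & truth_mask)
    tn = np.sum(~auto_mask & ~truth_mask)
    fn = np.sum(~auto_mask & truth_mask)
    fp = np.sum(auto_mask & ~truth_mask)
    return tp / (tp + fn), tn / (tn + fp)
```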
Results
The eight images in our testing set were segmented with the algorithm described above. The manual segmentations performed by a human expert were compared with the automated segmentations by computing the previously described metrics for each image.
Table 1: Quality of the CL segmentations, analyzed by computing the mean absolute distance (MAD), root mean squared distance (RMSD), and Hausdorff distance (HD) between the automatically and manually determined contours.
| Image ID | MAD (mm) | RMSD (mm) | HD (mm) | Specificity | Sensitivity |
|---|---|---|---|---|---|
| 1 | 0.61 | 0.79 | 2.3 | 0.986 | 0.938 |
| 2 | 1.5 | 1.9 | 5.7 | 0.996 | 0.654 |
| 3 | 0.60 | 0.74 | 1.9 | 0.989 | 0.916 |
| 4 | 0.77 | 1.0 | 3.0 | 0.979 | 0.914 |
| 5 | 0.68 | 0.87 | 2.1 | 0.980 | 0.884 |
| 6 | 1.4 | 1.8 | 7.3 | 1.00 | 0.457 |
| 7 | 0.77 | 1.0 | 2.6 | 0.998 | 0.834 |
| 8 | 0.65 | 0.82 | 2.2 | 0.990 | 0.915 |
| Mean ± σ | 0.87 ± 0.36 | 1.1 ± 0.47 | 3.4 ± 2.0 | 0.990 ± 0.00786 | 0.814 ± 0.171 |
The mean runtime for the segmentation of one image was 5 min 54 s (σ = 1 min 41 s). All test images were segmented using a 2.80 GHz Pentium 4 processor with 2 GB of RAM running Mandriva Linux (kernel 2.6.17-14mdv). All segmentations were executed using MATLAB (The Mathworks Inc., Natick, MA, USA) version 7.4.0.336 (R2007a).
Table 2: Effect of smoothing by truncation of Fourier descriptors on the mean quality metrics, as a function of the percentage of Fourier descriptors retained.
| Percentage Used (%) | MAD (mm) | RMSD (mm) | HD (mm) | Specificity | Sensitivity |
|---|---|---|---|---|---|
| 100 (no smoothing) | 0.87 ± 0.36 | 1.1 ± 0.47 | 3.4 ± 2.0 | 0.990 ± 0.00786 | 0.814 ± 0.171 |
| 10 | 0.87 ± 0.36 | 1.1 ± 0.47 | 3.4 ± 2.0 | 0.990 ± 0.00784 | 0.814 ± 0.171 |
| 7.5 | 0.86 ± 0.35 | 1.1 ± 0.46 | 3.4 ± 2.0 | 0.990 ± 0.00782 | 0.815 ± 0.171 |
| 5 | 0.86 ± 0.35 | 1.1 ± 0.46 | 3.4 ± 2.0 | 0.990 ± 0.00778 | 0.815 ± 0.172 |
| 2.5 | 0.83 ± 0.35 | 1.1 ± 0.47 | 3.3 ± 2.1 | 0.991 ± 0.00751 | 0.817 ± 0.175 |
| 1 | 0.77 ± 0.43 | 0.95 ± 0.54 | 3.1 ± 2.1 | 0.992 ± 0.00654 | 0.821 ± 0.184 |
Discussion
Table 1 and Figures 10 and 11 demonstrate quite clearly that the proposed algorithm typically under-segments the CL; specificity is generally excellent, while sensitivity varies from very good (0.938) to quite poor (0.457). The high specificity occurred because the contours were initialized inside the CL and grew outward. The sensitivity was generally lower because of the noisy nature of the images. The gradient information captured in k_I (Equation 11) did not perfectly match the true contour of the CL, which caused the curve evolution to stop too soon in many instances. There are no other CL segmentation algorithms with which this work could be compared. Given that the mean Hausdorff distance is only 3.4 mm (σ = 2.0 mm), and given the images in Figures 10 and 11, it is clear that the regions are being under-segmented, but that the degree of under-segmentation is fairly uniform about the true contour. Smoothing with Fourier descriptors resulted in a minor improvement, but it is concluded that the additional computation is not worth the degree of improvement. The advantages of the algorithm are that it can handle arbitrary contours and requires no user intervention beyond placement of the initial contour. Moreover, no serious boundary leaking occurred, resulting in very high specificity in the segmentations. Though similar to the work of Gooding et al. [32], who used a comparable gradient-based level set contour evolution method to segment ovarian follicles from 3D images, our work differs in that a substantial amount of preprocessing is needed to enhance the gradient formed by CL boundaries. Follicles are easier to distinguish from their environment with gradient-based level set methods without aggressive preprocessing because the gradient at follicle boundaries is usually quite strong.
There are several aspects of this algorithm that could be improved in future work. A combination of texture and intensity properties might be able to locate a small area of the CL with high confidence in which the initial contour could be automatically placed. Recent work on distinguishing CL echotexture from that of ovarian stroma has shown promise [33], and this work will be continued by examining wavelet-based texture features. A challenge in automatic contour placement will be to distinguish CL texture not only from that of stroma, but also from that of a corpus albicans, which is non-functional and comprised of denser tissue.
A more rigorous exploration of the algorithm's parameter space (which is of high dimension) could yield a set of parameters that produces superior results. The speed of the contour evolution is clearly an issue. A speedup by a factor of 2–3 is likely possible by reimplementing the methods herein in a programming language such as C or C++. Real-time speeds are not likely achievable using level-set-based contour evolution; thus, more computationally efficient segmentation algorithms will be investigated to determine whether they are suitable for CL segmentation.
The algorithm must be tested on CLs imaged in vivo. It is expected that the algorithm will give similar performance on such images. Since the algorithm was successful in locating boundaries between luteal tissue and stroma, we expect it to perform similarly at boundaries where the protruding CL is closely interfaced with organs and tissues surrounding the ovary. Moreover, our preprocessing causes significant blurring in order to "plug" holes in the CL boundary through which leaks into the background may occur, which limits the achievable accuracy of boundary localization. Images acquired in vivo may permit less aggressive preprocessing, since the area of the image surrounding the ovary will be textured rather than a near-zero-gradient region through which the contour can leak. In turn, less blurring would do less damage to the shape of the boundary, permitting higher accuracy in boundary localization.
The evidence that CL morphology is related to function is conflicting, particularly between different laboratories and species. A study by Tom et al. supported the hypothesis that quantitative changes in the luteal echotexture of bovine corpora lutea are indicative of changes in their physiologic status and capacity to elaborate progesterone [34]. In mares, luteal area was positively correlated with circulating progesterone levels; however, the presence of a cystic cavity within the CL did not affect the luteal gland's ability to produce progesterone [27, 35]. In humans, mean luteal area was shown to be positively correlated with serum progesterone concentrations (r = +0.88) and serum estradiol concentrations (r = +0.62) [36]. The human study is particularly encouraging with regard to the prospect of non-invasive automated interpretation of physiologic information from images, which could eventually obviate the need for blood tests. In this context, the current study and others of its kind will be crucial to automated, standardized analysis of CL images. The work presented herein is also useful for automated recognition of the CL in studies of CL morphology and has the potential to be used with different imaging modalities such as histology, ultrasonography, and magnetic resonance imaging [37, 38]. Indeed, any study which would benefit from automatic measurement of CL diameter, such as that of Maldonado-Castillo et al. [5], would be an application for automated CL segmentation algorithms.
Conclusion
This work is a first step towards a fully automated CL segmentation algorithm and is quite successful when the initial contour is manually placed. The hypothesis that our algorithm can locate CL boundaries to within 1–2 mm on average is accepted. It is concluded that level set contour evolution methods are a viable option for offline (non-real-time) CL segmentation and warrant further study.
The CL segmentation algorithm herein is a prototype for the currently missing piece of a comprehensive segmentation algorithm for all ovarian structures (follicles, CL, stroma). The work has both research and clinical importance as noninvasive gynecologic imaging becomes more firmly integrated into standard clinical practice. Standardized descriptions and image interpretations are critical in developing standardized diagnostic algorithms and clinical practice guidelines.
List of abbreviations
CL: corpus luteum
HD: Hausdorff distance
MAD: mean absolute distance
RMSD: root mean squared distance
Declarations
Acknowledgements
This research was supported by grants from the Natural Sciences and Engineering Research Council of Canada, the Saskatchewan Health Research Foundation, and the Canadian Institutes for Health Research.
References
[1] Adams GP, Pierson RA: Bovine model for study of ovarian follicular dynamics in humans. Theriogenology 1995, 43:113-120.
[2] Sethian JA: Level Set Methods and Fast Marching Methods – Evolving Interfaces in Geometry, Fluid Mechanics, Computer Vision and Materials Science. Cambridge University Press; 1996.
[3] Fan S, Voon LK, Ng WS: 3D Prostate Surface Detection from Ultrasound Images Based on Level Set Method. In Med Image Comput Comput Assist Interv (MICCAI). London, UK: Springer-Verlag; 2002:389-396.
[4] Baerwald AR, Adams GP, Pierson RA: A new model for ovarian follicular development during the human menstrual cycle. Fertil Steril 2003, 80:116-122.
[5] Maldonado-Castillo I, Eramian M, Pierson R, Singh J, Adams G: Classification of Reproductive Cycle Phase using Ultrasound-detected Features. In Proceedings of the 4th Canadian Conference on Computer and Robot Vision. IEEE; 2007:258-265.
[6] Singh J, Pierson RA, Adams GP: Ultrasound image attributes of the bovine corpus luteum: structural and functional correlates. J Reprod Fertil 1997, 109:35-44.
[7] Fan S, Voon LK, Ng WS, Wu RY: Prostate Boundary Detection From Ultrasonographic Images. J Ultrasound Med 2003, 22(6):605-623.
[8] Badiei S, Salcudean SE, Varah J, Morris WJ: Prostate Segmentation in 2D Ultrasound Images Using Image Warping and Ellipse Fitting. In Med Image Comput Comput Assist Interv (MICCAI). 2006:17-24.
[9] Pathak SD, Chalana V, Haynor DR, Kim Y: Edge-Guided Boundary Delineation in Prostate Ultrasound Images. IEEE Trans Med Imaging 2000, 19(12):1211-1219.
[10] Gong L, Pathak SD, Haynor DR, Cho PS, Kim Y: Parametric shape modeling using deformable superellipses for prostate segmentation. IEEE Trans Med Imaging 2004, 23(3):340-349.
[11] Pierson R, Ginther O: Reliability of Diagnostic Ultrasonography for Identification and Measurement of Follicles and Detecting the Corpus Luteum in Heifers. Theriogenology 1987, 28(6):929-936.
[12] Pierson R, Ginther O: Ultrasonic Imaging of the Ovaries and Uterus in Cattle. Theriogenology 1988, 29:21-37.
[13] Kass M, Witkin A, Terzopoulos D: Snakes: Active Contour Models. International Journal of Computer Vision 1988, 1(4):321-331.
[14] Pathak S, Aarnink RG, de la Rosette JJ, Chalana V, Wijkstra H, Haynor DR, Debruyne FM, Kim Y: Quantitative three-dimensional transrectal ultrasound (TRUS) for prostate imaging. Proc Soc Photo Opt Instrum Eng 1998, 3335:83-92.
[15] Ladak H, Mao F, Wang Y, Downey D, Steinman D, Fenster A: Prostate segmentation from 2D ultrasound images. In World Congress on Medical Physics and Biomedical Engineering. 2000, 27:1777-1788.
[16] Wang Y, Cardinal HN, Downey DB, Fenster A: Semiautomatic three-dimensional segmentation of the prostate using two-dimensional ultrasound images. Med Phys 2003, 30(5):887-897.
[17] Paragios N, Deriche R: Geodesic Active Regions and Level Set Methods for Supervised Texture Segmentation. International Journal of Computer Vision 2002, 46(3):223-247.
[18] Malladi R, Sethian JA, Vemuri BC: Shape Modeling with Front Propagation: A Level Set Approach. IEEE Trans Pattern Anal Mach Intell 1995, 17(2):158-175.
[19] Ranchin F, Dibos F: Variational level set methods: from continuous to discrete setting, applications in video segmentation and tracking. In IEEE International Conference on Image Processing. 2005, 1:273-276.
[20] Santis AD, Iacoviello D: Discrete level set approach to image segmentation. Tech. rep., Computer Science, University of Rome; 2005.
[21] Lin P, Zheng C, Yang Y, Gu J: Medical Image Segmentation by Level Set Method Incorporating Region and Boundary Statistical Information. In Progress in Pattern Recognition, Image Analysis and Applications. 2004, 3287:654-660.
[22] McInerney T, Terzopoulos D: Deformable models in medical image analysis: a survey. Med Image Anal 1996, 1(2):91-108.
[23] Gong L, Ng L, Pathak SD, Tutar I, Cho PS, Haynor DR, Kim Y: Prostate ultrasound image segmentation using level set-based region flow with shape guidance. In Proc Soc Photo Opt Instrum Eng. Edited by Fitzpatrick MJ, Reinhardt JM. 2005, 5747:1648-1657.
[24] Czerwinski RN, Jones DL, O'Brien WD: Line and Boundary Detection in Speckle Images. IEEE Trans Image Process 1998, 7(12):1700-1714.
[25] Czerwinski RN, Jones DL, O'Brien WD: Detection of Lines and Boundaries in Speckle Images – Application to Medical Ultrasound. IEEE Trans Med Imaging 1999, 18(2):126-136.
 Gonzalez RC, Woods RE: Digital Image Processing. 2008, PrenticeHall, Inc, 3Google Scholar
 Kastelic JP, Pierson RA, Ginther OJ: Ultrasonic Morphology of Corpora Lutea and Central Luteal Cavities During the Estrous Cycle and Early Pregnancy in Heifers. Theriogenology. 1990, 34 (3): 487498.View ArticlePubMedGoogle Scholar
 Sumengen B: A Matlab toolbox implementing Level Set Methods. 2005, [http://barissumengen.com/level_set_methods/index.html]Google Scholar
 Chaudhury KN, Ramakrishnan KR: Stability and convergence of the level set method in computer vision. Pattern Recognition Letters. 2007, 28 (7): 884893.View ArticleGoogle Scholar
 Chalana V, Kim Y: A methodology for evaluation of boundary detection algorithms on medical images. IEEE Trans Med Imaging. 1997, 16: 642652.View ArticlePubMedGoogle Scholar
 Bowyer KW: Validation of Medial Image Analysis Techniques. Handbook of Medical Imaging: Medical Image Processing and Analysis. Edited by: Fitzpatrick JM, Sonka M. 2000, Bellingham, WA: SPIE Press, 2: 567608.View ArticleGoogle Scholar
 Gooding MJ, Kennedy S, Noble JA: Volume segmentation and reconstruction from freehand 3D ultrasound data with application to ovarian follicle measurement. Ultrasound Med Biol. 2008, 34 (2): 183195.View ArticlePubMedGoogle Scholar
 Eramian MG, Adams GP, Pierson RA: Enhancing Ultrasound Texture Differences for Developing an in vivo "Virutal Histology" Approach to Bovine Ovarian Imaging. Reprod Fert Develop. 2007, 19 (8): 910924. [http://www.publish.csiro.au/nid/44/paper/RD06167.htm]View ArticleGoogle Scholar
 Tom JW, Pierson RA, Adams GP: Quantitative echotexture analysis of bovine corpora lutea. Theriogenology. 1998, 49: 13451352.View ArticlePubMedGoogle Scholar
 Townson DH, Pierson RA, Ginther OJ: Characterization of Plasma Progesterone Concentrations for Two Distinct Luteal Morphologies in Mares. Theriogenology. 1989, 32: 197204.View ArticlePubMedGoogle Scholar
 Baerwald AR, Adams GP, Pierson RA: Form and function of the corpus luteum during the human menstrual cycle. Ultrasound Obst Gyn. 2005, 25 (5): 498507.View ArticleGoogle Scholar
 Sarty GE, Adams GP, Pierson RA: Threedimensional magnetic resonance imaging for the study of ovarian function in a bovine in vitro model. J Reprod Fertil. 2000, 119: 6975.View ArticlePubMedGoogle Scholar
 Hilton JL, Baerwald AR, Sarty GE, Adams GP, Pierson RA: Magnetic Resonance Image Attributes of the Bovine Corpus Luteum During Development and Regression. Anat Rec Part A. 2003, 272A (1): 454459.View ArticleGoogle Scholar
Copyright
This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.