Cryopreservation of in vitro grown nodal segments of Rauvolfia serpentina by PVS2 vitrification.
Ray, Avik; Bhattacharya, Sabita
2008-01-01
This paper describes the cryopreservation by PVS2 vitrification of Rauvolfia serpentina (L.) Benth. ex Kurz, an important tropical medicinal plant. The effects of explant type and size, sucrose preculture (duration and concentration) and vitrification treatment were tested. Preliminary experiments with PVS1, PVS2 and PVS3 produced shoot growth only for PVS2. When optimizing the PVS2 vitrification of nodal segments, segments 0.31-0.39 cm in size performed better than other nodal sizes or shoot apices. Sucrose preculture had a positive role in survival and subsequent regrowth of the cryopreserved explants. Seven days on 0.5 M sucrose solution significantly improved the viability of nodal segments. PVS2 incubation for 45 minutes combined with a 7-day preculture gave the optimum result of 66%. Plantlets derived after cryopreservation resumed growth and regenerated normally.
Multiscale CNNs for Brain Tumor Segmentation and Diagnosis.
Zhao, Liya; Jia, Kebin
2016-01-01
Early brain tumor detection and diagnosis are critical in the clinic. Segmentation of the tumor area therefore needs to be accurate, efficient, and robust. In this paper, we propose an automatic brain tumor segmentation method based on Convolutional Neural Networks (CNNs). Traditional CNNs focus only on local features and ignore global region features, although both are important for pixel classification and recognition. Moreover, brain tumors can appear anywhere in the brain and vary in size and shape across patients. We design a three-stream framework, named multiscale CNNs, which automatically detects the optimum top three scales of image size and combines information from regions of different scales around each pixel. Datasets provided by the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS), organized by MICCAI 2013, are used for both training and testing. The designed multiscale CNNs framework also combines multimodal features from T1, T1-enhanced, T2, and FLAIR MRI images. Compared with traditional CNNs and the two best methods in BRATS 2012 and 2013, our framework shows advances in brain tumor segmentation accuracy and robustness.
Knowledge-based segmentation and feature analysis of hand and wrist radiographs
NASA Astrophysics Data System (ADS)
Efford, Nicholas D.
1993-07-01
The segmentation of hand and wrist radiographs for applications such as skeletal maturity assessment is best achieved by model-driven approaches incorporating anatomical knowledge. The reasons for this are discussed, and a particular frame-based or 'blackboard' strategy for the simultaneous segmentation of the hand and estimation of bone age via the TW2 method is described. The new approach is structured for optimum robustness and computational efficiency: features of interest are detected and analyzed in order of their size and prominence in the image, the largest and most distinctive being dealt with first, and the evidence generated by feature analysis is used to update a model of hand anatomy and hence to guide later stages of the segmentation. Closed bone boundaries are formed by a hybrid technique combining knowledge-based, one-dimensional edge detection with model-assisted heuristic tree searching.
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-12-01
A method is proposed and verified for selecting the optimum segmentation of a TEM reconstruction among the results of several segmentation algorithms, the selection criterion being the accuracy of the segmentation. For this selection, a parameter for comparing the accuracies of the different segmentations has been defined: the mutual information between the acquired TEM images of the sample and the Radon projections of the segmented volumes. In this work, this new mutual information parameter is shown to be correlated with the Jaccard coefficient between the segmented volume and the ideal one. In addition, the results of the new parameter are compared with those obtained from another validated method for selecting the optimum segmentation.
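The two quantities this selection rests on, mutual information between image pairs and the Jaccard coefficient between volumes, can be sketched from their general definitions. This is a minimal NumPy illustration, not the authors' implementation; the histogram binning is an assumption:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=64):
    """Mutual information between two equally sized images,
    estimated from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img_a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img_b
    nz = pxy > 0                          # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def jaccard(seg_a, seg_b):
    """Jaccard coefficient between two binary segmentations."""
    a, b = seg_a.astype(bool), seg_b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0
```

An accurate segmentation should score high on both: its projections share more information with the acquired images, and it overlaps the ideal volume more.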
Disc piezoelectric ceramic transformers.
Erhart, Jiří; Půlpán, Petr; Doleček, Roman; Psota, Pavel; Lédl, Vít
2013-08-01
In this contribution, we present our study on disc-shaped and homogeneously poled piezoelectric ceramic transformers working in planar-extensional vibration modes. Transformers are designed with electrodes divided into wedge, axisymmetrical ring-dot, moonie, smile, or yin-yang segments. Transformation ratio, efficiency, and input and output impedances were measured for low-power signals. Transformer efficiency and transformation ratio were measured as a function of frequency and impedance load in the secondary circuit. Optimum impedance for the maximum efficiency has been found. Maximum efficiency and no-load transformation ratio can reach almost 100% and 52 for the fundamental resonance of ring-dot transformers and 98% and 67 for the second resonance of 2-segment wedge transformers. Maximum efficiency was reached at optimum impedance, which is in the range from 500 Ω to 10 kΩ, depending on the electrode pattern and size. Fundamental vibration mode and its overtones were further studied using frequency-modulated digital holographic interferometry and by the finite element method. Complementary information has been obtained by the infrared camera visualization of surface temperature profiles at higher driving power.
Anderson, I.E.; Figliola, R.S.; Molnar, H.M.
1993-07-20
High pressure atomizing nozzle includes a high pressure gas manifold having a divergent expansion chamber between a gas inlet and arcuate manifold segment to minimize standing shock wave patterns in the manifold and thereby improve filling of the manifold with high pressure gas for improved melt atomization. The atomizing nozzle is especially useful in atomizing rare earth-transition metal alloys to form fine powder particles wherein a majority of the powder particles exhibit particle sizes having near-optimum magnetic properties.
Anderson, Iver E.; Figliola, Richard S.; Molnar, Holly M.
1992-06-30
High pressure atomizing nozzle includes a high pressure gas manifold having a divergent expansion chamber between a gas inlet and arcuate manifold segment to minimize standing shock wave patterns in the manifold and thereby improve filling of the manifold with high pressure gas for improved melt atomization. The atomizing nozzle is especially useful in atomizing rare earth-transition metal alloys to form fine powder particles wherein a majority of the powder particles exhibit particle sizes having near-optimum magnetic properties.
Bar piezoelectric ceramic transformers.
Erhart, Jiří; Půlpán, Petr; Rusin, Luboš
2013-07-01
Bar-shaped piezoelectric ceramic transformers (PTs) working in the longitudinal vibration mode (k31 mode) were studied. Two types of transformer were designed: one with the electrode divided into two segments of different length, and one with the electrodes divided into three symmetrical segments. Parameters of the studied transformers, such as efficiency, transformation ratio, and input and output impedances, were measured. An analytical model was developed for PT parameter calculation for both two- and three-segment PTs. Neither type of bar PT exhibited very high efficiency (a maximum of 72% for the three-segment PT design) at a relatively high transformation ratio (4 for the two-segment PT and 2 for the three-segment PT at the fundamental resonance mode). The optimum resistive loads were 20 and 10 kΩ for the two- and three-segment PT designs at the fundamental resonance, respectively, and about one order of magnitude smaller for the higher overtone (i.e., 2 kΩ and 500 Ω, respectively). The no-load transformation ratio was less than 27 (the maximum, for the two-segment electrode PT design). The optimum input electrode aspect ratios (0.48 for the three-segment PT and 0.63 for the two-segment PT) were calculated numerically under no-load conditions.
NASA Technical Reports Server (NTRS)
Sawdy, D. T.; Beckemeyer, R. J.; Patterson, J. D.
1976-01-01
Results are presented from detailed analytical studies made to define methods for obtaining improved multisegment lining performance by taking advantage of the relative placement of each lining segment. Properly phased liner segments reflect and spatially redistribute the incident acoustic energy and thus provide additional attenuation. A mathematical model was developed for rectangular ducts with uniform mean flow. Segmented acoustic fields were represented by duct eigenfunction expansions, and mode-matching was used to ensure continuity of the total field. Parametric studies were performed to identify attenuation mechanisms and define preliminary liner configurations. An optimization procedure was used to determine optimum liner impedance values for a given total lining length, Mach number, and incident modal distribution. Optimal segmented liners are presented, and it is shown that, provided the sound source is well-defined and the flow environment is known, conventional infinite-duct optimum attenuation rates can be improved. To confirm these results, an experimental program was conducted in a laboratory test facility. The measured data are presented in the form of analytical-experimental correlations. Excellent agreement between theory and experiment verifies and substantiates the analytical prediction techniques. The results indicate that phased liners may be of immediate benefit in the development of improved aircraft exhaust duct noise suppressors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hanthorn, H.E.; Jaech, J.L.
Results are given of a study to determine the optimum testing scheme, consisting of drawing a group of optimum size from the population being tested and retesting it, if required, in subgroups of optimum size. An exact computation of optimum grouping and subgrouping was made. Results are also given to indicate how much loss in efficiency occurs when physical limitations restrict the size of the original group. (J.R.D.)
Wavefront control of large optical systems
NASA Technical Reports Server (NTRS)
Meinel, Aden B.; Meinel, Marjorie P.; Breckinridge, J. B.
1990-01-01
Several levels of wavefront control are necessary for the optimum performance of very large telescopes, especially segmented ones like the Large Deployable Reflector. In general, the major contributors to wavefront error are the segments of the large primary mirror. Wavefront control at the largest optical surface may not be the optimum choice because of the mass and inaccessibility of the elements of this surface that require upgrading. The concept of two-stage optics was developed to permit a poor wavefront from the large optics to be upgraded by means of a wavefront corrector at a small exit pupil of the system.
Follicle Detection on the USG Images to Support Determination of Polycystic Ovary Syndrome
NASA Astrophysics Data System (ADS)
Adiwijaya; Purnama, B.; Hasyim, A.; Septiani, M. D.; Wisesty, U. N.; Astuti, W.
2015-06-01
Polycystic Ovary Syndrome (PCOS) is the most common endocrine disorder affecting women in their reproductive years, and it has gained attention from married couples affected by infertility. One of the diagnostic criteria considered by doctors is manual analysis of ovarian USG images to detect the number and size of the ovary's follicles. This manual analysis suffers from low reliability, reproducibility, and efficiency. To overcome these problems, an automatic scheme is proposed to detect follicles in USG images in support of PCOS diagnosis. The first step determines initial homogeneous regions, which are then segmented into actual follicle shapes. The next step selects the regions that match follicle criteria and measures the attributes of each segmented region as a follicle. The measurements yield the number and size of follicles, which are used to categorize the image as PCOS or non-PCOS. The segmentation method used is region growing, in both region-based and seed-based variants. To measure follicle diameter, two methods are compared: stereology and Euclidean distance. The optimum system for PCOS detection uses region growing with Euclidean distance for follicle quantification.
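Seed-based region growing and a Euclidean-distance diameter measurement can be sketched roughly as follows. This is a hypothetical NumPy illustration; the acceptance rule, tolerance, and diameter definition are assumptions, not taken from the paper:

```python
import numpy as np
from collections import deque

def region_grow(image, seed, tol=10.0):
    """Grow a region from `seed` (row, col), accepting 4-connected
    neighbours whose intensity is within `tol` of the seed value."""
    h, w = image.shape
    seed_val = float(image[seed])
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < h and 0 <= cc < w and not mask[rr, cc]
                    and abs(float(image[rr, cc]) - seed_val) <= tol):
                mask[rr, cc] = True
                queue.append((rr, cc))
    return mask

def follicle_diameter(mask, pixel_size=1.0):
    """Approximate diameter as the largest Euclidean distance
    between any two pixels of the region (brute force, which is
    acceptable for small follicle-sized regions)."""
    pts = np.argwhere(mask)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    return pixel_size * float(np.sqrt(d2.max()))
```

The follicle count and the per-region diameters from such a pipeline are then the inputs to the PCOS / non-PCOS decision.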
A Method for Optimizing Non-Axisymmetric Liners for Multimodal Sound Sources
NASA Technical Reports Server (NTRS)
Watson, W. R.; Jones, M. G.; Parrott, T. L.; Sobieski, J.
2002-01-01
Central processor unit times and memory requirements for a commonly used solver are compared to those of a state-of-the-art, parallel, sparse solver. The sparse solver is then used in conjunction with three constrained optimization methodologies to assess the relative merits of non-axisymmetric versus axisymmetric liner concepts for improving liner acoustic suppression. This assessment is performed with a multimodal noise source (with equal mode amplitudes and phases) in a finite-length rectangular duct without flow. The sparse solver is found to reduce memory requirements by a factor of five and central processing time by a factor of eleven when compared with the commonly used solver. Results show that the optimum impedance of the uniform liner is dominated by the least attenuated mode, whose attenuation is maximized by the Cremer optimum impedance. An optimized, four-segmented liner with impedance segments in a checkerboard arrangement is found to be inferior to an optimized spanwise segmented liner. This optimized spanwise segmented liner is shown to attenuate substantially more sound than the optimized uniform liner and tends to be more effective at the higher frequencies. The most important result of this study is the discovery that, when optimized, a spanwise segmented liner with two segments gives attenuations equal to or substantially greater than an optimized axially segmented liner with the same number of segments.
Experimental investigation of optimum beam size for FSO uplink
NASA Astrophysics Data System (ADS)
Kaushal, Hemani; Kaddoum, Georges; Jain, Virander Kumar; Kar, Subrat
2017-10-01
In this paper, the effect of transmitter beam size on the performance of free-space optical (FSO) communication has been determined experimentally. Irradiance profiles for varying turbulence strength are obtained using an optical turbulence generating (OTG) chamber in a laboratory environment. Based on the results, an optimum beam size is investigated using a semi-analytical method. Moreover, the combined effects of atmospheric scintillation and beam-wander-induced pointing errors are considered in order to determine the optimum beam size that minimizes the bit error rate (BER) of the system for a fixed transmitter power and link length. The results show that the optimum beam size for the FSO uplink depends on the Fried parameter and the outer scale of the turbulence. Further, it is observed that the optimum beam size increases with zenith angle but is negligibly affected by an increase in the fade threshold level at low turbulence and only marginally affected at high turbulence. These results are useful for FSO system design and BER performance analysis.
NASA Astrophysics Data System (ADS)
Sumi, Ayako; Olsen, Lars Folke; Ohtomo, Norio; Tanaka, Yukio; Sawamura, Sadashi
2003-02-01
We have carried out spectral analysis of measles notifications in several communities in Denmark, the UK and the USA. The results confirm that each power spectral density (PSD) shows exponential characteristics, which are universally observed in PSDs for time series generated by nonlinear dynamical systems. The exponential gradient increases with the population size. For almost all communities, many spectral lines observed in each PSD can be fully assigned to linear combinations of several fundamental periods, suggesting that the measles data are substantially noise-free. The optimum least-squares fitting curve calculated using these fundamental periods essentially reproduces an underlying variation of the measles data, and an extension of the curve can be used to predict measles epidemics. For the communities with large population sizes, some PSD patterns obtained from segment time series analysis show a close resemblance to the PSD patterns at the initial stages of a period-doubling bifurcation process for the so-called susceptible/exposed/infectious/recovered (SEIR) model with seasonal forcing. The meaning of the relationship between the exponential gradient and the population size is discussed.
Method of making segmented pyrolytic graphite sputtering targets
McKernan, Mark A.; Alford, Craig S.; Makowiecki, Daniel M.; Chen, Chih-Wen
1994-01-01
Anisotropic pyrolytic graphite wafers are oriented and bonded together such that the graphite's high thermal conductivity planes are maximized along the back surface of the segmented pyrolytic graphite target to allow for optimum heat conduction away from the sputter target's sputtering surface and to allow for maximum energy transmission from the target's sputtering surface.
Method of making segmented pyrolytic graphite sputtering targets
McKernan, M.A.; Alford, C.S.; Makowiecki, D.M.; Chen, C.W.
1994-02-08
Anisotropic pyrolytic graphite wafers are oriented and bonded together such that the graphite's high thermal conductivity planes are maximized along the back surface of the segmented pyrolytic graphite target to allow for optimum heat conduction away from the sputter target's sputtering surface and to allow for maximum energy transmission from the target's sputtering surface. 2 figures.
Ernst, Dominique; Köhler, Jürgen
2013-01-21
We provide experimental results on the accuracy of diffusion coefficients obtained by a mean squared displacement (MSD) analysis of single-particle trajectories. We have recorded very long trajectories comprising more than 1.5 × 10(5) data points and decomposed these long trajectories into shorter segments providing us with ensembles of trajectories of variable lengths. This enabled a statistical analysis of the resulting MSD curves as a function of the lengths of the segments. We find that the relative error of the diffusion coefficient can be minimized by taking an optimum number of points into account for fitting the MSD curves, and that this optimum does not depend on the segment length. Yet, the magnitude of the relative error for the diffusion coefficient does, and achieving an accuracy in the order of 10% requires the recording of trajectories with about 1000 data points. Finally, we compare our results with theoretical predictions and find very good qualitative and quantitative agreement between experiment and theory.
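The MSD-and-fit procedure described above can be sketched as follows, assuming 2-D free diffusion (MSD = 4Dt). The function names and the choice of fit window are illustrative, not the authors' code:

```python
import numpy as np

def msd(traj, max_lag=None):
    """Time-averaged mean squared displacement of a trajectory
    of shape (N, 2), evaluated for lags 1..max_lag."""
    n = len(traj)
    max_lag = max_lag or n - 1
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        disp = traj[lag:] - traj[:-lag]
        out[lag - 1] = (disp ** 2).sum(axis=1).mean()
    return out

def fit_diffusion(msd_curve, dt, n_fit=4):
    """Estimate D from a linear fit of MSD = 4 D t over the
    first n_fit lag times (2-D free diffusion assumed)."""
    lags = dt * np.arange(1, n_fit + 1)
    slope = np.polyfit(lags, msd_curve[:n_fit], 1)[0]
    return slope / 4.0
```

Consistent with the paper's finding, restricting the fit to the first few MSD points (where the curve is least noisy) typically minimizes the relative error of the estimated D.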
DOE Office of Scientific and Technical Information (OSTI.GOV)
Barbiere, J; Beninati, G; Ndlovu, A
2015-06-15
Purpose: It has been argued that a 3D-conformal technique (3DCRT) is suitable for SBRT due to its simplicity for non-coplanar planning and delivery. It has also been hypothesized that a high dose delivered in a short time can enhance indirect cell death due to vascular damage as well as limiting intrafraction motion. Flattening Filter Free (FFF) photon beams are ideal for high-dose-rate treatment, but their conical profiles are not ideal for 3DCRT. The purpose of our work is to present a method to efficiently segment an FFF beam for standard 3DCRT planning. Methods: A 10×10 cm Varian TrueBeam 6X FFF beam profile was analyzed using segmentation theory to determine the optimum segmentation intensity required to create an 8 cm uniform dose profile. Two segments were automatically created in sequence with a Varian Eclipse treatment planning system by converting the isodoses corresponding to the calculated segmentation intensity to contours and applying the "fit and shield" tool. All segments were then added to the FFF beam to create a single merged field. Field blocking can be incorporated but was not used, for clarity. Results: Calculation of the segmentation intensity using an algorithm originally proposed by Xia and Verhey indicated that each segment should extend to the 92% isodose. The original FFF beam, normalized to 100% at the isocenter at a depth of 10 cm, fell to 80% at 4 cm from the isocenter; the segmented beam had ±2.5% uniformity up to 4.4 cm from the isocenter. An additional benefit of our method is a 50% decrease in the 80%-20% penumbra: 0.6 cm compared to 1.2 cm in the original FFF beam. Conclusion: Creation of two optimum segments can flatten an FFF beam and also reduce its penumbra for clinical 3DCRT SBRT treatment.
Ibrahim, Mohd Rasdan; Katman, Herda Yati; Karim, Mohamed Rehan; Koting, Suhana; Mashaan, Nuha S
2014-01-01
The main objective of this paper is to investigate the relations of rubber size, rubber content, and binder content in determination of optimum binder content for open graded friction course (OGFC). Mix gradation type B as specified in Specification for Porous Asphalt produced by the Road Engineering Association of Malaysia (REAM) was used in this study. Marshall specimens were prepared with four different sizes of rubber, namely, 20 mesh size [0.841 mm], 40 mesh [0.42 mm], 80 mesh [0.177 mm], and 100 mesh [0.149 mm] with different concentrations of rubberised bitumen (4%, 8%, and 12%) and different percentages of binder content (4%-7%). The appropriate optimum binder content is then selected according to the results of the air voids, binder draindown, and abrasion loss test. Test results found that crumb rubber particle size can affect the optimum binder content for OGFC.
Study of process parameter on mist lubrication of Titanium (Grade 5) alloy
NASA Astrophysics Data System (ADS)
Maity, Kalipada; Pradhan, Swastik
2017-02-01
This paper deals with the machinability of Ti-6Al-4V alloy under mist cooling lubrication using carbide inserts. The influence of the process parameters on the cutting forces, evolution of tool wear, surface finish of the workpiece, material removal rate and chip reduction coefficient has been investigated. Weighted principal component analysis coupled with grey relational analysis optimization is applied to identify the optimum setting of the process parameters. The optimal condition was a cutting speed of 160 m/min, a feed of 0.16 mm/rev and a depth of cut of 1.6 mm. The effects of cutting speed and depth of cut on the type of chip formation were observed; most of the chips formed were of the long tubular and long helical types. Images of the segmented chips were analyzed to study the shape and size of the saw-tooth profile of the serrated chips. It was found that on increasing the cutting speed from 95 m/min to 160 m/min, the free-surface lamellae of the chips increased and the saw-tooth segments became more clearly visible.
OPTIMAL AIRCRAFT TRAJECTORIES FOR SPECIFIED RANGE
NASA Technical Reports Server (NTRS)
Lee, H.
1994-01-01
For an aircraft operating over a fixed range, the operating costs are basically a sum of fuel cost and time cost. While minimum fuel and minimum time trajectories are relatively easy to calculate, the determination of a minimum cost trajectory can be a complex undertaking. This computer program was developed to optimize trajectories with respect to a cost function based on a weighted sum of fuel cost and time cost. As a research tool, the program could be used to study various characteristics of optimum trajectories and their comparison to standard trajectories. It might also be used to generate a model for the development of an airborne trajectory optimization system. The program could be incorporated into an airline flight planning system, with optimum flight plans determined at takeoff time for the prevailing flight conditions. The use of trajectory optimization could significantly reduce the cost for a given aircraft mission. The algorithm incorporated in the program assumes that a trajectory consists of climb, cruise, and descent segments. The optimization of each segment is not done independently, as in classical procedures, but is performed in a manner which accounts for interaction between the segments. This is accomplished by the application of optimal control theory. The climb and descent profiles are generated by integrating a set of kinematic and dynamic equations, where the total energy of the aircraft is the independent variable. At each energy level of the climb and descent profiles, the air speed and power setting necessary for an optimal trajectory are determined. The variational Hamiltonian of the problem consists of the rate of change of cost with respect to total energy and a term dependent on the adjoint variable, which is identical to the optimum cruise cost at a specified altitude. This variable uniquely specifies the optimal cruise energy, cruise altitude, cruise Mach number, and, indirectly, the climb and descent profiles. 
If the optimum cruise cost is specified, an optimum trajectory can easily be generated; however, the range obtained for a particular optimum cruise cost is not known a priori. For short-range flights, the program iteratively varies the optimum cruise cost until the computed range converges to the specified range. For long-range flights, iteration is unnecessary since the specified range can be divided into a cruise segment distance and full climb and descent distances. The user must supply the program with engine fuel flow rate coefficients and an aircraft aerodynamic model. The program currently includes coefficients for the Pratt & Whitney JT8D-7 engine and an aerodynamic model for the Boeing 727. Input to the program consists of the flight range to be covered and the prevailing flight conditions including pressure, temperature, and wind profiles. Information output by the program includes: optimum cruise tables at selected weights, optimal cruise quantities as a function of cruise weight and cruise distance, climb and descent profiles, and a summary of the complete synthesized optimal trajectory. This program is written in FORTRAN IV for batch execution and has been implemented on a CDC 6000 series computer with a central memory requirement of approximately 100K (octal) of 60 bit words. This aircraft trajectory optimization program was developed in 1979.
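For short-range flights, the iteration on cruise cost amounts to a one-dimensional root search. A minimal sketch, assuming the computed range decreases monotonically with cruise cost on the search interval; the range model in the test is a stand-in, not the program's trajectory integration:

```python
def solve_cruise_cost(range_fn, target_range, lo, hi, tol=1e-8):
    """Bisection on the optimum cruise cost: find the cost whose
    computed range matches target_range. Assumes range_fn is
    monotonically decreasing in cost on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if range_fn(mid) > target_range:
            lo = mid  # computed range too long: raise the cruise cost
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Each evaluation of `range_fn` would correspond to synthesizing one full climb-cruise-descent trajectory for the trial cruise cost.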
Cao, Yinping; Jia, Fuguo; Han, Yanlong; Liu, Yang; Zhang, Qiang
2015-10-01
The aim of this study was to find the optimal moisture adding rate for brown rice during germination. The process of water addition in brown rice could be divided into three stages according to the different water absorption speeds in the soaking process. Water was added at three different speeds in the three stages to obtain the optimal water adding rate over the whole germination process. Thus, the technology of segmented moisture conditioning, a method of adding water gradually, was put forward. Germinated brown rice was produced using the segmented moisture conditioning method to reduce the loss of water-soluble nutrients and to benefit the accumulation of gamma-aminobutyric acid. The effects of the moisture adding amount in each of the three stages on the gamma-aminobutyric acid content of germinated brown rice and the germination rate of brown rice were investigated using response surface methodology. The optimum process parameters were obtained as follows: moisture adding amount of stage I, 1.06 %/h; stage II, 1.42 %/h; and stage III, 1.31 %/h. The germination rate under the optimum parameters was 91.33 %, which was 7.45 % higher than that of germinated brown rice produced by the soaking method (84.97 %). The content of gamma-aminobutyric acid in germinated brown rice under the optimum parameters was 29.03 mg/100 g, more than twice that of germinated brown rice produced by the soaking method (12.81 mg/100 g). The technology of segmented moisture conditioning has potential applications in the study of many other cereals.
MRI brain tumor segmentation based on improved fuzzy c-means method
NASA Astrophysics Data System (ADS)
Deng, Wankai; Xiao, Wei; Pan, Chao; Liu, Jianguo
2009-10-01
This paper focuses on image segmentation, one of the key problems in medical image processing. A new medical image segmentation method is proposed based on the fuzzy c-means algorithm and spatial information. First, we classify the image into the region of interest and the background using the fuzzy c-means algorithm. Then we use information about the tissue gradients and the intensity inhomogeneities of the regions to improve the quality of the segmentation. The sum of the mean variance within the region and the reciprocal of the mean gradient along the edge of the region is chosen as the objective function; the minimum of this sum gives the optimum result. The results show that the clustering segmentation algorithm is effective.
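The objective function described above, the mean intensity variance inside a region plus the reciprocal of the mean gradient magnitude along its edge, can be sketched for a binary region mask as follows. The particular edge and gradient definitions are illustrative assumptions:

```python
import numpy as np

def segmentation_objective(image, mask):
    """Region-homogeneity + edge-sharpness objective (lower is
    better): variance of intensities inside the region plus the
    reciprocal of the mean gradient magnitude on the region edge."""
    var_term = image[mask].var()
    # edge pixels: in the mask but with at least one 4-neighbour outside
    interior = (np.roll(mask, 1, 0) & np.roll(mask, -1, 0)
                & np.roll(mask, 1, 1) & np.roll(mask, -1, 1))
    edge = mask & ~interior
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    return var_term + 1.0 / grad[edge].mean()
```

A segmentation whose region is homogeneous and whose boundary falls on a strong intensity edge scores low, which matches the paper's minimization criterion.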
A Real Options Approach to Quantity and Cost Optimization for Lifetime and Bridge Buys of Parts
2015-04-30
fixed EOS of 40 years and a fixed WACC of 3%, decreases to a minimum and then increases. The minimum of this curve gives the optimum buy size for...considered in both analyses. For a 3% WACC , as illustrated in Figure 9(a), the DES method gives an optimum buy size range of 2,923–3,191 with an average...Hence, both methods are consistent in determining the optimum lifetime/bridge buy size. To further verify this consistency, other WACC values
NASA Technical Reports Server (NTRS)
Smith, J. M.; Nichols, L. D.
1977-01-01
The values of percent seed, oxygen-to-fuel ratio, combustion pressure, Mach number, and magnetic field strength that maximize either the electrical conductivity or the power density at the entrance of an MHD power generator were obtained. The working fluid is the combustion product of H2 and O2 seeded with CsOH. The ideal theoretical segmented Faraday generator, along with an empirical form found by correlating the data of many experimenters working with generators of different sizes, electrode configurations, and working fluids, is investigated. The conductivity and power densities optimize at a seed fraction of 3.5 mole percent and an oxygen-to-hydrogen weight ratio of 7.5. The optimum values of combustion pressure and Mach number depend on the operating magnetic field strength.
An Efficient Pipeline for Abdomen Segmentation in CT Images.
Koyuncu, Hasan; Ceylan, Rahime; Sivri, Mesut; Erdogan, Hasan
2018-04-01
Computed tomography (CT) scans usually include some disadvantages due to the nature of the imaging procedure, and these handicaps prevent accurate abdomen segmentation. Discontinuous abdomen edges, bed section of CT, patient information, closeness between the edges of the abdomen and CT, poor contrast, and a narrow histogram can be regarded as the most important handicaps that occur in abdominal CT scans. Currently, one or more handicaps can arise and prevent technicians obtaining abdomen images through simple segmentation techniques. In other words, CT scans can include the bed section of CT, a patient's diagnostic information, low-quality abdomen edges, low-level contrast, and narrow histogram, all in one scan. These phenomena constitute a challenge, and an efficient pipeline that is unaffected by handicaps is required. In addition, analysis such as segmentation, feature selection, and classification has meaning for a real-time diagnosis system in cases where the abdomen section is directly used with a specific size. A statistical pipeline is designed in this study that is unaffected by the handicaps mentioned above. Intensity-based approaches, morphological processes, and histogram-based procedures are utilized to design an efficient structure. Performance evaluation is realized in experiments on 58 CT images (16 training, 16 test, and 26 validation) that include the abdomen and one or more disadvantage(s). The first part of the data (16 training images) is used to detect the pipeline's optimum parameters, while the second and third parts are utilized to evaluate and to confirm the segmentation performance. The segmentation results are presented as the means of six performance metrics. 
Thus, the proposed method achieves remarkable average rates for training/test/validation of 98.95/99.36/99.57% (jaccard), 99.47/99.67/99.79% (dice), 100/99.91/99.91% (sensitivity), 98.47/99.23/99.85% (specificity), 99.38/99.63/99.87% (classification accuracy), and 98.98/99.45/99.66% (precision). In summary, a statistical pipeline performing the task of abdomen segmentation is achieved that is not affected by the disadvantages, and the most detailed abdomen segmentation study is performed for the use before organ and tumor segmentation, feature extraction, and classification.
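The six reported metrics are standard functions of the pixel-level confusion matrix. As an illustrative sketch (not the authors' code), they can be computed from flat binary masks:

```python
def segmentation_metrics(pred, truth):
    """Compute six overlap metrics from flat binary masks (sequences of 0/1)."""
    tp = sum(p and t for p, t in zip(pred, truth))          # true positives
    fp = sum(p and not t for p, t in zip(pred, truth))      # false positives
    fn = sum(not p and t for p, t in zip(pred, truth))      # false negatives
    tn = sum(not p and not t for p, t in zip(pred, truth))  # true negatives
    return {
        "jaccard": tp / (tp + fp + fn),
        "dice": 2 * tp / (2 * tp + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "precision": tp / (tp + fp),
    }
```

Dice and Jaccard are monotonically related (dice = 2j/(1+j)), which is why the reported Dice rates always exceed the Jaccard rates.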
Optimal wavefront control for adaptive segmented mirrors
NASA Technical Reports Server (NTRS)
Downie, John D.; Goodman, Joseph W.
1989-01-01
A ground-based astronomical telescope with a segmented primary mirror will suffer image-degrading wavefront aberrations from at least two sources: (1) atmospheric turbulence and (2) segment misalignment or figure errors of the mirror itself. This paper describes the derivation of a mirror control feedback matrix that assumes the presence of both types of aberration and is optimum in the sense that it minimizes the mean-squared residual wavefront error. Assumptions of the statistical nature of the wavefront measurement errors, atmospheric phase aberrations, and segment misalignment errors are made in the process of derivation. Examples of the degree of correlation are presented for three different types of wavefront measurement data and compared to results of simple corrections.
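The feedback matrix minimizing the mean-squared residual wavefront error generalizes the scalar Wiener gain; as a hypothetical one-dimensional illustration (not the paper's matrix derivation), consider a measurement y = s + n of a zero-mean aberration s corrupted by noise n, with known variances:

```python
def mmse_gain(sig_var, noise_var):
    """Scalar Wiener gain k minimizing E[(s - k*y)^2] for y = s + n."""
    return sig_var / (sig_var + noise_var)

def residual_mse(sig_var, noise_var, k):
    """Mean-squared residual E[(s - k*y)^2] = (1-k)^2*sig_var + k^2*noise_var."""
    return (1 - k) ** 2 * sig_var + k ** 2 * noise_var
```

Applying the full measured correction (k = 1) amplifies measurement noise; the optimum gain weighs the correction down according to the assumed aberration and noise statistics, mirroring the role of the statistical assumptions in the paper's derivation.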
Microeconomic principles explain an optimal genome size in bacteria.
Ranea, Juan A G; Grant, Alastair; Thornton, Janet M; Orengo, Christine A
2005-01-01
Bacteria can clearly enhance their survival by expanding their genetic repertoire. However, the tight packing of the bacterial genome and the fact that the most evolved species do not necessarily have the biggest genomes suggest there are other evolutionary factors limiting their genome expansion. To clarify these restrictions on size, we studied those protein families contributing most significantly to bacterial-genome complexity. We found that all bacteria apply the same basic and ancestral 'molecular technology' to optimize their reproductive efficiency. The same microeconomics principles that define the optimum size in a factory can also explain the existence of a statistical optimum in bacterial genome size. This optimum is reached when the bacterial genome obtains the maximum metabolic complexity (revenue) for minimal regulatory genes (logistic cost).
Gamwo, Isaac K [Murrysville, PA; Gidaspow, Dimitri [Northbrook, IL; Jung, Jonghwun [Naperville, IL
2009-11-17
A method for determining optimum catalyst particle size for a gas-solid, liquid-solid, or gas-liquid-solid fluidized bed reactor such as a slurry bubble column reactor (SBCR) for converting synthesis gas into liquid fuels considers the complete granular temperature balance based on the kinetic theory of granular flow, the effect of a volumetric mass transfer coefficient between the liquid and the gas, and the water gas shift reaction. The granular temperature of the catalyst particles, representing their kinetic energy, is measured, and the volumetric mass transfer coefficient between the gas and liquid phases is calculated using the granular temperature. Catalyst particle size is varied from 20 µm to 120 µm and a maximum mass transfer coefficient corresponding to optimum liquid hydrocarbon fuel production is determined. The optimum catalyst particle size for maximum methanol production in an SBCR was determined to be in the range of 60-70 µm.
The Cost-Optimal Size of Future Reusable Launch Vehicles
NASA Astrophysics Data System (ADS)
Koelle, D. E.
2000-07-01
The paper answers the question: what is the optimum vehicle size, in terms of LEO payload capability, for a future reusable launch vehicle? It is shown that there exists an optimum vehicle size that results in minimum specific transportation cost. The optimum vehicle size depends on the total annual cargo mass (LEO equivalent) envisaged, which defines at the same time the optimum number of launches per year (LpA). Based on the TRANSCOST-Model algorithms, a wide range of vehicle sizes — from 20 to 100 Mg payload in LEO — as well as launch rates — from 2 to 100 per year — have been investigated. A design chart shows how much the vehicle size and the launch rate influence the specific transportation cost (in MYr/Mg and US$/kg). The comparison with actual ELVs (Expendable Launch Vehicles) and Semi-Reusable Vehicles (a combination of a reusable first stage with an expendable second stage) shows that there exists only one economic solution for an essential reduction of space transportation cost: the fully reusable vehicle concept, with rocket propulsion and vertical take-off. The single-stage configuration (SSTO) has the best economic potential; its feasibility is not only a matter of technology level but also of the vehicle size as such. Increasing the vehicle size (launch mass) reduces the technology requirements because the law of scale provides a better mass fraction and payload fraction — practically at no cost. The optimum vehicle design (after specification of the payload capability) requires a trade-off between lightweight (and more expensive) technology and more conventional (and cheaper) technology. It is shown that using more conventional technology and accepting a somewhat larger vehicle is the more cost-effective and less risky approach.
Size effects on miniature Stirling cycle cryocoolers
NASA Astrophysics Data System (ADS)
Yang, Xiaoqin; Chung, J. N.
2005-08-01
Size effects on the performance of Stirling cycle cryocoolers were investigated by examining each individual loss associated with the regenerator and combining these effects. For fixed cycle parameters and a given regenerator length scale, it was found that the system can produce net refrigeration only within a specific range of hydraulic diameters, and there is an optimum hydraulic diameter at which the maximum net refrigeration is achieved. When the hydraulic diameter is less than the optimum value, the regenerator performance is controlled by the pressure drop loss; when the hydraulic diameter is greater than the optimum value, the system performance is controlled by the thermal losses. It was also found that there exists an optimum ratio between the hydraulic diameter and the length of the regenerator that offers the maximum net refrigeration. As the regenerator length is decreased, the optimum hydraulic diameter-to-length ratio increases, and the system performance, which is controlled by the pressure drop loss and the heat conduction loss, increases. Choosing appropriate regenerator characteristic sizes is more critical in small-scale systems than in large-scale ones.
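The trade-off described above can be caricatured with a one-parameter loss model (an assumption for illustration, not the paper's detailed loss budget): if the pressure-drop loss scales as a/d and the thermal losses as b·d in the hydraulic diameter d, the net refrigeration q0 - a/d - b·d peaks at d* = sqrt(a/b):

```python
import math

def optimum_hydraulic_diameter(a, b):
    """Diameter maximizing q0 - a/d - b*d, where a/d models the
    pressure-drop loss and b*d the combined thermal losses (both
    coefficients are illustrative assumptions)."""
    return math.sqrt(a / b)

def net_refrigeration(q0, a, b, d):
    """Gross refrigeration minus the two competing loss terms."""
    return q0 - a / d - b * d
```

Below d* the 1/d term (pressure drop) dominates, above it the linear term (thermal losses) dominates, reproducing the qualitative behavior reported in the abstract.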
A variable-step-size robust delta modulator.
NASA Technical Reports Server (NTRS)
Song, C. L.; Garodnick, J.; Schilling, D. L.
1971-01-01
Description of an analytically obtained optimum adaptive delta modulator-demodulator configuration. The device utilizes two past samples to obtain a step size which minimizes the mean square error for a Markov-Gaussian source. The optimum system is compared, using computer simulations, with a linear delta modulator and an enhanced Abate delta modulator. In addition, the performance is compared to the rate distortion bound for a Markov source. It is shown that the optimum delta modulator is neither quantization nor slope-overload limited. The highly nonlinear equations obtained for the optimum transmitter and receiver are approximated by piecewise-linear equations in order to obtain system equations which can be transformed into hardware. The derivation of the experimental system is presented.
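The classic adaptive rule that the paper uses as a comparison baseline (an Abate-style step that grows while successive bits agree and shrinks when they alternate — not the optimum nonlinear system derived in the paper) can be sketched as:

```python
def adaptive_delta_modulate(signal, step0=0.1, s_min=0.01, s_max=1.0):
    """Encode a signal with a variable-step delta modulator.
    The step doubles when the current bit matches the previous one
    (slope tracking) and halves otherwise (granular noise reduction).
    Returns the transmitted bit stream and the decoder's reconstruction."""
    est, step, prev_bit = 0.0, step0, 0
    bits, recon = [], []
    for x in signal:
        bit = 1 if x >= est else -1
        step = min(s_max, step * 2) if bit == prev_bit else max(s_min, step / 2)
        est += bit * step
        prev_bit = bit
        bits.append(bit)
        recon.append(est)
    return bits, recon
```

The clamp [s_min, s_max] is what keeps such a modulator from being purely quantization- or slope-overload-limited, the trade-off the optimum design addresses analytically.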
Power spectral density of Markov texture fields
NASA Technical Reports Server (NTRS)
Shanmugan, K. S.; Holtzman, J. C.
1984-01-01
Texture is an important image characteristic. A variety of spatial domain techniques have been proposed for extracting and utilizing textural features for segmenting and classifying images. For the most part, these spatial domain techniques are ad hoc in nature. A Markov random field model for image texture is discussed. A frequency domain description of image texture is derived in terms of the power spectral density. This model is used for designing optimum frequency domain filters for enhancing, restoring, and segmenting images based on their textural properties.
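For a first-order (exponentially correlated) Markov sequence with unit variance and correlation coefficient ρ, the power spectral density has a closed form; a sketch under that simplifying 1-D assumption (the paper treats 2-D fields):

```python
import math

def ar1_psd(rho, f):
    """PSD of a unit-variance first-order Markov (AR(1)) sequence with
    correlation coefficient rho, at normalized frequency f in [0, 0.5]:
    S(f) = (1 - rho^2) / (1 - 2*rho*cos(2*pi*f) + rho^2)."""
    return (1 - rho ** 2) / (1 - 2 * rho * math.cos(2 * math.pi * f) + rho ** 2)
```

Strong spatial correlation (ρ near 1) concentrates power at low frequencies, which is what makes frequency-domain filters useful for separating textures of different correlation lengths.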
NASA Astrophysics Data System (ADS)
Heilbronner, Renée; Kilian, Ruediger
2017-04-01
Grain size analyses are carried out for a number of reasons; for example, the dynamically recrystallized grain size of quartz is used to assess the flow stresses during deformation. Typically a thin section or polished surface is used. If the expected grain size is large enough (10 µm or larger), the images can be obtained on a light microscope; if the grain size is smaller, the SEM is used. The grain boundaries are traced (a process called segmentation, which can be done manually or via image processing) and the size of the cross-sectional areas (segments) is determined. From the resulting size distributions, 'the grain size' or 'average grain size', usually a mean diameter or similar, is derived. When carrying out such grain size analyses, a number of aspects are critical for the reproducibility of the result: the resolution of the imaging equipment (light microscope or SEM), the type of images that are used for segmentation (cross-polarized, partial or full orientation images, CIP versus EBSD), the segmentation procedure (algorithm) itself, the quality of the segmentation, and the mathematical definition and calculation of 'the average grain size'. The quality of the segmentation depends very strongly on the criteria that are used for identifying grain boundaries (for example, angles of misorientation versus shape considerations), on pre- and post-processing (filtering), and on the quality of the recorded images (most notably on the indexing ratio). In this contribution, we consider experimentally deformed Black Hills quartzite with dynamically recrystallized grain sizes in the range of 2 - 15 µm. We compare two basic methods of segmentation of EBSD maps (orientation based versus shape based) and explore how the choice of method influences the result of the grain size analysis.
We also compare different measures for grain size (mean versus mode versus RMS, and 2D versus 3D) in order to determine which of the definitions of 'average grain size' yields the most stable results.
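The competing 'average grain size' definitions can be made concrete for 2-D segment areas via equivalent circular diameters; a minimal sketch (illustrative, not the authors' software):

```python
import math
from statistics import mean

def equivalent_diameters(areas):
    """Equivalent circular diameters d = 2*sqrt(A/pi) from segment areas."""
    return [2 * math.sqrt(a / math.pi) for a in areas]

def mean_diameter(areas):
    """Arithmetic mean of the equivalent diameters."""
    return mean(equivalent_diameters(areas))

def rms_diameter(areas):
    """Root-mean-square diameter, which weights large grains more heavily."""
    d = equivalent_diameters(areas)
    return math.sqrt(mean(x * x for x in d))
```

Because RMS weights large grains more than the arithmetic mean does, the two measures diverge for broad size distributions, which is exactly why the choice of definition affects the stability of the result.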
NASA Astrophysics Data System (ADS)
Liu, Likun
2018-01-01
In the field of remote sensing image processing, segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, the object-oriented approach to remote sensing image processing has prevailed; its core is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on the study and improvement of that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimum initialization. The algorithm is then modified by adjusting an area parameter, and further by combining the area parameter with a heterogeneity parameter. Several experiments are carried out to show that the modified FNEA algorithm achieves a better segmentation result than the traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.
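The FNEA merge decision weighs the increase in heterogeneity caused by fusing two segments against a scale threshold; a simplified one-band sketch of the Baatz–Schäpe-style criterion (shape heterogeneity and band weights are omitted here for brevity):

```python
def color_heterogeneity_change(n1, sd1, n2, sd2, sd_merged):
    """Increase in area-weighted spectral heterogeneity when merging two
    segments of sizes n1, n2 and standard deviations sd1, sd2:
    n_m*sd_m - (n1*sd1 + n2*sd2)."""
    return (n1 + n2) * sd_merged - (n1 * sd1 + n2 * sd2)

def should_merge(n1, sd1, n2, sd2, sd_merged, scale):
    """Merge only if the heterogeneity increase stays below the squared
    scale parameter, the stopping rule of multi-scale region growing."""
    return color_heterogeneity_change(n1, sd1, n2, sd2, sd_merged) < scale ** 2
```

Seeding this region-growing loop with watershed segments, as the paper proposes, replaces the usual single-pixel starting objects with small homogeneous regions.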
Improved heliostat field design for solar tower plants
NASA Astrophysics Data System (ADS)
Collado, Francisco J.; Guallar, Jesús
2017-06-01
In solar power tower (SPT) systems, selecting the optimum location of thousands of heliostats and the most profitable tower height and receiver size remains a challenge. The Campo code is prepared for the detailed design of such plants, in particular the optimum layout, provided that the plant size is known. Therefore, less exhaustive codes, such as DELSOL3, are also needed to perform preliminary parametric analyses that narrow down the most economic size of the plant.
NASA Astrophysics Data System (ADS)
Wahyuningsih, S.; Ramelan, A. H.; Wardoyo, D. T.; Ichsan, S.; Kristiawan, Y. R.
2018-03-01
The utilization and modification of silica from rice straw as the main ingredient of an adsorbent has been studied. The aim of this study was to determine the optimum composition of PVA (polyvinyl alcohol):silica to produce adsorbents with excellent pore characteristics, optimum adsorption efficiency, and the optimum pH for methyl yellow adsorption. X-ray fluorescence (XRF) analysis showed that straw ash contains 82.12% silica (SiO2). SAA (Surface Area Analyzer) analysis showed an optimum composition ratio of 5:5 (PVA:silica) with a surface area of 1.503 m2/g. In addition, the PVA:silica (5:5) adsorbent showed a narrow pore size distribution with the largest cumulative pore volume of 2.8 × 10^-3 cc/g. The optimum pH for Methanyl Yellow adsorption is pH 2, with an adsorption capacity of 72.1346%.
A system concept for gradual deployment of geostationary lightsats
NASA Astrophysics Data System (ADS)
Peters, Graham C.; Garry, James R. C.
1993-10-01
Small satellites provide an attractive option for developing countries wishing to own and operate a satellite for the first time. It is proposed that space segment capacity could be built-up in response to increasing traffic requirements by launching small satellites at intervals into a single orbital slot to form a cluster. This paper, which results from an ESA study, reviews the various system aspects which must be considered and develops a suitable approach for multi-satellite deployment and collocation. Particular attention is paid to the system and payload configuration required to achieve effective mutual sparing between the satellites' payloads as the constellation is expanded. Mission and operational aspects are examined to obtain an acceptable risk of collisions between the satellites in a single orbit slot. The complexity and cost of operations are investigated to obtain the optimum size of satellite required to satisfy different demand requirements taken from real market scenarios.
[Calculating the optimum size of a hemodialysis unit based on infrastructure potential].
Avila-Palomares, Paula; López-Cervantes, Malaquías; Durán-Arenas, Luis
2010-01-01
To estimate the optimum size for hemodialysis units to maximize production given capital constraints. A national study in Mexico was conducted in 2009. Three possible methods for estimating a unit's optimum size were analyzed: hemodialysis services production under a monopolistic market, under a perfectly competitive market, and production maximization given capital constraints. The third method was considered best based on the assumptions made in this paper; an optimal-size unit should have 16 dialyzers (15 active and one backup) and a purifier system able to supply all of them. It also requires one nephrologist and five nurses per shift, considering four shifts per day. Empirical evidence shows serious inefficiencies in the operation of units throughout the country. Most units fail to maximize production by not fully utilizing equipment and personnel, particularly their water purifier potential, which happens to be the most expensive asset of these units.
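The reported configuration implies a simple throughput ceiling; an illustrative calculation (one patient per dialyzer per shift and the number of operating days per week are assumptions, not stated in the abstract):

```python
def weekly_sessions(active_dialyzers=15, shifts_per_day=4, days_per_week=6):
    """Maximum hemodialysis sessions per week if every active dialyzer
    treats one patient per shift (assumed scheduling model)."""
    return active_dialyzers * shifts_per_day * days_per_week
```

Under these assumptions the 15 active dialyzers over four daily shifts give the production target that underutilized units fall short of.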
Exploring the optimum step size for defocus curves.
Wolffsohn, James S; Jinabhai, Amit N; Kingsnorth, Alec; Sheppard, Amy L; Naroo, Shehzad A; Shah, Sunil; Buckhurst, Phillip; Hall, Lee A; Young, Graeme
2013-06-01
To evaluate the effect of reducing the number of visual acuity measurements made in a defocus curve on the quality of data quantified. Midland Eye, Solihull, United Kingdom. Evaluation of a technique. Defocus curves were constructed by measuring visual acuity on a distance logMAR letter chart, randomizing the test letters between lens presentations. The lens powers evaluated ranged between +1.50 diopters (D) and -5.00 D in 0.50 D steps, which were also presented in a randomized order. Defocus curves were measured binocularly with the Tecnis diffractive, Rezoom refractive, Lentis rotationally asymmetric segmented (+3.00 D addition [add]), and Finevision trifocal multifocal intraocular lenses (IOLs) implanted bilaterally, and also for the diffractive IOL and refractive or rotationally asymmetric segmented (+3.00 D and +1.50 D adds) multifocal IOLs implanted contralaterally. Relative and absolute range of clear-focus metrics and area metrics were calculated for curves fitted using 0.50 D, 1.00 D, and 1.50 D steps and a near add-specific profile (ie, distance, half the near add, and the full near-add powers). A significant difference in simulated results was found in at least 1 of the relative or absolute range of clear-focus or area metrics for each of the multifocal designs examined when the defocus-curve step size was increased (P<.05). Faster methods of capturing defocus curves from multifocal IOL designs appear to distort the metric results and are therefore not valid. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2013 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
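The area metrics integrate acuity over defocus, so coarser sampling can change the result; a sketch with the trapezoidal rule on a hypothetical V-shaped curve (the acuity values are invented, not taken from the study):

```python
def trapezoid_area(xs, ys):
    """Area under a sampled defocus curve by the trapezoidal rule.
    xs are defocus powers (D), ys the corresponding acuity values."""
    return sum((xs[i + 1] - xs[i]) * (ys[i + 1] + ys[i]) / 2
               for i in range(len(xs) - 1))
```

Sampling the same underlying curve at [0, 1, 2] versus only its endpoints [0, 2] gives different areas whenever the curve has structure between the samples, which is the distortion the study quantifies for multifocal IOL defocus profiles.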
Wei, Qiang; Wei, Wei; Tian, Rui; Wang, Lian-Yan; Su, Zhi-Guo; Ma, Guang-Hui
2008-07-15
Relatively uniform-sized poly(lactide-co-ethylene glycol) (PELA) microspheres with high encapsulation efficiency were prepared rapidly by a novel method combining emulsion-solvent extraction and premix membrane emulsification. Briefly, preparation of coarse double emulsions was followed by additional premix membrane emulsification, and antigen-loaded microspheres were obtained by further solidification. Under the optimum conditions, the particle size was about 1 µm and the coefficient of variation (CV) was 18.9%. Confocal laser scanning microscopy and flow cytometry analysis showed that the inner droplets were small and evenly dispersed and that the antigen was loaded uniformly in each microsphere when the sonication technique was employed to prepare the primary emulsion. The distribution pattern of the PEG segment played an important role in the properties of the microspheres. Compared with the triblock copolymer PLA-PEG-PLA, the diblock copolymer PLA-mPEG yielded a more stable interfacial layer at the interface of the oil and water phases, and thus was more suitable for stabilizing the primary emulsion and preventing coalescence between the inner droplets and the external water phase, resulting in high encapsulation efficiency (90.4%). On the other hand, the solidification rate determined the time available for coalescence during microsphere fabrication, and thus affected encapsulation efficiency. Taken together, improving the polymer properties and the solidification rate are considered two effective strategies to yield high encapsulation efficiency.
Decreasing transmembrane segment length greatly decreases perfringolysin O pore size
Lin, Qingqing; Li, Huilin; Wang, Tong; ...
2015-04-08
Perfringolysin O (PFO) is a transmembrane (TM) β-barrel protein that inserts into mammalian cell membranes. Once inserted into membranes, PFO assembles into pore-forming oligomers containing 30–50 PFO monomers. These form a pore of up to 300 Å, far exceeding the size of most other proteinaceous pores. In this study, we found that altering PFO TM segment length can alter the size of PFO pores. A PFO mutant with lengthened TM segments oligomerized to a similar extent as wild-type PFO, and exhibited pore-forming activity and a pore size very similar to wild-type PFO as measured by electron microscopy and a leakage assay. In contrast, PFO with shortened TM segments exhibited a large reduction in pore-forming activity and pore size. This suggests that the interaction between TM segments can greatly affect the size of pores formed by TM β-barrel proteins. PFO may be a promising candidate for engineering pore size for various applications.
Malyugin, Boris E; Shpak, Alexander A; Pokrovskiy, Dmitry F
2015-08-01
To use anterior segment optical coherence tomography (AS-OCT) to evaluate the clinical effectiveness of Implantable Collamer Lens posterior chamber phakic intraocular lens (PC pIOL) sizing based on measurement of the distance from iris pigment end to iris pigment end. S. Fyodorov Eye Microsurgery Federal State Institution, Moscow, Russia. Evaluation of diagnostic test or technology. Stage 1 was a prospective study. The sulcus-to-sulcus (STS) distance was measured using ultrasound biomicroscopy (UBM) (Vumax 2), and the distance from iris pigment end to iris pigment end was assessed using a proposed AS-OCT algorithm. Stage 2 used retrospective data from patients after implantation of a PC pIOL with the size selected according to AS-OCT (Visante) measurements of the distance from iris pigment end to iris pigment end. The PC pIOL vault was measured by AS-OCT, and adverse events were assessed. Stage 1 comprised 32 eyes of 32 myopic patients (mean age 28.4 ± 6.3 [SD] years; mean spherical equivalent [SE] -13.11 ± 4.28 diopters [D]). Stage 2 comprised 29 eyes of 16 patients (mean age 27.7 ± 4.7 years; mean SE -16.55 ± 3.65 D). The mean STS distance (12.35 ± 0.47 mm) was similar to the mean distance from iris pigment end to iris pigment end (examiner 1: 12.36 ± 0.51 mm; examiner 2: 12.37 ± 0.53 mm). The PC pIOL sized using the new AS-OCT algorithm had a mean vault of 0.53 ± 0.18 mm and did not produce adverse events during the 12-month follow-up. In 16 of 29 eyes, the PC pIOL vault was within the optimum interval (0.35 to 0.70 mm). The new measurement algorithm can be effectively used for PC pIOL sizing. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2015 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
Relationship between negative differential thermal resistance and asymmetry segment size
NASA Astrophysics Data System (ADS)
Kong, Peng; Hu, Tao; Hu, Ke; Jiang, Zhenhua; Tang, Yi
2018-03-01
Negative differential thermal resistance (NDTR) was investigated in a system consisting of two dissimilar anharmonic lattices, exemplified by Frenkel-Kontorova (FK) lattices and Fermi-Pasta-Ulam (FPU) lattices (FK-FPU). Previous theoretical and numerical studies showed that NDTR depends on the coupling constant, the interface, and the system size, but we find that segment size is also an important factor. Interestingly, the NDTR region depends on the FK segment size rather than the FPU segment size in this coupled FK-FPU model. Remarkably, we observe that NDTR appears even in the strong interface coupling regime, where previous studies found no NDTR. These results are conducive to further developments in designing and fabricating thermal devices.
Sharifi Dehsari, Hamed; Harris, Richard Anthony; Ribeiro, Anielen Halda; Tremel, Wolfgang; Asadi, Kamal
2018-06-05
Despite the great progress in the synthesis of iron oxide nanoparticles (NPs) using the thermal decomposition method, the production of NPs with a low polydispersity index is still challenging. In a thermal decomposition synthesis, oleic acid (OAC) and oleylamine (OAM) are used as surfactants. The surfactants bind to the growth species, thereby controlling the reaction kinetics and hence playing a critical role in the final size and size distribution of the NPs. Finding an optimum molar ratio between the surfactants OAC and OAM is therefore crucial. A systematic experimental and theoretical study on the role of the surfactant ratio, however, is still missing. Here, we present a detailed experimental study of the role of the surfactant ratio in the size distribution. We found an optimum OAC/OAM ratio of 3 at which the synthesis yielded truly monodisperse (polydispersity less than 7%) iron oxide NPs without employing any post-synthesis size-selective procedures. We performed molecular dynamics simulations and showed that the binding energy of oleate to the NP is maximized at an OAC/OAM ratio of 3. The optimum OAC/OAM ratio of 3 allowed control of the NP size with nanometer precision by simply changing the reaction heating rate. The optimum OAC/OAM ratio has no influence on the crystallinity and the superparamagnetic behavior of the Fe3O4 NPs and can therefore be adopted for the scaled-up production of size-controlled monodisperse Fe3O4 NPs.
Determining Domestic Container Shipping as an Enforcement of Indonesian International Hub Port
NASA Astrophysics Data System (ADS)
Nur, H. I.; Lazuardi, S. D.; Hadi, F.; Hapis, M.
2018-03-01
According to Presidential Regulation Number 26 of 2012 on the National Logistics System Development Blueprint, the Indonesian government proposed to build two international hub ports: the Port of Kuala Tanjung for the western region and the Port of Bitung for the eastern region. Therefore, optimum routes and fleet sizes are required to support the enforcement of the Indonesian international hub ports. An optimization model is used to obtain the optimum routes and fleets by minimizing the total shipping costs while considering the container demand. The analysis found that the optimum routes and fleet sizes for the western region of Indonesia were: (1) Kuala Tanjung-Belawan, requiring 15 ships of 1,000 TEU; (2) Kuala Tanjung-Tanjung Priok, requiring 73 ships of 2,500 TEU; (3) Kuala Tanjung-Tanjung Perak, requiring 44 ships of 2,500 TEU. Meanwhile, the optimum routes and fleet sizes for the eastern region of Indonesia consisted of: (1) Bitung-Sorong, requiring 1 ship of 500 TEU; (2) Bitung-Banjarmasin, requiring 3 ships of 500 TEU; and (3) Bitung-Makassar, requiring 1 ship of 1,500 TEU.
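The fleet sizing step reduces to covering annual container demand with each ship's annual lifted capacity; a minimal sketch (the round-trip count per year is an assumed input, not data from the study, which additionally minimizes total shipping cost):

```python
import math

def fleet_size(annual_demand_teu, ship_capacity_teu, round_trips_per_year):
    """Smallest number of ships whose combined annual lifted capacity
    (capacity per ship times round trips per year) covers demand."""
    per_ship = ship_capacity_teu * round_trips_per_year
    return math.ceil(annual_demand_teu / per_ship)
```

On short legs such as Kuala Tanjung-Belawan a small ship makes many round trips per year, while on long legs fewer trips per ship push the required fleet up, which is why the reported fleets differ so strongly by route.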
Recovery of choline oxidase activity by in vitro recombination of individual segments.
Heinze, Birgit; Hoven, Nina; O'Connell, Timothy; Maurer, Karl-Heinz; Bartsch, Sebastian; Bornscheuer, Uwe T
2008-11-01
Initial attempts to express a choline oxidase from Arthrobacter pascens (APChO-syn) in Escherichia coli starting from a synthetic gene only led to inactive protein. However, activity was regained by the systematic exchange of individual segments of the gene with segments from a choline oxidase-encoding gene from Arthrobacter globiformis, yielding a functional chimeric enzyme. Next, a sequence alignment of the exchanged segment with other choline oxidases revealed a mutation in APChO-syn: residue 200 was a threonine instead of an asparagine, which is thus crucial for conferring enzyme activity and hence explains the initial lack of activity. The active recombinant APChO-syn-T200N variant was biochemically characterized, showing an optimum at pH 8.0 and 37 °C. Furthermore, the substrate specificity was examined using N,N-dimethylethanolamine, N-methylethanolamine and 3,3-dimethyl-1-butanol.
Assessment of Multiresolution Segmentation for Extracting Greenhouses from WORLDVIEW-2 Imagery
NASA Astrophysics Data System (ADS)
Aguilar, M. A.; Aguilar, F. J.; García Lorca, A.; Guirado, E.; Betlej, M.; Cichon, P.; Nemmaoui, A.; Vallario, A.; Parente, C.
2016-06-01
The latest breed of very high resolution (VHR) commercial satellites opens new possibilities for cartographic and remote sensing applications. Object based image analysis (OBIA) has proved to be the best option when working with VHR satellite imagery. OBIA considers spectral, geometric, textural and topological attributes associated with meaningful image objects. Thus, the first step of OBIA, referred to as segmentation, is to delineate objects of interest. Determination of an optimal segmentation is crucial for a good performance of the second stage in OBIA, the classification process. The main goal of this work is to assess the multiresolution segmentation algorithm provided by eCognition software for delineating greenhouses from WorldView-2 multispectral orthoimages. Specifically, the focus is on finding the optimal parameters of the multiresolution segmentation approach (i.e., Scale, Shape and Compactness) for plastic greenhouses. The optimum Scale parameter estimation was based on the idea of local variance of object heterogeneity within a scene (ESP2 tool). Moreover, different segmentation results were attained by using different combinations of Shape and Compactness values. Assessment of segmentation quality based on the discrepancy between reference polygons and corresponding image segments was carried out to identify the optimal setting of multiresolution segmentation parameters. Three discrepancy indices were used: Potential Segmentation Error (PSE), Number-of-Segments Ratio (NSR) and Euclidean Distance 2 (ED2).
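The three discrepancy indices can be sketched following their usual definitions (PSE as mis-assigned area over total reference area, NSR as the normalized segment-count mismatch, ED2 as their Euclidean combination); the numeric values below are illustrative, not from the study:

```python
import math

def pse(misassigned_area, total_reference_area):
    """Potential Segmentation Error: area of corresponding segments
    falling outside the reference polygons, over total reference area."""
    return misassigned_area / total_reference_area

def nsr(n_reference, n_corresponding):
    """Number-of-Segments Ratio: |m - v| / m for m reference polygons
    and v corresponding image segments."""
    return abs(n_reference - n_corresponding) / n_reference

def ed2(pse_value, nsr_value):
    """Euclidean Distance 2 combines geometric and arithmetic discrepancy;
    0 indicates a perfect match between segments and reference polygons."""
    return math.hypot(pse_value, nsr_value)
```

A low ED2 therefore requires a parameter setting that is good on both counts at once: segments that cover the references tightly and roughly one segment per reference greenhouse.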
Automatic anatomy recognition via multiobject oriented active shape models.
Chen, Xinjian; Udupa, Jayaram K; Alavi, Abass; Torigian, Drew A
2010-12-01
This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of the oriented active shape model (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm, wherein the first level, at the pixel level, aims to find optimal oriented boundary segments between successive landmarks; the second level, at the landmark level, aims to find optimal locations for the landmarks; and the third level, at the object level, aims to find the optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost for all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of the pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number of objects and the objects' distribution and size in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%.
Typically, a recognition accuracy of ≥ 90% yielded a TPVF ≥ 95% and an FPVF ≤ 0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.
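The pixel-level stage of the three-level DP finds a cheapest boundary path between successive landmarks; a much-simplified stand-in (a plain column-by-column dynamic program over a small cost grid, not the authors' MOASM implementation with oriented boundary costs):

```python
def min_cost_boundary(cost):
    """Cheapest left-to-right path through a 2-D cost grid, stepping one
    column at a time to the same or an adjacent row.
    Returns (total cost, list of row indices, one per column)."""
    rows, cols = len(cost), len(cost[0])
    best = [cost[r][0] for r in range(rows)]   # cheapest cost ending at each row
    back = []                                  # backpointers per column
    for c in range(1, cols):
        col_best, col_ptr = [], []
        for r in range(rows):
            prev, pr = min((best[p], p) for p in (r - 1, r, r + 1) if 0 <= p < rows)
            col_best.append(prev + cost[r][c])
            col_ptr.append(pr)
        best = col_best
        back.append(col_ptr)
    end = min(range(rows), key=best.__getitem__)
    path, r = [end], end
    for ptr in reversed(back):                 # trace the optimal path back
        r = ptr[r]
        path.append(r)
    return best[end], path[::-1]
```

In a live-wire setting the grid costs encode boundary evidence (e.g., inverted gradient magnitude), so the cheapest path hugs the object boundary between the two landmarks.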
Optimization of segmented thermoelectric generator using Taguchi and ANOVA techniques.
Kishore, Ravi Anant; Sanghadasa, Mohan; Priya, Shashank
2017-12-01
Recent studies have demonstrated that segmented thermoelectric generators (TEGs) can operate over large thermal gradient and thus provide better performance (reported efficiency up to 11%) as compared to traditional TEGs, comprising of single thermoelectric (TE) material. However, segmented TEGs are still in early stages of development due to the inherent complexity in their design optimization and manufacturability. In this study, we demonstrate physics based numerical techniques along with Analysis of variance (ANOVA) and Taguchi optimization method for optimizing the performance of segmented TEGs. We have considered comprehensive set of design parameters, such as geometrical dimensions of p-n legs, height of segmentation, hot-side temperature, and load resistance, in order to optimize output power and efficiency of segmented TEGs. Using the state-of-the-art TE material properties and appropriate statistical tools, we provide near-optimum TEG configuration with only 25 experiments as compared to 3125 experiments needed by the conventional optimization methods. The effect of environmental factors on the optimization of segmented TEGs is also studied. Taguchi results are validated against the results obtained using traditional full factorial optimization technique and a TEG configuration for simultaneous optimization of power and efficiency is obtained.
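Taguchi designs replace the full factorial by estimating each factor's main effect from an orthogonal array; a toy sketch with a two-level array (the paper's L25 design with five factors at five levels works the same way, which is how 25 runs substitute for 3125):

```python
def main_effects(levels_matrix, responses):
    """Average response at each level of each factor over the runs of an
    orthogonal-array experiment (one row per run, one column per factor)."""
    n_factors = len(levels_matrix[0])
    effects = []
    for j in range(n_factors):
        by_level = {}
        for row, y in zip(levels_matrix, responses):
            by_level.setdefault(row[j], []).append(y)
        effects.append({lvl: sum(ys) / len(ys) for lvl, ys in by_level.items()})
    return effects

def best_levels(levels_matrix, responses):
    """Per-factor level that maximizes the mean response (larger-is-better)."""
    return [max(e, key=e.get) for e in main_effects(levels_matrix, responses)]
```

ANOVA then apportions the response variance among the factors to tell which of the selected levels actually matter, which is how the paper ranks leg geometry, segmentation height, and the other design parameters.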
Xiong, Chengjie; van Belle, Gerald; Miller, J Philip; Morris, John C
2011-02-01
Therapeutic trials of disease-modifying agents on Alzheimer's disease (AD) require novel designs and analyses involving a switch of treatments for at least a portion of the subjects enrolled. Randomized start and randomized withdrawal designs are two examples of such designs. Crucial design parameters such as sample size and the time of treatment switch are important to understand in designing such clinical trials. The purpose of this article is to provide methods to determine sample sizes and the time of treatment switch, as well as optimum statistical tests of treatment efficacy, for clinical trials of disease-modifying agents on AD. A general linear mixed effects model is proposed to test the disease-modifying efficacy of novel therapeutic agents on AD. This model links the longitudinal growth from both the placebo arm and the treatment arm at the time of treatment switch for those in the delayed treatment arm or early withdrawal arm, and incorporates the potential correlation of the rate of cognitive change before and after the treatment switch. Sample sizes and the optimum time for treatment switch, as well as the optimum test statistic for treatment efficacy, are determined according to the model. Assuming an evenly spaced longitudinal design over a fixed duration, the optimum treatment switching time in a randomized start or a randomized withdrawal trial is halfway through the trial. With the optimum test statistic for treatment efficacy, and over a wide spectrum of model parameters, the optimum sample size allocations are fairly close to the simplest design with a sample size ratio of 1:1:1 among the treatment arm, the delayed treatment or early withdrawal arm, and the placebo arm.
The application of the proposed methodology to AD provides evidence that much larger sample sizes are required to adequately power disease-modifying trials when compared with those for symptomatic agents, even when the treatment switch time and efficacy test are optimally chosen. The proposed method assumes that the only and immediate effect of the treatment switch is on the rate of cognitive change. Crucial design parameters for clinical trials of disease-modifying agents on AD can be optimally chosen. Government and industry officials, as well as academic researchers, should consider the optimum use of clinical trial designs for disease-modifying agents on AD in their effort to search for treatments with the potential to modify the underlying pathophysiology of AD.
NASA Astrophysics Data System (ADS)
Mohammadi Nasrabadi, Ali; Hosseinpour, Mohammad Hossein; Ebrahimnejad, Sadoullah
2013-05-01
In competitive markets, market segmentation is a critical point of business, and it can be used as a generic strategy. In each segment, strategies lead companies to their targets; thus, segment selection and the application of the appropriate strategies over time are very important to achieving successful business. This paper aims to model a strategy-aligned fuzzy approach to market segment evaluation and selection. A modular decision support system (DSS) is developed to select an optimum segment with its appropriate strategies. The suggested DSS has two main modules. The first is a SPACE matrix, which indicates the risk of each segment and determines the long-term strategies. The second module finds the most preferred segment-strategies over time. The dynamic network process is applied to prioritize segment-strategies according to five competitive force factors. There is vagueness in the pairwise comparisons, and this vagueness has been modeled using fuzzy concepts. To clarify, an example is illustrated by a case study of Iran's coffee market. The results show that the success possibility of segments can differ, and choosing the best ones can help companies develop their business with confidence. Moreover, the changing priority of strategies over time indicates the importance of long-term planning. This fact is supported by a case study on strategic priority differences in short- and long-term consideration.
Comparison of atlas-based techniques for whole-body bone segmentation.
Arabi, Hossein; Zaidi, Habib
2017-02-01
We evaluate the accuracy of whole-body bone extraction from whole-body MR images using a number of atlas-based segmentation methods. The motivation behind this work is to find the most promising approach for the purpose of MRI-guided derivation of PET attenuation maps in whole-body PET/MRI. To this end, a variety of atlas-based segmentation strategies commonly used in medical image segmentation and pseudo-CT generation were implemented and evaluated in terms of whole-body bone segmentation accuracy. Bone segmentation was performed on 23 whole-body CT/MR image pairs via a leave-one-out cross-validation procedure. The evaluated segmentation techniques include: (i) intensity averaging (IA), (ii) majority voting (MV), (iii) global and (iv) local (voxel-wise) weighting atlas fusion frameworks implemented utilizing normalized mutual information (NMI), normalized cross-correlation (NCC), and mean square distance (MSD) as image similarity measures for calculating the weighting factors, along with other atlas-dependent algorithms, such as (v) shape-based averaging (SBA) and (vi) Hofmann's pseudo-CT generation method. The performance evaluation of the different segmentation techniques was carried out in terms of estimating bone extraction accuracy from whole-body MRI using standard metrics, such as the Dice similarity coefficient (DSC) and relative volume difference (RVD), considering bony structures obtained from intensity thresholding of the reference CT images as the ground truth. Considering the Dice criterion, global weighting atlas fusion methods provided moderate improvement of whole-body bone segmentation (DSC = 0.65 ± 0.05) compared to non-weighted IA (DSC = 0.60 ± 0.02). The local weighted atlas fusion approach using the MSD similarity measure outperformed the other strategies by achieving a DSC of 0.81 ± 0.03, while using the NCC and NMI measures resulted in a DSC of 0.78 ± 0.05 and 0.75 ± 0.04, respectively.
Despite a very long computation time, the bone extracted by both the SBA (DSC = 0.56 ± 0.05) and Hofmann's (DSC = 0.60 ± 0.02) methods exhibited no improvement compared to non-weighted IA. Finding the optimum parameters for implementation of the atlas fusion approach, such as the weighting factors and image similarity patch size, has a great impact on the performance of atlas-based segmentation approaches. The voxel-wise atlas fusion approach exhibited excellent performance in terms of cancelling out non-systematic registration errors, leading to accurate and reliable segmentation results. Denoising and normalization of MR images, together with optimization of the involved parameters, play a key role in improving bone extraction accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.
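The two evaluation metrics used throughout this record, Dice similarity coefficient (DSC) and relative volume difference (RVD), can be sketched as below for binary masks given as flat 0/1 sequences. The paper works on 3D volumes, but the formulas are the same.

```python
# Sketch of the segmentation metrics: DSC = 2|A ∩ B| / (|A| + |B|),
# RVD = (|A| - |B|) / |B|, with B the reference (ground-truth) mask.

def dice(seg, ref):
    inter = sum(1 for s, r in zip(seg, ref) if s and r)  # overlap voxels
    return 2 * inter / (sum(seg) + sum(ref))

def relative_volume_difference(seg, ref):
    return (sum(seg) - sum(ref)) / sum(ref)
```

DSC is 1 for a perfect match; a positive RVD indicates over-segmentation relative to the reference.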
Enhanced THz extinction in arrays of resonant semiconductor particles.
Schaafsma, Martijn C; Georgiou, Giorgos; Rivas, Jaime Gómez
2015-09-21
We demonstrate experimentally the enhanced THz extinction by periodic arrays of resonant semiconductor particles. This phenomenon is explained in terms of the radiative coupling of localized resonances with diffractive orders in the plane of the array (Rayleigh anomalies). The experimental results are described by numerical calculations using a coupled dipole model and by Finite-Difference Time-Domain simulations. An optimum particle size for enhancing the extinction efficiency of the array is found. This optimum is determined by the frequency detuning between the localized resonances in the individual particles and the Rayleigh anomaly. The extinction calculations and measurements are also compared to near-field simulations, illustrating the optimum particle size for the enhancement of the near-field.
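The detuning that sets the optimum can be sketched from the standard Rayleigh-anomaly condition for a periodic array at normal incidence, f_RA = m·c / (n·p) for diffraction order m, period p, and surrounding index n. The numerical values below are illustrative, not taken from the paper.

```python
# Sketch: Rayleigh-anomaly frequency of a particle array at normal incidence
# and its detuning from a localized particle resonance.

C = 299_792_458.0  # speed of light, m/s

def rayleigh_anomaly_freq(period_m, n_medium, order=1):
    """Frequency at which diffraction order `order` grazes the array plane."""
    return order * C / (n_medium * period_m)

def detuning(f_resonance, period_m, n_medium):
    """Signed offset of the localized resonance from the first Rayleigh anomaly."""
    return f_resonance - rayleigh_anomaly_freq(period_m, n_medium)
```

For example, a 300 µm period in a medium of index 1 places the first anomaly near 1 THz, squarely in the THz band studied here.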
Villani, Kenneth; Vermandel, Walter; Smets, Koen; Liang, Duoduo; van Tendeloo, Gustaaf; Martens, Johan A
2006-04-15
Platinum metal was dispersed on microporous, mesoporous, and nonporous support materials, including the molecular sieves Na-Y, Ba-Y, Ferrierite, ZSM-22, ETS-10, and AlPO-11, as well as alumina and titania. The oxidation of carbon black loosely mixed with catalyst powder was monitored gravimetrically in a gas stream containing nitric oxide, oxygen, and water. The carbon oxidation activity of the catalysts was found to be uniquely related to the Pt dispersion and little influenced by the support type. The optimum dispersion is around 3-4%, corresponding to relatively large Pt particle sizes of 20-40 nm. The carbon oxidation activity reflects the NO oxidation activity of the platinum catalyst, which reaches an optimum in the 20-40 nm Pt particle size range. The lowest carbon oxidation temperatures were achieved with platinum-loaded ZSM-22 and AlPO-11 crystallites bearing platinum of optimum dispersion on their external surfaces.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lin, Qingqing; Li, Huilin; Wang, Tong
Perfringolysin O (PFO) is a transmembrane (TM) β-barrel protein that inserts into mammalian cell membranes. Once inserted into membranes, PFO assembles into pore-forming oligomers containing 30–50 PFO monomers. These form a pore of up to 300 Å, far exceeding the size of most other proteinaceous pores. In this study, we found that altering PFO TM segment length can alter the size of PFO pores. A PFO mutant with lengthened TM segments oligomerized to a similar extent as wild-type PFO, and exhibited pore-forming activity and a pore size very similar to wild-type PFO as measured by electron microscopy and a leakage assay. In contrast, PFO with shortened TM segments exhibited a large reduction in pore-forming activity and pore size. This suggests that the interaction between TM segments can greatly affect the size of pores formed by TM β-barrel proteins. PFO may be a promising candidate for engineering pore size for various applications.
Nankali, Saber; Miandoab, Payam Samadi; Baghizadeh, Amin
2016-01-01
In external-beam radiotherapy, using external markers is one of the most reliable tools for predicting tumor position in clinical applications. The main challenge in this approach is tracking tumor motion with the highest accuracy, which depends heavily on the location of the external markers; this issue is the objective of this study. Four commercially available feature selection algorithms, namely 1) Correlation-based Feature Selection, 2) Classifier, 3) Principal Components, and 4) Relief, were proposed to find the optimum location of external markers in combination with two searching procedures, "Genetic" and "Ranker". The performance of these algorithms was evaluated using the four-dimensional extended cardiac-torso anthropomorphic phantom. Six tumors in the lung, three tumors in the liver, and 49 points on the thorax surface were taken into account to simulate internal and external motions, respectively. The root mean square error of an adaptive neuro-fuzzy inference system (ANFIS) as the prediction model was considered as the metric for quantitatively evaluating the performance of the proposed feature selection algorithms. To do this, the thorax surface region was divided into nine smaller segments, and the predefined tumor motions were predicted by ANFIS using the external motion data of the markers at each small segment, separately. Our comparative results showed that all feature selection algorithms can reasonably select specific external markers from those segments where the root mean square error of the ANFIS model is minimum. Moreover, the performance accuracy of the proposed feature selection algorithms was compared separately. For this, each tumor motion was predicted using the motion data of those external markers selected by each feature selection algorithm. A Duncan statistical test, followed by an F-test, on the final results reflected that all the proposed feature selection algorithms have the same performance accuracy for lung tumors.
But for liver tumors, a correlation-based feature selection algorithm, in combination with a genetic search algorithm, proved to yield the best performance accuracy for selecting optimum markers. PACS numbers: 87.55.km, 87.56.Fc PMID:26894358
Image segmentation with a novel regularized composite shape prior based on surrogate study
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
Purpose: Incorporating training into image segmentation is a good approach to achieve additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy, when compared to the multiatlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared to typical benchmark schemes.
Cellular Manufacturing System with Dynamic Lot Size Material Handling
NASA Astrophysics Data System (ADS)
Khannan, M. S. A.; Maruf, A.; Wangsaputra, R.; Sutrisno, S.; Wibawa, T.
2016-02-01
Material handling plays an important role in Cellular Manufacturing System (CMS) design. In several studies of CMS design, material handling was assumed to be per piece or to have a constant lot size. In real industrial practice, the lot size may change over the rolling periods to cope with demand changes. This study develops a CMS model with dynamic lot size material handling. Integer linear programming is used to solve the problem. The objective function of this model minimizes the total expected cost, consisting of machinery depreciation cost, operating costs, inter-cell material handling cost, intra-cell material handling cost, machine relocation costs, setup costs, and production planning cost. The model determines the optimum cell formation and the optimum lot size. Numerical examples are elaborated in the paper to illustrate the characteristics of the model.
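The inter-cell handling term of such an objective can be illustrated with a deliberately tiny brute-force search. The paper solves a much richer integer linear program (lot sizes, relocation, setup, and planning costs); this sketch only shows the cell-formation trade-off, with machine names and routes invented for illustration.

```python
# Toy sketch of cell formation: assign machines to cells so that the number
# of inter-cell moves implied by part routes is minimized.
from itertools import product

def intercell_moves(assign, routes):
    """assign: machine -> cell; routes: machine visit sequences, one per part."""
    return sum(1 for route in routes
               for a, b in zip(route, route[1:]) if assign[a] != assign[b])

def best_assignment(machines, routes, n_cells=2):
    best, best_cost = None, float("inf")
    for cells in product(range(n_cells), repeat=len(machines)):
        if len(set(cells)) < n_cells:   # require every cell to be used
            continue
        assign = dict(zip(machines, cells))
        cost = intercell_moves(assign, routes)
        if cost < best_cost:
            best, best_cost = assign, cost
    return best, best_cost
```

An ILP solver replaces this enumeration in practice; the brute force is only viable for a handful of machines.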
Dissociation of somatic growth from segmentation drives gigantism in snakes.
Head, Jason J; David Polly, P
2007-06-22
Body size is significantly correlated with the number of vertebrae (pleomerism) in multiple vertebrate lineages, indicating that change in the number of body segments produced during somitogenesis is an important factor in evolutionary change in body size, but the role of segmentation in the evolution of extreme sizes, including gigantism, has not been examined. We explored the relationship between body size and vertebral count in basal snakes that exhibit gigantism. Boids, pythonids and the typhlopid genera Typhlops and Rhinotyphlops possess a positive relationship between body size and vertebral count, confirming the importance of pleomerism; however, giant taxa possessed fewer than expected vertebrae, indicating that a separate process underlies the evolution of gigantism in snakes. The lack of correlation between body size and vertebral number in giant taxa demonstrates a dissociation of segment production in early development from somatic growth during maturation, indicating that gigantism is achieved by modifying development at a different stage from that normally selected for changes in body size.
Naylor, Richard W; Dodd, Rachel C; Davidson, Alan J
2016-10-19
The nephron is the functional unit of the kidney and is divided into distinct proximal and distal segments. The factors determining nephron segment size are not fully understood. In zebrafish, the embryonic kidney has long been thought to differentiate in situ into two proximal tubule segments and two distal tubule segments (distal early; DE, and distal late; DL) with little involvement of cell movement. Here, we overturn this notion by performing lineage-labelling experiments that reveal extensive caudal movement of the proximal and DE segments and a concomitant compaction of the DL segment as it fuses with the cloaca. Laser-mediated severing of the tubule, such that the DE and DL are disconnected or that the DL and cloaca do not fuse, results in a reduction in tubule cell proliferation and significantly shortens the DE segment while the caudal movement of the DL is unaffected. These results suggest that the DL mechanically pulls the more proximal segments, thereby driving both their caudal extension and their proliferation. Together, these data provide new insights into early nephron morphogenesis and demonstrate the importance of cell movement and proliferation in determining initial nephron segment size.
Hatipoglu, Nuh; Bilgin, Gokhan
2017-10-01
Among the many computerized methods for cell detection, segmentation, and classification in digital histopathology that have recently emerged, the task of cell segmentation remains a chief problem for image processing in designing computer-aided diagnosis (CAD) systems. In research and diagnostic studies on cancer, pathologists can use CAD systems as second readers to analyze high-resolution histopathological images. Since cell detection and segmentation are critical for cancer grade assessments, cellular and extracellular structures should primarily be extracted from histopathological images. In response, we sought to identify a useful cell segmentation approach for histopathological images that uses not only prominent deep learning algorithms (i.e., convolutional neural networks, stacked autoencoders, and deep belief networks), but also spatial relationships, information about which is critical for achieving better cell segmentation results. To that end, we collected cellular and extracellular samples from histopathological images by windowing in small patches of various sizes. In the experiments, the segmentation accuracies of the methods used improved as the window sizes increased, due to the addition of local spatial and contextual information. Once we compared the effects of training sample size and the influence of window size, the results revealed that the deep learning algorithms, especially convolutional neural networks and partly stacked autoencoders, performed better than conventional methods in cell segmentation.
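The windowing step described above, collecting patches of several sizes centred on a pixel so a classifier sees both local detail and wider context, can be sketched as below. This is a pure-Python illustration on a 2D list; real pipelines use array libraries, and the patch sizes are assumed values.

```python
# Sketch: extract square patches of increasing size around one pixel,
# providing multiscale context for per-pixel classification.

def patches(image, row, col, sizes=(3, 5, 7)):
    """image: 2D list; returns one (s x s) patch per requested odd size."""
    out = []
    for s in sizes:
        h = s // 2
        out.append([[image[r][c] for c in range(col - h, col + h + 1)]
                    for r in range(row - h, row + h + 1)])
    return out
```

Border pixels need padding or clamping before this is applied image-wide; that handling is omitted here.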
Comparative evaluation of distributed-collector solar thermal electric power plants
NASA Technical Reports Server (NTRS)
Fujita, T.; El Gabalawi, N.; Herrera, G. G.; Caputo, R. S.
1978-01-01
Distributed-collector solar thermal-electric power plants are compared by projecting power plant economics of selected systems to the 1990-2000 timeframe. The approach taken is to evaluate the performance of the selected systems under the same weather conditions. Capital and operational costs are estimated for each system. Energy costs are calculated for different plant sizes based on the plant performance and the corresponding capital and maintenance costs. Optimum systems are then determined as the systems with the minimum energy costs for a given load factor. The optimum system is comprised of the best combination of subsystems which give the minimum energy cost for every plant size. Sensitivity analysis is done around the optimum point for various plant parameters.
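The selection step described above, computing an energy cost from capital and maintenance costs for each candidate plant and taking the minimum, can be sketched as below. The fixed charge rate and the candidate numbers are illustrative assumptions, not values from the study.

```python
# Sketch: levelized energy cost per candidate plant configuration,
# cost = (annualized capital + annual O&M) / annual energy delivered,
# with the optimum system being the minimum-cost candidate.

def energy_cost(capital, fixed_charge_rate, om_cost, annual_energy_kwh):
    return (capital * fixed_charge_rate + om_cost) / annual_energy_kwh

def optimum_plant(candidates, fcr=0.15):
    """candidates: name -> (capital $, annual O&M $, annual energy kWh)."""
    def cost(name):
        capital, om, energy = candidates[name]
        return energy_cost(capital, fcr, om, energy)
    return min(candidates, key=cost)
```

Repeating this over a sweep of plant sizes and load factors reproduces the study's notion of an optimum system per plant size.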
Filová, Elena; Suchý, Tomáš; Sucharda, Zbyněk; Šupová, Monika; Žaloudková, Margit; Balík, Karel; Lisá, Věra; Šlouf, Miroslav; Bačáková, Lucie
2014-01-01
Hydroxyapatite (HA) is considered to be a bioactive material that favorably influences the adhesion, growth, and osteogenic differentiation of osteoblasts. To optimize the cell response on the hydroxyapatite composite, it is desirable to assess the optimum concentration and also the optimum particle size. The aim of our study was to prepare composite materials made of polydimethylsiloxane, polyamide, and nano-sized (N) or micro-sized (M) HA, with an HA content of 0%, 2%, 5%, 10%, 15%, 20%, 25% (v/v) (referred to as N0–N25 or M0–M25), and to evaluate them in vitro in cultures with human osteoblast-like MG-63 cells. For clinical applications, fast osseointegration of the implant into the bone is essential. We observed the greatest initial cell adhesion on composites M10 and N5. Nano-sized HA supported cell growth, especially during the first 3 days of culture. On composites with micro-size HA (2%–15%), MG-63 cells reached the highest densities on day 7. Samples M20 and M25, however, were toxic for MG-63 cells, although these composites supported the production of osteocalcin in these cells. On N2, a higher concentration of osteopontin was found in MG-63 cells. For biomedical applications, the concentration range of 5%–15% (v/v) nano-size or micro-size HA seems to be optimum. PMID:25125978
Comparison of parameters affecting GNP-loaded choroidal melanoma dosimetry; Monte Carlo study
NASA Astrophysics Data System (ADS)
Sharabiani, Marjan; Asadi, Somayeh; Barghi, Amir Rahnamai; Vaezzadeh, Mehdi
2018-04-01
The current study reports the results of tumor dosimetry in the presence of gold nanoparticles (GNPs) of different sizes and concentrations. Due to the limited number of works carried out on the brachytherapy of choroidal melanoma in combination with GNPs, this study was performed to determine the optimum size and concentration of GNPs which contribute the highest dose deposition in the tumor region, using two phantom test cases, namely a water phantom and a full Monte Carlo model of the human eye. Both the water and human eye phantoms were simulated with the MCNP5 code. Tumor dosimetry was performed for a typical point photon source with an energy of 0.38 MeV as a high-energy source and a 103Pd brachytherapy source with an average energy of 0.021 MeV as a low-energy source in the water phantom and eye phantom, respectively. Such dosimetry was done for different sizes and concentrations of GNPs. For all of the diameters, an increase in the concentration of GNPs resulted in an increase in the dose deposited in the region of interest. At a given concentration, GNPs with larger diameters contributed more dose to the tumor region, which was more pronounced using the eye phantom. A size of 100 nm was reported as the optimum in order to achieve the highest energy deposition within the target. This work investigated the optimum parameters affecting macroscopic dose enhancement in GNP-aided brachytherapy of choroidal melanoma. The current work also has implications for using low-energy photon sources in the presence of GNPs to acquire the highest dose enhancement. This study was conducted with four different sizes and concentrations of GNPs. Considering the sensitivity of human eye tissue, a comprehensive study over a wide range of sizes and concentrations is required in order to report the precise optimum parameters affecting radiosensitivity.
Preparation and Physical Properties of Segmented Thermoelectric YBa2Cu3O7-x -Ca3Co4O9 Ceramics
NASA Astrophysics Data System (ADS)
Wannasut, P.; Keawprak, N.; Jaiban, P.; Watcharapasorn, A.
2018-01-01
Segmented thermoelectric ceramics are now well known for their high conversion efficiency and are currently being investigated in both basic and applied energy research. In this work, the successful preparation of the segmented thermoelectric YBa2Cu3O7-x -Ca3Co4O9 (YBCO-CCO) ceramic by the hot pressing method and a study of its physical properties are presented. Under the optimum hot pressing conditions of 800 °C temperature, 1-hour holding time, and 1-ton load, the segmented YBCO-CCO sample showed two strongly connected layers with a relative density of about 96%. The X-ray diffraction (XRD) patterns indicated that each segment showed a pure phase corresponding to its respective composition. Scanning electron microscopy (SEM) results confirmed the sharp interface and good adhesion between the YBCO and CCO layers. Although the chemical analysis indicated limited inter-layer diffusion near the interface, some elemental diffusion at the boundary is expected to be the source of this strong bonding.
NASA Astrophysics Data System (ADS)
Castilla, G.
2004-09-01
Landcover maps typically represent the territory as a mosaic of contiguous units (polygons) that are assumed to correspond to geographic entities (e.g., lakes, forests, or villages). They may also be viewed as representing a particular level of a landscape hierarchy, where each polygon is a holon: an object made of subobjects and part of a superobject. The focal level portrayed in the map is distinguished from other levels by the average size of the objects compounding it. Moreover, the focal level is bounded by the minimum size that objects of this level are supposed to have. Based on this framework, we have developed a segmentation method that defines a partition on a multiband image such that (i) the mean size of segments is close to the one specified; (ii) each segment exceeds the required minimum size; and (iii) the internal homogeneity of segments is maximal given the size constraints. This paper briefly describes the method, focusing on its region-merging stage. The most distinctive feature of the latter is that, while the merging sequence is ordered by increasing dissimilarity as in conventional methods, there is no need to define a threshold on the dissimilarity measure between adjacent segments.
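The region-merging stage described above, ordered by increasing dissimilarity but stopped by size constraints rather than a dissimilarity threshold, can be sketched in one dimension. This is an illustrative simplification (1-D signal, mean-difference dissimilarity), not the paper's multiband implementation.

```python
# 1-D sketch: merge the most similar pair of adjacent segments repeatedly,
# until every segment meets the minimum size and the mean segment size
# reaches the target. No dissimilarity threshold is ever set.

def merge_regions(values, min_size, mean_size):
    segs = [[v] for v in values]                 # start from single pixels
    def dissim(a, b):
        return abs(sum(a) / len(a) - sum(b) / len(b))
    while (min(len(s) for s in segs) < min_size or
           len(values) / len(segs) < mean_size):
        if len(segs) == 1:
            break
        i = min(range(len(segs) - 1),
                key=lambda k: dissim(segs[k], segs[k + 1]))
        segs[i:i + 2] = [segs[i] + segs[i + 1]]  # merge the closest pair
    return segs
```

Because merging always takes the globally most similar adjacent pair, homogeneity is kept maximal given the size constraints, which is exactly the role the stopping rule plays in the method.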
MPEG-4 ASP SoC receiver with novel image enhancement techniques for DAB networks
NASA Astrophysics Data System (ADS)
Barreto, D.; Quintana, A.; García, L.; Callicó, G. M.; Núñez, A.
2007-05-01
This paper presents a system for real-time video reception in low-power mobile devices using Digital Audio Broadcast (DAB) technology for transmission. A demo receiver terminal is designed on an FPGA platform using the Advanced Simple Profile (ASP) MPEG-4 standard for video decoding. In order to meet the demanding DAB requirements, the bandwidth of the encoded sequence must be drastically reduced. To this end, a pre-processing stage is performed prior to the MPEG-4 coding stage. It consists, first, of a segmentation phase according to motion and texture, based on Principal Component Analysis (PCA) of the input video sequence, and, second, of a down-sampling phase that depends on the segmentation results. As a result of the segmentation task, a set of texture and motion maps is obtained. These motion and texture maps are also included in the bit-stream as user-data side-information and are therefore known to the receiver. For all bit-rates, the whole encoder/decoder system proposed in this paper exhibits higher image visual quality than the alternative encoding/decoding method, assuming equal image sizes. A complete analysis of both techniques has also been performed to provide the optimum motion and texture maps for the global system, which has been finally validated on a variety of video sequences. Additionally, an optimal HW/SW partition for the MPEG-4 decoder has been studied and implemented on a programmable logic device with an embedded ARM9 processor. Simulation results show that a throughput of 15 QCIF frames per second can be achieved with a low-area and low-power implementation.
Segmentation of the Speaker's Face Region with Audiovisual Correlation
NASA Astrophysics Data System (ADS)
Liu, Yuyu; Sato, Yoichi
The ability to find the speaker's face region in a video is useful for various applications. In this work, we develop a novel technique to find this region within different time windows which is robust against changes of view, scale, and background. The main thrust of our technique is to integrate audiovisual correlation analysis into a video segmentation framework. We analyze the audiovisual correlation locally by computing the quadratic mutual information between our audiovisual features. The computation of quadratic mutual information is based on probability density functions estimated by kernel density estimation with adaptive kernel bandwidth. The results of this audiovisual correlation analysis are incorporated into graph cut-based video segmentation to obtain a globally optimal extraction of the speaker's face region. The setting of any heuristic threshold in this segmentation is avoided by learning the correlation distributions of speaker and background via expectation maximization. Experimental results demonstrate that our method can detect the speaker's face region accurately and robustly for different views, scales, and backgrounds.
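The quadratic mutual information used above can be sketched in its discrete (Euclidean-distance) form, I_ED = Σ_xy (P(x, y) − P(x)P(y))², which is zero when the audio and visual features are independent and positive when they co-vary. A joint histogram stands in here for the adaptive-bandwidth kernel density estimate the paper uses.

```python
# Sketch: quadratic mutual information between two discrete feature streams,
# estimated from empirical (histogram) probabilities.
from collections import Counter

def quadratic_mi(xs, ys):
    n = len(xs)
    pxy = Counter(zip(xs, ys))            # joint counts
    px, py = Counter(xs), Counter(ys)     # marginal counts
    return sum((pxy[(x, y)] / n - px[x] * py[y] / n ** 2) ** 2
               for x in px for y in py)
```

In the segmentation framework, this score is computed per region and time window, and high-QMI regions are favored as belonging to the speaker.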
NASA Astrophysics Data System (ADS)
Liu, Qiang; Chattopadhyay, Aditi
2000-06-01
Aeromechanical stability plays a critical role in helicopter design, and lead-lag damping is crucial to this design. In this paper, the use of segmented constrained damping layer (SCL) treatment and composite tailoring is investigated for improved rotor aeromechanical stability using a formal optimization technique. The principal load-carrying member in the rotor blade is represented by a composite box beam, of arbitrary thickness, with surface-bonded SCLs. A comprehensive theory is used to model the smart box beam. A ground resonance analysis model and an air resonance analysis model are implemented in the rotor blade built around the composite box beam with SCLs. The Pitt-Peters dynamic inflow model is used in the air resonance analysis under hover conditions. A hybrid optimization technique is used to investigate the optimum design of the composite box beam with surface-bonded SCLs for improved damping characteristics. Parameters such as the stacking sequence of the composite laminates and the placement of the SCLs are used as design variables. Detailed numerical studies are presented for the aeromechanical stability analysis. It is shown that the optimum blade design yields a significant increase in rotor lead-lag regressive modal damping compared to the initial system.
Optimum Particle Size for Gold-Catalyzed CO Oxidation
2018-01-01
The structure sensitivity of gold-catalyzed CO oxidation is presented by analyzing in detail the dependence of the CO oxidation rate on particle size. Clusters with fewer than 14 gold atoms adopt a planar structure, whereas larger ones adopt a three-dimensional structure. The CO and O2 adsorption properties depend strongly on particle structure and size. All of the reaction barriers relevant to CO oxidation display linear scaling relationships with CO and O2 binding strengths as the main reactivity descriptors. Planar and three-dimensional gold clusters exhibit different linear scaling relationships due to different surface topologies and different coordination numbers of the surface atoms. On the basis of these linear scaling relationships, first-principles microkinetics simulations were conducted to determine the CO oxidation rates and possible rate-determining steps of Au particles. Planar Au9 and three-dimensional Au79 clusters present the highest CO oxidation rates for planar and three-dimensional clusters, respectively. The planar Au9 cluster is much more active than the optimum Au79 cluster. A common feature of optimum CO oxidation performance is intermediate binding strengths of CO and O2, resulting in intermediate coverages of CO, O2, and O. Both of these optimum particles fall short of the maximum Sabatier performance, indicating that there is still room for improvement of gold catalysts for CO oxidation. PMID:29707098
Effects of counterion size and backbone rigidity on the dynamics of ionic polymer melts and glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fu, Yao; Bocharova, Vera; Ma, Mengze
Backbone rigidity, counterion size and the static dielectric constant significantly affect the glass transition temperature, the segmental relaxation time and the decoupling between counterion and segmental dynamics.
Carbon fiber reinforcements for sheet molding composites
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ozcan, Soydan; Paulauskas, Felix L.
A method of processing a carbon fiber tow includes the steps of providing a carbon fiber tow made of a plurality of carbon filaments, depositing a sizing composition at spaced-apart sizing sites along a length of the tow, leaving unsized interstitial regions of the tow, and cross-cutting the tow into a plurality of segments. Each segment includes at least a portion of one of the sizing sites and at least a portion of at least one of the unsized regions of the tow, the unsized region including an end portion of the segment.
Impact of vane size and separation on radiometric forces for microactuation
NASA Astrophysics Data System (ADS)
Gimelshein, Natalia; Gimelshein, Sergey; Ketsdever, Andrew; Selden, Nathaniel
2011-04-01
A kinetic approach is used to study the feasibility of increasing the efficiency of microactuators that use radiometric force through etching holes in a single radiometer vane. It has been shown that a radiometer that consists of small vanes is capable of producing at least an order of magnitude larger force than a single-vane radiometer that takes up the same area. The optimum gap between the vanes is found to be slightly smaller than the vane size, with the optimum Knudsen number of about 0.05 based on the vane height.
DOE Office of Scientific and Technical Information (OSTI.GOV)
York, W.S.; Darvill, A.G.; Albersheim, P.
1984-06-01
Xyloglucan, isolated from the soluble extracellular polysaccharides of suspension-cultured sycamore (Acer pseudoplatanus) cells, was digested with an endo-β-1,4-glucanase purified from the culture fluid of Trichoderma viride. A nonasaccharide-rich Bio-Gel P-2 fraction of this digest inhibited 2,4-dichlorophenoxyacetic-acid-stimulated elongation of etiolated pea stem segments. The inhibitory activity of this oligosaccharide fraction exhibited a well-defined concentration optimum between 10⁻² and 10⁻¹ micrograms per milliliter. Another fraction of the same xyloglucan digest, rich in a structurally related heptasaccharide, did not, at similar concentrations, significantly inhibit the elongation. 11 references, 3 figures.
Pore size engineering applied to starved electrochemical cells and batteries
NASA Technical Reports Server (NTRS)
Abbey, K. M.; Thaller, L. H.
1982-01-01
To maximize performance in starved, multiplate cells, the cell design should rely on techniques which widen the volume tolerance characteristics. These involve engineering capillary pressure differences between the components of an electrochemical cell and using these forces to promote redistribution of electrolyte to the desired optimum values. This can be implemented in practice by prescribing pore size distributions for porous back-up plates, reservoirs, and electrodes. In addition, electrolyte volume management can be controlled by incorporating different pore size distributions into the separator. In a nickel/hydrogen cell, the separator must contain pores similar in size to the small pores of both the nickel and hydrogen electrodes in order to maintain an optimum conductive path for the electrolyte. The pore size distributions of all components should overlap in such a way as to prevent drying of the separator and/or flooding of the hydrogen electrode.
Incorporating scale into digital terrain analysis
NASA Astrophysics Data System (ADS)
Dragut, L. D.; Eisank, C.; Strasser, T.
2009-04-01
Digital Elevation Models (DEMs) and their derived terrain attributes are commonly used in soil-landscape modeling. Process-based terrain attributes meaningful to the soil properties of interest are sought to be produced through digital terrain analysis. Typically, standard 3 × 3 window-based algorithms are used for this purpose, thus tying the scale of the resulting layers to the spatial resolution of the available DEM. But this is likely to induce mismatches between the scale domains of the terrain information and the soil properties of interest, which further propagate biases in soil-landscape modeling. We have started developing a procedure to incorporate scale into digital terrain analysis for terrain-based environmental modeling (Drăguţ et al., in press). The workflow was exemplified on crop yield data. Terrain information was generalized into successive scale levels with focal statistics on increasing neighborhood sizes. The degree of association between each terrain derivative and the crop yield values was established iteratively for all scale levels through correlation analysis. The first peak of correlation indicated the scale level to be retained. While in a standard 3 × 3 window-based analysis mean curvature was one of the most poorly correlated terrain attributes, after generalization it turned into the best correlated variable. To illustrate the importance of scale, we compared the regression results of unfiltered and filtered mean curvature vs. crop yield. The comparison shows an improvement of R squared from a value of 0.01 when the curvature was not filtered to 0.16 when the curvature was filtered within a 55 × 55 m neighborhood. This indicates the optimum size of curvature information (scale) that influences soil fertility. We further used these results in an object-based image analysis environment to create terrain objects containing aggregated values of both terrain derivatives and crop yield.
Hence, we introduce terrain segmentation as an alternative method for generating scale levels in terrain-based environmental modeling. Based on segments, R squared improved up to a value of 0.47. Before integrating the procedure described above into a software application, a thorough comparison of the results of different generalization techniques across different datasets and terrain conditions is necessary. This is the subject of our ongoing research as part of the SCALA project (Scales and Hierarchies in Landform Classification). References: Drăguţ, L., Schauppenlehner, T., Muhar, A., Strobl, J. and Blaschke, T., in press. Optimization of scale and parametrization for terrain segmentation: an application to soil-landscape modeling, Computers & Geosciences.
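The scale-selection workflow above (generalize a terrain derivative with focal statistics at increasing neighborhood sizes, correlate each level with crop yield, and keep the scale at the correlation peak) can be sketched on synthetic data. The data and window sizes here are hypothetical; `scipy.ndimage.uniform_filter` plays the role of the focal mean.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(5)
shape = (100, 100)
broad = uniform_filter(rng.normal(size=shape), size=15)  # broad-scale terrain signal
curvature = broad + rng.normal(scale=0.05, size=shape)   # derivative with fine-scale noise
crop_yield = broad + rng.normal(scale=0.02, size=shape)  # yield tracks the broad scale

sizes = [1, 3, 5, 9, 15, 25, 41]                         # focal window sizes (cells)
cors = [np.corrcoef(uniform_filter(curvature, size=s).ravel(),
                    crop_yield.ravel())[0, 1] for s in sizes]
best = sizes[int(np.argmax(cors))]                       # scale at the correlation peak
```

Because the yield follows the broad-scale component, the unfiltered (size 1) curvature correlates worse than a generalized level, mirroring the 0.01 → 0.16 improvement reported above.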
Sampling for area estimation: A comparison of full-frame sampling with the sample segment approach
NASA Technical Reports Server (NTRS)
Hixson, M.; Bauer, M. E.; Davis, B. J. (Principal Investigator)
1979-01-01
The author has identified the following significant results. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Evaluation of four sampling schemes involving different numbers of samples and different sizes of sampling units shows that the precision of the wheat estimates increased as the segment size decreased and the number of segments increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to the size of the sampling unit.
Jabbari, Esmaiel; Sarvestani, Samaneh K.; Daneshian, Leily; Moeinzadeh, Seyedsina
2015-01-01
Introduction The growth and expression of cancer stem cells (CSCs) depend on many factors in the tumor microenvironment. The objective of this work was to investigate the effect of cancer cells’ tissue origin on the optimum matrix stiffness for CSC growth and marker expression in a model polyethylene glycol diacrylate (PEGDA) hydrogel without the interference of other factors in the microenvironment. Methods Human MCF7 and MDA-MB-231 breast carcinoma, HCT116 colorectal and AGS gastric carcinoma, and U2OS osteosarcoma cells were used. The cells were encapsulated in PEGDA gels with compressive moduli in the 2-70 kPa range and an optimized cell seeding density of 0.6 × 10⁶ cells/mL. Micropatterning was used to optimize the growth of encapsulated cells with respect to average tumorsphere size. The CSC sub-population of the encapsulated cells was characterized by cell number, tumorsphere size and number density, and mRNA expression of CSC markers. Results The optimum matrix stiffness for growth and marker expression of the CSC sub-population of cancer cells was 5 kPa for breast MCF7 and MDA-MB-231, 25 kPa for colorectal HCT116 and gastric AGS, and 50 kPa for bone U2OS cells. Conjugation of a CD44 binding peptide to the gel stopped tumorsphere formation by cancer cells from different tissue origins. The expression of YAP/TAZ transcription factors by the encapsulated cancer cells was highest at the optimum stiffness, indicating a link between the Hippo transducers and CSC growth. The optimum average tumorsphere size for CSC growth and marker expression was 50 μm. Conclusion The marker expression results suggest that the CSC sub-population of cancer cells resides within a niche with an optimum stiffness which depends on the cancer cells’ tissue origin. PMID:26168187
NASA Astrophysics Data System (ADS)
Raudah; Zulkifli
2018-03-01
The present research focuses on establishing the optimum conditions for converting coffee husk into a densified biomass fuel using starch as a binding agent. A Response Surface Methodology (RSM) approach using a Box-Behnken experimental design with three levels (-1, 0, and +1) was employed to obtain the optimum level for each parameter. The briquettes were produced by compressing the coffee husk-starch mixture in a piston and die assembly at a pressure of 2000 psi. Starch percentage, pyrolysis time, and particle size were the input parameters for the algorithm. A bomb calorimeter was used to determine the higher heating value (HHV) of the solid fuel. The results indicated that a combination of 34.71 mesh particle size, 110.93 min pyrolysis time, and 8% starch concentration gave the optimum variables. The HHV and density of the fuel were up to 5644.66 cal g⁻¹ and 0.7069 g cm⁻³, respectively. The study showed that further research should be conducted to improve the briquette density so that coffee husk can be converted into a commercial solid fuel to replace dependence on fossil fuels.
Grain-size considerations for optoelectronic multistage interconnection networks.
Krishnamoorthy, A V; Marchand, P J; Kiamilev, F E; Esener, S C
1992-09-10
This paper investigates, at the system level, the performance-cost trade-off between optical and electronic interconnects in an optoelectronic interconnection network. The specific system considered is a packet-switched, free-space optoelectronic shuffle-exchange multistage interconnection network (MIN). System bandwidth is used as the performance measure, while system area, system power, and system volume constitute the cost measures. A detailed design and analysis of a two-dimensional (2-D) optoelectronic shuffle-exchange routing network with variable grain size K is presented. The architecture permits the conventional 2 x 2 switches or grains to be generalized to larger K x K grain sizes by replacing optical interconnects with electronic wires without affecting the functionality of the system. Thus the system consists of log_K N optoelectronic stages interconnected with free-space K-shuffles. When K = N, the MIN consists of a single electronic stage with optical input-output. The system design uses an efficient 2-D VLSI layout and a single diffractive optical element between stages to provide the 2-D K-shuffle interconnection. Results indicate that there is an optimum range of grain sizes that provides the best performance per cost. For the specific VLSI/GaAs multiple quantum well technology and system architecture considered, grain sizes larger than 256 x 256 result in reduced performance, while grain sizes smaller than 16 x 16 have a high cost. For a network with 4096 channels, the useful range of grain sizes corresponds to approximately 250-400 electronic transistors per optical input-output channel. The effect of varying certain technology parameters such as the number of hologram phase levels, the modulator driving voltage, the minimum detectable power, and the VLSI minimum feature size on the optimum grain-size system is studied. For instance, results show that using four phase levels for the interconnection hologram is a good compromise for the cost functions mentioned above. As VLSI minimum feature sizes decrease, the optimum grain size increases, whereas, if optical interconnect performance in terms of the detector power or modulator driving voltage requirements improves, the optimum grain size may be reduced. Finally, several architectural modifications to the system, such as K x K contention-free switches and sorting networks, are investigated and optimized for grain size. Results indicate that system bandwidth can be increased, but at the price of reduced performance/cost. The optoelectronic MIN architectures considered thus provide a broad range of performance/cost alternatives and offer superior performance over purely electronic MINs.
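The existence of an interior optimum grain size can be illustrated with a toy performance/cost model. The constants and scalings below are hypothetical (they are not the paper's VLSI/GaAs figures): each of the log_K N stages pays a fixed optical I/O overhead, while electronic delay and transistor count grow with K.

```python
import math

# Toy performance/cost model for grain size K (hypothetical constants).
N = 4096                      # number of network channels
T_OPT, T_EL = 10.0, 0.1       # optical overhead / per-K electronic delay (a.u.)
C_OPT, C_EL = 100.0, 1.0      # optical I/O cost / per-K electronic cost (a.u.)

def perf_per_cost(K):
    stages = math.log2(N) / math.log2(K)     # log_K N optoelectronic stages
    delay = stages * (T_OPT + T_EL * K)      # intra-grain wires lengthen with K
    cost = stages * N * (C_OPT + C_EL * K)   # K x K grains ~ N*K transistors/stage
    return 1.0 / (delay * cost)              # bandwidth per unit cost

grains = [2 ** k for k in range(1, 13)]      # K = 2, 4, ..., 4096
best_K = max(grains, key=perf_per_cost)
```

Small K pays the optical overhead at many stages; large K pays quadratically growing electronic delay and wiring, so performance/cost peaks at an intermediate grain size, qualitatively matching the 16 x 16 to 256 x 256 range reported above.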
Fleet Sizing of Automated Material Handling Using Simulation Approach
NASA Astrophysics Data System (ADS)
Wibisono, Radinal; Ai, The Jin; Ratna Yuniartha, Deny
2018-03-01
Automated material handling tends to be chosen over human power for material handling on the production floor of manufacturing companies. One critical issue in implementing automated material handling is the design phase, which must ensure that material handling is efficient in terms of cost. Fleet sizing is one of the topics in this design phase. In this research, a simulation approach is used to solve the fleet sizing problem in flow shop production to ensure an optimum situation, meaning minimum flow time and maximum capacity on the production floor. A simulation approach is used because the flow shop can be modelled as a queuing network and the inter-arrival time does not follow an exponential distribution. The contribution of this research is therefore solving the fleet sizing problem with multiple objectives in flow shop production using a simulation approach with the ARENA software.
Segmentation-based L-filtering of speckle noise in ultrasonic images
NASA Astrophysics Data System (ADS)
Kofidis, Eleftherios; Theodoridis, Sergios; Kotropoulos, Constantine L.; Pitas, Ioannis
1994-05-01
We introduce segmentation-based L-filters, that is, filtering processes combining segmentation and (nonadaptive) optimum L-filtering, and use them for the suppression of speckle noise in ultrasonic (US) images. With the aid of a suitable modification of the learning vector quantizer self-organizing neural network, the image is segmented into regions of approximately homogeneous first-order statistics. For each such region a minimum mean-squared error L-filter is designed on the basis of a multiplicative noise model, using the histogram of grey values as an estimate of the parent distribution of the noisy observations and a suitable estimate of the original signal in the corresponding region. Thus, we obtain a bank of L-filters corresponding to and operating on different image regions. Simulation results on a simulated US B-mode image of a tissue-mimicking phantom are presented, which verify the superiority of the proposed method compared to a number of conventional filtering strategies in terms of a suitably defined signal-to-noise ratio measure and detection-theoretic performance measures.
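A minimum mean-squared error L-filter of the kind designed per region can be sketched as follows. This is a simplified illustration, not the authors' design procedure: instead of working from the grey-value histogram, it fits the weights applied to the order statistics by least squares on a simulated homogeneous region with multiplicative (speckle-like) noise; the Rayleigh noise model and window length are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
W = 9                                        # analysis window length
n = 5000
signal = np.full(n, 100.0)                   # homogeneous region, constant level
speckle = rng.rayleigh(scale=1.0, size=n) / np.sqrt(np.pi / 2)  # unit-mean noise
noisy = signal * speckle                     # multiplicative noise model

# Matrix of sorted observation windows: each row holds the order statistics
# of one sliding window.
idx = np.arange(n - W + 1)[:, None] + np.arange(W)[None, :]
X = np.sort(noisy[idx], axis=1)
target = signal[W // 2 : n - W // 2]         # ground truth at window centres

# MMSE L-filter weights: least-squares fit from order statistics to the truth.
coeffs, *_ = np.linalg.lstsq(X, target, rcond=None)

mse_l = np.mean((X @ coeffs - target) ** 2)
mse_mean = np.mean((X.mean(axis=1) - target) ** 2)  # plain moving average
```

Since the moving average is one particular weighting of the order statistics, the least-squares L-filter can only do at least as well on the training region.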
Cheng, Wei; Cai, Shu; Sun, Jia-yu; Xia, Chun-chao; Li, Zhen-lin; Chen, Yu-cheng; Zhong, Yao-zu
2015-05-01
To compare two sequences [single-shot true-FISP PSIR (single-shot PSIR) and segmented turbo-FLASH PSIR (segmented PSIR)] for the quantification of myocardial infarct size at 3.0 T MRI. 38 patients with clinically confirmed myocardial infarction underwent comprehensive gadolinium-enhanced cardiac MRI on a 3.0 T system (Trio, Siemens). Myocardial delayed enhancement (MDE) images were acquired with the single-shot PSIR and segmented PSIR sequences separately 12-20 min after gadopentetate dimeglumine injection (0.15 mmol/kg). The quality of the MDE images was analysed by experienced physicians. Signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were compared between the two techniques. Myocardial infarct size was quantified automatically by dedicated software (Q-mass, Medis). All subjects were scanned successfully on the 3.0 T system. No significant difference was found between the two sequences in the SNR and CNR of the images (P>0.05), nor in the total myocardial volume (P>0.05). Furthermore, there was still no difference in the infarct size [single-shot PSIR (30.87 ± 15.72) mL, segmented PSIR (29.26 ± 14.07) mL] or infarct ratio [single-shot PSIR (22.94% ± 10.94%), segmented PSIR (20.75% ± 8.78%)] between the two sequences (P>0.05). However, the average acquisition time of single-shot PSIR (21.4 s) was much shorter than that of segmented PSIR (380 s). Single-shot PSIR is equal to segmented PSIR in detecting myocardial infarct size with less acquisition time, which is valuable for clinical application and further research.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of the linear and non-linear regression methods for selecting the optimum isotherm was made using the experimental equilibrium data of basic red 9 sorption by activated carbon. The r² was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r²), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), the average relative error (ARE), the sum of the errors squared (ERRSQ) and the sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and the predicted isotherms. In the case of the three-parameter isotherms, r² was found to be the best error function to minimize the error distribution structure between the experimental equilibrium data and the theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of the experimental data while selecting the optimum isotherm. A coefficient of non-determination, K², was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
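The linear vs. non-linear fitting contrast can be illustrated with the two-parameter Langmuir isotherm on synthetic data. The parameter values and noise level are hypothetical; the non-linear fit uses `scipy.optimize.curve_fit` on qe = qm·b·Ce/(1 + b·Ce), and the linear fit uses the Hanes-Woolf linearization Ce/qe = Ce/qm + 1/(qm·b).

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, b):
    """Two-parameter Langmuir isotherm: qe = qm * b * Ce / (1 + b * Ce)."""
    return qm * b * ce / (1.0 + b * ce)

rng = np.random.default_rng(7)
qm_true, b_true = 50.0, 0.2                  # hypothetical sorption parameters
ce = np.linspace(0.5, 100.0, 25)             # equilibrium concentrations
qe = langmuir(ce, qm_true, b_true) * (1 + 0.02 * rng.normal(size=ce.size))

# Non-linear regression directly on the isotherm equation.
(qm_nl, b_nl), _ = curve_fit(langmuir, ce, qe, p0=(30.0, 0.1))

# Linearized (Hanes-Woolf) form: Ce/qe = Ce/qm + 1/(qm*b).
slope, intercept = np.polyfit(ce, ce / qe, 1)
qm_lin, b_lin = 1.0 / slope, slope / intercept

# r^2 of the non-linear fit.
resid = qe - langmuir(ce, qm_nl, b_nl)
r2 = 1.0 - np.sum(resid ** 2) / np.sum((qe - qe.mean()) ** 2)
```

The linearization distorts the error structure (each point's noise is transformed through Ce/qe), which is why the study above favors non-linear regression for parameter estimation.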
A Search for the Optimum Lithium Rich Layered Metal Oxide Cathode Material for Li-Ion Batteries
Ates, Mehmet Nurullah; Mukerjee, Sanjeev; Abraham, K. M.
2015-01-01
We report the results of a comprehensive study of the relationship between electrochemical performance in Li cells and chemical composition of a series of Li-rich layered metal oxides of the general formula xLi2MnO3 · (1-x)LiMn0.33Ni0.33Co0.33O2 in which x = 0.1, 0.2, 0.3, 0.5 or 0.7, synthesized using the same method. In order to identify the cathode material having the optimum Li cell performance we first varied the ratio between the Li2MnO3 and LiMO2 segments of the composite oxides while maintaining the same metal ratio residing within their LiMO2 portions. The materials with the overall composition 0.5Li2MnO3 · 0.5LiMO2 containing 0.5 mole of Li2MnO3 per mole of the composite metal oxide were found to be the optimum in terms of electrochemical performance. The electrochemical properties of these materials were further tuned by changing the relative amounts of Mn, Ni and Co in the LiMO2 segment to produce xLi2MnO3 · (1-x)LiMn0.50Ni0.35Co0.15O2 with enhanced capacities and rate capabilities. The rate capability of the lithium-rich compound in which x = 0.3 was further increased by preparing electrodes with about 2 weight-percent multiwall carbon nanotube in the electrode. Lithium cells prepared with such electrodes were cycled at the 4C rate with little fade in capacity for over one hundred cycles. PMID:26478598
Designing image segmentation studies: Statistical power, sample size and reference standard quality.
Gibson, Eli; Hu, Yipeng; Huisman, Henkjan J; Barratt, Dean C
2017-12-01
Segmentation algorithms are typically evaluated by comparison to an accepted reference standard. The cost of generating accurate reference standards for medical image segmentation can be substantial. Since the study cost and the likelihood of detecting a clinically meaningful difference in accuracy both depend on the size and on the quality of the study reference standard, balancing these trade-offs supports the efficient use of research resources. In this work, we derive a statistical power calculation that enables researchers to estimate the appropriate sample size to detect clinically meaningful differences in segmentation accuracy (i.e. the proportion of voxels matching the reference standard) between two algorithms. Furthermore, we derive a formula to relate reference standard errors to their effect on the sample sizes of studies using lower-quality (but potentially more affordable and practically available) reference standards. The accuracy of the derived sample size formula was estimated through Monte Carlo simulation, demonstrating, with 95% confidence, a predicted statistical power within 4% of simulated values across a range of model parameters. This corresponds to sample size errors of less than 4 subjects and errors in the detectable accuracy difference less than 0.6%. The applicability of the formula to real-world data was assessed using bootstrap resampling simulations for pairs of algorithms from the PROMISE12 prostate MR segmentation challenge data set. The model predicted the simulated power for the majority of algorithm pairs within 4% for simulated experiments using a high-quality reference standard and within 6% for simulated experiments using a low-quality reference standard. A case study, also based on the PROMISE12 data, illustrates using the formulae to evaluate whether to use a lower-quality reference standard in a prostate segmentation study. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
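The flavor of such a power calculation can be sketched with the standard normal-approximation formula for comparing two proportions (a simplification of the paper's derivation, which additionally models reference standard quality): n per group = (z₁₋α/₂ + z₁₋β)² · [p₁(1−p₁) + p₂(1−p₂)] / (p₁ − p₂)².

```python
import math
from scipy.stats import norm

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Per-group sample size for a two-sided z-test comparing two accuracy
    proportions (normal approximation, perfect reference standard assumed)."""
    z_a = norm.ppf(1 - alpha / 2)            # significance threshold
    z_b = norm.ppf(power)                    # power threshold
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)
```

For example, detecting 0.80 vs. 0.90 accuracy at 80% power needs 197 subjects per group, while 0.80 vs. 0.85 needs 903: halving the detectable difference roughly quadruples the study size, which is why reference standard quality matters for study cost.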
Luo, Xiaoming; Cao, Juhang; He, Limin; Wang, Hongping; Yan, Haipeng; Qin, Yahua
2017-01-01
The coalescence process of binary droplets in oil under ultrasonic standing waves was investigated with high-speed photography. Three motion modes of binary droplets in the coalescence process were illustrated: (1) slight translational oscillation; (2) sinusoidal translational oscillation; (3) migration along with acoustic streaming. To reveal the droplet coalescence mechanisms, the influence of the main factors (such as acoustic intensity, droplet size, viscosity and interfacial tension) on the motion and coalescence of binary droplets was studied under ultrasonic standing waves. Results indicate that the shortest coalescence time is achieved when the binary droplets show sinusoidal translational oscillation. The corresponding acoustic intensity in this case is the optimum acoustic intensity. Under the optimum acoustic intensity, a decrease in drop size shortens the coalescence time by enhancing the oscillation of the binary droplets. Moreover, there is an optimum interfacial tension that achieves the shortest coalescence time. Copyright © 2016 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Hajela, P.; Chen, J. L.
1986-01-01
The present paper describes an approach for the optimum sizing of single and joined wing structures that is based on representing the built-up finite element model of the structure by an equivalent beam model. The low order beam model is computationally more efficient in an environment that requires repetitive analysis of several trial designs. The design procedure is implemented in a computer program that requires geometry and loading data typically available from an aerodynamic synthesis program, to create the finite element model of the lifting surface and an equivalent beam model. A fully stressed design procedure is used to obtain rapid estimates of the optimum structural weight for the beam model for a given geometry, and a qualitative description of the material distribution over the wing structure. The synthesis procedure is demonstrated for representative single wing and joined wing structures.
Image Information Mining Utilizing Hierarchical Segmentation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Marchisio, Giovanni; Koperski, Krzysztof; Datcu, Mihai
2002-01-01
The Hierarchical Segmentation (HSEG) algorithm is an approach for producing high quality, hierarchically related image segmentations. The VisiMine image information mining system utilizes clustering and segmentation algorithms for reducing visual information in multispectral images to a manageable size. The project discussed herein seeks to enhance the VisiMine system through incorporating hierarchical segmentations from HSEG into the VisiMine system.
Users manual for the IMA program
NASA Technical Reports Server (NTRS)
Williams, D. F.
1991-01-01
The Impulsive Mission Analysis (IMA) computer program provides a user-friendly means of designing a complete Earth-orbital mission profile using an 80386-based microcomputer. The IMA program produces a trajectory summary, an output file for use by the new Simplex Computation of Optimum Orbital Trajectories (SCOOT) program, and several graphics, including ground tracks on a world map, altitude profiles, relative motion plots, and sunlight/communication timelines. The user can design missions using any combination of three basic types of mission segments: double co-elliptic rendezvous, payload delivery, and payload de-orbit/spacecraft recovery. Each mission segment is divided into one or more transfers, and each transfer is divided into one or more legs, each leg consisting of a coast arc followed by a burn arc.
A shape-based segmentation method for mobile laser scanning point clouds
NASA Astrophysics Data System (ADS)
Yang, Bisheng; Dong, Zhen
2013-07-01
Segmentation of mobile laser point clouds of urban scenes into objects is an important step for post-processing (e.g., interpretation) of point clouds. Point clouds of urban scenes contain numerous objects with significant size variability, complex and incomplete structures, and holes or variable point densities, raising great challenges for the segmentation of mobile laser point clouds. This paper addresses these challenges by proposing a shape-based segmentation method. The proposed method first calculates the optimal neighborhood size of each point to derive the geometric features associated with it, and then classifies the point clouds according to the geometric features using support vector machines (SVMs). Second, a set of rules is defined to segment the classified point clouds, and a similarity criterion for segments is proposed to overcome over-segmentation. Finally, the segmentation output is merged based on topological connectivity into a meaningful geometrical abstraction. The proposed method has been tested on point clouds of two urban scenes obtained by different mobile laser scanners. The results show that the proposed method segments large-scale mobile laser point clouds with good accuracy and at an acceptable computational cost, and that it segments pole-like objects particularly well.
Commowick, Olivier; Warfield, Simon K
2010-01-01
In order to evaluate the quality of segmentations of an image and assess intra- and inter-expert variability in segmentation performance, an Expectation Maximization (EM) algorithm for Simultaneous Truth And Performance Level Estimation (STAPLE) was recently developed. This algorithm, originally presented for segmentation validation, has since been used for many applications, such as atlas construction and decision fusion. However, the manual delineation of structures of interest is a very time consuming and burdensome task. Further, as the time required and burden of manual delineation increase, the accuracy of the delineation is decreased. Therefore, it may be desirable to ask the experts to delineate only a reduced number of structures or the segmentation of all structures by all experts may simply not be achieved. Fusion from data with some structures not segmented by each expert should be carried out in a manner that accounts for the missing information. In other applications, locally inconsistent segmentations may drive the STAPLE algorithm into an undesirable local optimum, leading to misclassifications or misleading experts performance parameters. We present a new algorithm that allows fusion with partial delineation and which can avoid convergence to undesirable local optima in the presence of strongly inconsistent segmentations. The algorithm extends STAPLE by incorporating prior probabilities for the expert performance parameters. This is achieved through a Maximum A Posteriori formulation, where the prior probabilities for the performance parameters are modeled by a beta distribution. We demonstrate that this new algorithm enables dramatically improved fusion from data with partial delineation by each expert in comparison to fusion with STAPLE. PMID:20879379
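The EM machinery underlying STAPLE can be sketched for the basic binary case, without the MAP extension described above. This is a minimal illustration under assumed rater performances and a fixed scalar prior, not the authors' implementation: each iteration alternates a posterior estimate of the true label (E-step) with re-estimation of each rater's sensitivity and specificity (M-step).

```python
import numpy as np

def staple_binary(D, prior=0.5, n_iter=30):
    """Minimal binary STAPLE. D is a (raters, voxels) array of 0/1 decisions.
    Returns the posterior P(true label = 1) per voxel and each rater's
    estimated sensitivity p and specificity q."""
    R, V = D.shape
    p = np.full(R, 0.9)                      # initial sensitivity guesses
    q = np.full(R, 0.9)                      # initial specificity guesses
    for _ in range(n_iter):
        # E-step: posterior of the true label given all raters' decisions.
        a = prior * np.prod(np.where(D == 1, p[:, None], 1 - p[:, None]), axis=0)
        b = (1 - prior) * np.prod(np.where(D == 0, q[:, None], 1 - q[:, None]), axis=0)
        W = a / (a + b)
        # M-step: re-estimate each rater's performance parameters.
        p = (W[None, :] * (D == 1)).sum(axis=1) / W.sum()
        q = ((1 - W)[None, :] * (D == 0)).sum(axis=1) / (1 - W).sum()
    return W, p, q

# Synthetic experiment: three raters with known (assumed) performances.
rng = np.random.default_rng(3)
truth = (rng.random(2000) < 0.4).astype(int)

def simulate_rater(sens, spec):
    r = rng.random(truth.size)
    return np.where(truth == 1, (r < sens).astype(int), (r >= spec).astype(int))

D = np.vstack([simulate_rater(0.95, 0.95),
               simulate_rater(0.90, 0.85),
               simulate_rater(0.70, 0.75)])
W, p_est, q_est = staple_binary(D)
fused = (W > 0.5).astype(int)
```

The MAP extension in the paper replaces the unconstrained M-step with one that includes a beta prior on (p, q), which is what prevents the undesirable local optima discussed above.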
Michael E. Akresh; Daniel R. Ardia; David I. King
2017-01-01
Maintaining avian eggs and young at optimum temperatures for development can increase hatching success and nestling condition, but doing so places energetic demands on the parents. Bird nests, which often provide a structure to safely hold the eggs and nestlings and protect them from predators, can additionally be designed to help maintain eggs' optimum...
Word Family Size and French-Speaking Children's Segmentation of Existing Compounds
ERIC Educational Resources Information Center
Nicoladis, Elena; Krott, Andrea
2007-01-01
The family size of the constituents of compound words, or the number of compounds sharing the constituents, affects English-speaking children's compound segmentation. This finding is consistent with a usage-based theory of language acquisition, whereby children learn abstract underlying linguistic structure through their experience with particular…
NASA Astrophysics Data System (ADS)
Feng, Chenchen; Jiao, Zhengbo; Li, Shaopeng; Zhang, Yan; Bi, Yingpu
2015-12-01
We demonstrate a facile method for the rational fabrication of pore-size controlled nanoporous BiVO4 photoanodes, and confirmed that the optimum pore-size distributions could effectively absorb visible light through light diffraction and confinement functions. Furthermore, in situ X-ray photoelectron spectroscopy (XPS) reveals more efficient photoexcited electron-hole separation than conventional particle films, induced by light confinement and rapid charge transfer in the inter-crossed worm-like structures. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr06584d
Bailey, T.A.; Bradford, K.; Bland, C.E.
1990-01-01
Because the infective stage of most mycoses of aquatic organisms is the zoospore, we attempted to establish optimum conditions under which zoospores could be produced for use in antifungal testing. Optimum sporulation time, incubation time, inoculum size, and growth temperature were determined for each of two saprolegniaceous fungi, Achlya flagellata Coker and Saprolegnia hypogyna (Pringsheim) de Bary. Both species produced the largest number of zoospores after 18 hours (51.7 spores/ml for A. flagellata and 848.0 spores/ml for S. hypogyna), and yielded maximum growth after 48 hours at 22 °C. The recommended test inoculum size for S. hypogyna (5,600 spores/ml) was nearly three times that for A. flagellata (2,000 spores/ml).
An automatic optimum kernel-size selection technique for edge enhancement
Chavez, Pat S.; Bauer, Brian P.
1982-01-01
Edge enhancement is a technique that can be considered, to a first order, a correction for the modulation transfer function of an imaging system. Digital imaging systems sample a continuous function at discrete intervals so that high-frequency information cannot be recorded at the same precision as lower frequency data. Because of this, fine detail or edge information in digital images is lost. Spatial filtering techniques can be used to enhance the fine detail information that does exist in the digital image, but the filter size is dependent on the type of area being processed. A technique has been developed by the authors that uses the horizontal first difference to automatically select the optimum kernel-size that should be used to enhance the edges that are contained in the image.
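The selection idea above can be sketched as a simple thresholding of the mean absolute horizontal first difference: high local activity calls for a small kernel, smooth areas for a large one. The candidate sizes and thresholds below are hypothetical illustrations, not the authors' calibrated mapping.

```python
import numpy as np

def optimum_kernel_size(image, sizes=(3, 5, 7, 9), thresholds=(20.0, 10.0, 5.0)):
    """Pick an edge-enhancement kernel size from the mean absolute
    horizontal first difference of the image (sketch; the thresholds
    here are hypothetical, not the authors' calibrated values)."""
    activity = np.abs(np.diff(image.astype(float), axis=1)).mean()
    for size, t in zip(sizes, thresholds):
        if activity >= t:        # busy, edge-rich areas -> small kernel
            return size
    return sizes[-1]             # smooth areas -> large kernel
```

A busy checkerboard-like region would select the smallest kernel, while a flat region falls through to the largest.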
Brain tissue segmentation in MR images based on a hybrid of MRF and social algorithms.
Yousefi, Sahar; Azmi, Reza; Zahedi, Morteza
2012-05-01
Effective abnormality detection and diagnosis in Magnetic Resonance Images (MRIs) requires a robust segmentation strategy. Since manual segmentation is a time-consuming task that engages valuable human resources, automatic MRI segmentation has received an enormous amount of attention. Various techniques have been applied to this end. However, Markov Random Field (MRF) based algorithms have produced reasonable results in noisy images compared to other methods. An MRF seeks a label field which minimizes an energy function. The traditional minimization method, simulated annealing (SA), uses Monte Carlo simulation to reach the minimum solution, with a heavy computational burden. For this reason, MRFs are rarely used in real-time processing environments. This paper proposes a novel method based on MRF and a hybrid of social algorithms, an ant colony optimization (ACO) and a Gossiping algorithm, that can be used for segmenting single and multispectral MRIs in real-time environments. Combining ACO with the Gossiping algorithm helps find the better path using neighborhood information, so this interaction causes the algorithm to converge to an optimum solution faster. Several experiments on phantom and real images were performed. Results indicate that the proposed algorithm outperforms the traditional MRF and the hybrid MRF-ACO in speed and accuracy. Copyright © 2012 Elsevier B.V. All rights reserved.
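For orientation, the kind of MRF energy being minimized can be sketched with a plain Iterated Conditional Modes (ICM) baseline. The quadratic data term and Potts smoothness term below are standard textbook assumptions; the paper replaces this simple minimizer with its ACO/Gossiping hybrid.

```python
import numpy as np

def mrf_icm(image, means, beta=1.0, n_iter=5):
    """Minimize a simple Potts-MRF energy with ICM (sketch baseline).
    Energy per pixel: (x - mu_label)^2 + beta * #(disagreeing 4-neighbors)."""
    labels = np.argmin([(image - m) ** 2 for m in means], axis=0)
    H, W = image.shape
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                best, best_e = labels[i, j], np.inf
                for k in range(len(means)):
                    e = (image[i, j] - means[k]) ** 2  # data term
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < H and 0 <= nj < W:
                            e += beta * (labels[ni, nj] != k)  # smoothness term
                    if e < best_e:
                        best, best_e = k, e
                labels[i, j] = best
    return labels
```

ICM converges only to a local optimum, which is exactly the limitation that motivates stochastic minimizers such as SA or the proposed social-algorithm hybrid.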
Brown, H G; Shibata, N; Sasaki, H; Petersen, T C; Paganin, D M; Morgan, M J; Findlay, S D
2017-11-01
Electric field mapping using segmented detectors in the scanning transmission electron microscope has recently been achieved at the nanometre scale. However, converting these results to quantitative field measurements involves assumptions whose validity is unclear for thick specimens. We consider three approaches to quantitative reconstruction of the projected electric potential using segmented detectors: a segmented detector approximation to differential phase contrast and two variants on ptychographical reconstruction. Limitations to these approaches are also studied, particularly errors arising from detector segment size, inelastic scattering, and non-periodic boundary conditions. A simple calibration experiment is described which corrects the differential phase contrast reconstruction to give reliable quantitative results despite the finite detector segment size and the effects of plasmon scattering in thick specimens. A plasmon scattering correction to the segmented detector ptychography approaches is also given. Avoiding the imposition of periodic boundary conditions on the reconstructed projected electric potential leads to more realistic reconstructions. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Technical Reports Server (NTRS)
Brock, T. G.; Kaufman, P. B.
1988-01-01
Pulvini of excised segments from oats (Avena sativa L. cv Victory) were treated unilaterally with indoleacetic acid (IAA) or gibberellic acid (GA3) with or without gravistimulation to assess the effect of gravistimulation on hormone action. Optimum pulvinus elongation growth (millimeters) and segment curvature (degrees) over 24 hours were produced by 100 micromolar IAA in vertical segments. The curvature response to IAA at levels greater than 100 micromolar, applied to the lower sides of gravistimulated (90 degrees) pulvini, was significantly less than the response to identical levels in vertical segments. Furthermore, the bending response of pulvini to 100 micromolar IAA did not vary significantly over a range of presentation angles between 0 and 90 degrees. In contrast, the response to IAA at levels less than 10 micromolar, with gravistimulation, was approximately the sum of the responses to gravistimulation alone and to IAA without gravistimulation. This was observed over a range of presentation angles. Also, GA3 (0.3-30 micromolar) applied to the lower sides of horizontal segments significantly enhanced pulvinus growth and segment curvature, although exogenous GA3 over a range of concentrations had no effect on pulvinus elongation growth or segment curvature in vertical segments. The response to GA3 (10 micromolar) plus IAA (1.0 or 100 micromolar) was additive for either vertical or horizontal segments. These results indicate that gravistimulation produces changes in pulvinus responsiveness to both IAA and GA3 and that the changes are unique for each growth regulator. It is suggested that the changes in responsiveness may result from processes at the cellular level other than changes in hormonal sensitivity.
Chatterjee, Tirtha; Rickard, Mark A; Pearce, Eric; Pangburn, Todd O; Li, Yongfu; Lyons, John W; Cong, Rongjuan; deGroot, A Willem; Meunier, David M
2016-09-23
Recent advances in catalyst technology have enabled the synthesis of olefin block copolymers (OBC). One type is a "hard-soft" OBC with a high density polyethylene (HDPE) block and a relatively low density polyethylene (VLDPE) block targeted as thermoplastic elastomers. Presently, one of the major challenges is to fractionate HDPE segments from the other components in an experimental OBC sample (block copolymers and VLDPE segments). Interactive high temperature liquid chromatography (HTLC) is ineffective for OBC separation as the HDPE segments and block copolymer chains experience nearly identical enthalpic interactions with the stationary phase and co-elute. In this work we have overcome this challenge by using liquid chromatography under the limiting conditions of desorption (LC LCD). A solvent plug (discrete barrier) is introduced in front of the sample which specifically promotes the adsorption of HDPE segments on the stationary phase (porous graphitic carbon). Under selected thermodynamic conditions, VLDPE segments and block copolymer chains crossed the barrier while HDPE segments followed the pore-included barrier solvent and thus enabled separation. The barrier solvent composition was optimized and the chemical composition of fractionated polymer chains was investigated as a function of barrier solvent strength using an online Fourier-transform infrared (FTIR) detector. Our study revealed that both the HDPE segments as well as asymmetric block copolymer chains (HDPE block length≫VLDPE block length) are retained in the separation and the barrier strength can be tailored to retain a particular composition. At the optimum barrier solvent composition, this method can be applied to separate effective HDPE segments from the other components, which has been demonstrated using an experimental OBC sample. Copyright © 2016 Elsevier B.V. All rights reserved.
Eudragit RS PO nanoparticles for sustained release of pyridostigmine bromide
NASA Astrophysics Data System (ADS)
Hoobakht, Fatemeh; Ganji, Fariba; Vasheghani-Farahani, Ebrahim; Mousavi, Seyyed Mohammad
2013-09-01
Pyridostigmine bromide (PB) is an inhibitor of cholinesterase, which is used in the treatment of myasthenia gravis and administered for protection against exposure to toxic nerve agents. Tests were done to investigate prolonging the half-life of PB and improving its release behavior. PB was loaded in nanoparticles (NPs) of Eudragit RS PO (Eu-RS) prepared using the technique of quasi-emulsion solvent diffusion. The sonicator output power, bath temperature, and mixing time were chosen as the optimization factors to obtain the minimum-sized NPs. In addition, emulsions were tested at different drug-to-polymer ratios by dynamic light scattering to determine the size and zeta potential of the NPs. UV spectroscopy was used to determine the PB content of the NPs. Drug-loaded NPs were characterized by scanning electron microscopy, X-ray diffraction, and Fourier transform infrared spectra. Results showed that mixing time had a significant impact on the size of the Eu-RS NPs, but the sonicator output power and bath temperature had no significant effect. The particle size obtained at the optimum condition (output power of 70 W, bath temperature of 33 °C, and mixing time of 7 min) was less than 200 nm (optimum sizes were 138.9 and 179.5 nm for Eu-RS and PB-loaded Eu-RS NPs, respectively). For the optimum PB-loaded Eu-RS NPs at a PB to Eu-RS weight ratio of 1:4, 20% of the loaded PB was released from the nanocarriers within 100 h.
Guo, Jiin-Huarng; Luh, Wei-Ming
2009-05-01
When planning a study, sample size determination is one of the most important tasks facing the researcher. The size will depend on the purpose of the study, the cost limitations, and the nature of the data. By specifying the standard deviation ratio and/or the sample size ratio, the present study considers the problem of heterogeneous variances and non-normality for Yuen's two-group test and develops sample size formulas to minimize the total cost or maximize the power of the test. For a given power, the sample size allocation ratio can be manipulated so that the proposed formulas can minimize the total cost, the total sample size, or the sum of total sample size and total cost. On the other hand, for a given total cost, the optimum sample size allocation ratio can maximize the statistical power of the test. After the sample size is determined, the present simulation applies Yuen's test to the sample generated, and then the procedure is validated in terms of Type I errors and power. Simulation results show that the proposed formulas can control Type I errors and achieve the desired power under the various conditions specified. Finally, the implications for determining sample sizes in experimental studies and future research are discussed.
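The statistic underlying the sample-size formulas, Yuen's two-group test on trimmed means with winsorized variances, can be sketched as follows. This is a dependency-free sketch returning the t statistic and its approximate degrees of freedom; the p-value step via the Student t distribution is omitted.

```python
import math

def yuen_test(x, y, trim=0.2):
    """Yuen's two-group test on trimmed means (sketch).
    Returns (t statistic, approximate degrees of freedom)."""
    def trimmed_stats(a):
        a = sorted(a)
        g = int(trim * len(a))
        t_mean = sum(a[g:len(a) - g]) / (len(a) - 2 * g)   # trimmed mean
        w = [min(max(v, a[g]), a[len(a) - g - 1]) for v in a]  # winsorize
        wbar = sum(w) / len(w)
        s2 = sum((v - wbar) ** 2 for v in w) / (len(w) - 1)  # winsorized var
        h = len(a) - 2 * g                                   # effective n
        return t_mean, s2, len(a), h
    mx, s2x, nx, hx = trimmed_stats(x)
    my, s2y, ny, hy = trimmed_stats(y)
    dx = (nx - 1) * s2x / (hx * (hx - 1))
    dy = (ny - 1) * s2y / (hy * (hy - 1))
    t = (mx - my) / math.sqrt(dx + dy)
    df = (dx + dy) ** 2 / (dx ** 2 / (hx - 1) + dy ** 2 / (hy - 1))
    return t, df
```

Because dx and dy depend on the two group sizes separately, the power of this statistic can be traded off against cost by varying the allocation ratio, which is what the proposed formulas optimize.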
Uehara, Erica; Deguchi, Tetsuo
2017-12-07
We show that the average size of self-avoiding polygons (SAPs) with a fixed knot is much larger than that with no topological constraint if the excluded volume is small and the number of segments is large. We call this topological swelling. We argue for an "enhancement" of the scaling exponent for random polygons with a fixed knot. We study them systematically through SAPs consisting of hard cylindrical segments with various different values of the segment radius. Here, by the average size we mean the mean-square radius of gyration. Furthermore, we show numerically that the topological balance length of a composite knot is given by the sum of those of all constituent prime knots. Here we define the topological balance length of a knot as the number of segments at which topological entropic repulsion is balanced with the knot complexity in the average size. The additivity suggests the local knot picture.
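The size measure used here, the mean-square radius of gyration, is straightforward to compute for a polygon given as a list of vertex coordinates:

```python
import numpy as np

def mean_square_rg(vertices):
    """Mean-square radius of gyration of a polygon's vertices:
    the average squared distance of the vertices from their centroid."""
    v = np.asarray(vertices, dtype=float)
    c = v.mean(axis=0)                       # centroid
    return float(((v - c) ** 2).sum(axis=1).mean())
```

Topological swelling is then the statement that this quantity, averaged over fixed-knot SAPs, exceeds the unconstrained average at small excluded volume and large segment number.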
The handicap process favors exaggerated, rather than reduced, sexual ornaments.
Tazzyman, Samuel J; Iwasa, Yoh; Pomiankowski, Andrew
2014-09-01
Why are traits that function as secondary sexual ornaments generally exaggerated in size compared to the naturally selected optimum, and not reduced? Because they deviate from the naturally selected optimum, traits that are reduced in size will handicap their bearer, and could thus provide an honest signal of quality to a potential mate. Thus if secondary sexual ornaments evolve via the handicap process, current theory suggests that reduced ornamentation should be as frequent as exaggerated ornamentation, but this is not the case. To try to explain this discrepancy, we analyze a simple model of the handicap process. Our analysis shows that asymmetries in costs of preference or ornament with regard to exaggeration and reduction cannot fully explain the imbalance. Rather, the bias toward exaggeration can be best explained if either the signaling efficacy or the condition dependence of a trait increases with size. Under these circumstances, evolution always leads to more extreme exaggeration than reduction: although the two should occur just as frequently, exaggerated secondary sexual ornaments are likely to be further removed from the naturally selected optimum than reduced ornaments. © 2014 The Authors. Evolution published by Wiley Periodicals, Inc. on behalf of The Society for the Study of Evolution.
Optimum viewing distance for target acquisition
NASA Astrophysics Data System (ADS)
Holst, Gerald C.
2015-05-01
Human visual system (HVS) "resolution" (a.k.a. visual acuity) varies with illumination level, target characteristics, and target contrast. For signage, computer displays, cell phones, and TVs, a viewing distance and display size are selected; the number of display pixels is then chosen such that each pixel subtends 1 arcmin. Resolution of low-contrast targets is quite different: it is best described by Barten's contrast sensitivity function. Target acquisition models predict maximum range when the display pixel subtends 3.3 arcmin. The optimum viewing distance is nearly independent of magnification. Noise increases the optimum viewing distance.
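Assuming the pixel subtense is expressed in arcminutes, the corresponding viewing distance follows from simple small-angle geometry (a sketch, not Holst's acquisition model):

```python
import math

def viewing_distance(pixel_pitch_mm, arcmin):
    """Distance (mm) at which one display pixel of the given pitch
    subtends the given visual angle in arcminutes."""
    return pixel_pitch_mm / math.tan(math.radians(arcmin / 60.0))
```

For example, a 0.25 mm pixel subtends 1 arcmin at roughly 0.86 m; a larger subtense such as 3.3 arcmin corresponds to a proportionally shorter distance.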
Environmental Influences in the Simulation of a Solar Space Heating System.
1980-01-01
In this simulation an optimum collector size was determined from the energy requirements given by each model and a comparison made between the... [Only front-matter residue survives of this record: figure titles "Solar Collector Cross Section", "Solar System Schematic", "Contributions to Annual Energy Cost", and "House Size I-III Annual Energy Cost".]
NASA Technical Reports Server (NTRS)
Hixson, M. M.; Bauer, M. E.; Davis, B. J.
1979-01-01
The effect of sampling on the accuracy (precision and bias) of crop area estimates made from classifications of LANDSAT MSS data was investigated. Full-frame classifications of wheat and non-wheat for eighty counties in Kansas were repetitively sampled to simulate alternative sampling plans. Four sampling schemes involving different numbers of samples and different size sampling units were evaluated. The precision of the wheat area estimates increased as the segment size decreased and the number of segments was increased. Although the average bias associated with the various sampling schemes was not significantly different, the maximum absolute bias was directly related to sampling unit size.
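The segment-size versus segment-count trade-off can be illustrated with a toy Monte Carlo resampling of a labeled strip. This is an illustrative sketch only, not the study's LANDSAT sampling protocol; the field layout and rep count are arbitrary assumptions.

```python
import random

def simulate_precision(field, n_segments, seg_size, n_reps=200, seed=1):
    """Repeatedly draw n_segments blocks of seg_size pixels from a 1-D
    strip of wheat(1)/non-wheat(0) labels and return the mean and the
    spread (sd) of the resulting area-fraction estimates."""
    rng = random.Random(seed)
    est = []
    for _ in range(n_reps):
        total = hits = 0
        for _ in range(n_segments):
            start = rng.randrange(len(field) - seg_size)
            hits += sum(field[start:start + seg_size])
            total += seg_size
        est.append(hits / total)
    mean = sum(est) / len(est)
    var = sum((e - mean) ** 2 for e in est) / len(est)
    return mean, var ** 0.5
```

With a spatially clustered crop, many small segments give a tighter estimate than a few large ones at the same total sampled area, mirroring the precision trend reported in the abstract.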
Brain tumor segmentation using holistically nested neural networks in MRI images.
Zhuge, Ying; Krauze, Andra V; Ning, Holly; Cheng, Jason Y; Arora, Barbara C; Camphausen, Kevin; Miller, Robert W
2017-10-01
Gliomas are rapidly progressive, neurologically devastating, largely fatal brain tumors. Magnetic resonance imaging (MRI) is a widely used technique employed in the diagnosis and management of gliomas in clinical practice. MRI is also the standard imaging modality used to delineate the brain tumor target as part of treatment planning for the administration of radiation therapy. Despite more than 20 yr of research and development, computational brain tumor segmentation in MRI images remains a challenging task. We present a novel method of automatic image segmentation based on holistically nested neural networks that could be employed for brain tumor segmentation of MRI images. Two preprocessing techniques were applied to MRI images. The N4ITK method was employed for correction of bias field distortion. A novel landmark-based intensity normalization method was developed so that tissue types have a similar intensity scale in images of different subjects for the same MRI protocol. The holistically nested neural network (HNN), which extends the convolutional neural network (CNN) with deep supervision through an additional weighted-fusion output layer, was trained to learn the multiscale and multilevel hierarchical appearance representation of the brain tumor in MRI images and was subsequently applied to produce a prediction map of the brain tumor on test images. Finally, the brain tumor was obtained through optimum thresholding on the prediction map. The proposed method was evaluated on both the Multimodal Brain Tumor Image Segmentation (BRATS) Benchmark 2013 training datasets and clinical data from our institute. A dice similarity coefficient (DSC) and sensitivity of 0.78 and 0.81 were achieved on 20 BRATS 2013 training datasets with high-grade gliomas (HGG), based on a two-fold cross-validation. The HNN model built on the BRATS 2013 training data was applied to ten clinical datasets with HGG from a locally developed database.
DSC and sensitivity of 0.83 and 0.85 were achieved. A quantitative comparison indicated that the proposed method outperforms the popular fully convolutional network (FCN) method. In terms of efficiency, the proposed method took around 10 h for training with 50,000 iterations, and approximately 30 s for testing of a typical MRI image in the BRATS 2013 dataset with a size of 160 × 216 × 176, using a DELL PRECISION workstation T7400, with an NVIDIA Tesla K20c GPU. An effective brain tumor segmentation method for MRI images based on a HNN has been developed. The high level of accuracy and efficiency make this method practical in brain tumor segmentation. It may play a crucial role in both brain tumor diagnostic analysis and in the treatment planning of radiation therapy. Published 2017. This article is a U.S. Government work and is in the public domain in the USA.
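The evaluation metrics reported here, the Dice similarity coefficient and sensitivity, can be computed from a predicted and a ground-truth binary mask as:

```python
import numpy as np

def dice_and_sensitivity(pred, truth):
    """Dice similarity coefficient (2*TP / (|pred| + |truth|)) and
    sensitivity (TP / |truth|) between two binary masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()
    dsc = 2 * tp / (pred.sum() + truth.sum())
    sens = tp / truth.sum()
    return float(dsc), float(sens)
```

A DSC of 0.78-0.83, as reported above, means the predicted tumor overlaps the manual delineation on roughly four-fifths of their combined extent.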
USDA-ARS?s Scientific Manuscript database
Both insufficient and excessive male inflorescence size leads to a reduction in maize yield. Knowledge of the genetic architecture of male inflorescence is essential to achieve the optimum inflorescence size for maize breeding. In this study, we used approximately eight thousand inbreds, including b...
Robust Machine Learning-Based Correction on Automatic Segmentation of the Cerebellum and Brainstem.
Wang, Jun Yi; Ngo, Michael M; Hessl, David; Hagerman, Randi J; Rivera, Susan M
2016-01-01
Automated segmentation is a useful method for studying large brain structures such as the cerebellum and brainstem. However, automated segmentation may lead to inaccuracy and/or undesirable boundaries. The goal of the present study was to investigate whether SegAdapter, a machine learning-based method, is useful for automatically correcting large segmentation errors and disagreement in anatomical definition. We further assessed the robustness of the method in handling the size of the training set, differences in head coil usage, and the amount of brain atrophy. High resolution T1-weighted images were acquired from 30 healthy controls scanned with either an 8-channel or 32-channel head coil. Ten patients, who suffered from brain atrophy because of fragile X-associated tremor/ataxia syndrome, were scanned using the 32-channel head coil. The initial segmentations of the cerebellum and brainstem were generated automatically using Freesurfer. Subsequently, Freesurfer's segmentations were both manually corrected to serve as the gold standard and automatically corrected by SegAdapter. Using only 5 scans in the training set, spatial overlap with manual segmentation in Dice coefficient improved significantly from 0.956 (for Freesurfer segmentation) to 0.978 (for SegAdapter-corrected segmentation) for the cerebellum and from 0.821 to 0.954 for the brainstem. Reducing the training set size to 2 scans decreased the Dice coefficient by ≤0.002 for the cerebellum and ≤0.005 for the brainstem compared to the training set size of 5 scans in corrective learning. The method was also robust in handling differences between the training set and the test set in head coil usage and the amount of brain atrophy, which reduced spatial overlap only by <0.01.
These results suggest that the combination of automated segmentation and corrective learning provides a valuable method for accurate and efficient segmentation of the cerebellum and brainstem, particularly in large-scale neuroimaging studies, and potentially for segmenting other neural regions as well.
Carotid artery phantom design and simulation using Field II
NASA Astrophysics Data System (ADS)
Lin, Yuan; Yang, Xin; Ding, Mingyue
2013-10-01
Carotid atherosclerosis is the major cause of ischemic stroke, a leading cause of mortality and disability. Morphology and structure features of carotid plaques are the keys to identifying plaques and monitoring the disease. Manual segmentation of the ultrasonic images to get the best-fitted actual size of the carotid plaques, based on physicians' personal experience (the "gold standard"), is an important step in the study of plaque size. However, it is difficult to quantitatively measure the segmentation error caused by the operator's subjective factors. In order to reduce the subjective factors and the uncertainty of quantification, the experiments in this paper were carried out. In this study, we first designed a carotid artery phantom and then used three different beam-forming algorithms of medical ultrasound to simulate the phantom. Finally, the obtained plaque areas were analyzed through manual segmentation of the simulated images. We could thus (1) directly evaluate the effect of the different beam-forming algorithms on the ultrasound imaging simulation of the carotid artery; (2) analyze the sensitivity of detection for different sizes of plaques; and (3) indirectly reflect the accuracy of the manual segmentation based on the evaluation of segmentation results.
Dynamic response of fluid inside a penny shaped crack
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hayashi, Kazuo; Seki, Hitoshi
1997-12-31
In order to discuss the method for estimating the geometric characteristics of geothermal reservoir cracks, a theoretical study is performed on the dynamic response of the fluid inside a reservoir crack in a rock mass subjected to a dynamic excitation due to propagation of an elastic wave. As representative models of reservoir cracks, a penny shaped crack and a two-dimensional crack which are connected to a borehole are considered. It is found that the resonance frequency of the fluid motion is dependent on the crack size, the fluid's viscosity and the permeability of the formation. The intensity of the resonance is dependent on the fluid's viscosity when the size, the aperture and the permeability are fixed. It is also found that, at a certain value of the fluid's viscosity, the resonance of fluid pressure becomes strongest. The optimum value of the fluid's viscosity is found to be almost perfectly determined by the permeability of the formation. Furthermore, it is revealed that, if the fluid's viscosity is fixed to be the optimum value, the resonance frequency is almost independent of the permeability and aperture, but is dependent on the size of the crack. Inversely speaking, this implies that the size of the reservoir crack can be estimated from the resonance frequency, if a fluid with the above mentioned optimum value of viscosity is employed for hydraulic fracturing.
High-voltage electrode optimization towards uniform surface treatment by a pulsed volume discharge
NASA Astrophysics Data System (ADS)
Ponomarev, A. V.; Pedos, M. S.; Scherbinin, S. V.; Mamontov, Y. I.; Ponomarev, S. V.
2015-11-01
In this study, the shape and material of the high-voltage electrode of an atmospheric pressure plasma generation system were optimised. The research was performed with the goal of achieving maximum uniformity of plasma treatment of the surface of the low-voltage electrode with a diameter of 100 mm. In order to generate low-temperature plasma with the volume of roughly 1 cubic decimetre, a pulsed volume discharge was used initiated with a corona discharge. The uniformity of the plasma in the region of the low-voltage electrode was assessed using a system for measuring the distribution of discharge current density. The system's low-voltage electrode - collector - was a disc of 100 mm in diameter, the conducting surface of which was divided into 64 radially located segments of equal surface area. The current at each segment was registered by a high-speed measuring system controlled by an ARM™-based 32-bit microcontroller. To facilitate the interpretation of results obtained, a computer program was developed to visualise the results. The program provides a 3D image of the current density distribution on the surface of the low-voltage electrode. Based on the results obtained an optimum shape for a high-voltage electrode was determined. Uniformity of the distribution of discharge current density in relation to distance between electrodes was studied. It was proven that the level of non-uniformity of current density distribution depends on the size of the gap between electrodes. Experiments indicated that it is advantageous to use graphite felt VGN-6 (Russian abbreviation) as the material of the high-voltage electrode's emitting surface.
Radiographic Response to Yttrium-90 Radioembolization in Anterior Versus Posterior Liver Segments
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ibrahim, Saad M.; Lewandowski, Robert J.; Ryu, Robert K.
2008-11-15
The purpose of our study was to determine if preferential radiographic tumor response occurs in tumors located in posterior versus anterior liver segments following radioembolization with yttrium-90 glass microspheres. One hundred thirty-seven patients with chemorefractory liver metastases of various primaries were treated with yttrium-90 glass microspheres. Of these, a subset analysis was performed on 89 patients who underwent 101 whole-right-lobe infusions to liver segments V, VI, VII, and VIII. Pre- and posttreatment imaging included either triphasic contrast material-enhanced CT or gadolinium-enhanced MRI. Responses to treatment were compared in anterior versus posterior right lobe lesions using both RECIST and WHO criteria. Statistical comparative studies were conducted in 42 patients with both anterior and posterior segment lesions using the paired-sample t-test. Pearson correlation was used to determine the relationship between pretreatment tumor size and posttreatment tumor response. Median administered activity, delivered radiation dose, and treatment volume were 2.3 GBq, 118.2 Gy, and 1,072 cm³, respectively. Differences between the pretreatment tumor size of anterior and posterior liver segments were not statistically significant (p = 0.7981). Differences in tumor response between anterior and posterior liver segments were not statistically significant using WHO criteria (p = 0.8557). A statistically significant correlation did not exist between pretreatment tumor size and posttreatment tumor response (r = 0.0554, p = 0.4434). On imaging follow-up using WHO criteria, for anterior and posterior regions of the liver, (1) response rates were 50% (PR = 50%) and 45% (CR = 9%, PR = 36%), and (2) mean changes in tumor size were -41% and -40%. In conclusion, this study did not find evidence of preferential radiographic tumor response in posterior versus anterior liver segments treated with yttrium-90 glass microspheres.
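The statistics used in this study (paired-sample t-test between the two sites, Pearson correlation between pretreatment size and response) can be sketched with SciPy. All data below are invented stand-ins, merely shaped to resemble the reported means of about -41% and -40%:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical percent size changes for 42 patients with lesions in
# BOTH anterior and posterior segments (same patients, two sites).
anterior = rng.normal(-41, 15, size=42)
posterior = rng.normal(-40, 15, size=42)

# Paired-sample t-test, as in the study design
t, p = stats.ttest_rel(anterior, posterior)

# Pearson correlation: pretreatment size vs. posttreatment response
pre_size = rng.uniform(1, 10, size=42)     # cm, invented
response = rng.normal(-40, 15, size=42)    # percent change, invented
r, p_r = stats.pearsonr(pre_size, response)
print(round(p, 3), round(r, 3))
```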
Minding the Gaps: Literacy Enhances Lexical Segmentation in Children Learning to Read
ERIC Educational Resources Information Center
Havron, Naomi; Arnon, Inbal
2017-01-01
Can emergent literacy impact the size of the linguistic units children attend to? We examined children's ability to segment multiword sequences before and after they learned to read, in order to disentangle the effect of literacy and age on segmentation. We found that early readers were better at segmenting multiword units (after controlling for…
Selecting algorithms, sensors, and linear bases for optimum spectral recovery of skylight.
López-Alvarez, Miguel A; Hernández-Andrés, Javier; Valero, Eva M; Romero, Javier
2007-04-01
In a previous work [Appl. Opt. 44, 5688 (2005)] we found the optimum sensors for a planned multispectral system for measuring skylight in the presence of noise by adapting a linear spectral recovery algorithm proposed by Maloney and Wandell [J. Opt. Soc. Am. A 3, 29 (1986)]. Here we continue along these lines by simulating the responses of three to five Gaussian sensors and recovering spectral information from noise-affected sensor data by trying out four different estimation algorithms, three different sizes for the training set of spectra, and various linear bases. We attempt to find the optimum combination of sensors, recovery method, linear basis, and matrix size to recover the best skylight spectral power distributions from colorimetric and spectral (in the visible range) points of view. We show how all these parameters play an important role in the practical design of a real multispectral system and how to obtain several relevant conclusions from simulating the behavior of sensors in the presence of noise.
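A sketch of the Maloney-Wandell style linear recovery the abstract builds on: if spectra are modeled with a k-vector linear basis and k sensors, recovery reduces to inverting a k-by-k system. The basis, Gaussian sensor shapes, and wavelength grid below are all invented; recovery is exact here only because the test spectrum lies in the basis span and no noise is added:

```python
import numpy as np

nw = 31                     # wavelength samples (e.g. 400-700 nm), assumed
x = np.linspace(0, 1, nw)   # normalized wavelength axis

# Invented 3-vector smooth linear basis (stand-in for a PCA basis)
B = np.stack([np.ones(nw), x, x**2], axis=1)              # (nw, 3)

# Three Gaussian sensors, as in the simulations described
centers, width = [0.2, 0.5, 0.8], 0.08
S = np.stack([np.exp(-(x - c)**2 / (2 * width**2)) for c in centers], axis=1)

true_w = np.array([1.0, -0.4, 0.7])
spectrum = B @ true_w            # a spectrum inside the basis span
rho = S.T @ spectrum             # noiseless sensor responses

# Linear recovery: solve the 3x3 system (S^T B) w = rho
w_hat = np.linalg.solve(S.T @ B, rho)
recovered = B @ w_hat
print(np.allclose(recovered, spectrum))   # True for noiseless, in-span data
```

With noisy responses, the same solve becomes a regularized least-squares problem, which is where the paper's comparison of estimation algorithms comes in.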
Buried Object Detection Method Using Optimum Frequency Range in Extremely Shallow Underground
NASA Astrophysics Data System (ADS)
Sugimoto, Tsuneyoshi; Abe, Touma
2011-07-01
We propose a new detection method for buried objects using the optimum frequency response range of the corresponding vibration velocity. Flat speakers and a scanning laser Doppler vibrometer (SLDV) are used for noncontact acoustic imaging in the extremely shallow underground. The exploration depth depends on the sound pressure, but it is usually less than 10 cm. Styrofoam, wood (silver fir), and acrylic boards of the same size, Styrofoam boards of different sizes, a hollow toy duck, a hollow plastic container, a plastic container filled with sand, a hollow steel can and an unglazed pot are used as buried objects, buried in sand at a depth of about 2 cm. The imaging procedure using the optimum frequency range is as follows. First, the standardized difference from the average vibration velocity is calculated for all scan points. Next, using this result, underground images are made with a constant frequency width to search for the frequency response range of the buried object. After choosing an approximate frequency response range, the difference between the average vibration velocity over all points and that over several points showing a clear response is calculated to confirm the optimum frequency range. Using this optimum frequency range, we can obtain the clearest image of the buried object. The experimental results confirmed the effectiveness of the proposed method; in particular, a clear image of the buried object was obtained even when the SLDV image was unclear.
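The first step of the procedure above, the standardized difference of vibration velocity from the grid average, is a plain z-score over scan points. A minimal sketch on an invented 3x3 scan grid (real scans would be much denser):

```python
import numpy as np

def standardized_difference(velocity_map):
    """Z-score of each scan point's vibration velocity vs. the grid mean."""
    v = np.asarray(velocity_map, dtype=float)
    return (v - v.mean()) / v.std()

grid = np.array([[1.0, 1.1, 1.0],
                 [1.1, 3.0, 1.2],   # strong response over the buried object
                 [1.0, 1.2, 1.1]])
z = standardized_difference(grid)
peak = np.unravel_index(z.argmax(), z.shape)
print(tuple(int(i) for i in peak))   # (1, 1): the anomalous scan point
```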
Christel C. Kern; Anthony W. D’Amato; Terry F. Strong
2013-01-01
Managing forests for resilience is crucial in the face of uncertain future environmental conditions. Because harvest gap size alters species diversity and vertical and horizontal structural heterogeneity, there may be an optimum range of gap sizes for conferring resilience to environmental uncertainty. We examined the impacts of different harvest gap sizes on...
Simulator Evaluation of Lineup Visual Landing Aids for Night Carrier Landing.
1987-03-10
recognized that the system is less than optimum (2,3). Because the information from the meatball is of zero order (displacement only), there are...gives the analysis-of-variance summaries of glideslope performance across the flight segments for TOT glideslope ±0.3 degrees (±1.0 meatball), RMS...accepted as reliable. In addition, analysis-of-variance of percent TOT glideslope ±0.45 degrees (±1.5 meatball) did not reveal any statistical
Kilbourne, Brandon M
2014-01-01
In spite of considerable work on the linear proportions of limbs in amniotes, it remains unknown whether differences in scale effects between proximal and distal limb segments have the potential to influence locomotor costs in amniote lineages, and how changes in the mass proportions of limbs have factored into amniote diversification. To broaden our understanding of how the mass proportions of limbs vary within amniote lineages, I collected data on hindlimb segment masses (thigh, shank, pes, tarsometatarsal segment, and digits) from 38 species of neognath birds, one of the most speciose amniote clades. I scaled each of these traits against measures of body size (body mass) and hindlimb size (hindlimb length) to test for departures from isometry. Additionally, I applied two parameters of trait evolution (Pagel's λ and δ) to understand patterns of diversification in hindlimb segment mass in neognaths. All segment masses are positively allometric with body mass. Segment masses are isometric with hindlimb length. When examining scale effects in the neognath subclade Land Birds, segment masses were again positively allometric with body mass; however, shank, pedal, and tarsometatarsal segment masses were also positively allometric with hindlimb length. Methods of branch length scaling to detect phylogenetic signal (i.e., Pagel's λ) and increasing or decreasing rates of trait change over time (i.e., Pagel's δ) suffer from wide confidence intervals, likely due to small sample size and deep divergence times. The scaling of segment masses appears to be more strongly related to the scaling of limb bone mass as opposed to length, and the scaling of hindlimb mass distribution is more a function of scale effects in limb posture than of proximo-distal differences in the scaling of limb segment mass.
Though negative allometry of segment masses appears to be precluded by the need for mechanically sound limbs, the positive allometry of segment masses relative to body mass may underlie scale effects in stride frequency and length between smaller and larger neognaths. While variation in the linear proportions of limbs appears to be governed by developmental mechanisms, variation in mass proportions does not appear to be so constrained.
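The isometry test used here amounts to a log-log regression of segment mass on body mass: for mass-on-mass scaling, isometry predicts a slope of exactly 1, and a slope above 1 indicates positive allometry. A sketch on invented data (the true slope of 1.15 and the noise level are assumptions; only the sample size of 38 species comes from the abstract):

```python
import numpy as np

rng = np.random.default_rng(2)
log_body = rng.uniform(1, 4, size=38)                      # log10 body mass, invented
log_thigh = 1.15 * log_body - 2.0 + rng.normal(0, 0.05, 38)  # invented segment masses

# Ordinary least-squares fit in log-log space
slope, intercept = np.polyfit(log_body, log_thigh, 1)
print(f"slope = {slope:.2f}; positive allometry: {slope > 1}")
```

(The paper additionally uses phylogenetic methods; ordinary least squares is shown here only to illustrate the slope criterion.)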
Zeng, Guang-Ming; Zhang, Shuo-Fu; Qin, Xiao-Sheng; Huang, Guo-He; Li, Jian-Bing
2003-05-01
This paper establishes the relationship between settling efficiency and the dimensions of the sedimentation tank through numerical simulation; this relationship is then taken as one of the constraints in a simple optimum design model for the sedimentation tank. The feasibility and advantages of this model based on numerical calculation are verified through application to a practical case.
Reduced complexity structural modeling for automated airframe synthesis
NASA Technical Reports Server (NTRS)
Hajela, Prabhat
1987-01-01
A procedure is developed for the optimum sizing of wing structures based on representing the built-up finite element assembly of the structure by equivalent beam models. The reduced-order beam models are computationally less demanding in an optimum design environment that dictates repetitive analysis of several trial designs. The design procedure is implemented in a computer program requiring geometry and loading information to create the wing finite element model and its equivalent beam model, and providing a rapid estimate of the optimum weight obtained from a fully stressed design approach applied to the beam. The synthesis procedure is demonstrated for representative conventional cantilever and joined-wing configurations.
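The fully stressed design (FSD) step mentioned above has a simple fixed-point form: resize each member by the ratio of its working stress to the allowable stress. A toy version for a statically determinate set of axially loaded members (loads, allowable stress, and consistent units are all assumptions):

```python
import numpy as np

def fsd(areas, loads, sigma_allow, iters=20):
    """Fully-stressed-design resizing: scale area by sigma / sigma_allow."""
    areas = np.asarray(areas, dtype=float)
    for _ in range(iters):
        sigma = loads / areas              # axial stress in each member
        areas = areas * sigma / sigma_allow
    return areas

loads = np.array([1000.0, 2500.0, 400.0])      # member forces, invented
a = fsd(np.ones(3), loads, sigma_allow=200.0)  # allowable stress, invented
print(a)   # each member converges to load / sigma_allow, i.e. fully stressed
```

For a statically determinate structure this converges in one iteration; in the paper's setting the member forces would be recomputed from the beam model at each pass.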
NASA Astrophysics Data System (ADS)
Ozen, Murat; Guler, Murat
2014-02-01
Aggregate gradation is one of the key design parameters affecting the workability and strength properties of concrete mixtures. Estimating aggregate gradation from hardened concrete samples can offer valuable insights into the quality of mixtures in terms of the degree of segregation and the amount of deviation from the specified gradation limits. In this study, a methodology is introduced to determine the particle size distribution of aggregates from 2D cross-sectional images of concrete samples. The samples used in the study were fabricated from six mix designs by varying the aggregate gradation, aggregate source and maximum aggregate size, with five replicates of each design combination. Each sample was cut into three pieces using a diamond saw and then scanned with a desktop flatbed scanner to obtain cross-sectional images. An algorithm is proposed to determine the optimum threshold for the image analysis of the cross sections, and a procedure is suggested for choosing a suitable particle shape parameter for analyzing the aggregate size distribution within each cross section. Results of the analyses indicated that the optimum threshold, and hence the pixel distribution functions, may differ even between cross sections of an identical concrete sample, and that the maximum Feret diameter is the most suitable shape parameter for estimating the size distribution of aggregates when computed based on the diagonal sieve opening. The outcome of this study can be of practical value for practitioners evaluating concrete in terms of the degree of segregation and the bounds of the mixture's gradation achieved during manufacturing.
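The abstract does not specify its threshold-selection algorithm; as a hedged illustration, Otsu's between-class-variance criterion is a common way to separate bright aggregate pixels from darker paste in such grayscale images. The two pixel populations below are invented:

```python
import numpy as np

def otsu_threshold(pixels, nbins=256):
    """Threshold maximizing between-class variance (Otsu's criterion)."""
    hist, edges = np.histogram(pixels, bins=nbins, range=(0, 256))
    p = hist / hist.sum()
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = 0.0, -1.0
    for t in range(1, nbins):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (p[:t] * centers[:t]).sum() / w0
        mu1 = (p[t:] * centers[t:]).sum() / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, centers[t]
    return best_t

rng = np.random.default_rng(3)
paste = rng.normal(60, 10, 5000)       # darker mortar pixels, invented
aggregate = rng.normal(180, 10, 5000)  # brighter aggregate pixels, invented
t = otsu_threshold(np.concatenate([paste, aggregate]))
print(60 < t < 180)                    # threshold falls between the two modes
```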
Particle size effects on viscosity of silver pastes: A manufacturer's view
NASA Technical Reports Server (NTRS)
Provance, J.; Allison, K.
1983-01-01
Particles from a variety of silver powders were investigated by scanning electron microscopy and particle size analysis. Particle size distribution curves and volume population graphs were prepared for these silver powders and for glass powders with optimum, extra-fine and coarse particle sizes. The viscosity at a given shear rate and the slope of viscosity over a range of shear rates were determined for thick film pastes made with these powders. Because of particle anomalies and variations, the need for flexibility to achieve the best printing qualities for silver pastes was evident. It was established that print quality, dried and fired film density, and optimum contact of silver particles with silicon (important for cell electrical output) could be achieved by adjusting any slope of viscosity that fell outside the range -0.550 to -0.650. This was accomplished through organic vehicle technology that permitted a change in the slope of viscosity, up or down, while maintaining a constant silver and total solids content.
Launders, J H; McArdle, S; Workman, A; Cowen, A R
1995-01-01
The significance of varying viewing conditions that may affect the perceived threshold contrast of X-ray television fluoroscopy systems has been investigated. Factors investigated include the ambient room lighting and the viewing distance. The purpose of this study is to find the optimum viewing protocol with which to measure the threshold detection index; this is a particular problem when trying to compare the image quality of television fluoroscopy systems at different input field sizes. The results show that the viewing distance makes a significant difference to the perceived threshold contrast, whereas the ambient light conditions make no significant difference. Experienced observers were found to be capable of finding the optimum viewing distance for detecting details of each size, in effect using a flexible viewing distance. This allows the results from different field sizes to be normalized to account for both the magnification and the entrance air kerma rate differences, which in turn allows a direct comparison of performance across field sizes.
Optimum size of nanorods for heating application
NASA Astrophysics Data System (ADS)
Seshadri, G.; Thaokar, Rochish; Mehra, Anurag
2014-08-01
Magnetic nanoparticles (MNPs) have become increasingly important in heating applications such as hyperthermia treatment of cancer due to their ability to release heat when a remote external alternating magnetic field is applied. It has been shown that the heating capability of such particles varies significantly with the size of the particles used. In this paper, we theoretically evaluate the heating capability of rod-shaped MNPs and identify conditions under which these particles display the highest efficiency. For optimally sized monodisperse particles, the power generated by rod-shaped particles is found to be equal to that generated by spherical particles. However, for particles that are not monodisperse, rod-shaped particles are found to be more effective in heating as a result of the greater spread in the power density distribution curve. Additionally, for rod-shaped particles, a dispersion in the radius of the particle contributes more to the reduction in loss power than a dispersion in the length. We further identify the optimum size, i.e., the radius and length of nanorods, given a bivariate log-normal distribution of particle size in two dimensions.
Study on selective laser sintering of glass fiber reinforced polystyrene
NASA Astrophysics Data System (ADS)
Yang, Laixia; Wang, Bo; Zhou, Wenming
2017-12-01
In order to improve the bending strength of polystyrene (PS) parts produced by selective laser sintering, polystyrene/glass fiber (PS/GF) composite powders were prepared by a mechanical mixing method. The size distribution of the PS/GF composite powders was characterized by a laser particle size analyzer. The optimum ratio of GF was determined by proportioning sintering experiments. The influence of process parameters on the bending strength of PS and PS/GF sintered parts was studied by orthogonal testing. The results indicate that the particle size of the PS/GF composite powder is mainly distributed between 24.88 μm and 139.8 μm, and that a GF content of 10% gives the best strengthening effect. Finally, prototypes were sintered using the optimum parameters for the two materials; the PS/GF prototype shows good accuracy and high strength.
Techno-economic assessment of pellets produced from steam pretreated biomass feedstock
Shahrukh, Hassan; Oyedun, Adetoyese Olajire; Kumar, Amit; ...
2016-03-10
Minimum production cost and optimum plant size are determined for pellet plants for three types of biomass feedstock - forest residue, agricultural residue, and energy crops. The life cycle cost from harvesting to the delivery of the pellets to the co-firing facility is evaluated. The cost varies from 95 to 105 t⁻¹ for regular pellets and 146 to 156 t⁻¹ for steam pretreated pellets. The difference in the cost of producing regular and steam pretreated pellets per unit energy is in the range of 2 to 3 GJ⁻¹. The economic optimum plant size (i.e., the size at which pellet production cost is minimum) is found to be 190 kt for regular pellet production and 250 kt for steam pretreated pellets. Furthermore, sensitivity and uncertainty analyses were carried out to identify sensitive parameters and the effects of model error.
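The economic optimum plant size arises from a tradeoff: per-tonne capital cost falls with plant size (economies of scale) while per-tonne biomass transport cost rises with the collection radius. A toy version of that tradeoff; the cost coefficients and exponents are invented and only the shape of the curve is meaningful:

```python
import numpy as np

def cost_per_tonne(size_kt):
    """Invented cost model: scale economies vs. growing haul distance."""
    capital = 400.0 * size_kt ** -0.6       # falls with plant size
    transport = 1.5 * np.sqrt(size_kt)      # radius (hence haul) grows with size
    return capital + transport

sizes = np.linspace(50, 500, 1000)          # candidate plant sizes, kt/yr
costs = cost_per_tonne(sizes)
optimum = sizes[costs.argmin()]             # interior minimum of the U-curve
print(f"optimum plant size for this toy model ≈ {optimum:.0f} kt/yr")
```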
Lu, Xing; Zhao, Guoqun; Zhou, Jixue; Zhang, Cunsheng; Yu, Junquan
2018-01-01
In this paper, a new type of low-cost Mg-3.36Zn-1.06Sn-0.33Mn-0.27Ca (wt %) alloy ingot with a diameter of 130 mm and a length of 4800 mm was fabricated by semicontinuous casting. The microstructure and mechanical properties at different areas of the ingot were investigated, as were the microstructure and mechanical properties of the alloy under different one-step and two-step homogenization conditions. For the as-cast alloy, the average grain size and the second-phase size decrease from the center to the surface of the ingot, while the area fraction of the second phase increases gradually. At one-half of the radius of the ingot, the alloy presents the optimum comprehensive mechanical properties along the axial direction, which is attributed to the combined effect of relatively small grain size, low second-phase fraction, and uniform microstructure. For the as-homogenized alloy, the optimum two-step homogenization process parameters were determined as 340 °C × 10 h + 520 °C × 16 h. After the optimum homogenization, the proper size and morphology of the CaMgSn phase are conducive to improving the microstructure uniformity and the mechanical properties of the alloy. In addition, the yield strength of the alloy is reduced by 20.7% and the elongation is increased by 56.3%, which is more favorable for the subsequent hot deformation processing. PMID:29710818
Energetic tradeoffs control the size distribution of aquatic mammals
NASA Astrophysics Data System (ADS)
Gearty, William; McClain, Craig R.; Payne, Jonathan L.
2018-04-01
Four extant lineages of mammals have invaded and diversified in the water: Sirenia, Cetacea, Pinnipedia, and Lutrinae. Most of these aquatic clades are larger bodied, on average, than their closest land-dwelling relatives, but the extent to which potential ecological, biomechanical, and physiological controls contributed to this pattern remains untested quantitatively. Here, we use previously published data on the body masses of 3,859 living and 2,999 fossil mammal species to examine the evolutionary trajectories of body size in aquatic mammals through both comparative phylogenetic analysis and examination of the fossil record. Both methods indicate that the evolution of an aquatic lifestyle is driving three of the four extant aquatic mammal clades toward a size attractor at ~500 kg. The existence of this body size attractor and the relatively rapid selection toward, and limited deviation from, this attractor rule out most hypothesized drivers of size increase. These three independent body size increases and a shared aquatic optimum size are consistent with control by differences in the scaling of energetic intake and cost functions with body size between the terrestrial and aquatic realms. Under this energetic model, thermoregulatory costs constrain minimum size, whereas limitations on feeding efficiency constrain maximum size. The optimum size occurs at an intermediate value where thermoregulatory costs are low but feeding efficiency remains high. Rather than being released from size pressures, water-dwelling mammals are driven and confined to larger body sizes by the strict energetic demands of the aquatic medium.
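Evolution toward a "size attractor" of this kind is commonly modeled as an Ornstein-Uhlenbeck (OU) process on log body mass. A minimal simulation: the ~500 kg optimum comes from the abstract, while the ancestral mass, attraction strength, noise level, and time step are invented. Under OU dynamics the trajectory drifts to the optimum and then stays near it, unlike unconstrained Brownian motion:

```python
import numpy as np

rng = np.random.default_rng(4)
theta = np.log(500.0)            # optimum log body mass (kg), from the abstract
alpha, sigma, dt = 0.05, 0.05, 1.0   # OU pull, noise, step: all invented

x = np.log(50.0)                 # assumed ancestral (terrestrial-sized) mass
for _ in range(2000):
    # Euler step of dX = alpha*(theta - X)*dt + sigma*dW
    x += alpha * (theta - x) * dt + sigma * np.sqrt(dt) * rng.normal()
print(f"final mass ≈ {np.exp(x):.0f} kg")  # hovers near the 500 kg attractor
```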
Size of the Dynamic Bead in Polymers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agapov, Alexander L; Sokolov, Alexei P
2010-01-01
Presented analysis of neutron, mechanical, and MD simulation data available in the literature demonstrates that the dynamic bead size (the smallest subchain that still exhibits Rouse-like dynamics) in most polymers is significantly larger than the traditionally defined Kuhn segment. Moreover, our analysis emphasizes that even the static bead size (e.g., chain statistics) disagrees with the Kuhn segment length. We demonstrate that the deficiency of the Kuhn segment definition stems from the assumption of a chain being completely extended inside a single bead. The analysis suggests that representation of a real polymer chain by the bead-and-spring model with a single parameter C cannot be correct; more parameters are needed to reflect the details of the chain structure correctly in the bead-and-spring model.
NASA Astrophysics Data System (ADS)
Han, D. Y.; Cao, P.; Liu, J.; Zhu, J. B.
2017-12-01
Cutter spacing is an essential parameter in TBM design. However, few efforts have been made to study the optimum cutter spacing in combination with penetration depth. To investigate the influence of pre-set penetration depth and cutter spacing on sandstone breakage and TBM performance, a series of sequential laboratory indentation tests was performed in a biaxial compression state. Effects of parameters including penetration force, penetration depth, chip mass, chip size distribution, groove volume, specific energy and maximum angle of lateral crack were investigated. Results show that the total mass of chips, the groove volume and the observed optimum cutter spacing increase with increasing pre-set penetration depth. It is also found that the total mass of chips could serve as an alternative means of determining optimum cutter spacing. In addition, analysis of chip size distribution suggests that the mass of large chips is dominated by both cutter spacing and pre-set penetration depth. After fractal dimension analysis, we found that cutter spacing and pre-set penetration depth have negligible influence on the formation of small chips, which are formed by the squeezing of cutters and by surface abrasion caused by shear failure. Analysis of specific energy indicates that the observed optimum spacing/penetration ratio is 10 for this sandstone, at which the specific energy and the maximum angle of lateral cracks are smallest. The findings in this paper contribute to a better understanding of the coupled effect of cutter spacing and pre-set penetration depth on TBM performance and rock breakage, and provide some guidelines for cutter arrangement.
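Specific energy in indentation tests of this kind is conventionally the work done by the cutter (area under the force-penetration curve) divided by the volume of rock chipped. A sketch with an invented linear force ramp and groove volume:

```python
import numpy as np

def specific_energy(force_N, depth_m, groove_volume_m3):
    """Work under the force-penetration curve per unit chipped volume (J/m^3)."""
    f, d = np.asarray(force_N, float), np.asarray(depth_m, float)
    work = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(d))   # trapezoidal rule
    return work / groove_volume_m3

depth = np.linspace(0, 0.008, 50)            # 8 mm pre-set penetration, assumed
force = 3.0e5 * depth / depth.max()          # ramp to 300 kN, invented
se = specific_energy(force, depth, groove_volume_m3=2.0e-4)
print(f"specific energy = {se:.2e} J/m^3")   # 6.00e+06 for this invented case
```

In the study, computing this quantity across spacing/penetration combinations is what locates the optimum ratio of 10.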
Xiong, Hui; Sultan, Laith R; Cary, Theodore W; Schultz, Susan M; Bouzghar, Ghizlane; Sehgal, Chandra M
2017-05-01
The aim of this study was to assess the diagnostic performance of a leak-plugging segmentation method that we have developed for delineating breast masses on ultrasound images. Fifty-two biopsy-proven breast lesion images were analyzed by three observers using the leak-plugging and manual segmentation methods. From each segmentation method, grayscale and morphological features were extracted and classified as malignant or benign by logistic regression analysis. The performance of leak-plugging and manual segmentation was compared by the size of the lesion, the overlap area (Oa) between the margins, and the area under the ROC curve (Az). The lesion size from leak-plugging segmentation correlated closely with that from manual tracing (R² of 0.91). Oa was higher for leak plugging, 0.92 ± 0.01 and 0.86 ± 0.06 for benign and malignant masses, respectively, compared to 0.80 ± 0.04 and 0.73 ± 0.02 for manual tracings. Overall Oa between leak-plugging and manual segmentations was 0.79 ± 0.14 for benign and 0.73 ± 0.14 for malignant lesions. Az for leak plugging was consistently higher (0.910 ± 0.003) compared to 0.888 ± 0.012 for manual tracings. The coefficient of variation of Az between the three observers was 0.29% for leak plugging compared to 1.3% for manual tracings. The diagnostic performance, size measurements, and observer variability for automated leak-plugging segmentations were comparable to or better than those of manual tracings.
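The Az figure of merit used above is the area under the ROC curve, which can be computed directly from the Mann-Whitney rank identity: the fraction of (malignant, benign) pairs that the classifier scores in the correct order. The scores and labels below are invented stand-ins for the logistic-regression outputs:

```python
import numpy as np

def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney) identity."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, bool)
    pos, neg = scores[labels], scores[~labels]
    greater = (pos[:, None] > neg[None, :]).sum()   # correctly ordered pairs
    ties = (pos[:, None] == neg[None, :]).sum()     # tied pairs count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]   # malignancy scores, invented
labels = [1, 1, 0, 1, 0, 0]                # 1 = malignant (biopsy), invented
print(round(auc(scores, labels), 3))       # 0.889: 8 of 9 pairs ordered correctly
```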
The Influence of pH on Prokaryotic Cell Size and Temperature
NASA Astrophysics Data System (ADS)
Sundararajan, D.; Gutierrez, F.; Heim, N. A.; Payne, J.
2015-12-01
The pH of a habitat is essential to an organism's growth and success in its environment. Although most organisms maintain a neutral internal pH, their environmental pH can vary greatly. However, little research has been done on environmental pH tolerance across a wide range of taxa. We studied pH tolerance in prokaryotes and its relationship with biovolume, taxonomic classification, and optimum temperature. We had three hypotheses: pH and temperature are not correlated; pH tolerance is similar within taxonomic groups; and extremophiles have small cell sizes. To test these hypotheses, we used pH, size, and taxonomic data from The Prokaryotes. We found that the mean optimum external pH was neutral for prokaryotes as a whole and when divided by domain, phylum, and class. Using ANOVA to compare within- and among-group variances, we found that variation in pH within domains, phyla, classes, and families was greater than among them. pH and size showed little correlation, except that the largest and smallest prokaryotes had nearly neutral pH optima. This seems significant because extremophiles must divert more of their energy from growth to maintaining a neutral internal pH. Acidophiles showed a larger range of optimum pH values than alkaliphiles. A similar result was seen for the minimum and maximum pH values: while acidophiles were spread out, with some pH minima close to zero, alkaliphiles had smaller ranges, and their pH maxima did not exceed 12. No statistically significant difference was found between the sizes of acidophiles and alkaliphiles. However, the optimum temperatures of acidophiles and alkaliphiles did differ significantly, and pH and temperature were negatively correlated. Therefore, pH appears to be correlated with cell size, temperature, and taxonomy to some extent.
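The within- versus among-group comparison described above is a one-way ANOVA; a minimal SciPy sketch on invented optimum-pH values for three hypothetical phyla (group means are nearly equal, so the within-group spread dominates and the test finds no significant difference):

```python
from scipy import stats

# Invented optimum-pH samples for three hypothetical taxonomic groups
phylum_a = [6.8, 7.0, 7.2, 6.9, 7.1]
phylum_b = [6.5, 7.4, 7.0, 6.6, 7.3]
phylum_c = [7.1, 6.7, 7.2, 6.8, 7.0]

# One-way ANOVA: is among-group variance large relative to within-group?
f, p = stats.f_oneway(phylum_a, phylum_b, phylum_c)
print(p > 0.05)   # True: no significant among-group difference here
```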
NASA Astrophysics Data System (ADS)
Martini, R.; Barthelat, F.
2016-07-01
Flexible natural armors from fish, alligators or armadillos are attracting increasing attention for their unique combinations of hardness, flexibility and light weight. In particular, the extreme contrast of stiffness between hard plates and the surrounding soft tissues gives rise to unusual and attractive mechanisms, which now serve as models for the design of bio-inspired armors. Despite growing interest in bio-inspired flexible protection, there are few guidelines for the choice of materials and the optimum thickness, size, shape and arrangement of the protective plates. In this work, we focus on a failure mode we recently observed in natural and bio-inspired scaled armors: the unstable tilting of individual scales subjected to off-centered point forces. We first present a series of experiments on this system, followed by a model based on contact mechanics and friction. We condense the results into a single stability diagram which captures the key parameters that govern the onset of plate tilting under a localized force. We found that the stability of individual plates is governed by the location of the point force on the plate, the friction at the surface of the plate, the size of the plate and the stiffness of the substrate. We finally discuss how some of these parameters can be optimized at the design stage to produce bio-inspired protective systems with a desired combination of surface hardness, stability and flexural compliance.
Crash energy absorption of two-segment crash box with holes under frontal load
NASA Astrophysics Data System (ADS)
Choiron, Moch. Agus; Sudjito, Hidayati, Nafisah Arina
2016-03-01
A crash box is a passive safety component designed to absorb impact energy during a collision. Crash box designs have been developed to obtain optimum crashworthiness performance. A circular cross section was first investigated with a one-segment design, whose performance is strongly influenced by its length, making it sensitive to buckling. In this study, a two-segment crash box design with additional holes is investigated, and its deformation behavior and crash energy absorption are observed. The crash box is modelled by finite element analysis. The crash test components were the impactor, the crash box, and a fixed rigid base. The impactor and the fixed base are modelled as rigid bodies, and the crash box material as bilinear isotropic hardening. A crash box length of 100 mm and a frontal crash velocity of 16 km/h were selected, with aluminum alloy as the crash box material. The simulation results show that the configuration with 2 holes located at ¾ of the length has the largest crash energy absorption. This is associated with the deformation pattern: this crash box model produces an axisymmetric mode, unlike the other models.
Leclerc, Lara; Pourchez, Jérémie; Aubert, Gérald; Leguellec, Sandrine; Vecellio, Laurent; Cottier, Michèle; Durand, Marc
2014-09-01
Improvement of clinical outcome in patients with sinuses disorders involves targeting delivery of nebulized drug into the maxillary sinuses. We investigated the impact of nebulization conditions (with and without 100 Hz acoustic airflow), particle size (9.9 μm, 2.8 μm, 550 nm and 230 nm) and breathing pattern (nasal vs. no nasal breathing) on enhancement of aerosol delivery into the sinuses using a realistic nasal replica developed by our team. After segmentation of the airways by means of high-resolution computed tomography scans, a well-characterized nasal replica was created using a rapid prototyping technology. A total of 168 intrasinus aerosol depositions were performed with changes of aerosol particle size and breathing patterns under different nebulization conditions using gentamicin as a marker. The results demonstrate that the fraction of aerosol deposited in the maxillary sinuses is enhanced by use of submicrometric aerosols, e.g. 8.155 ± 1.476 mg/L of gentamicin in the left maxillary sinus for the 2.8 μm particles vs. 2.056 ± 0.0474 for the 550 nm particles. Utilization of 100-Hz acoustic airflow nebulization also produced a 2- to 3-fold increase in drug deposition in the maxillary sinuses (e.g. 8.155 ± 1.476 vs. 3.990 ± 1.690 for the 2.8 μm particles). Our study clearly shows that optimum deposition was achieved using submicrometric particles and 100-Hz acoustic airflow nebulization with no nasal breathing. It is hoped that our new respiratory nasal replica will greatly facilitate the development of more effective delivery systems in the future.
Optimization of solar cell contacts by system cost-per-watt minimization
NASA Technical Reports Server (NTRS)
Redfield, D.
1977-01-01
New, and considerably altered, optimum dimensions for solar-cell metallization patterns are found using the recently developed procedure whose optimization criterion is the minimum cost-per-watt effect on the entire photovoltaic system. It is also found that the optimum shadow fraction by the fine grid is independent of metal cost and resistivity as well as cell size. The optimum thickness of the fine grid metal depends on all these factors, and in familiar cases it should be appreciably greater than that found by less complete analyses. The optimum bus bar thickness is much greater than those generally used. The cost-per-watt penalty due to the need for increased amounts of metal per unit area on larger cells is determined quantitatively and thereby provides a criterion for the minimum benefits that must be obtained in other process steps to make larger cells cost effective.
NASA Technical Reports Server (NTRS)
Butler, R.; Williams, F. W.
1992-01-01
A computer program for obtaining the optimum (least mass) dimensions of the kind of prismatic assemblies of laminated, composite plates which occur in advanced aerospace construction is described. Rigorous buckling analysis (derived from exact member theory) and a tailored design procedure are used to produce designs which satisfy buckling and material strength constraints and configurational requirements. Analysis is two to three orders of magnitude quicker than FEM, keeps track of all the governing modes of failure and is efficiently adapted to give sensitivities and to maintain feasibility. Tailoring encourages convergence in fewer sizing cycles than competing programs and permits start designs which are a long way from feasible and/or optimum. Comparisons with its predecessor, PASCO, show that the program is more likely to produce an optimum, will do so more quickly in some cases, and remains accurate for a wider range of problems.
NASA Astrophysics Data System (ADS)
Marshaline Seles, M.; Suryanarayanan, R.; Vivek, S. S.; Dhinakaran, G.
2017-07-01
When conventional concrete is used for structures with dense, congested reinforcement, problems such as external compaction and vibration need special attention. In such cases, self-compacting concrete (SCC), which has flowability, passing ability and filling ability, is an obvious answer. SCC flow behavior was governed by EFNARC specifications. In the present study, a combination-type SCC was prepared by replacing cement with silica fume (SF) and metakaolin (MK) along with optimum dosages of chemical admixtures. From the fresh-property tests, cube compressive strength and cylinder split tensile strength, the optimum ternary mix was obtained. To study flexural behavior, beam specimens of the optimum ternary mix, of size 1200 mm x 100 mm x 200 mm, were designed as singly reinforced sections according to IS: 456-2000 (limit state method). Finally, a comparative experimental analysis was made between conventional RCC and SCC beams of the same grade in terms of flexural strength, namely yield load and ultimate load, load-deflection curve, and crack size and pattern.
Stephen, Renu M.; Jha, Abhinav K.; Roe, Denise J.; Trouard, Theodore P.; Galons, Jean-Philippe; Kupinski, Matthew A.; Frey, Georgette; Cui, Haiyan; Squire, Scott; Pagel, Mark D.; Rodriguez, Jeffrey J.; Gillies, Robert J.; Stopeck, Alison T.
2015-01-01
Purpose: To assess the value of semi-automated segmentation applied to diffusion MRI for predicting the therapeutic response of liver metastasis. Methods: Conventional diffusion weighted magnetic resonance imaging (MRI) was performed using b-values of 0, 150, 300 and 450 s/mm2 at baseline and days 4, 11 and 39 following initiation of a new chemotherapy regimen in a pilot study with 18 women with 37 liver metastases from primary breast cancer. A semi-automated segmentation approach was used to identify liver metastases. Linear regression analysis was used to assess the relationship between baseline values of the apparent diffusion coefficient (ADC) and change in tumor size by day 39. Results: A semi-automated segmentation scheme was critical for obtaining the most reliable ADC measurements. A statistically significant relationship between baseline ADC values and change in tumor size at day 39 was observed for minimally treated patients with metastatic liver lesions measuring 2-5 cm in size (p = 0.002), but not for heavily treated patients with the same tumor size range (p = 0.29), or for tumors of smaller or larger sizes. ROC analysis identified a baseline threshold ADC value of 1.33 μm2/ms as 75% sensitive and 83% specific for identifying non-responding metastases in minimally treated patients with 2-5 cm liver lesions. Conclusion: Quantitative imaging can substantially benefit from a semi-automated segmentation scheme. Quantitative diffusion MRI results can be predictive of therapeutic outcome in selected patients with liver metastases, but not for all liver metastases, and therefore should be considered a restricted biomarker. PMID:26284600
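The reported ROC result amounts to a single-threshold classifier: a baseline ADC below the cutoff flags a lesion as a likely non-responder. A sketch of that sensitivity/specificity calculation follows; the ADC values and response labels are entirely hypothetical, only the 1.33 μm²/ms threshold comes from the abstract:

```python
# Threshold classification sketch: baseline ADC below the cutoff flags a
# non-responding metastasis. Lesion data below are hypothetical.
threshold = 1.33  # um^2/ms, from the abstract
lesions = [  # (baseline_adc, responded_to_therapy)
    (1.10, False), (1.20, False), (1.25, False), (1.45, False),
    (1.50, True), (1.60, True), (1.38, True), (1.30, True),
]

tp = sum(1 for adc, resp in lesions if adc < threshold and not resp)
fn = sum(1 for adc, resp in lesions if adc >= threshold and not resp)
tn = sum(1 for adc, resp in lesions if adc >= threshold and resp)
fp = sum(1 for adc, resp in lesions if adc < threshold and resp)

sensitivity = tp / (tp + fn)  # fraction of non-responders correctly flagged
specificity = tn / (tn + fp)  # fraction of responders correctly passed
```

Sweeping the threshold over all observed ADC values and plotting sensitivity against 1 − specificity would reproduce the full ROC curve from which the paper's cutoff was chosen.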
Segmental Isotopic Labeling of Proteins for Nuclear Magnetic Resonance
Dongsheng, Liu; Xu, Rong; Cowburn, David
2009-01-01
Nuclear Magnetic Resonance (NMR) spectroscopy has emerged as one of the principal techniques of structural biology. It is not only a powerful method for elucidating 3D structures under near-physiological conditions, but also a convenient method for studying protein-ligand interactions and protein dynamics. A major drawback of macromolecular NMR is its size limitation, caused by slower tumbling rates and greater complexity of the spectra as size increases. Segmental isotopic labeling allows specific segment(s) within a protein to be selectively examined by NMR, thus significantly reducing the spectral complexity for large proteins and allowing a variety of solution-based NMR strategies to be applied. Two related approaches are generally used in the segmental isotopic labeling of proteins: expressed protein ligation and protein trans-splicing. Here we describe the methodology and recent application of expressed protein ligation and protein trans-splicing for NMR structural studies of proteins and protein complexes. We also describe the protocol used in our lab for the segmental isotopic labeling of a 50 kDa protein Csk (C-terminal Src Kinase) using expressed protein ligation methods. PMID:19632474
NASA Astrophysics Data System (ADS)
Garg, Ishita; Karwoski, Ronald A.; Camp, Jon J.; Bartholmai, Brian J.; Robb, Richard A.
2005-04-01
Chronic obstructive pulmonary diseases (COPD) are debilitating conditions of the lung and are the fourth leading cause of death in the United States. Early diagnosis is critical for timely intervention and effective treatment. The ability to quantify particular imaging features of specific pathology and accurately assess progression or response to treatment with current imaging tools is relatively poor. The goal of this project was to develop automated segmentation techniques that would be clinically useful as computer-assisted diagnostic tools for COPD. The lungs were segmented using an optimized segmentation threshold and the trachea was segmented using a fixed threshold characteristic of air. The segmented images were smoothed by a morphological close operation using spherical elements of different sizes. The results were compared to other segmentation approaches using an optimized threshold to segment the trachea. Comparison of the segmentation results from 10 datasets showed that the method of trachea segmentation using a fixed air threshold followed by morphological closing with a structuring element of size 23x23x5 yielded the best results. Inclusion of a greater number of pulmonary vessels in the lung volume is important for the development of computer-assisted diagnostic tools because the physiological changes of COPD can result in quantifiable anatomic changes in pulmonary vessels. Using a fixed threshold to segment the trachea removed airways from the lungs to a better extent than using an optimized threshold. Preliminary measurements gathered from patients' CT scans suggest that segmented images can be used for accurate analysis of total lung volume and volumes of regional lung parenchyma. Additionally, reproducible segmentation allows for quantification of specific pathologic features, such as lower-intensity pixels, which are characteristic of abnormal air spaces in diseases like emphysema.
Sample Size and Allocation of Effort in Point Count Sampling of Birds in Bottomland Hardwood Forests
Winston P. Smith; Daniel J. Twedt; Robert J. Cooper; David A. Wiedenfeld; Paul B. Hamel; Robert P. Ford
1995-01-01
To examine sample size requirements and optimum allocation of effort in point count sampling of bottomland hardwood forests, we computed minimum sample sizes from variation recorded during 82 point counts (May 7-May 16, 1992) from three localities containing three habitat types across three regions of the Mississippi Alluvial Valley (MAV). Also, we estimated the effect...
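A minimum sample size computed from observed count variation, as described above, typically follows the standard variance-based formula n = (z·s/E)². A sketch with hypothetical numbers (the paper's actual point-count variances are not given here):

```python
# Minimum number of point counts needed to estimate a mean bird count
# within a chosen margin of error. All numbers are hypothetical.
import math

std_dev = 4.2   # SD of birds detected per point count (assumed)
margin = 1.0    # desired half-width of the confidence interval
z = 1.96        # two-sided 95% confidence

n = math.ceil((z * std_dev / margin) ** 2)
# n point counts are required; halving the margin quadruples n.
```

The quadratic dependence on precision is why allocation of effort matters: tightening the margin of error from 1.0 to 0.5 birds multiplies the required number of counts by four.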
Content-based audio authentication using a hierarchical patchwork watermark embedding
NASA Astrophysics Data System (ADS)
Gulbis, Michael; Müller, Erika
2010-05-01
Content-based audio authentication watermarking techniques extract perceptually relevant audio features, which are robustly embedded into the audio file to be protected. Manipulations of the audio file are detected on the basis of changes between the originally embedded feature information and the newly extracted features during verification. The main challenges of content-based watermarking are, on the one hand, the identification of a suitable audio feature to distinguish between content-preserving and malicious manipulations, and on the other, the development of a watermark which is robust against content-preserving modifications and able to carry the whole authentication information. The payload requirements are significantly higher compared to transaction watermarking or copyright protection. Finally, the watermark embedding should not influence the feature extraction, to avoid false alarms. Current systems still lack a sufficient alignment of watermarking algorithm and feature extraction. In previous work we developed a content-based audio authentication watermarking approach. The feature is based on changes in the DCT domain over time. A patchwork-algorithm-based watermark was used to embed multiple one-bit watermarks. The embedding process uses the feature domain without inflicting distortions on the feature. The watermark payload is limited by the feature extraction, more precisely the critical bands, and is inversely proportional to the segment duration of the audio file segmentation. Transparency behavior was analyzed as a function of segment size, and thus of watermark payload. At a segment duration of about 20 ms the transparency shows an optimum (measured in units of Objective Difference Grade). Transparency and/or robustness decrease rapidly for working points beyond this region, so those working points are unsuitable for gaining the further payload needed to embed the whole authentication information.
In this paper we present a hierarchical extension of the watermark method to overcome the limitations imposed by the feature extraction. The approach applies the patchwork algorithm recursively to its own patches, with a modified patch selection to ensure a better signal-to-noise ratio for the watermark embedding. The robustness evaluation was done with compression (MP3, Ogg, AAC), normalization, and several attacks from the StirMark Benchmark for Audio suite. Compared at the same payload and transparency, the hierarchical approach shows improved robustness.
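The core patchwork idea the hierarchy builds on can be sketched in a few lines: one pseudo-randomly keyed patch of samples is raised and a second lowered by a small delta, and the bit is detected from the difference of patch means. This is a generic patchwork sketch, not the paper's DCT-domain implementation, and all parameters are illustrative:

```python
# Minimal patchwork-style embedding of a single watermark bit.
import random

def embed_bit(samples, key, delta=0.5):
    # Derive two disjoint patches from a keyed pseudo-random shuffle.
    rng = random.Random(key)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    half = len(idx) // 2
    patch_a, patch_b = idx[:half], idx[half:2 * half]
    out = list(samples)
    for i in patch_a:
        out[i] += delta   # raise patch A
    for i in patch_b:
        out[i] -= delta   # lower patch B
    return out

def detect_bit(samples, key):
    # Recreate the same patches and compare their means.
    rng = random.Random(key)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    half = len(idx) // 2
    patch_a, patch_b = idx[:half], idx[half:2 * half]
    mean_a = sum(samples[i] for i in patch_a) / half
    mean_b = sum(samples[i] for i in patch_b) / half
    return mean_a - mean_b > 0   # True if the bit was embedded

signal = [0.0] * 64            # stand-in for one audio segment
marked = embed_bit(signal, key=42)
```

The hierarchical extension described above would then re-apply this embedding recursively within each patch, which is what recovers payload beyond the single-segment limit.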
Optimum systems design with random input and output applied to solar water heating
NASA Astrophysics Data System (ADS)
Abdel-Malek, L. L.
1980-03-01
Solar water heating systems are evaluated. Models were developed to estimate the percentage of energy supplied from the Sun to a household. Since solar water heating systems have random input and output, queueing theory and birth-and-death processes were the major tools in developing the evaluation models. Microeconomic methods help determine the optimum size of the solar water heating system design parameters, i.e., the water tank volume and the collector area.
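A birth-death model of the kind mentioned above has a closed-form steady state when the birth (solar input) and death (demand) rates are constant. A sketch with illustrative rates and a finite tank-state space (the paper's actual model structure is not reproduced here):

```python
# Steady-state probabilities of a finite birth-death chain, as a toy
# stand-in for the hot-water state of a tank. Rates are illustrative.
birth = 2.0      # rate at which heated water units are added (assumed)
death = 3.0      # rate at which units are drawn off (assumed)
n_states = 10    # finite capacity of the tank model

rho = birth / death
weights = [rho ** n for n in range(n_states + 1)]
total = sum(weights)
p = [w / total for w in weights]   # p[n] = P(tank holds n units)
```

With demand exceeding solar input (rho < 1), the probabilities decay geometrically, so the empty state is the most likely; sizing the tank and collector shifts `rho` and hence the fraction of demand the Sun can cover.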
Bavarsad, Neda; Akhgari, Abbas; Seifmanesh, Somayeh; Salimi, Anayatollah; Rezaie, Annahita
2016-02-29
The aim of this study was to develop and optimize a deformable liposome for topical delivery of tretinoin. Liposomal formulations were designed based on a full factorial design and prepared by the fusion method. The influence of different ratios of soy phosphatidylcholine and transcutol (independent variables) on incorporation efficiency and drug release at 15 min and 24 h (responses) was evaluated. Liposomes were characterized for vesicle size, and Differential Scanning Calorimetry (DSC) was used to investigate changes in their thermal behavior. Penetration and retention of the drug were determined using mouse skin, and a skin histology study was performed. The particle size of all formulations was smaller than 20 nm. Incorporation efficiency of the liposomes was 79-93%. Formulation F7 (25:5) showed maximum drug release. Optimum formulations were selected based on contour plots derived from the statistical equations for drug release at 15 min and 24 h. The solubility properties of transcutol led to higher skin penetration for the optimum formulations compared to tretinoin cream. There was no significant difference in the amount of drug retained in the skin between the optimum formulations and the cream. Histopathological investigation suggested the optimum formulations could decrease the adverse effects of tretinoin in liposomes compared to the conventional cream. According to the results of the study, it is concluded that deformable liposomes containing transcutol may be successfully used for dermal delivery of tretinoin.
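A full factorial design, as used above, simply enumerates every combination of factor levels. A sketch with two hypothetical factor-level sets (the paper's actual levels are not given, apart from the F7 ratio of 25:5):

```python
# Full factorial design: every combination of the two formulation
# factors becomes one experimental run. Levels below are hypothetical.
from itertools import product

phosphatidylcholine = [15, 20, 25]   # ratio levels (assumed)
transcutol = [0, 2.5, 5]             # ratio levels (assumed)

design = list(product(phosphatidylcholine, transcutol))
# 3 levels x 3 levels -> 9 runs; (25, 5) corresponds to an F7-like mix.
```

Each tuple in `design` is one formulation to prepare and measure, which is what allows the response-surface (contour-plot) selection of the optimum described in the abstract.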
Minding the gaps: literacy enhances lexical segmentation in children learning to read.
Havron, Naomi; Arnon, Inbal
2017-11-01
Can emergent literacy impact the size of the linguistic units children attend to? We examined children's ability to segment multiword sequences before and after they learned to read, in order to disentangle the effect of literacy and age on segmentation. We found that early readers were better at segmenting multiword units (after controlling for age, cognitive, and linguistic variables), and that improvement in literacy skills between the two sessions predicted improvement in segmentation abilities. Together, these findings suggest that literacy acquisition, rather than age, enhanced segmentation. We discuss implications for models of language learning.
NASA Technical Reports Server (NTRS)
Ibrahim, Mounir; Danila, Daniel; Simon, Terrence; Mantell, Susan; Sun, Liyong; Gadeon, David; Qiu, Songgang; Wood, Gary; Kelly, Kevin; McLean, Jeffrey
2007-01-01
An actual-size microfabricated regenerator composed of a stack of 42 disks, 19 mm in diameter and 0.25 mm thick, with layers of microscopic, segmented, involute-shaped flow channels was fabricated and tested. The geometry resembles layers of uniformly-spaced segmented parallel plates, except the plates are curved. Each disk was made from electroplated nickel using the LIGA process. This regenerator had feature sizes close to those required for an actual Stirling engine, but the overall regenerator dimensions were sized for the NASA/Sunpower oscillating-flow regenerator test rig. Testing in the oscillating-flow test rig showed the regenerator performed extremely well, significantly better than currently used random-fiber material, producing the highest figures of merit ever recorded for any regenerator tested in that rig over its approximately 20 years of use.
Algorithms for optimization of branching gravity-driven water networks
NASA Astrophysics Data System (ADS)
Dardani, Ian; Jones, Gerard F.
2018-05-01
The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.
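The discrete-diameter search described above can be illustrated with a toy exhaustive version of the backtracking idea: pick one diameter per pipe segment to minimize cost while keeping total head loss under a budget. Costs, per-segment losses and the head budget below are made up, and a real GDWN solver would compute losses from pipe hydraulics rather than tabulate them:

```python
# Toy discrete-diameter network design: minimize cost subject to a
# total head-loss budget. All numbers are illustrative.
from itertools import product

# (cost per segment, head loss per segment) for each candidate diameter,
# ordered small to large; larger pipes cost more but lose less head.
candidates = [(10.0, 8.0), (18.0, 4.0), (30.0, 1.5)]
n_pipes = 3
head_budget = 12.0

best_cost, best_design = float("inf"), None
for design in product(range(len(candidates)), repeat=n_pipes):
    cost = sum(candidates[i][0] for i in design)
    loss = sum(candidates[i][1] for i in design)
    if loss <= head_budget and cost < best_cost:
        best_cost, best_design = cost, design
# best_design holds the chosen diameter index for each pipe segment.
```

The backtracking algorithm in the paper prunes this same search tree (abandoning partial designs that already violate the head budget or exceed the best cost found), which is what makes it exact yet tractable for all but the largest test case.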
NASA Astrophysics Data System (ADS)
Shirazi Tehrani, A.; Almasi Kashi, M.; Ramazani, A.; Montazer, A. H.
2016-07-01
Arrays of multilayered Ni/Cu nanowires (NWs) with variable segment sizes were fabricated into anodic aluminum oxide templates using a pulsed electrodeposition method in a single bath for designated potential pulse times. Increasing the pulse time between 0.125 and 2 s in the electrodeposition of Ni enabled the formation of segments with thicknesses ranging from 25 to 280 nm and 10-110 nm in 42 and 65 nm diameter NWs, respectively, leading to disk-shaped, rod-shaped and/or near wire-shaped geometries. Using hysteresis loop measurements at room temperature, the axial and perpendicular magnetic properties were investigated. Regardless of the segment geometry, the axial coercivity and squareness significantly increased with increasing Ni segment thickness, in agreement with a decrease in calculated demagnetizing factors along the NW length. On the contrary, the perpendicular magnetic properties were found to be independent of the pulse times, indicating a competition between the intrawire interactions and the shape demagnetizing field.
Toward a theory of energetically optimal body size in growing animals.
Hannon, B M; Murphy, M R
2016-06-01
Our objective was to formulate a general and useful model of the energy economy of the growing animal. We developed a theory that the respiratory energy per unit of size reaches a minimum at a particular point, when the marginal respiratory heat production rate is equal to the average rate. This occurs at what we defined as the energetically optimal size for the animal. The relationship between heat production rate and size was found to be well described by a cubic function in which heat production rate accelerates as the animal approaches and then exceeds its optimal size. Reanalysis of energetics data from the literature often detected cubic curvature in the relationship between heat production rate and body size of fish, rats, chickens, goats, sheep, swine, cattle, and horses. This finding was consistent with the theory for 13 of 17 data sets. The bias-corrected Akaike information criterion indicated that the cubic equation modeled the influence of the size of a growing animal on its heat production rate better than a power function for 11 of 17 data sets. Changes in the sizes and specific heat production rates of metabolically active internal organs, and body composition and tissue turnover rates were found to explain notable portions of the expected increase in heat production rate as animals approached and then exceeded their energetically optimum size. Accelerating maintenance costs in this region decrease net energy available for productive functions. Energetically and economically optimum size criteria were also compared.
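The optimality condition above (marginal heat production equal to the average rate) can be worked through for a concrete cubic. With H(m) = a·m³ + b·m² + c·m, the average rate H/m = a·m² + b·m + c is minimized where its derivative 2a·m + b vanishes, i.e. at m* = −b/(2a), and exactly there H′(m*) = H(m*)/m*. The coefficients below are illustrative, not fitted values from the paper:

```python
# Energetically optimal size for a cubic heat-production model:
# marginal rate equals average rate at m* = -b / (2a).
# Coefficients are illustrative only.
a, b, c = 0.01, -0.6, 12.0

def heat(m):
    """Heat production rate H(m) = a*m^3 + b*m^2 + c*m."""
    return a * m**3 + b * m**2 + c * m

def marginal(m):
    """dH/dm = 3a*m^2 + 2b*m + c."""
    return 3 * a * m**2 + 2 * b * m + c

m_opt = -b / (2 * a)   # minimizer of the average rate H(m)/m
# At the optimum, marginal and average rates coincide.
gap = marginal(m_opt) - heat(m_opt) / m_opt
```

Past m*, the average energy cost per unit of size accelerates, which is the model's explanation for shrinking net energy available for productive functions in animals growing beyond their energetically optimal size.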
Emoto, Akira; Fukuda, Takashi
2013-02-20
For Fourier transform holography, an effective random phase distribution with randomly displaced phase segments is proposed for obtaining a smooth finite optical intensity distribution in the Fourier transform plane. Since unitary phase segments are randomly distributed in-plane, the blanks give various spatial frequency components to an image, and thus smooth the spectrum. Moreover, by randomly changing the phase segment size, spike generation from the unitary phase segment size in the spectrum can be reduced significantly. As a result, a smooth spectrum including sidebands can be formed at a relatively narrow extent. The proposed phase distribution sustains the primary functions of a random phase mask for holographic-data recording and reconstruction. Therefore, this distribution is expected to find applications in high-density holographic memory systems, replacing conventional random phase mask patterns.
Dietary specialization is linked to reduced species durations in North American fossil canids
NASA Astrophysics Data System (ADS)
Balisi, Mairin; Casey, Corinna; Van Valkenburgh, Blaire
2018-04-01
How traits influence species persistence is a fundamental question in ecology, evolution and palaeontology. We test the relationship between dietary traits and both species duration and locality coverage over 40 million years in North American canids, a clade with considerable ecomorphological disparity and a dense fossil record. Because ecomorphological generalization-broad resource use-may enable species to withstand disturbance, we predicted that canids of average size and mesocarnivory would exhibit longer durations and wider distributions than specialized larger or smaller species. Second, because locality coverage might reflect dispersal ability and/or survivability in a range of habitats, we predicted that high coverage would correspond with longer durations. We find a nonlinear relationship between species duration and degree of carnivory: species at either end of the carnivory spectrum tend to have shorter durations than mesocarnivores. Locality coverage shows no relationship with size, diet or duration. To test whether generalization (medium size, mesocarnivory) corresponds to an adaptive optimum, we fit trait evolution models to previously generated canid phylogenies. Our analyses identify no single optimum in size or diet. Instead, the primary model of size evolution is a classic Cope's Rule increase over time, while dietary evolution does not conform to a single model.
Optimization of Gate, Runner and Sprue in Two-Plate Family Plastic Injection Mould
NASA Astrophysics Data System (ADS)
Amran, M. A.; Hadzley, M.; Amri, S.; Izamshah, R.; Hassan, A.; Samsi, S.; Shahir, K.
2010-03-01
This paper describes the optimization of the size of the gate, runner and sprue in a two-plate family plastic injection mould. An Electronic Cash Register (ECR) plastic product, consisting of three components (top casing, bottom casing and paper holder), was used in this study. The objectives of this paper are to find the optimum size of the gate, runner and sprue, to locate the optimum layout of cavities, and to identify the defect problems caused by incorrectly sized gates, runners and sprues. Three types of software were used in this study: Unigraphics as the CAD tool to design the 3D model, Rhinoceros as the post-processing tool to design the gate, runner and sprue, and Moldex as the simulation tool to analyze the plastic flow. As a result, modifications were made to the size of the feeding system and the location of the cavities to eliminate short-shot, overfilling and weld line problems in the two-plate family plastic injection mould.
A novel pipeline for adrenal tumour segmentation.
Koyuncu, Hasan; Ceylan, Rahime; Erdogan, Hasan; Sivri, Mesut
2018-06-01
Adrenal tumours occur on the adrenal glands, which are surrounded by organs and osteoid. These tumours can be categorized as functional or non-functional, and as malignant or benign. Depending on their appearance in the abdomen, adrenal tumours can arise from one adrenal gland (unilateral) or from both adrenal glands (bilateral) and can adhere to other organs, including the liver, spleen, pancreas, etc. This adherence phenomenon constitutes the most important obstacle to adrenal tumour segmentation. Size change, variety of shape, diverse location, and low contrast (similar grey values between the various tissues) are other disadvantages compounding segmentation difficulty. Few studies have considered adrenal tumour segmentation, and no significant improvement has been achieved for unilateral, bilateral, adherent, or non-cohesive tumour segmentation. There is also no recognised segmentation pipeline or method for adrenal tumours that accounts for different shape, size, or location information. This study proposes an adrenal tumour segmentation (ATUS) pipeline designed to eliminate the above disadvantages. ATUS incorporates a number of image processing methods, including contrast limited adaptive histogram equalization, split and merge based on quadtree decomposition, mean shift segmentation, a large grey level eliminator, and region growing. Performance assessment of ATUS was realised on 32 arterial- and portal-phase computed tomography images using six metrics: Dice, Jaccard, sensitivity, specificity, accuracy, and structural similarity index. ATUS achieved remarkable segmentation performance and was not affected by the handicaps discussed, particularly adherence to other organs, with success rates of 83.06%, 71.44%, 86.44%, 99.66%, 99.43%, and 98.51% for the respective metrics on images with sufficient contrast uptake.
The proposed ATUS system realises detailed adrenal tumour segmentation, and avoids known disadvantages preventing accurate segmentation. Copyright © 2018 Elsevier B.V. All rights reserved.
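The region-growing stage named in the ATUS pipeline can be sketched as follows; the toy grid, seed point, and tolerance below are illustrative assumptions, not the paper's implementation details:

```python
from collections import deque

def region_grow(image, seed, tol):
    """4-connected region growing: add neighbours whose grey value
    is within `tol` of the seed pixel's value."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    region = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in region
                    and abs(image[nr][nc] - seed_val) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region

# Toy 5x5 "CT slice": a bright 3x3 tumour (grey ~200) on a dark background (~50)
img = [
    [50, 50, 50, 50, 50],
    [50, 198, 200, 202, 50],
    [50, 199, 201, 200, 50],
    [50, 200, 198, 199, 50],
    [50, 50, 50, 50, 50],
]
tumour = region_grow(img, seed=(2, 2), tol=10)
print(len(tumour))  # -> 9
```

In the full pipeline this step runs only after contrast enhancement and grey-level elimination have suppressed the adjacent-organ intensities.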
NASA Astrophysics Data System (ADS)
Corradini, Patricia Gon; Pires, Felipe I.; Paganin, Valdecir A.; Perez, Joelma; Antolini, Ermete
2012-09-01
The effect of the relationship between particle size (d), inter-particle distance (x_i), and metal loading (y) of carbon-supported fuel cell Pt or PtRu catalysts on their catalytic activity was evaluated, based on the optimum d (2.5-3 nm) and x_i/d (>5) values. It was found that for y < 30 wt%, the optimum values of both d and x_i/d can always be obtained. For y ≥ 30 wt%, instead, the positive effect of a thinner catalyst layer in the fuel cell electrode (relative to catalysts with y < 30 wt%) is accompanied by a decrease of the effective catalyst surface area, due to an increase of d and/or a decrease of x_i/d relative to their optimum values, which in turn gives rise to a decrease in catalytic activity. The effect of the x_i/d ratio was verified experimentally for ethanol oxidation on PtRu/C catalysts with the same particle size and the same degree of alloying but different metal loadings. Tests in direct ethanol fuel cells showed that, compared to 20 wt% PtRu/C, the negative effect of the lower x_i/d on the catalytic activity of 30 and 40 wt% PtRu/C catalysts outweighed the positive effect of the thinner catalyst layer.
ERIC Educational Resources Information Center
Dietrich, Timo; Rundle-Thiele, Sharyn; Leo, Cheryl; Connor, Jason
2015-01-01
Background: According to commercial marketing theory, a market orientation leads to improved performance. Drawing on the social marketing principles of segmentation and audience research, the current study seeks to identify segments to examine responses to a school-based alcohol social marketing program. Methods: A sample of 371 year 10 students…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goodsitt, Mitchell M., E-mail: goodsitt@umich.edu; Shenoy, Apeksha; Howard, David
2014-05-15
Purpose: To evaluate a three-equation three-unknown dual-energy quantitative CT (DEQCT) technique for determining region specific variations in bone spongiosa composition for improved red marrow dose estimation in radionuclide therapy. Methods: The DEQCT method was applied to 80/140 kVp images of patient-simulating lumbar sectional body phantoms of three sizes (small, medium, and large). External calibration rods of bone, red marrow, and fat-simulating materials were placed beneath the body phantoms. Similar internal calibration inserts were placed at vertebral locations within the body phantoms. Six test inserts of known volume fractions of bone, fat, and red marrow were also scanned. External-to-internal calibration correction factors were derived. The effects of body phantom size, radiation dose, spongiosa region segmentation granularity [single (∼17 × 17 mm) region of interest (ROI), 2 × 2, and 3 × 3 segmentation of that single ROI], and calibration method on the accuracy of the calculated volume fractions of red marrow (cellularity) and trabecular bone were evaluated. Results: For standard low dose DEQCT x-ray technique factors and the internal calibration method, the RMS errors of the estimated volume fractions of red marrow of the test inserts were 1.2–1.3 times greater in the medium body than in the small body phantom and 1.3–1.5 times greater in the large body than in the small body phantom. RMS errors of the calculated volume fractions of red marrow within 2 × 2 segmented subregions of the ROIs were 1.6–1.9 times greater than for no segmentation, and RMS errors for 3 × 3 segmented subregions were 2.3–2.7 times greater than those for no segmentation. Increasing the dose by a factor of 2 reduced the RMS errors of all constituent volume fractions by an average factor of 1.40 ± 0.29 for all segmentation schemes and body phantom sizes; increasing the dose by a factor of 4 reduced those RMS errors by an average factor of 1.71 ± 0.25.
Results for external calibrations exhibited much larger RMS errors than size-matched internal calibration. Use of an average body size external-to-internal calibration correction factor reduced the errors to closer to those for internal calibration. RMS errors of less than 30%, or about 0.01 for the bone and 0.1 for the red marrow volume fractions, would likely be satisfactory for human studies. Such accuracies were achieved for 3 × 3 segmentation of 5 mm slice images for: (a) internal calibration with 4 times dose for all size body phantoms, (b) internal calibration with 2 times dose for the small and medium size body phantoms, and (c) corrected external calibration with 4 times dose and all size body phantoms. Conclusions: Phantom studies are promising and demonstrate the potential to use dual energy quantitative CT to estimate the spatial distributions of red marrow and bone within the vertebral spongiosa.
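The three-equation three-unknown formulation can be illustrated with a small sketch: one CT-number equation per beam energy, plus the constraint that the bone, marrow, and fat volume fractions sum to one. The calibration CT numbers below are invented for illustration and are not the study's measured values:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Cramer's rule."""
    def det(m):
        return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
              - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
              + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))
    d = det(A)
    return [det([[b[i] if k == j else A[i][k] for k in range(3)]
                 for i in range(3)]) / d
            for j in range(3)]

# Illustrative calibration CT numbers (HU) of pure bone, red marrow, and fat
# at 80 and 140 kVp -- NOT the paper's values.
hu80  = {'bone': 1000.0, 'marrow': 60.0, 'fat': -120.0}
hu140 = {'bone':  700.0, 'marrow': 50.0, 'fat': -90.0}

A = [[hu80['bone'],  hu80['marrow'],  hu80['fat']],
     [hu140['bone'], hu140['marrow'], hu140['fat']],
     [1.0,           1.0,             1.0]]

# Synthetic "measured" voxel built from 30% bone, 50% marrow, 20% fat
b = [0.3*1000 + 0.5*60 + 0.2*(-120),
     0.3*700 + 0.5*50 + 0.2*(-90),
     1.0]

fb, fm, ff = solve3(A, b)
print(round(fb, 3), round(fm, 3), round(ff, 3))  # -> 0.3 0.5 0.2
```

The internal-versus-external calibration question in the abstract amounts to how accurately the rows of `A` represent the attenuation of pure constituents at the voxel's position in the body.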
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marion, William F; Deline, Christopher A; Asgharzadeh, Amir
In this paper, we present the effect of installation parameters (tilt angle, height above ground, and albedo) on the bifacial gain and energy yield of three south-facing photovoltaic (PV) system configurations: a single module, a row of five modules, and five rows of five modules, using a RADIANCE-based ray-tracing model. We show that height and albedo have a direct impact on the performance of bifacial systems. However, the impact of the tilt angle is more complicated. Seasonal optimum tilt angles depend on parameters such as height, albedo, size of the system, weather conditions, and time of the year. For a single bifacial module installed in Albuquerque, NM, USA (35 degrees N) with a reasonable clearance (~1 m) from the ground, the seasonal optimum tilt angle is lowest (~5 degrees) for the summer solstice and highest (~65 degrees) for the winter solstice. For larger systems, seasonal optimum tilt angles are usually higher and can be up to 20 degrees greater than that for a single-module system. Annual simulations also indicate that for larger fixed-tilt systems installed on highly reflective ground (such as snow or a white roofing material with an albedo of ~81%), the optimum tilt angle is higher than that of smaller systems. We also show that modules in larger-scale systems generate less energy due to horizon blocking and the large shadow cast by the modules on the ground. For an albedo of 21%, the center module in a large array generates up to 7% less energy than a single bifacial module. To validate our model, we utilize measured data from Sandia National Laboratories' fixed-tilt bifacial PV testbed and compare it with our simulations.
Frequency stability of maser oscillators operated with cavity Q. [hydrogen and rubidium masers
NASA Technical Reports Server (NTRS)
Tetu, M.; Tremblay, P.; Lesage, P.; Petit, P.; Audoin, C.
1982-01-01
The short-term frequency stability of masers equipped with an external feedback loop to increase the cavity quality factor was studied. The frequency stability of a hydrogen and a rubidium maser was measured and compared with theoretical evaluations. It is shown that the frequency stability passes through an optimum when the cavity Q is varied. Long-term fluctuations are discussed, and the optimum mid-term frequency stability achievable by small-size active and passive H-masers is considered.
Know how to maximize maintenance spending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carrino, A.J.; Jones, R.B.; Platt, W.E.
Solomon has developed a methodology to determine the optimum point where availability meets maintenance spending for Powder River Basin (PRB) coal-fired units. Using a database of sufficient size and composition across various operating ranges, Solomon generated an algorithm that predicts the relationship between maintenance spending and availability. Coupling this generalized algorithm with a unit-specific market-loss curve determines the optimum spending for a facility. The article presents the results of the analysis, shows how this methodology can be applied to develop optimum operating and financial targets for specific units and markets, and outlines a process to achieve those targets. It also describes how this methodology can be used for other types of fossil-fired technologies and future enhancements to the analysis. 5 figs.
Design of helicopter rotor blades for optimum dynamic characteristics
NASA Technical Reports Server (NTRS)
Peters, D. A.; Ko, T.; Korn, A. E.; Rossow, M. P.
1982-01-01
The possibilities and the limitations of tailoring blade mass and stiffness distributions to give an optimum blade design in terms of weight, inertia, and dynamic characteristics are investigated. Changes in mass or stiffness distribution used to place rotor frequencies at desired locations are determined. Theoretical limits to the amount of frequency shift are established. Realistic constraints on blade properties based on weight, mass moment of inertia, size, strength, and stability are formulated. The extent to which hub loads can be minimized by a proper choice of EI distribution is determined. Configurations that are simple enough to yield clear, fundamental insights into the structural mechanisms, yet sufficiently complex to yield a realistic optimum rotor blade design, are emphasized.
Mei, Mei; Yang, Lin; Zhan, Guodong; Wang, Huijun; Ma, Duan; Zhou, Wenhao; Huang, Guoying
2014-06-01
To screen for genomic copy number variations (CNVs) in two unrelated neonates with multiple congenital abnormalities using an Affymetrix SNP chip, and to try to find the critical region associated with congenital heart disease, the two neonates were tested for genomic CNVs using a cytogenetic SNP chip. Rare CNVs with potential clinical significance were selected: deletion segments larger than 50 kb and duplication segments larger than 150 kb, based on analysis with the ChAS software, after excluding false-positive CNVs and segments found in the normal population. The identified CNVs were compared with those of cases in the DECIPHER and ISCA databases. Eleven rare CNVs with sizes from 546.6 to 27 892 kb were identified in the 2 neonates. In case 1, the deletion region was 8p23.3-p23.1 (387 912-11 506 771 bp; 11.1 Mb) and the duplication region was 8p23.1-p11.1 (11 508 387-43 321 279 bp; 31.8 Mb). In case 2, the deletion region was 8p23.3-p23.1 (46 385-7 809 878 bp; 7.8 Mb) and the duplication region was 8p23.1-p11.21 (12 260 914-40 917 092 bp; 28.7 Mb). The comparison with the DECIPHER and ISCA databases supported the previous viewpoint that 8p23.1 is associated with congenital heart disease and that the region between 7 809 878 and 11 506 771 bp may play a role in the severe cardiac defects associated with 8p23.1 deletions. Case 1 had serious cardiac abnormalities; its GATA4 was located in the duplication segment (copy number increased) while SOX7 was located in the deletion segment (copy number decreased). The region between 7 809 878 and 11 506 771 bp in 8p23.1 is associated with heart defects, and copy number variants of SOX7 and GATA4 may result in congenital heart disease.
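The size-based selection rule described above (deletions > 50 kb, duplications > 150 kb) can be expressed as a simple filter; the record structure and the example calls are hypothetical, not the study's raw output:

```python
def filter_cnvs(cnvs, del_min_kb=50, dup_min_kb=150):
    """Keep deletions larger than del_min_kb and duplications larger
    than dup_min_kb (sizes in kb), mirroring the selection rule above."""
    kept = []
    for c in cnvs:
        if c['type'] == 'deletion' and c['size_kb'] > del_min_kb:
            kept.append(c)
        elif c['type'] == 'duplication' and c['size_kb'] > dup_min_kb:
            kept.append(c)
    return kept

# Hypothetical call set; the first two mimic the case 1 regions above
calls = [
    {'region': '8p23.3-p23.1', 'type': 'deletion',    'size_kb': 11100},
    {'region': '8p23.1-p11.1', 'type': 'duplication', 'size_kb': 31800},
    {'region': '1q21.1',       'type': 'deletion',    'size_kb': 30},   # below cutoff
    {'region': '15q11.2',      'type': 'duplication', 'size_kb': 120},  # below cutoff
]
print([c['region'] for c in filter_cnvs(calls)])
# -> ['8p23.3-p23.1', '8p23.1-p11.1']
```

In practice this size filter is only the first pass; the study then excludes false positives and population-polymorphic segments before database comparison.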
Model-based spectral estimation of Doppler signals using parallel genetic algorithms.
Solano González, J; Rodríguez Vázquez, K; García Nocetti, D F
2000-05-01
Conventional spectral analysis methods use a fast Fourier transform (FFT) on consecutive or overlapping windowed data segments. For Doppler ultrasound signals, this approach suffers from inadequate frequency resolution due to the time-segment duration and the non-stationary characteristics of the signals. Parametric or model-based estimators can give significant improvements in time-frequency resolution at the expense of higher computational complexity. This work describes an approach that implements, in real time, a parametric spectral estimation method using genetic algorithms (GAs) to find the optimum set of parameters for the adaptive filter that minimises the error function. The aim is to reduce the computational complexity of the conventional algorithm by using the simplicity associated with GAs and exploiting their parallel characteristics. This allows the implementation of higher-order filters, increasing the spectral resolution and opening a greater scope for using more complex methods.
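A minimal sketch of the idea, assuming an AR(2) signal model and a simple real-coded GA; the paper's parallel GA and adaptive-filter details are not reproduced, and all parameters here are illustrative:

```python
import random

rng = random.Random(42)

# Synthetic AR(2) "Doppler" signal: x[n] = a1*x[n-1] + a2*x[n-2] + e[n]
TRUE = (1.0, -0.5)
x = [0.0, 0.0]
for _ in range(300):
    x.append(TRUE[0]*x[-1] + TRUE[1]*x[-2] + rng.gauss(0.0, 0.1))

def fitness(c):
    """Sum of squared one-step prediction errors for AR(2) coefficients c
    (the error function the GA minimises)."""
    a1, a2 = c
    return sum((x[n] - a1*x[n-1] - a2*x[n-2])**2 for n in range(2, len(x)))

# Real-coded GA: elitism, blend crossover, small Gaussian mutation
pop = [(rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(40)]
for gen in range(60):
    pop.sort(key=fitness)
    elite = pop[:8]
    children = list(elite)                    # keep the best unchanged
    while len(children) < 40:
        p, q = rng.sample(elite, 2)           # parents drawn from the elite
        w = rng.random()
        child = tuple(w*a + (1 - w)*b + rng.gauss(0.0, 0.05)
                      for a, b in zip(p, q))
        children.append(child)
    pop = children

best = min(pop, key=fitness)                  # estimated (a1, a2)
```

The fitted coefficients define the model spectrum; in a parallel implementation each fitness evaluation (one per chromosome) can run independently, which is the property the paper exploits.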
Cuenin, Léo; Lamoureux, Sophie; Schaaf, Mathieu; Bochaton, Thomas; Monassier, Jean-Pierre; Claeys, Marc J; Rioufol, Gilles; Finet, Gérard; Garcia-Dorado, David; Angoulvant, Denis; Elbaz, Meyer; Delarche, Nicolas; Coste, Pierre; Metge, Marc; Perret, Thibault; Motreff, Pascal; Bonnefoy-Cudraz, Eric; Vanzetto, Gérald; Morel, Olivier; Boussaha, Inesse; Ovize, Michel; Mewton, Nathan
2018-04-25
Up to 25% of patients with ST elevation myocardial infarction (STEMI) have ST segment re-elevation after initial regression post-reperfusion, and there are few data regarding its prognostic significance. Methods and Results: A standard 12-lead electrocardiogram (ECG) was recorded in 662 patients with anterior STEMI referred for primary percutaneous coronary intervention (PPCI). ECGs were recorded 60-90 min after PPCI and at discharge. ST segment re-elevation was defined as a ≥0.1-mV increase in STMax between the post-PPCI and discharge ECGs. Infarct size (assessed as creatine kinase [CK] peak), echocardiography at baseline and follow-up, and all-cause death and heart failure events at 1 year were assessed. In all, 128 patients (19%) had ST segment re-elevation. There was no difference between patients with and without re-elevation in infarct size (CK peak [mean±SD] 4,231±2,656 vs. 3,993±2,819 IU/L; P=0.402), left ventricular (LV) ejection fraction (50.7±11.6% vs. 52.2±10.8%; P=0.186), LV adverse remodeling (20.1±38.9% vs. 18.3±30.9%; P=0.631), or all-cause mortality and heart failure events (22 [19.8%] vs. 106 [19.2%]; P=0.887) at 1 year. Among anterior STEMI patients treated by PPCI, ST segment re-elevation was present in 19% and was not associated with increased infarct size or major adverse events at 1 year.
Constraint factor in optimization of truss structures via flower pollination algorithm
NASA Astrophysics Data System (ADS)
Bekdaş, Gebrail; Nigdeli, Sinan Melih; Sayin, Baris
2017-07-01
The aim of the paper is to investigate the optimum design of truss structures under different stress and displacement constraints. For that reason, a flower pollination algorithm based methodology was applied to the sizing optimization of space truss structures. The flower pollination algorithm is a metaheuristic inspired by the pollination process of flowering plants. By imitating the cross-pollination and self-pollination processes, the sizes of truss members are randomly generated in two ways, and the choice between these two search modes is controlled by a switch probability. In the study, a 72-bar space truss structure was optimized using five different cases of the constraint limits. According to the results, a linear relationship between the optimum structure weight and the constraint limits was observed.
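The switch-probability mechanism can be sketched as follows. The objective here is a stand-in (sum of squared "member sizes"); a real truss run would evaluate structure weight plus stress/displacement constraint penalties, and all parameter values are illustrative:

```python
import math, random

rng = random.Random(1)

def levy(beta=1.5):
    """Levy-stable step via Mantegna's algorithm (used for global pollination)."""
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2) /
             (math.gamma((1 + beta) / 2) * beta * 2**((beta - 1) / 2)))**(1 / beta)
    u = rng.gauss(0, sigma)
    v = rng.gauss(0, 1)
    return u / abs(v)**(1 / beta)

def fpa(objective, dim, lo, hi, n=25, iters=200, p_switch=0.8):
    flowers = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    best = min(flowers, key=objective)
    for _ in range(iters):
        for i in range(n):
            if rng.random() < p_switch:   # global (cross-)pollination: Levy flight toward best
                cand = [flowers[i][d] + levy() * (best[d] - flowers[i][d])
                        for d in range(dim)]
            else:                         # local (self-)pollination: mix two random flowers
                j, k = rng.sample(range(n), 2)
                eps = rng.random()
                cand = [flowers[i][d] + eps * (flowers[j][d] - flowers[k][d])
                        for d in range(dim)]
            cand = [min(hi, max(lo, v)) for v in cand]   # keep sizes in bounds
            if objective(cand) < objective(flowers[i]):  # greedy acceptance
                flowers[i] = cand
        best = min(flowers + [best], key=objective)
    return best

# Stand-in objective for a 5-"member" sizing problem
best = fpa(lambda v: sum(x * x for x in v), dim=5, lo=-10.0, hi=10.0)
```

For the truss problem, `objective` would assemble and analyse the structure for each candidate size vector, which is where nearly all of the runtime goes.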
Semiautomatic Segmentation of Glioma on Mobile Devices.
Wu, Ya-Ping; Lin, Yu-Song; Wu, Wei-Guo; Yang, Cong; Gu, Jian-Qin; Bai, Yan; Wang, Mei-Yun
2017-01-01
Brain tumor segmentation is the first and the most critical step in clinical applications of radiomics. However, segmenting brain images by radiologists is labor-intensive and prone to inter- and intraobserver variability. Stable and reproducible brain image segmentation algorithms are thus important for successful tumor detection in radiomics. In this paper, we propose a supervised brain image segmentation method, especially for magnetic resonance (MR) brain images with glioma. The method uses hard-edge multiplicative intrinsic component optimization to preprocess the glioma images on the server side; doctors can then supervise the segmentation process on mobile devices at their convenience. Since the preprocessed images have the same brightness for all voxels of the same tissue, they have a small data size (typically 1/10 of the original image size) and a simple structure of 4 intensity values. This allows the follow-up steps to be processed on mobile devices with low bandwidth and limited computing performance. Experiments conducted on 1935 brain slices from 129 patients show that more than 30% of the samples reach 90% similarity, over 60% of the samples reach 85% similarity, and more than 80% of the samples reach 75% similarity. Comparisons with other segmentation methods also demonstrate both the efficiency and the stability of the proposed approach.
NASA Astrophysics Data System (ADS)
Deng, Xiang; Huang, Haibin; Zhu, Lei; Du, Guangwei; Xu, Xiaodong; Sun, Yiyong; Xu, Chenyang; Jolly, Marie-Pierre; Chen, Jiuhong; Xiao, Jie; Merges, Reto; Suehling, Michael; Rinck, Daniel; Song, Lan; Jin, Zhengyu; Jiang, Zhaoxia; Wu, Bin; Wang, Xiaohong; Zhang, Shuai; Peng, Weijun
2008-03-01
Comprehensive quantitative evaluation of tumor segmentation techniques on large-scale clinical data sets is crucial for routine clinical use of CT-based tumor volumetry for cancer diagnosis and treatment response evaluation. In this paper, we present a systematic validation study of a semi-automatic image segmentation technique for measuring tumor volume from CT images. The segmentation algorithm was tested using clinical data of 200 tumors in 107 patients with liver, lung, lymphoma, and other types of cancer. The performance was evaluated using both accuracy and reproducibility. The accuracy was assessed using 7 commonly used metrics that provide complementary information regarding the quality of the segmentation results. The reproducibility was measured by the variation of the volume measurements from 10 independent segmentations. The effects of disease type, lesion size, and slice thickness of the image data on the accuracy measures were also analyzed. Our results demonstrate that the tumor segmentation algorithm showed good correlation with ground truth for all four lesion types (r = 0.97, 0.99, 0.97, 0.98, p < 0.0001 for liver, lung, lymphoma, and other, respectively). The segmentation algorithm can produce relatively reproducible volume measurements on all lesion types (coefficient of variation in the range of 10-20%). Our results show that the algorithm is insensitive to lesion size (coefficient of determination close to 0) and slice thickness of the image data (p > 0.90). The validation framework used in this study has the potential to facilitate the development of new tumor segmentation algorithms and assist large-scale evaluation of segmentation techniques for other clinical applications.
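The reproducibility measure reported above (coefficient of variation across repeated segmentations of one lesion) can be computed as in this sketch; the ten volumes are hypothetical:

```python
import math

def coefficient_of_variation(values):
    """CV (%) = 100 * sample standard deviation / mean, used to quantify
    reproducibility of repeated volume measurements."""
    mean = sum(values) / len(values)
    var = sum((v - mean)**2 for v in values) / (len(values) - 1)
    return 100.0 * math.sqrt(var) / mean

# Ten hypothetical independent segmentations of one lesion (volumes in mm^3)
volumes = [980, 1010, 1050, 995, 1020, 970, 1040, 1000, 1030, 985]
cv = coefficient_of_variation(volumes)
print(round(cv, 1))  # -> 2.6
```

A CV in the 10-20% range, as reported in the abstract, would correspond to considerably more scatter than this synthetic example.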
Optimization of gold ore Sumbawa separation using gravity method: Shaking table
NASA Astrophysics Data System (ADS)
Ferdana, Achmad Dhaefi; Petrus, Himawan Tri Bayu Murti; Bendiyasa, I. Made; Prijambada, Irfan Dwidya; Hamada, Fumio; Sachiko, Takahi
2018-04-01
Most artisanal small-scale gold mining in Indonesia has been using the amalgamation method, which has had a negative impact on the environment around ore-processing areas due to the use of mercury. One of the more environmentally friendly methods for gold processing is the gravity method. The shaking table is a piece of gravity-separation equipment used to concentrate ore based on differences in specific gravity. The optimum concentration result is influenced by several variables, such as shaking speed, particle size, and deck slope. In this research, the shaking speed ranged from 100 to 200 rpm, the particle size from -100 + 200 mesh to -200 + 300 mesh, and the deck slope from 3° to 7°. The gold concentration in the concentrate was measured by EDX. The results show that the optimum condition is obtained at a shaking speed of 200 rpm, a slope of 7°, and a particle size of -100 + 200 mesh.
Heavy metal recovery from electric arc furnace steel slag by using hydrochloric acid leaching
NASA Astrophysics Data System (ADS)
Wei, Lim Jin; Haan, Ong Teng; Shean Yaw, Thomas Choong; Chuah Abdullah, Luqman; Razak, Mus'ab Abdul; Cionita, Tezara; Toudehdehghan, Abdolreza
2018-03-01
Electric Arc Furnace steel slag (EAFS) is a waste produced in the steelmaking industry. Environmental problems such as pollution occur when the steel slag waste is dumped into landfill. These steel slags have properties that are suitable for various applications, such as water and wastewater treatment. The objective of this study is to develop an efficient and economical chlorination route for EAFS extraction using a leaching process. Various parameters, namely the concentration of hydrochloric acid, the particle size of the steel slag, the reaction time, and the reaction temperature, were investigated to determine the optimum conditions, and the dissolution rate was determined as a function of these parameters. The optimum conditions for the leaching process are 3.0 M hydrochloric acid, a particle size of 1.18 mm, a reaction time of 2.5 h, and a temperature of 90°C.
NASA Technical Reports Server (NTRS)
Piper, William S.; Mick, Mark W.
1994-01-01
Findings and results from a marketing research study are presented. The report identifies market segments and the product types to satisfy demand in each. An estimate of market size is based on the specific industries in each segment. A sample of ten industries was used in the study. The scientific study covered U.S. firms only.
ERIC Educational Resources Information Center
Bachman, C. H.
1988-01-01
Presents examples to show the ubiquitous nature of geometry. Illustrates the relationship between the perimeter and area of two-dimensional objects and between the area and volume of three-dimensional objects. Provides examples of distribution systems, optimum shapes, structural strength, biological heat engines, man's size, and reflection and…
On Compact Book Storage in Libraries.
ERIC Educational Resources Information Center
Ravindran, Arunachalam
The optimal storage of books by size in libraries is considered in this paper. It is shown that for a given collection of books of various sizes, the optimum number of shelf heights to use can be determined by finding the shortest path in an equivalent network. Applications of this model to inventory control, assortment and packaging problems are…
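The shortest-path formulation described above can be sketched as follows, assuming a fixed cost K per chosen shelf height (without some such fixed cost, using every height is trivially optimal); all numbers, including K, are illustrative:

```python
# Books grouped by height class (sorted ascending). Each class has a height
# (cm) and the total shelf length (m) its books occupy.
heights = [18, 22, 26, 30]     # h_1 <= ... <= h_n
lengths = [40, 25, 20, 10]     # shelf metres needed per class
K = 300                        # assumed fixed cost per distinct shelf height

n = len(heights)
prefix = [0]
for L in lengths:
    prefix.append(prefix[-1] + L)

# Edge (i, j): classes i+1..j all share shelves of height h_j; cost is the
# fixed cost plus shelf cross-section h_j times the combined length.
def cost(i, j):
    return K + heights[j - 1] * (prefix[j] - prefix[i])

# Shortest path from node 0 to node n gives the optimum set of shelf heights.
INF = float('inf')
dist = [0] + [INF] * n
choice = [None] * (n + 1)
for j in range(1, n + 1):
    for i in range(j):
        c = dist[i] + cost(i, j)
        if c < dist[j]:
            dist[j], choice[j] = c, i

# Recover the chosen shelf heights by walking the path backwards
picked, j = [], n
while j > 0:
    picked.append(heights[j - 1])
    j = choice[j]
picked.reverse()
print(dist[n], picked)  # -> 2930 [22, 30]
```

With these numbers the optimum uses only two shelf heights (22 cm and 30 cm): taller classes absorb the shorter ones whenever the wasted cross-section costs less than another fixed charge.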
Craftsmen say "we want edge-glued, standard-size panels"
Philip A. Araman; Hugh W. Reynolds
1983-01-01
Wood craftsmen would like an alternative to hardwood lumber, plywood, and softwood products. They are very interested in edge-glued, standard-size panels. These conclusions are based on interviews with craftsmen at two trade shows; the results are included in this report, along with our recommendations for achieving optimum acceptance of this new product by craftsmen.
Optimum target sizes for a sequential sawing process
H. Dean Claxton
1972-01-01
A method for solving a class of problems in random sequential processes is presented. Sawing cedar pencil blocks is used to illustrate the method. Equations are developed for the function representing loss from improper sizing of blocks. A weighted over-all distribution for sawing and drying operations is developed and graphed. Loss minimizing changes in the control...
An Event-Triggered Machine Learning Approach for Accelerometer-Based Fall Detection.
Putra, I Putu Edy Suardiyana; Brusey, James; Gaura, Elena; Vesilo, Rein
2017-12-22
The fixed-size non-overlapping sliding window (FNSW) and fixed-size overlapping sliding window (FOSW) approaches are the most commonly used data-segmentation techniques in machine learning-based fall detection using accelerometer sensors. However, these techniques do not segment by fall stages (pre-impact, impact, and post-impact) and thus useful information is lost, which may reduce the detection rate of the classifier. Aligning the segment with the fall stage is difficult, as the segment size varies. We propose an event-triggered machine learning (EvenT-ML) approach that aligns each fall stage so that the characteristic features of the fall stages are more easily recognized. To evaluate our approach, two publicly accessible datasets were used. Classification and regression tree (CART), k-nearest neighbor (k-NN), logistic regression (LR), and the support vector machine (SVM) were used to train the classifiers. EvenT-ML gives classifier F-scores of 98% for a chest-worn sensor and 92% for a waist-worn sensor, and significantly reduces the computational cost compared with the FNSW- and FOSW-based approaches, with reductions of up to 8-fold and 78-fold, respectively. EvenT-ML achieves a significantly better F-score than existing fall detection approaches. These results indicate that aligning feature segments with fall stages significantly increases the detection rate and reduces the computational cost.
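The stage-alignment idea can be sketched as follows; the threshold and window sizes are illustrative assumptions, not the paper's tuned values:

```python
def event_triggered_segments(acc, impact_thresh=2.5, pre=4, post=4):
    """Align feature-extraction segments with fall stages: find the impact
    (first sample at or above `impact_thresh`, in g), then cut pre-impact,
    impact, and post-impact windows around it."""
    impact = next(i for i, a in enumerate(acc) if a >= impact_thresh)
    return {
        'pre_impact': acc[max(0, impact - pre):impact],
        'impact': [acc[impact]],
        'post_impact': acc[impact + 1:impact + 1 + post],
    }

# Synthetic accelerometer-magnitude trace (g): free-fall dip, impact spike,
# then lying still
trace = [1.0, 1.0, 0.4, 0.2, 0.3, 3.1, 1.8, 1.1, 1.0, 1.0]
segs = event_triggered_segments(trace)
print(segs['impact'])      # -> [3.1]
print(segs['pre_impact'])  # -> [1.0, 0.4, 0.2, 0.3]
```

Features are then computed per stage (e.g., minimum magnitude in `pre_impact`, peak in `impact`), instead of over an arbitrary fixed-size window that may straddle two stages.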
Jirapatnakul, Artit C; Fotin, Sergei V; Reeves, Anthony P; Biancardi, Alberto M; Yankelevitz, David F; Henschke, Claudia I
2009-01-01
Estimation of nodule location and size is an important pre-processing step in some nodule segmentation algorithms, used to determine the size and location of the region of interest. Ideally, such estimation methods will consistently find the same nodule location regardless of where the seed point (provided either manually or by a nodule detection algorithm) is placed relative to the "true" center of the nodule, and the size should be a reasonable estimate of the true nodule size. We developed a method that estimates nodule location and size using multi-scale Laplacian of Gaussian (LoG) filtering. Nodule candidates near a given seed point are found by searching for blob-like regions with high filter response. The candidates are then pruned according to filter response and location, and the remaining candidates are sorted by size, with the largest candidate selected. This method was compared to a previously published template-based method. The methods were evaluated on the basis of the stability of the estimated nodule location to changes in the initial seed point and how well the size estimates agreed with volumes determined by a semi-automated nodule segmentation method. The LoG method exhibited better stability to changes in the seed point, with 93% of nodules having the same estimated location even when the seed point was altered, compared to only 52% of nodules for the template-based method. Both methods also showed good agreement with sizes determined by a nodule segmentation method, with an average relative size difference of 5% and -5% for the LoG and template-based methods, respectively.
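A 1-D sketch of multi-scale, scale-normalized LoG size estimation (the paper works on 2-D/3-D CT data): for a bright bar of width w on a dark background, the scale-normalized response magnitude peaks near sigma = w/2, which is how the filter scale encodes size. All values are illustrative:

```python
import math

def log_kernel(sigma, radius):
    """Scale-normalized 1-D Laplacian-of-Gaussian kernel:
    sigma^2 * g''(x) = (x^2/sigma^2 - 1) * g(x)."""
    return [(x*x / sigma**2 - 1.0)
            * math.exp(-x*x / (2 * sigma**2)) / (math.sqrt(2*math.pi) * sigma)
            for x in range(-radius, radius + 1)]

def estimate_size(signal, center, sigmas):
    """Return the sigma whose scale-normalized LoG magnitude at `center`
    is largest across the tested scales."""
    best_sigma, best_resp = None, -1.0
    for s in sigmas:
        radius = int(4 * s)
        k = log_kernel(s, radius)
        resp = sum(k[radius + d] * signal[center + d]
                   for d in range(-radius, radius + 1)
                   if 0 <= center + d < len(signal))
        if abs(resp) > best_resp:
            best_sigma, best_resp = s, abs(resp)
    return best_sigma

# 1-D "nodule": a bright bar of width 10 on a dark background
signal = [0.0]*45 + [1.0]*10 + [0.0]*45
sigma = estimate_size(signal, center=50, sigmas=range(1, 12))  # expect ~5
```

Searching candidate positions near the seed point, rather than only at it, is what gives the method its reported insensitivity to seed placement.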
A novel method for retinal optic disc detection using bat meta-heuristic algorithm.
Abdullah, Ahmad S; Özok, Yasa Ekşioğlu; Rahebi, Javad
2018-05-09
Optic disc detection in retinal images is useful during the treatment of glaucoma and diabetic retinopathy. In this paper, a novel preprocessing of the retinal image with bat algorithm (BA) optimization is proposed to detect the optic disc. Because the optic disc is a bright area and the vessels that emerge from it are dark, the selected segments are regions with a great diversity of intensity, which does not usually happen in pathological regions. First, in the preprocessing stage, the image is converted to grayscale, and morphological operations are applied to remove dark elements, such as blood vessels, from the image. In the next stage, a bat algorithm is used to find the optimum threshold value for the optic disc location. To improve accuracy and obtain the best result for the segmented optic disc, an ellipse-fitting approach is used in the last stage to enhance and smooth the segmented optic disc boundary; the ellipse fitting uses a least-squares distance approach. The efficiency of the proposed method was tested on six publicly available datasets: MESSIDOR, DRIVE, DIARETDB1, DIARETDB0, STARE, and DRIONS-DB. The average optic disc segmentation overlap and accuracy were in the ranges 78.5-88.2% and 96.6-99.91%, respectively, across these six databases. The optic disc was segmented in less than 2.1 s per image. The proposed method improved optic disc segmentation results for healthy and pathological retinal images at a low computation time.
Best Merge Region Growing Segmentation with Integrated Non-Adjacent Region Object Aggregation
NASA Technical Reports Server (NTRS)
Tilton, James C.; Tarabalka, Yuliya; Montesano, Paul M.; Gofman, Emanuel
2012-01-01
Best merge region growing normally produces segmentations with closed connected region objects. Recognizing that spectrally similar objects often appear in spatially separate locations, we present an approach for tightly integrating best merge region growing with non-adjacent region object aggregation, which we call Hierarchical Segmentation or HSeg. However, the original implementation of non-adjacent region object aggregation in HSeg required excessive computing time even for moderately sized images because of the required intercomparison of each region with all other regions. This problem was previously addressed by a recursive approximation of HSeg, called RHSeg. In this paper we introduce a refined implementation of non-adjacent region object aggregation in HSeg that reduces the computational requirements of HSeg without resorting to the recursive approximation. In this refinement, HSeg's region inter-comparisons among non-adjacent regions are limited to regions of a dynamically determined minimum size. We show that this refined version of HSeg can process moderately sized images in about the same amount of time as RHSeg incorporating the original HSeg. Nonetheless, RHSeg is still required for processing very large images due to its lower computer memory requirements and amenability to parallel processing. We then note a limitation of RHSeg with the original HSeg for high spatial resolution images, and show how incorporating the refined HSeg into RHSeg overcomes this limitation. The quality of the image segmentations produced by the refined HSeg is then compared with other available best merge segmentation approaches. Finally, we comment on the unique nature of the hierarchical segmentations produced by HSeg.
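The core best-merge idea can be shown on a one-dimensional toy signal: start with one region per pixel and repeatedly merge the most similar pair of adjacent regions. This illustrative simplification omits HSeg's distinctive aggregation of spectrally similar non-adjacent regions; the signal and threshold are assumptions:

```python
# Minimal 1-D sketch of best merge region growing.
signal = [1.0, 1.1, 0.9, 5.0, 5.2, 5.1, 9.0, 9.1]
threshold = 1.0                         # stop when best merge exceeds this

regions = [[v] for v in signal]         # one region per pixel initially

def mean(r):
    return sum(r) / len(r)

while len(regions) > 1:
    # Find the most similar pair of adjacent regions (the "best merge").
    i = min(range(len(regions) - 1),
            key=lambda k: abs(mean(regions[k]) - mean(regions[k + 1])))
    if abs(mean(regions[i]) - mean(regions[i + 1])) > threshold:
        break                           # no sufficiently similar pair left
    regions[i:i + 2] = [regions[i] + regions[i + 1]]

print([round(mean(r), 2) for r in regions])   # three regions: ~1, ~5, ~9
```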
Modeling envelope statistics of blood and myocardium for segmentation of echocardiographic images.
Nillesen, Maartje M; Lopata, Richard G P; Gerrits, Inge H; Kapusta, Livia; Thijssen, Johan M; de Korte, Chris L
2008-04-01
The objective of this study was to investigate the use of speckle statistics as a preprocessing step for segmentation of the myocardium in echocardiographic images. Three-dimensional (3D) and biplane image sequences of the left ventricle of two healthy children and one dog (beagle) were acquired. Pixel-based speckle statistics of manually segmented blood and myocardial regions were investigated by fitting various probability density functions (pdf). The statistics of heart muscle and blood could both be optimally modeled by a K-pdf or Gamma-pdf (Kolmogorov-Smirnov goodness-of-fit test). Scale and shape parameters of both distributions could differentiate between blood and myocardium. Local estimation of these parameters was used to obtain parametric images, where window size was related to speckle size (5 x 2 speckles). Moment-based and maximum-likelihood estimators were used. Scale parameters were still able to differentiate blood from myocardium; however, smoothing of edges of anatomical structures occurred. Estimation of the shape parameter required a larger window size, leading to unacceptable blurring. Using these parameters as an input for segmentation resulted in unreliable segmentation. Adaptive mean squares filtering was then introduced using the moment-based scale parameter (σ²/μ) of the Gamma-pdf to automatically steer the two-dimensional (2D) local filtering process. This method adequately preserved sharpness of the edges. In conclusion, a trade-off between preservation of sharpness of edges and goodness-of-fit when estimating local shape and scale parameters is evident for parametric images. For this reason, adaptive filtering outperforms parametric imaging for the segmentation of echocardiographic images.
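The Gamma-pdf fitting step can be sketched with maximum-likelihood fits to synthetic samples; the parameter values below are assumptions standing in for real echo amplitudes:

```python
# Minimal sketch of the envelope-statistics idea: fit a Gamma pdf to
# pixel samples from "blood" and "myocardium" regions and compare the
# estimated shape/scale parameters. Synthetic data and the true
# parameter values are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
blood = rng.gamma(shape=1.5, scale=10.0, size=5000)       # weak scatterer
myocardium = rng.gamma(shape=4.0, scale=20.0, size=5000)  # strong scatterer

# Maximum-likelihood Gamma fits with location fixed at 0.
kb, _, sb = stats.gamma.fit(blood, floc=0)
km, _, sm = stats.gamma.fit(myocardium, floc=0)

# The fitted shape and scale separate the two tissue classes, mirroring
# the paper's finding that these parameters differentiate blood from
# myocardium.
print(kb < km, sb < sm)
```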
Automatic segmentation and measurements of gestational sac using static B-mode ultrasound images
NASA Astrophysics Data System (ADS)
Ibrahim, Dheyaa Ahmed; Al-Assam, Hisham; Du, Hongbo; Farren, Jessica; Al-karawi, Dhurgham; Bourne, Tom; Jassim, Sabah
2016-05-01
Ultrasound imagery has been widely used for medical diagnoses. Ultrasound scanning is safe and non-invasive, and hence used throughout pregnancy for monitoring growth. In the first trimester, an important measurement is that of the Gestation Sac (GS). The task of measuring the GS size from an ultrasound image is done manually by a gynecologist. This paper presents a new approach to automatically segment a GS from a static B-mode image by exploiting its geometric features for early identification of miscarriage cases. To accurately locate the GS in the image, the proposed solution uses the wavelet transform to suppress speckle noise by eliminating the high-frequency sub-bands and prepare an enhanced image. This is followed by a segmentation step that isolates the GS through several stages. First, the mean value is used as a threshold to binarise the image, followed by filtering of unwanted objects based on their circularity, size and mean greyscale value. The mean value of each object is then used to further select candidate objects. A region growing technique is applied as post-processing to finally identify the GS. We evaluated the effectiveness of the proposed solution by firstly comparing the automatic size measurements of the segmented GS against manual measurements, and then integrating the proposed segmentation solution into a classification framework for identifying miscarriage cases and pregnancies of unknown viability (PUV). Both test results demonstrate that the proposed method is effective in segmenting the GS and classifying the outcomes with high accuracy (sensitivity for miscarriage of 100% and specificity for PUV of 99.87%).
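The mean-threshold and circularity-filtering stages can be sketched as follows; the toy image, the circularity measure (4*pi*area/perimeter^2 with a crude boundary-pixel perimeter), and the cut-offs are illustrative assumptions:

```python
# Hedged sketch of two of the segmentation stages described above:
# binarise at the image mean, label connected components, then keep
# candidates by size and circularity.
import numpy as np
from scipy import ndimage

img = np.zeros((64, 64))
yy, xx = np.mgrid[0:64, 0:64]
img[(yy - 32)**2 + (xx - 32)**2 <= 12**2] = 1.0   # round "sac"
img[5:9, 5:40] = 1.0                              # elongated artefact

binary = img > img.mean()                  # mean-value threshold
labels, n = ndimage.label(binary)

kept = []
for i in range(1, n + 1):
    mask = labels == i
    area = mask.sum()
    # Crude perimeter estimate: pixels on the object boundary.
    perim = (mask & ~ndimage.binary_erosion(mask)).sum()
    circ = 4 * np.pi * area / perim**2
    if area > 100 and circ > 0.5:          # assumed cut-offs
        kept.append(i)

print(len(kept))    # only the circular object survives the filter
```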
Scaling and entropy in p-median facility location along a line
NASA Astrophysics Data System (ADS)
Gastner, Michael T.
2011-09-01
The p-median problem is a common model for optimal facility location. The task is to place p facilities (e.g., warehouses or schools) in a heterogeneously populated space such that the average distance from a person's home to the nearest facility is minimized. Here we study the special case where the population lives along a line (e.g., a road or a river). If facilities are optimally placed, the length of the line segment served by a facility is inversely proportional to the square root of the population density. This scaling law is derived analytically and confirmed for concrete numerical examples of three US interstate highways and the Mississippi River. If facility locations are permitted to deviate from the optimum, the number of possible solutions increases dramatically. Using Monte Carlo simulations, we compute how scaling is affected by an increase in the average distance to the nearest facility. We find that the scaling exponents change and are most sensitive near the optimum facility distribution.
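The p-median objective described above can be illustrated with a tiny brute-force sketch; the home coordinates and p are invented, and candidate facility sites are restricted to the home locations (a standard discrete p-median assumption):

```python
# Tiny brute-force illustration of the 1-D p-median problem: choose p
# facility sites so the mean distance from each home to its nearest
# facility is minimal. The instance is a made-up example, not data
# from the paper.
from itertools import combinations

homes = [0.0, 1.0, 2.0, 6.0, 7.0, 8.0]   # people along a line
p = 2

def mean_distance(facilities):
    return sum(min(abs(h - f) for f in facilities) for h in homes) / len(homes)

best = min(combinations(homes, p), key=mean_distance)
print(best, mean_distance(best))   # one facility per cluster of homes
```

Exhaustive search is only feasible for toy instances; the paper's point is precisely that the number of near-optimal configurations explodes as solutions are allowed to deviate from this optimum.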
DOE Office of Scientific and Technical Information (OSTI.GOV)
Beck, Markus H.; Inman, Ross B.; Strand, Michael R.
2007-03-01
Polydnaviruses (PDVs) are distinguished by their unique association with parasitoid wasps and their segmented, double-stranded (ds) DNA genomes that are non-equimolar in abundance. Relatively little is actually known, however, about genome packaging or segment abundance of these viruses. Here, we conducted electron microscopy (EM) and real-time polymerase chain reaction (PCR) studies to characterize packaging and segment abundance of Microplitis demolitor bracovirus (MdBV). Like other PDVs, MdBV replicates in the ovaries of females where virions accumulate to form a suspension called calyx fluid. Wasps then inject a quantity of calyx fluid when ovipositing into hosts. The MdBV genome consists of 15 segments that range from 3.6 (segment A) to 34.3 kb (segment O). EM analysis indicated that MdBV virions contain a single nucleocapsid that encapsidates one circular DNA of variable size. We developed a semi-quantitative real-time PCR assay using SYBR Green I. This assay indicated that five (J, O, H, N and B) segments of the MdBV genome accounted for more than 60% of the viral DNAs in calyx fluid. Estimates of relative segment abundance using our real-time PCR assay were also very similar to DNA size distributions determined from micrographs. Analysis of parasitized Pseudoplusia includens larvae indicated that copy number of MdBV segments C, B and J varied between hosts but their relative abundance within a host was virtually identical to their abundance in calyx fluid. Among-tissue assays indicated that each viral segment was most abundant in hemocytes and least abundant in salivary glands. However, the relative abundance of each segment to one another was similar in all tissues. We also found no clear relationship between MdBV segment and transcript abundance in hemocytes and fat body.
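Relative abundances like those above are conventionally derived from real-time PCR Ct values; a hedged sketch assuming roughly 100% amplification efficiency (the 2^-dCt rule) and invented Ct values, not the study's measurements:

```python
# Relative segment abundance from real-time PCR threshold cycles (Ct):
# each extra cycle corresponds to a ~2x lower starting copy number.
ct = {"J": 18.1, "O": 18.4, "B": 19.0, "A": 23.5}   # invented Ct values

ref = min(ct.values())                       # most abundant segment
rel = {seg: 2 ** (ref - c) for seg, c in ct.items()}

total = sum(rel.values())
share_top3 = sum(sorted(rel.values(), reverse=True)[:3]) / total
print(round(share_top3, 2))   # a few segments dominate the pool
```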
Jet Spreading Increase by Passive Control and Associated Performance Penalty
NASA Technical Reports Server (NTRS)
Zaman, K. B. M. Q.
1999-01-01
This paper reviews the effects of 'screech', 'asymmetric nozzle shaping', 'tabs' and 'overexpansion' on the spreading of free jets. The corresponding thrust penalties for the tabs and the overexpanded condition are also evaluated. The asymmetric shapes include rectangular ones with varying aspect ratio. The tabs investigated are triangular 'delta-tabs' placed at the exit of a convergent circular nozzle. The effect of overexpansion is examined with circular convergent-divergent (C-D) nozzles. Tabs and overexpansion are found to yield the largest increase in jet spreading. Each, however, involves a performance penalty, i.e., a loss in thrust coefficient. Variation of the size of four delta-tabs shows that there exists an optimum size for which the gain in jet spreading is the maximum per unit loss in thrust coefficient. With the C-D nozzles, the minimum in thrust coefficient is expected near the beginning of the overexpanded regime based on idealized flow calculations. The maximum increase in jet spreading, however, is found to occur at higher pressure ratios, well into the overexpanded regime. The optimum benefit with the overexpanded flow, in terms of gain in spreading per unit penalty, is found to be comparable to the optimum tab case.
Usher syndrome type 1–associated cadherins shape the photoreceptor outer segment
Parain, Karine; Aghaie, Asadollah; Picaud, Serge
2017-01-01
Usher syndrome type 1 (USH1) causes combined hearing and sight defects, but how mutations in USH1 genes lead to retinal dystrophy in patients remains elusive. The USH1 protein complex is associated with calyceal processes, which are microvilli of unknown function surrounding the base of the photoreceptor outer segment. We show that in Xenopus tropicalis, these processes are connected to the outer-segment membrane by links composed of protocadherin-15 (USH1F protein). Protocadherin-15 deficiency, obtained by a knockdown approach, leads to impaired photoreceptor function and abnormally shaped photoreceptor outer segments. Rod basal outer disks displayed excessive outgrowth, and cone outer segments were curved, with lamellae of heterogeneous sizes, defects also observed upon knockdown of Cdh23, encoding cadherin-23 (USH1D protein). The calyceal processes were virtually absent in cones and displayed markedly reduced F-actin content in rods, suggesting that protocadherin-15–containing links are essential for their development and/or maintenance. We propose that calyceal processes, together with their associated links, control the sizing of rod disks and cone lamellae throughout their daily renewal. PMID:28495838
Usher syndrome type 1-associated cadherins shape the photoreceptor outer segment.
Schietroma, Cataldo; Parain, Karine; Estivalet, Amrit; Aghaie, Asadollah; Boutet de Monvel, Jacques; Picaud, Serge; Sahel, José-Alain; Perron, Muriel; El-Amraoui, Aziz; Petit, Christine
2017-06-05
Usher syndrome type 1 (USH1) causes combined hearing and sight defects, but how mutations in USH1 genes lead to retinal dystrophy in patients remains elusive. The USH1 protein complex is associated with calyceal processes, which are microvilli of unknown function surrounding the base of the photoreceptor outer segment. We show that in Xenopus tropicalis, these processes are connected to the outer-segment membrane by links composed of protocadherin-15 (USH1F protein). Protocadherin-15 deficiency, obtained by a knockdown approach, leads to impaired photoreceptor function and abnormally shaped photoreceptor outer segments. Rod basal outer disks displayed excessive outgrowth, and cone outer segments were curved, with lamellae of heterogeneous sizes, defects also observed upon knockdown of Cdh23, encoding cadherin-23 (USH1D protein). The calyceal processes were virtually absent in cones and displayed markedly reduced F-actin content in rods, suggesting that protocadherin-15-containing links are essential for their development and/or maintenance. We propose that calyceal processes, together with their associated links, control the sizing of rod disks and cone lamellae throughout their daily renewal. © 2017 Schietroma et al.
Ultra-Stable Segmented Telescope Sensing and Control Architecture
NASA Technical Reports Server (NTRS)
Feinberg, Lee; Bolcar, Matthew; Knight, Scott; Redding, David
2017-01-01
The LUVOIR team is conducting two full architecture studies. Architecture A, a 15-meter telescope that folds up in an 8.4-m SLS Block 2 shroud, is nearly complete; Architecture B, a 9.2-meter telescope that uses an existing fairing size, will begin study this fall. This talk will summarize the ultra-stable architecture of the 15-m segmented telescope, including the basic requirements, the rationale for the architecture, the technologies employed, and the expected performance. This work builds on several dynamics and thermal studies performed for ATLAST segmented telescope configurations. The most important new element is an approach to actively controlling segments against segment-to-segment motions, which will be discussed later.
Xu, Renfeng; Wang, Huachun; Thibos, Larry N; Bradley, Arthur
2017-04-01
Our purpose is to develop a computational approach that jointly assesses the impact of stimulus luminance and pupil size on visual quality. We compared traditional optical measures of image quality and those that incorporate the impact of retinal-illuminance-dependent neural contrast sensitivity. Visually weighted image quality was calculated for a presbyopic model eye with representative levels of chromatic and monochromatic aberrations as pupil diameter was varied from 7 to 1 mm, stimulus luminance varied from 2000 to 0.1 cd/m2, and defocus varied from 0 to -2 diopters. The model included the effects of quantal fluctuations on neural contrast sensitivity. We tested the model's predictions by measuring contrast sensitivity for 5 cyc/deg gratings. Unlike the traditional Strehl ratio and the visually weighted area under the modulation transfer function, the visual Strehl ratio derived from the optical transfer function was able to capture the combined impact of optics and quantal noise on visual quality. In a well-focused eye, provided retinal illuminance is held constant as pupil size varies, visual image quality scales approximately as the square root of illuminance because of quantum fluctuations, but optimum pupil size is essentially independent of retinal illuminance and quantum fluctuations. Conversely, when stimulus luminance is held constant (and therefore illuminance varies with pupil size), optimum pupil size increases as luminance decreases, thereby compensating partially for increased quantum fluctuations. However, in the presence of -1 and -2 diopters of defocus and at high photopic levels where Weber's law operates, optical aberrations and diffraction dominate image quality and pupil optimization. Similar behavior was observed in human observers viewing sinusoidal gratings.
Optimum pupil size increases as stimulus luminance drops for the well-focused eye, and the benefits of small pupils for improving defocused image quality persist throughout the photopic and mesopic ranges. However, restricting pupils to <2 mm will cause significant reductions in best-focused vision at low photopic and mesopic luminances.
Development of Non-Optimum Factors for Launch Vehicle Propellant Tank Bulkhead Weight Estimation
NASA Technical Reports Server (NTRS)
Wu, K. Chauncey; Wallace, Matthew L.; Cerro, Jeffrey A.
2012-01-01
Non-optimum factors are used during aerospace conceptual and preliminary design to account for the increased weights of as-built structures due to future manufacturing and design details. Use of higher-fidelity non-optimum factors in these early stages of vehicle design can result in more accurate predictions of a concept's actual weights and performance. To help achieve this objective, non-optimum factors are calculated for the aluminum-alloy gores that compose the ogive and ellipsoidal bulkheads of the Space Shuttle Super-Lightweight Tank propellant tanks. Minimum values for actual gore skin thicknesses and weld land dimensions are extracted from selected production drawings and used to predict reference gore weights. These actual skin thicknesses are also compared to skin thicknesses predicted using classical structural mechanics and tank proof-test pressures. Both coarse and refined weights models are developed for the gores. The coarse model is based on the proof-pressure-sized skin thicknesses, and the refined model uses the actual gore skin thicknesses and design detail dimensions. To determine the gore non-optimum factors, these reference weights are then compared to flight hardware weights reported in a mass properties database. When manufacturing tolerance weight estimates are taken into account, the gore non-optimum factors computed using the coarse weights model range from 1.28 to 2.76, with an average non-optimum factor of 1.90. Application of the refined weights model yields non-optimum factors between 1.00 and 1.50, with an average non-optimum factor of 1.14. To demonstrate their use, these calculated non-optimum factors are used to predict heavier, more realistic gore weights for a proposed heavy-lift launch vehicle's propellant tank bulkheads. These results indicate that relatively simple models can be developed to better estimate the actual weights of large structures for future launch vehicles.
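The non-optimum factor (NOF) described above is simply the ratio of as-built hardware weight to an idealized reference weight; a minimal sketch with made-up gore weights, not values from the paper:

```python
# Non-optimum factor (NOF) arithmetic: NOF = actual / reference weight.
# All weights below are invented illustration data.
actual_weights = [52.0, 61.5, 48.2]      # flight hardware, kg (assumed)
reference_weights = [40.6, 22.3, 44.1]   # sized-skin model, kg (assumed)

nofs = [a / r for a, r in zip(actual_weights, reference_weights)]
avg_nof = sum(nofs) / len(nofs)

# A more realistic weight for a new design scales the idealized
# estimate by the average NOF:
predicted = 35.0 * avg_nof               # 35 kg idealized gore (assumed)
print(round(avg_nof, 2), round(predicted, 1))
```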
Electrocardiographic evaluation of reperfusion therapy in patients with acute myocardial infarction.
Clemmensen, P
1996-02-01
The present thesis is based on 6 previously published clinical studies in patients with AMI. Thrombolytic therapy for patients with AMI improves early infarct coronary artery patency, limits AMI size, and improves left ventricular function and survival, as demonstrated in large placebo-controlled clinical trials. With the advent of interventions aimed at limiting AMI size, it became important to assess the amount of ischemic myocardium in the early phase of AMI, and to develop noninvasive methods for evaluation of these therapies. The aims of the present studies were to develop such methods. The studies included 267 patients with AMI admitted up to 12 hours after onset of symptoms. All included patients had acute ECG ST-segment changes indicating subepicardial ischemia, and patients with bundle branch block were excluded. Serial ECGs were analyzed with quantitative ST-segment measurements in the acute phase and compared to the Selvester QRS score estimated final AMI size. These ECG indices were validated through comparisons with other independent noninvasive and invasive methods used for evaluating patients with AMI treated with thrombolytic therapy. It was found that in patients with first AMI not treated with reperfusion therapies, the QRS score estimated final AMI size can be predicted from the acute ST-segment elevation. Based on the number of ECG leads with ST-segment elevation and its summated magnitude, formulas were developed to provide an "ST score" for estimating the amount of myocardium in jeopardy during the early phase of AMI. The ST-segment deviation present in the ECG of patients with documented occlusion of the infarct-related coronary artery was subsequently shown to correlate with the degree of regional and global left ventricular dysfunction.
Because serial changes in ST-segment elevation during the acute phase of AMI were believed to reflect changes in myocardial ischemia, and thus possibly infarct artery patency status, the summated ST-segment elevation present on the admission ECG was compared to that present after administration of intravenous thrombolytic therapy, immediately prior to angiographic visualization of the infarct-related coronary artery. The entire spectrum of sensitivities and specificities, derived from different cut-off values for the degree of ST-segment normalization, was described for the first time. It was found that a 20% decrease in ST-segment elevation could predict coronary artery patency with a high level of accuracy: positive predictive value = 88% and negative predictive value = 80%. (ABSTRACT TRUNCATED)
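Cut-off statistics like the 88%/80% predictive values above come from a standard 2x2 contingency table; a sketch with invented counts (chosen to reproduce those predictive values, not the thesis data):

```python
# Predictive values for a patency cut-off (>=20% ST-elevation decrease).
# Counts are illustrative assumptions: tp = resolved & patent,
# fp = resolved & occluded, fn = unresolved & patent, tn = unresolved & occluded.
tp, fp, fn, tn = 44, 6, 10, 40

ppv = tp / (tp + fp)    # patent artery among those with ST resolution
npv = tn / (tn + fn)    # occluded artery among those without resolution
sens = tp / (tp + fn)
spec = tn / (tn + fp)

print(round(ppv, 2), round(npv, 2))   # 0.88 0.8
```

Sweeping the cut-off from 0% to 100% ST resolution and recomputing these four quantities traces out the full sensitivity/specificity spectrum the thesis describes.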
Crash energy absorption of two-segment crash box with holes under frontal load
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choiron, Moch Agus, E-mail: agus-choiron@ub.ac.id; Sudjito,; Hidayati, Nafisah Arina
Crash box is one of the passive safety components designed as an impact energy absorber during a collision. Crash box designs have been developed in order to obtain the optimum crashworthiness performance. A circular cross section was first investigated with a one-segment design; it is strongly influenced by its length, which makes it sensitive to buckling. In this study, a two-segment crash box design with additional holes is investigated, and the deformation behavior and crash energy absorption are observed. The crash box modelling is performed by finite element analysis. The crash test components were the impactor, the crash box, and a fixed rigid base. The impactor and fixed-base materials are modelled as rigid, and the crash box material as bilinear isotropic hardening. A crash box length of 100 mm and a frontal crash velocity of 16 km/h are selected. The crash box material is aluminum alloy. Based on the simulation results, the configuration with 2 holes located at ¾ of the length gives the largest crash energy absorption. This condition is associated with the deformation pattern: this crash box model produces an axisymmetric collapse mode, unlike the other models.
Precise Alignment and Permanent Mounting of Thin and Lightweight X-ray Segments
NASA Technical Reports Server (NTRS)
Biskach, Michael P.; Chan, Kai-Wing; Hong, Melinda N.; Mazzarella, James R.; McClelland, Ryan S.; Norman, Michael J.; Saha, Timo T.; Zhang, William W.
2012-01-01
To provide observations to support current research efforts in high energy astrophysics, future X-ray telescope designs must provide matching or better angular resolution while significantly increasing the total collecting area. In such a design, the permanent mounting of thin and lightweight segments is critical to the overall performance of the complete X-ray optic assembly. The thin and lightweight segments used in the assembly of the modules are designed to maintain and/or exceed the resolution of existing X-ray telescopes while providing a substantial increase in collecting area. Such thin and delicate X-ray segments are easily distorted and yet must be aligned to the arcsecond level and retain accurate alignment for many years. The Next Generation X-ray Optic (NGXO) group at NASA Goddard Space Flight Center has designed, assembled, and implemented new hardware and procedures with the short-term goal of aligning three pairs of X-ray segments in a technology demonstration module while maintaining 10 arcsec alignment through environmental testing, as part of the eventual design and construction of a full-sized module capable of housing hundreds of X-ray segments. The recent attempts at multiple segment pair alignment and permanent mounting are described along with an overview of the procedure used. A look at what the next year will bring for the alignment and permanent segment mounting effort illustrates some of the challenges left to overcome before an attempt to populate a full-sized module can begin.
Detection of gamma irradiated pepper and papain by chemiluminescence
NASA Astrophysics Data System (ADS)
Sattar, Abdus; Delincée, H.; Diehl, J. F.
Chemiluminescence (CL) measurements of black pepper and of papain using luminol and lucigenin reactions were studied. Effects of grinding, irradiation (5-20 kGy) and particle size (750-140 μm) on the CL of pepper, and of irradiation (10-30 kGy) on the CL of papain, were investigated. All the tested treatments affected the luminescence response in both the luminol and lucigenin reactions; however, the pattern of changes in each case was inconsistent. The optimum pepper particle size for maximum luminescence was 560 μm, and the optimum irradiation doses were >15 kGy for pepper and >20 kGy for papain. Chemiluminescence may possibly be used as an indicator of irradiation treatment for pepper and papain at a dose of 10 kGy or higher, but further research is needed to establish the reliability of this method.
Optimization and design of pigments for heat-insulating coatings
NASA Astrophysics Data System (ADS)
Wang, Guang-Hai; Zhang, Yue
2010-12-01
This paper reports that the heat-insulating property of infrared-reflective coatings is obtained through the use of pigments that diffuse near-infrared thermal radiation. A suitable structure and size distribution of the pigments would attain maximum diffuse infrared radiation and reduce the pigment volume concentration required. The optimum structure and size range of pigments for infrared-reflective coatings are studied using Kubelka-Munk theory, the Mie model, and the independent scattering approximation. Taking titania particles as the pigment embedded in an inorganic coating, the computational results show that core-shell particles present excellent scattering ability, more so than solid and hollow spherical particles. The optimum radius range of core-shell particles is around 0.3-1.6 μm. Furthermore, the influence of shell thickness on the optical parameters of the coating is also significant, and the optimal shell thickness is 100-300 nm.
Collection of small-size diffraction radiation oscillators
NASA Astrophysics Data System (ADS)
Shestopalov, Victor P.; Skrynnik, Boris K.
1995-10-01
Systematic research and engineering efforts on a new class of vacuum tube devices, diffraction radiation generators (DRGs), are in progress at the IRE of the National Academy of Sciences of Ukraine. A DRG operates by excitation of an open resonator (OR) by the Smith-Purcell radiation initiated when an electron flow moves rectilinearly near a diffraction grating (DG) arranged on one of the OR mirrors. By now, a collection of small-sized, highly stable DRGs covering the entire mm band, packaged in optimum magnet systems with an air clearance of 32 mm, is available. The supply power is less than 500 W. The magnetic field for accompanying the electron flow is 0.4-0.7 T. The mass of the optimum magnet system of rare-earth elements is about 2-8 kg. The device is cooled by a water system.
Enhancement of 2,3-Butanediol Production by Klebsiella oxytoca PTCC 1402
Anvari, Maesomeh; Safari Motlagh, Mohammad Reza
2011-01-01
Optimal operating parameters of 2,3-butanediol production using Klebsiella oxytoca under submerged culture conditions are determined by using the Taguchi method. The effect of different factors, including medium composition, pH, temperature, mixing intensity, and inoculum size, on 2,3-butanediol production was analyzed using the Taguchi method at three levels. Based on these analyses the optimum concentrations of glucose, acetic acid, and succinic acid were found to be 6, 0.5, and 1.0 (% w/v), respectively. Furthermore, optimum values for temperature, inoculum size, pH, and shaking speed were determined as 37°C, 8 (g/L), 6.1, and 150 rpm, respectively. The optimal combination of factors obtained from the proposed DOE methodology was further validated by conducting fermentation experiments, and the obtained results revealed an enhanced 2,3-butanediol yield of 44%. PMID:21318172
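The Taguchi level selection can be sketched with the larger-is-better signal-to-noise ratio for a single factor; the replicate yields below are invented illustration data, not the study's measurements:

```python
# Larger-is-better Taguchi S/N ratio: S/N = -10 * log10(mean(1/y^2)).
# The level with the highest S/N is taken as optimal for that factor.
import math

# Yield replicates (%) at three glucose levels (% w/v) -- assumed data.
yields = {4.0: [28.0, 30.0], 6.0: [42.0, 44.0], 8.0: [35.0, 33.0]}

def sn_larger_is_better(ys):
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

best_level = max(yields, key=lambda lv: sn_larger_is_better(yields[lv]))
print(best_level)   # 6.0, matching the reported 6% glucose optimum
```

In the full analysis the same S/N comparison is repeated for every factor of the orthogonal array, and the winning levels are combined into the predicted optimum run.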
Research on the treatment of oily wastewater by coalescence technology.
Li, Chunbiao; Li, Meng; Zhang, Xiaoyan
2015-01-01
Recently, oily wastewater treatment has become a hot research topic across the world. Among the common methods for oily wastewater treatment, coalescence is one of the most promising technologies because of its high efficiency, easy operation, smaller land coverage, and lower investment and operational costs. In this research, a new type of ceramic filter material was chosen to investigate the effects of some key factors, including the particle size of the coarse-grained materials, temperature, inflow direction, and inflow velocity of the reactor. The aim was to explore the optimum operating conditions for coarse-graining. Results of a series of tests showed that the optimum operating conditions were a combination of grain size 1-3 mm, water temperature 35 °C and up-flow velocity 8 m/h, which gave a maximum oil removal efficiency of 93%.
Evaluation of a High-Resolution Benchtop Micro-CT Scanner for Application in Porous Media Research
NASA Astrophysics Data System (ADS)
Tuller, M.; Vaz, C. M.; Lasso, P. O.; Kulkarni, R.; Ferre, T. A.
2010-12-01
Recent advances in Micro Computed Tomography (MCT) provided the motivation to thoroughly evaluate and optimize scanning, image reconstruction/segmentation and pore-space analysis capabilities of a new generation benchtop MCT scanner and associated software package. To demonstrate applicability to soil research the project was focused on determination of porosities and pore size distributions of two Brazilian Oxisols from segmented MCT-data. Effects of metal filters and various acquisition parameters (e.g. total rotation, rotation step, and radiograph frame averaging) on image quality and acquisition time are evaluated. Impacts of sample size and scanning resolution on CT-derived porosities and pore-size distributions are illustrated.
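Once scans are reconstructed and segmented, CT-derived porosity is simply the pore-voxel fraction, and a pore-size distribution follows from connected-component volumes; a hedged sketch on a toy binary volume (the ~15% pore fraction is an assumption, not the Oxisol data):

```python
# Porosity and pore sizes from a segmented (pore = 1, solid = 0) volume.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(1)
vol = (rng.random((40, 40, 40)) < 0.15).astype(np.uint8)  # ~15% pores

porosity = vol.mean()                      # pore-voxel fraction
labels, n_pores = ndimage.label(vol)       # connected pore regions
sizes = np.bincount(labels.ravel())[1:]    # voxels per pore (label 0 = solid)

print(round(porosity, 3), n_pores)
```

A histogram of `sizes` (scaled by voxel volume) is the MCT-derived pore-size distribution.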
Dietary specialization is linked to reduced species durations in North American fossil canids
Casey, Corinna; Van Valkenburgh, Blaire
2018-01-01
How traits influence species persistence is a fundamental question in ecology, evolution and palaeontology. We test the relationship between dietary traits and both species duration and locality coverage over 40 million years in North American canids, a clade with considerable ecomorphological disparity and a dense fossil record. Because ecomorphological generalization—broad resource use—may enable species to withstand disturbance, we predicted that canids of average size and mesocarnivory would exhibit longer durations and wider distributions than specialized larger or smaller species. Second, because locality coverage might reflect dispersal ability and/or survivability in a range of habitats, we predicted that high coverage would correspond with longer durations. We find a nonlinear relationship between species duration and degree of carnivory: species at either end of the carnivory spectrum tend to have shorter durations than mesocarnivores. Locality coverage shows no relationship with size, diet or duration. To test whether generalization (medium size, mesocarnivory) corresponds to an adaptive optimum, we fit trait evolution models to previously generated canid phylogenies. Our analyses identify no single optimum in size or diet. Instead, the primary model of size evolution is a classic Cope's Rule increase over time, while dietary evolution does not conform to a single model. PMID:29765649
Walker, David; Yu, Guoyu; Li, Hongyu; Messelink, Wilhelmus; Evans, Rob; Beaucamp, Anthony
2012-08-27
Segment-edges for extremely large telescopes are critical for observations requiring high contrast and SNR, e.g. detecting exo-planets. In parallel, industrial requirements for edge-control are emerging in several applications. This paper reports on a new approach, where edges are controlled throughout polishing of the entire surface of a part, which has been pre-machined to its final external dimensions. The method deploys compliant bonnets delivering influence functions of variable diameter, complemented by small pitch tools sized to accommodate aspheric mis-fit. We describe results on witness hexagons in preparation for full size prototype segments for the European Extremely Large Telescope, and comment on wider applications of the technology.
Pasupathy, Sivabaskari; Tavella, Rosanna; Grover, Suchi; Raman, Betty; Procter, Nathan E K; Du, Yang Timothy; Mahadavan, Gnanadevan; Stafford, Irene; Heresztyn, Tamila; Holmes, Andrew; Zeitz, Christopher; Arstall, Margaret; Selvanayagam, Joseph; Horowitz, John D; Beltrame, John F
2017-09-05
Contemporary ST-segment-elevation myocardial infarction management involves primary percutaneous coronary intervention, with ongoing studies focusing on infarct size reduction using ancillary therapies. N-acetylcysteine (NAC) is an antioxidant with reactive oxygen species scavenging properties that also potentiates the effects of nitroglycerin and thus represents a potentially beneficial ancillary therapy in primary percutaneous coronary intervention. The NACIAM trial (N-acetylcysteine in Acute Myocardial Infarction) examined the effects of NAC on infarct size in patients with ST-segment-elevation myocardial infarction undergoing percutaneous coronary intervention. This randomized, double-blind, placebo-controlled, multicenter study evaluated the effects of intravenous high-dose NAC (29 g over 2 days) with background low-dose nitroglycerin (7.2 mg over 2 days) on early cardiac magnetic resonance imaging-assessed infarct size. Secondary end points included cardiac magnetic resonance-determined myocardial salvage and creatine kinase kinetics. Of 112 randomized patients with ST-segment-elevation myocardial infarction, 75 (37 in the NAC group, 38 in the placebo group) underwent early cardiac magnetic resonance imaging. Median duration of ischemia pretreatment was 2.4 hours. With background nitroglycerin infusion administered to all patients, those randomized to NAC exhibited an absolute 5.5% reduction in cardiac magnetic resonance-assessed infarct size relative to placebo (median, 11.0% [interquartile range, 4.1-16.3] versus 16.5% [interquartile range, 10.7-24.2]; P = 0.02). Myocardial salvage was approximately doubled in the NAC group (60%; interquartile range, 37-79) compared with placebo (27%; interquartile range, 14-42; P < 0.01), and median creatine kinase areas under the curve were 22 000 and 38 000 IU·h in the NAC and placebo groups, respectively (P = 0.08).
High-dose intravenous NAC administered with low-dose intravenous nitroglycerin is associated with reduced infarct size in patients with ST-segment-elevation myocardial infarction undergoing percutaneous coronary intervention. A larger study is required to assess the impact of this therapy on clinical cardiac outcomes. Australian New Zealand Clinical Trials Registry. URL: http://www.anzctr.org.au/. Unique identifier: 12610000280000. © 2017 American Heart Association, Inc.
Feng, Chenchen; Jiao, Zhengbo; Li, Shaopeng; Zhang, Yan; Bi, Yingpu
2015-12-28
We demonstrate a facile method for the rational fabrication of pore-size controlled nanoporous BiVO(4) photoanodes, and confirmed that the optimum pore-size distributions could effectively absorb visible light through light diffraction and confinement functions. Furthermore, in situ X-ray photoelectron spectroscopy (XPS) reveals more efficient photoexcited electron-hole separation than conventional particle films, induced by light confinement and rapid charge transfer in the inter-crossed worm-like structures.
NASA Astrophysics Data System (ADS)
DuBose, Theodore B.; Milanfar, Peyman; Izatt, Joseph A.; Farsiu, Sina
2016-03-01
The human retina is composed of several layers, visible by in vivo optical coherence tomography (OCT) imaging. To enhance diagnostics of retinal diseases, several algorithms have been developed to automatically segment one or more of the boundaries of these layers. OCT images are corrupted by noise, which is frequently the result of detector noise and speckle, a type of coherent noise resulting from the presence of several scatterers in each voxel. However, it is unknown what the empirical distribution of noise in each layer of the retina is, and how the magnitude and distribution of the noise affect the lower bounds of segmentation accuracy. Five healthy volunteers were imaged using a spectral domain OCT probe from Bioptigen, Inc., centered at 850 nm with 4.6 µm full width at half maximum axial resolution. Each volume was segmented by expert manual graders into nine layers. The histograms of intensities in each layer were then fit to seven possible noise distributions from the literature on speckle and image processing. Using these empirical noise distributions and empirical estimates of the intensity of each layer, the Cramer-Rao lower bound (CRLB), a measure of the variance of an estimator, was calculated for each boundary layer. Additionally, the optimum bias of a segmentation algorithm was calculated, and a corresponding biased CRLB was calculated, which represents the improved performance an algorithm can achieve by using prior knowledge, such as the smoothness and continuity of layer boundaries. Our general mathematical model can be easily adapted for virtually any OCT modality.
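The distribution-fitting step described above can be sketched as follows. This is a hedged illustration only, not the authors' code: it compares just three illustrative candidate noise models (the study tests seven) by closed-form maximum-likelihood fits, and the function names are invented for this sketch.

```python
import math
import random

def loglik_rayleigh(xs):
    """Log-likelihood of positive samples under a Rayleigh model (closed-form MLE)."""
    n = len(xs)
    sigma2 = sum(x * x for x in xs) / (2 * n)  # MLE of the scale parameter squared
    return sum(math.log(x / sigma2) - x * x / (2 * sigma2) for x in xs)

def loglik_exponential(xs):
    """Log-likelihood under an exponential model with MLE rate 1/mean."""
    n = len(xs)
    lam = n / sum(xs)
    return n * math.log(lam) - lam * sum(xs)

def loglik_gaussian(xs):
    """Log-likelihood under a Gaussian model at the MLE mean/variance."""
    n = len(xs)
    mu = sum(xs) / n
    s2 = sum((x - mu) ** 2 for x in xs) / n
    return -0.5 * n * math.log(2 * math.pi * s2) - 0.5 * n

def best_noise_model(xs):
    """Pick the candidate distribution with the highest fitted log-likelihood."""
    fits = {"rayleigh": loglik_rayleigh(xs),
            "exponential": loglik_exponential(xs),
            "gaussian": loglik_gaussian(xs)}
    return max(fits, key=fits.get)
```

In practice one would fit each layer's intensity histogram and rank models by log-likelihood or an information criterion; the principle is the same.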
Study on Sumbawa gold recovery using centrifuge
NASA Astrophysics Data System (ADS)
Ferdana, A. D.; Petrus, H. T. B. M.; Bendiyasa, I. M.; Prijambada, I. D.; Hamada, F.; Sachiko, T.
2018-01-01
The artisanal small-scale gold mining in Sumbawa has been processing gold with mercury (Hg), which poses a serious threat to the mining area and the global environment. One mercury-free gold processing route is the gravity method. Before processing, the ore was first characterized by mineragraphic analysis and by compound analysis with XRD. The mineragraphy results show that gold is associated with chalcopyrite and covellite and occurs as single (native) particles at sizes of 58.8 μm and 117 μm up to 294 μm. XRD characterization shows that the Sumbawa gold ore is composed of quartz, pyrite, pyroxene, and sericite. Centrifugation is a gravity-based separation technique that upgrades the concentrate based on differences in specific gravity. The optimum concentration result is influenced by several variables, such as water flow rate and particle size. In the present research, the flow rates tested were 5 lpm and 10 lpm, and the particle sizes were -100+200 mesh and -200+300 mesh. The gold concentration in the concentrate was measured by EDX. The results show that the optimum condition is obtained at a flow rate of 5 lpm and a particle size of -100+200 mesh.
Mocz, G.
1995-01-01
Fuzzy cluster analysis has been applied to the 20 amino acids by using 65 physicochemical properties as a basis for classification. The clustering products, the fuzzy sets (i.e., classical sets with associated membership functions), have provided a new measure of amino acid similarities for use in protein folding studies. This work demonstrates that fuzzy sets of simple molecular attributes, when assigned to amino acid residues in a protein's sequence, can predict the secondary structure of the sequence with reasonable accuracy. An approach is presented for discriminating standard folding states, using near-optimum information splitting in half-overlapping segments of the sequence of assigned membership functions. The method is applied to a nonredundant set of 252 proteins and yields approximately 73% matching for correctly predicted and correctly rejected residues, with an approximately 60% overall success rate for the correctly recognized ones in three folding states: alpha-helix, beta-strand, and coil. The most useful attributes for discriminating these states appear to be related to size, polarity, and thermodynamic factors. Van der Waals volume, apparent average thickness of surrounding molecular free volume, and a measure of dimensionless surface electron density can explain approximately 95% of the prediction results. Hydrogen bonding and hydrophobicity indices do not yet enable clear clustering and prediction. PMID:7549882
Fida, Benish; Bernabucci, Ivan; Bibbo, Daniele; Conforto, Silvia; Schmid, Maurizio
2015-07-01
The accuracy of systems that recognize activities of daily living in real time depends heavily on the signal segmentation step. So far, windowing approaches have been used to segment the data, with the window size usually chosen based on previous studies. However, the literature is vague on how window size affects recognition accuracy when both short- and long-duration activities are considered. In this work, we present the impact of window size on the recognition of daily living activities, where transitions between different activities are also taken into account. The study was conducted on nine participants who wore a tri-axial accelerometer on the waist and performed short-duration (sitting, standing, and transitions between activities) and long-duration (walking, stair descending, and stair ascending) activities. Five different classifiers were tested, and among the different window sizes, a 1.5 s window was found to represent the best trade-off in recognition among activities, with an accuracy well above 90%. Differences in recognition accuracy for each activity highlight the utility of developing adaptive segmentation criteria based on the duration of the activities. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
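The fixed-size windowing step that the study varies can be sketched as follows. This is a minimal illustration, not the authors' implementation; the 1.5 s default reflects the reported best trade-off, and the function name and overlap fraction are assumptions of this sketch.

```python
def segment_windows(samples, fs_hz, win_s=1.5, overlap=0.5):
    """Split a 1-D signal into fixed-duration windows with fractional overlap.

    samples: sequence of accelerometer samples
    fs_hz:   sampling frequency in Hz
    win_s:   window duration in seconds (1.5 s per the reported trade-off)
    overlap: fraction of each window shared with the next
    """
    win = int(win_s * fs_hz)                 # window length in samples
    step = max(1, int(win * (1 - overlap)))  # hop size in samples
    return [samples[i:i + win]
            for i in range(0, len(samples) - win + 1, step)]
```

Each returned window would then be turned into a feature vector and fed to a classifier; adaptive schemes would instead vary `win_s` with the expected activity duration.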
Buckling Design and Analysis of a Payload Fairing One-Sixth Cylindrical Arc-Segment Panel
NASA Technical Reports Server (NTRS)
Kosareo, Daniel N.; Oliver, Stanley T.; Bednarcyk, Brett A.
2013-01-01
Design and analysis results are reported for a panel that is a one-sixth arc-segment of a full 33-ft-diameter cylindrical barrel section of a payload fairing structure. Six such panels could be used to construct the fairing barrel, and, as such, compression buckling testing of a one-sixth arc-segment panel would serve as a validation test of the buckling analyses used to design the fairing panels. In this report, linear and nonlinear buckling analyses have been performed using finite element software for one-sixth arc-segment panels composed of an aluminum honeycomb core with graphite/epoxy composite facesheets and an alternative fiber-reinforced foam (FRF) composite sandwich design. The cross sections of both concepts were sized to represent realistic Space Launch System (SLS) payload fairing panels. Based on shell-based linear buckling analyses, smaller, more manageable buckling test panel dimensions were determined such that the panel would still be expected to buckle with a circumferential (as opposed to column-like) mode with significant separation between the first and second buckling modes. More detailed nonlinear buckling analyses were then conducted for honeycomb panels of various sizes using both Abaqus and ANSYS finite element codes, and for the smaller size panel, a solid-based finite element analysis was conducted. Finally, for the smaller size FRF panel, a nonlinear buckling analysis was performed wherein geometric imperfections measured from an actual manufactured FRF panel were included. It was found that the measured imperfection did not significantly affect the panel's predicted buckling response.
Application-Controlled Demand Paging for Out-of-Core Visualization
NASA Technical Reports Server (NTRS)
Cox, Michael; Ellsworth, David; Kutler, Paul (Technical Monitor)
1997-01-01
In the area of scientific visualization, input data sets are often very large. In visualization of Computational Fluid Dynamics (CFD) in particular, input data sets today can surpass 100 Gbytes, and are expected to scale with the ability of supercomputers to generate them. Some visualization tools already partition large data sets into segments, and load appropriate segments as they are needed. However, this does not remove the problem for two reasons: 1) there are data sets for which even the individual segments are too large for the largest graphics workstations, 2) many practitioners do not have access to workstations with the memory capacity required to load even a segment, especially since the state-of-the-art visualization tools tend to be developed by researchers with much more powerful machines. When the size of the data that must be accessed is larger than the size of memory, some form of virtual memory is simply required. This may be by segmentation, paging, or by paged segments. In this paper we demonstrate that complete reliance on operating system virtual memory for out-of-core visualization leads to poor performance. We then describe a paged segment system that we have implemented, and explore the principles of memory management that can be employed by the application for out-of-core visualization. We show that application control over some of these can significantly improve performance. We show that sparse traversal can be exploited by loading only those data actually required. We show also that application control over data loading can be exploited by 1) loading data from alternative storage format (in particular 3-dimensional data stored in sub-cubes), 2) controlling the page size. Both of these techniques effectively reduce the total memory required by visualization at run-time. We also describe experiments we have done on remote out-of-core visualization (when pages are read by demand from remote disk) whose results are promising.
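The paged-segment idea above can be illustrated as an application-controlled page cache that loads pages on demand and evicts the least-recently-used page when a memory budget is exceeded. This is a minimal sketch with invented names, not the system described in the paper, which adds sparse traversal, alternative storage layouts, and tunable page sizes on top of this basic mechanism.

```python
from collections import OrderedDict

class PageCache:
    """Application-controlled demand paging with LRU eviction (toy sketch)."""

    def __init__(self, load_page, max_pages):
        self.load_page = load_page    # callback: page_id -> page data
        self.max_pages = max_pages    # memory budget, in pages
        self.pages = OrderedDict()    # page_id -> data, ordered by recency
        self.misses = 0

    def get(self, page_id):
        if page_id in self.pages:
            self.pages.move_to_end(page_id)      # mark most recently used
        else:
            self.misses += 1
            self.pages[page_id] = self.load_page(page_id)  # demand load
            if len(self.pages) > self.max_pages:
                self.pages.popitem(last=False)   # evict least recently used
        return self.pages[page_id]
```

The application-level leverage the paper describes comes from choosing what counts as a page (e.g., 3-D sub-cubes) and touching only the pages a traversal actually needs.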
NASA Astrophysics Data System (ADS)
Chen, Xiaolong; Honda, Hiroshi; Kuroda, Seiji; Araki, Hiroshi; Murakami, Hideyuki; Watanabe, Makoto; Sakka, Yoshio
2016-12-01
Effects of the ceramic powder size used for suspension as well as several processing parameters in suspension plasma spraying of YSZ were investigated experimentally, aiming to fabricate highly segmented microstructures for thermal barrier coating (TBC) applications. Particle image velocimetry (PIV) was used to observe the atomization process and the velocity distribution of atomized droplets and ceramic particles travelling toward the substrates. The tested parameters included the secondary plasma gas (He versus H2), suspension injection flow rate, and substrate surface roughness. Results indicated that a plasma jet with a relatively higher content of He or H2 as the secondary plasma gas was critical to produce highly segmented YSZ TBCs with a crack density up to 12 cracks/mm. The optimized suspension flow rate played an important role to realize coatings with a reduced porosity level and improved adhesion. An increased powder size and higher operation power level were beneficial for the formation of highly segmented coatings onto substrates with a wider range of surface roughness.
NASA Technical Reports Server (NTRS)
Jiang, L.; Salisbury, F. B.; Campbell, W. F.; Carman, J. G.; Nan, R.
1998-01-01
Super-Dwarf wheat plants were grown in growth chambers under 12 treatments with three photoperiods (18 h, 21 h, 24 h) and four carbon dioxide (CO2) levels (360; 1,200; 3,000; and 7,000 µmol mol-1). Carbon dioxide concentrations affected flower initiation rates of Super-Dwarf wheat. The optimum CO2 level for flower initiation and development was 1,200 µmol mol-1. Super-optimum CO2 levels delayed flower initiation, but did not decrease final flower bud number per head. Longer photoperiods not only accelerated flower initiation rates, but also decreased deleterious effects of super-optimum CO2. Flower bud size and head length at the same developmental stage were larger under longer photoperiods, but final flower bud number was not affected by photoperiod.
Ma, Zelan; Chen, Xin; Huang, Yanqi; He, Lan; Liang, Cuishan; Liang, Changhong; Liu, Zaiyi
2015-01-01
Accurate and repeatable measurement of the gross tumour volume (GTV) of subcutaneous xenografts is crucial in the evaluation of anti-tumour therapy. Formula- and image-based manual segmentation methods are commonly used for GTV measurement but are hindered by low accuracy and reproducibility. 3D Slicer is open-source software that provides semiautomatic segmentation for GTV measurements. In our study, subcutaneous GTVs from nude mouse xenografts were measured by semiautomatic segmentation with 3D Slicer based on morphological magnetic resonance imaging (mMRI) or diffusion-weighted imaging (DWI) (b = 0, 20, 800 s/mm2). These GTVs were then compared with those obtained via the formula and image-based manual segmentation methods with ITK software, using the true tumour volume as the standard reference. The effects of tumour size and shape on GTV measurements were also investigated. Our results showed that, when compared with the true tumour volume, segmentation for DWI (P = 0.060-0.671) resulted in better accuracy than mMRI (P < 0.001) and the formula method (P < 0.001). Furthermore, semiautomatic segmentation for DWI (intraclass correlation coefficient, ICC = 0.9999) resulted in higher reliability than manual segmentation (ICC = 0.9996-0.9998). Tumour size and shape had no effect on GTV measurement across all methods. Therefore, DWI-based semiautomatic segmentation, which is accurate and reproducible and also provides biological information, is the optimal GTV measurement method in the assessment of anti-tumour treatments. PMID:26489359
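For context, the "formula method" for subcutaneous xenografts is commonly the modified-ellipsoid caliper estimate V = L x W^2 / 2; whether the study used exactly this variant is an assumption of this sketch. A minimal illustration contrasting it with an image-based voxel-count estimate (both helper names are invented):

```python
def gtv_formula(length_mm, width_mm):
    """Modified-ellipsoid caliper estimate, V = L * W^2 / 2, in mm^3.
    Assumed variant of the 'formula method'; conventions differ between labs."""
    return length_mm * width_mm ** 2 / 2.0

def gtv_voxel_count(mask, voxel_volume_mm3):
    """Image-based estimate: count segmented voxels in a 3-D binary mask
    (nested lists: planes -> rows -> 0/1 values) and scale by voxel volume."""
    n_voxels = sum(sum(row) for plane in mask for row in plane)
    return n_voxels * voxel_volume_mm3
```

The caliper formula ignores tumour shape entirely, which is one reason segmentation-based estimates track the true volume more closely.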
NASA Astrophysics Data System (ADS)
Dadbakhsh, Sasan; Verbelen, Leander; Vandeputte, Tom; Strobbe, Dieter; Van Puyvelde, Peter; Kruth, Jean-Pierre
This work investigates the influence of powder size/shape on selective laser sintering (SLS) of a thermoplastic polyurethane (TPU) elastomer. It examines a TPU powder that had been cryogenically milled to two different sizes: a coarse powder (D50 ≈ 200 μm) with rough surfaces and a fine powder (D50 ≈ 63 μm) with extremely fine flow additives. It is found that the coarse powder coalesces at lower temperatures and smokes excessively during SLS processing. In comparison, the fine powder with flow additives is more processable at significantly higher powder bed temperatures, allowing a lower optimum laser energy input that minimizes smoking and degradation of the polymer. In terms of mechanical properties, good coalescence of both powders leads to parts with acceptable shear-punch strengths compared to injection-molded parts. However, porosity and degradation at the optimum SLS parameters of the coarse powder drastically reduce the tensile properties to about one-third of those of parts made from the fine powder, as well as of those made by injection molding (IM).
An Investigation Of The Effect Of Particle Size On Oxidation Of Pyrites In Coal.
NASA Astrophysics Data System (ADS)
Chan, Paul K.; Frost, David C.
1986-08-01
We have used X-ray photoelectron spectroscopy (XPS) to study the variation of surface pyrite density with coal particle size (53 μm - 250 μm). We also detect and monitor pyrite oxidation to sulfate, an important process influencing the surface-dependency of coal-cleaning methods such as flotation. It is very likely that as coal is crushed as part of the processes employed to rid it of prospective pollutants, one eventually reaches a pyrite size which may be called "characteristic". It is this parameter that we examine here. Good correlations are established between (i) the liberation of pyrite and particle size, (ii) the surface pyrite/sulfate ratio, and (iii) oxidized and non-oxidized sulfur in a typical Canadian coal. For "non-oxidized", or "fresh", coal, the dispersion of pyrite on the coal surface is inversely proportional to coal particle radius, and the tangents of this curve intersect at a particular particle size (106 ± 5 μm). Although, for the oxidized coal, the appearance of the curves depends on the oxidation time interval at low temperature with humid air, there is an "optimum" particle size which exhibits maximum surface pyrite. Notably, this "optimum" size corresponds to the tangents' intersection for the non-oxidized coal, and hence to the "characteristic" size of the constituent pyrite. This should allow prediction of pyrite occurrence, a parameter of paramount interest in coal processing and cleaning technology. Coal surface characterization obtained by XPS after various conditioning steps and during flotation allows both a functional analysis via the study of chemical shifts and a semi-quantitative analysis based on relative intensity measurements.
Local orientational mobility in regular hyperbranched polymers.
Dolgushev, Maxim; Markelov, Denis A; Fürstenberg, Florian; Guérin, Thomas
2016-07-01
We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.
Simulating the Structural Response of a Preloaded Bolted Joint
NASA Technical Reports Server (NTRS)
Knight, Norman F., Jr.; Phillips, Dawn R.; Raju, Ivatury S.
2008-01-01
The present paper describes the structural analyses performed on a preloaded bolted-joint configuration. The joint modeled was comprised of two L-shaped structures connected together using a single bolt. Each L-shaped structure involved a vertical flat segment (or shell wall) welded to a horizontal segment (or flange). Parametric studies were performed using elasto-plastic, large-deformation nonlinear finite element analyses to determine the influence of several factors on the bolted-joint response. The factors considered included bolt preload, washer-surface-bearing size, edge boundary conditions, joint segment length, and loading history. Joint response is reported in terms of displacements, gap opening, and surface strains. Most of the factors studied were determined to have minimal effect on the bolted-joint response; however, the washer-bearing-surface size affected the response significantly.
What Controls Subduction Earthquake Size and Occurrence?
NASA Astrophysics Data System (ADS)
Ruff, L. J.
2008-12-01
There is a long history of observational studies on the size and recurrence intervals of the large underthrusting earthquakes in subduction zones. In parallel with this documentation of the variability in both recurrence times and earthquake sizes -- both within and amongst subduction zones -- there have been numerous suggestions for what controls size and occurrence. In addition to the intrinsic scientific interest in these issues, there are direct applications to hazards mitigation. In this overview presentation, I review past progress, consider current paradigms, and look toward future studies that offer some resolution of long-standing questions. Given the definition of seismic moment, earthquake size is the product of overall static stress drop, down-dip fault width, and along-strike fault length. The long-standing consensus viewpoint is that for the largest earthquakes in a subduction zone: stress-drop is constant, fault width is the down-dip extent of the seismogenic portion of the plate boundary, but that along-strike fault length can vary from one large earthquake to the next. While there may be semi-permanent segments along a subduction zone, successive large earthquakes can rupture different combinations of segments. Many investigations emphasize the role of asperities within the segments, rather than segment edges. Thus, the question of earthquake size is translated into: "What controls the along-strike segmentation, and what determines which segments will rupture in a particular earthquake cycle?" There is no consensus response to these questions. Over the years, the suggestions for segmentation control include physical features in the subducted plate, physical features in the over-lying plate, and more obscure -- and possibly ever-changing -- properties of the plate interface such as the hydrologic conditions.
It seems that the full global answer requires either some unforeseen breakthrough, or the long-term hard work of falsifying all candidate hypotheses except one. This falsification process requires both concentrated multidisciplinary efforts and patience. Large earthquake recurrence intervals in the same subduction zone segment display a significant, and therefore unfortunate, variability. Over the years, many of us have devised simple models to explain this variability. Of course, there are also more complicated explanations with many additional model parameters. While there has been important observational progress as both historical and paleo-seismological studies continue to add more data pairs of fault length and recurrence intervals, there has been a frustrating lack of progress in elimination of candidate models or processes that explain recurrence time variability. Some of the simple models for recurrence times offer a probabilistic or even deterministic prediction of future recurrence times - and have been used for hazards evaluation. It is important to know if these models are correct. Since we do not have the patience to wait for a strict statistical test, we must find other ways to test these ideas. For example, some of the simple deterministic models for along-strike segment interaction make predictions for variation in tectonic stress state that can be tested during the inter-seismic period. We have seen how some observational discoveries in the past decade (e.g., the episodic creep events down-dip of the seismogenic zone) give us additional insight into the physical processes in subduction zones; perhaps multi-disciplinary studies of subduction zones will discover a new way to reliably infer large-scale shear stresses on the plate interface?
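For reference, the definition of seismic moment invoked above is conventionally written M0 = mu * A * D, with A the rupture area (fault length times down-dip width) and D the mean slip; earthquake size is then usually reported as moment magnitude. A hedged sketch using the standard IASPEI conversion (the example numbers below are illustrative, not taken from the abstract):

```python
import math

def seismic_moment(mu_pa, length_m, width_m, mean_slip_m):
    """Seismic moment M0 = mu * A * D in N*m, with rupture area A = L * W."""
    return mu_pa * length_m * width_m * mean_slip_m

def moment_magnitude(m0_newton_meters):
    """IASPEI standard moment magnitude: Mw = (2/3) * (log10(M0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0_newton_meters) - 9.1)

# Illustrative rupture: rigidity 30 GPa, 100 km x 50 km fault, 2 m mean slip.
m0 = seismic_moment(3e10, 100e3, 50e3, 2.0)
mw = moment_magnitude(m0)
```

Because Mw scales with the logarithm of M0, the variable along-strike rupture length discussed above translates directly into the observed spread of largest-event magnitudes within a single subduction zone.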
Federal Register 2010, 2011, 2012, 2013, 2014
2013-11-07
... optimum yield. This action would also continue to suspend the minimum shell size for Atlantic surfclams... still above the biomass target reference points. Based on this information, the Council is recommending...
Design and evaluation of oral nanoemulsion drug delivery system of mebudipine.
Khani, Samira; Keyhanfar, Fariborz; Amani, Amir
2016-07-01
A nanoemulsion drug delivery system was developed to increase the oral bioavailability of mebudipine, a calcium channel blocker with a very low bioavailability profile. The impact of the nano-formulation on the pharmacokinetic parameters of mebudipine in rats was investigated. Nanoemulsion formulations containing ethyl oleate, Tween 80, Span 80, polyethylene glycol 400, ethanol, and deionized water were prepared using a probe sonicator. The optimum formulation was evaluated for physicochemical properties such as particle size, morphology, and stability. The particle size of the optimum formulation was 22.8 ± 4.0 nm. Based on the results of this study, the relative bioavailability of the mebudipine nanoemulsion was enhanced by about 2.6-, 2.0-, and 1.9-fold compared with the suspension, ethyl oleate solution, and micellar solution, respectively. In conclusion, nanoemulsions are an interesting option for the delivery of poorly water-soluble molecules such as mebudipine.
Optomechanical study and optimization of cantilever plate dynamics
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Pryputniewicz, Ryszard J.
1995-06-01
Optimum dynamic characteristics of an aluminum cantilever plate containing holes of different sizes located at arbitrary positions on the plate are studied computationally and experimentally. The objective function of this optimization is the minimization/maximization of the natural frequencies of the plate in terms of such design variables as the sizes and locations of the holes. The optimization process is performed using the finite element method and mathematical programming techniques to obtain the natural frequencies and the optimum conditions of the plate, respectively. The modal behavior of the resultant optimal plate layout is studied experimentally through the use of holographic interferometry techniques. Comparisons of the computational and experimental results show that good agreement between theory and test is obtained. The comparisons also show that experimental and computational techniques complement each other, and their combined, or hybrid, use proves to be a very efficient tool for performing optimization studies of mechanical components.
An optimization model for energy generation and distribution in a dynamic facility
NASA Technical Reports Server (NTRS)
Lansing, F. L.
1981-01-01
An analytical model is described that uses linear programming for the optimum generation and distribution of energy among competing energy resources under different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several decisions essential to better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, the inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economical computer use.
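The sizing decision described above can be illustrated, very schematically, by brute-force enumeration over a toy set of candidate capacities. The actual model uses decomposition, branch-and-bound mixed-integer programming, and the revised simplex method rather than enumeration, and every name and number below is invented for this sketch:

```python
from itertools import product

# Hypothetical candidates: capacities (kW) for two competing sources plus an
# optional storage unit (kWh) that can shave the peak. All figures are invented.
SOURCES = {"solar":  {"sizes": [0, 50, 100], "capex": 900, "opex": 5},
           "diesel": {"sizes": [0, 50, 100], "capex": 400, "opex": 60}}
STORAGE_SIZES = [0, 25, 50]   # kWh
STORAGE_CAPEX = 300           # $ per kWh
PEAK_DEMAND, YEARS = 120, 10

def life_cycle_cost(solar_kw, diesel_kw, store_kwh):
    """Total cost over the planning horizon: capital plus per-kW operating cost."""
    cost = store_kwh * STORAGE_CAPEX
    for name, kw in (("solar", solar_kw), ("diesel", diesel_kw)):
        s = SOURCES[name]
        cost += kw * s["capex"] + kw * s["opex"] * YEARS
    return cost

def optimum():
    """Cheapest feasible sizing: enumerate all combinations meeting peak demand."""
    feasible = ((s, d, b)
                for s, d, b in product(SOURCES["solar"]["sizes"],
                                       SOURCES["diesel"]["sizes"],
                                       STORAGE_SIZES)
                if s + d + b >= PEAK_DEMAND)
    return min(feasible, key=lambda x: life_cycle_cost(*x))
```

A real mixed-integer formulation replaces the enumeration with integer decision variables and lets branch-and-bound prune the combinatorial space.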
One-step preparing magnesium hydroxide particles from mother liquor of salt production
NASA Astrophysics Data System (ADS)
Guo, H.; Peng, C. S.; Ding, Z. W.; Yuan, H. T.; Yang, K.
2018-01-01
In this study, magnesium hydroxide (MH) particles were prepared from the mother liquor of salt production in one step, employing ammonia gas as the precipitant and stearic acid as the dispersant. By adopting a microporous plate to bubble the ammonia gas, the conversion of magnesium was boosted markedly. The influences of the operating conditions (reaction temperature, stirring rate, ammonia flow rate, and plate pore size) on magnesium conversion were investigated; according to the experimental results, the maximum conversion was 88.1% under the optimum conditions. The MH particles prepared from mother liquor under the optimum conditions were characterized by XRD; the results indicated that brucite accounted for 99.7% of the product composition. In addition, the size distribution and crystal morphology were also examined: the median particle diameter d50 is 883 nm, and the particles possess good dispersibility. Thermogravimetric analysis of the MH particles indicates that the thermal stability of the product makes it suitable for use in flame-retardant composite materials.
SU-E-I-96: A Study About the Influence of ROI Variation On Tumor Segmentation in PET
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, L; Tan, S; Lu, W
2014-06-01
Purpose: To study the influence of different regions of interest (ROI) on tumor segmentation in PET. Methods: The experiments were conducted on a cylindrical phantom. Six spheres with different volumes (0.5 ml, 1 ml, 6 ml, 12 ml, 16 ml and 20 ml) were placed inside a cylindrical container to mimic tumors of different sizes. The spheres were filled with 11C solution as sources and the cylindrical container was filled with 18F-FDG solution as the background. The phantom was continuously scanned in a Biograph-40 True Point/True View PET/CT scanner, and 42 images were reconstructed with source-to-background ratio (SBR) ranging from 16:1 to 1.8:1. We took a large and a small ROI for each sphere, both of which contained the whole sphere and did not contain any other spheres. Six other ROIs of different sizes were then taken between the large and the small ROI. For each ROI, all images were segmented by eight thresholding methods and eight advanced methods, respectively. The segmentation results were evaluated by the dice similarity index (DSI), classification error (CE), and volume error (VE). The robustness of different methods to ROI variation was quantified using the interrun variation and a generalized Cohen's kappa. Results: With the change of ROI, the segmentation results of all tested methods changed more or less. Compared with the advanced methods, the thresholding methods were less affected by the ROI change. In addition, most of the thresholding methods obtained more accurate segmentation results for all sphere sizes. Conclusion: The results showed that the segmentation performance of all tested methods was affected by the change of ROI. Thresholding methods were more robust to this change and can segment PET images more accurately. This work was supported in part by the National Natural Science Foundation of China (NNSFC), under Grant Nos. 60971112 and 61375018, and the Fundamental Research Funds for the Central Universities, under Grant No. 2012QN086.
Wei Lu was supported in part by the National Institutes of Health (NIH) Grant No. R01 CA172638.
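The evaluation above relies on the dice similarity index (DSI). As a hedged illustration (not the authors' code; the function name and masks are illustrative), DSI for two binary masks can be computed as:

```python
import numpy as np

def dice_similarity(seg, truth):
    """Dice similarity index: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    seg = np.asarray(seg).astype(bool)
    truth = np.asarray(truth).astype(bool)
    denom = seg.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, truth).sum() / denom

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_similarity(a, b), 3))  # 0.667
```

A DSI of 1 indicates perfect overlap between the segmentation and the reference, 0 indicates no overlap.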
Kuijt, Wichert J; Green, Cindy L; Verouden, Niels J W; Haeck, Joost D E; Tzivoni, Dan; Koch, Karel T; Stone, Gregg W; Lansky, Alexandra J; Broderick, Samuel; Tijssen, Jan G P; de Winter, Robbert J; Roe, Matthew T; Krucoff, Mitchell W
ST-segment recovery (STR) is a strong mechanistic correlate of infarct size (IS) and outcome in ST-segment elevation myocardial infarction (STEMI). Characterizing measures of speed, amplitude, and completeness of STR may extend the use of this noninvasive biomarker. Core-laboratory continuous 24-h 12-lead Holter ECG monitoring, IS by single-photon emission computed tomography (SPECT), and 30-day mortality from 2 clinical trials of primary percutaneous coronary intervention in STEMI were combined. Multiple ST measures (STR at last contrast injection (LC) measured from peak value; STR at 30, 60, 90, 120, and 240 min; residual deviation; time to steady ST recovery; and the 3-h area under the time trend curve [ST-AUC] from LC) were univariably correlated with IS and predictive of mortality. After multivariable adjustment for ST parameters and GRACE risk factors, STR at 240 min remained an additive predictor of mortality. Early STR, residual deviation, and ST-AUC remained associated with IS. Multiple parameters that quantify the speed, amplitude, and completeness of STR predict mortality and correlate with IS. Copyright © 2017. Published by Elsevier Inc.
Gao, Bin; Li, Xiaoqing; Woo, Wai Lok; Tian, Gui Yun
2018-05-01
Thermographic inspection has been widely applied to non-destructive testing and evaluation, with the capabilities of rapid, contactless, and large-surface-area detection. Image segmentation is considered essential for identifying and sizing defects. To attain a high-level performance, specific physics-based models that describe defect generation and enable the precise extraction of the target region are of crucial importance. In this paper, an effective genetic first-order statistical image segmentation algorithm is proposed for quantitative crack detection. The proposed method automatically extracts valuable spatial-temporal patterns using an unsupervised feature extraction algorithm and avoids a range of issues associated with human intervention in the laborious manual selection of specific thermal video frames for processing. An internal genetic functionality is built into the proposed algorithm to automatically control the segmentation threshold to render enhanced accuracy in sizing the cracks. Eddy current pulsed thermography is used as a platform to demonstrate surface crack detection. Experimental tests and comparisons have been conducted to verify the efficacy of the proposed method. In addition, a global quantitative assessment index, the F-score, has been adopted to objectively evaluate the performance of different segmentation algorithms.
Defect Detection of Steel Surfaces with Global Adaptive Percentile Thresholding of Gradient Image
NASA Astrophysics Data System (ADS)
Neogi, Nirbhar; Mohanta, Dusmanta K.; Dutta, Pranab K.
2017-12-01
Steel strips are used extensively for white goods, auto bodies and other purposes where surface defects are not acceptable. On-line surface inspection systems can effectively detect and classify defects and help in taking corrective actions. For detection of defects, the use of gradients is very popular for highlighting and subsequently segmenting areas of interest in a surface inspection system. Most of the time, segmentation by a fixed-value threshold leads to unsatisfactory results. As defects can be both very small and large in size, segmentation of a gradient image based on a fixed percentile threshold can lead to inadequate or excessive segmentation of defective regions. A global adaptive percentile thresholding of the gradient image has been formulated for blister defects and water-deposits (a pseudo defect) in steel strips. The developed method adaptively changes the percentile value used for thresholding depending on the number of pixels above some specific gray levels of the gradient image. The method is able to segment defective regions selectively, preserving the characteristics of defects irrespective of their size. The developed method performs better than the Otsu method of thresholding and an adaptive thresholding method based on local properties.
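The abstract does not give the exact adaptation rule, so the sketch below is only a plausible reconstruction: the percentile used for thresholding is lowered when many gradient pixels exceed a reference gray level, so that large defects are not under-segmented. The names ref_level, frac_hi and the percentile values are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

def adaptive_percentile_threshold(grad, ref_level=200.0, frac_hi=0.01,
                                  pct_small=99.5, pct_large=99.0):
    """Threshold a gradient image at an adaptively chosen percentile.

    Hypothetical rule: if many pixels exceed ref_level, the defect is
    probably large, so a lower percentile is used to keep the whole
    region; otherwise a higher percentile isolates small defects.
    """
    grad = np.asarray(grad, float)
    n_hi = np.count_nonzero(grad >= ref_level)
    pct = pct_large if n_hi > frac_hi * grad.size else pct_small
    thr = np.percentile(grad, pct)
    return grad >= thr, thr

grad = np.zeros((100, 100))
grad[40:60, 40:60] = 255.0          # a large bright "defect" region
mask, thr = adaptive_percentile_threshold(grad)
print(mask.sum())  # 400 pixels segmented
```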
A new method for automated discontinuity trace mapping on rock mass 3D surface model
NASA Astrophysics Data System (ADS)
Li, Xiaojun; Chen, Jianqin; Zhu, Hehua
2016-04-01
This paper presents an automated discontinuity trace mapping method on a 3D surface model of rock mass. Feature points of discontinuity traces are first detected using the Normal Tensor Voting Theory, which is robust to noisy point cloud data. Discontinuity traces are then extracted from feature points in four steps: (1) trace feature point grouping, (2) trace segment growth, (3) trace segment connection, and (4) redundant trace segment removal. A sensitivity analysis is conducted to identify optimal values for the parameters used in the proposed method. The optimal triangular mesh element size is between 5 cm and 6 cm; the angle threshold in the trace segment growth step is between 70° and 90°; the angle threshold in the trace segment connection step is between 50° and 70°, and the distance threshold should be at least 15 times the mean triangular mesh element size. The method is applied to the excavation face trace mapping of a drill-and-blast tunnel. The results show that the proposed discontinuity trace mapping method is fast and effective and could be used as a supplement to traditional direct measurement of discontinuity traces.
NASA Astrophysics Data System (ADS)
Selwyn, Ebenezer Juliet; Florinabel, D. Jemi
2018-04-01
Compound image segmentation plays a vital role in the compression of computer screen images. Computer screen images are images mixed with textual, graphical, or pictorial contents. In this paper, we present a comparison of two transform-based block classification approaches for compound images, using metrics such as speed of classification, precision and recall rate. Block-based classification approaches normally divide the compound images into fixed-size non-overlapping blocks. A frequency transform, such as the Discrete Cosine Transform (DCT) or the Discrete Wavelet Transform (DWT), is then applied over each block. Mean and standard deviation are computed for each 8 × 8 block and are used as the feature set to classify the compound images into text/graphics and picture/background blocks. The classification accuracy of the block-classification-based segmentation techniques is measured by evaluation metrics such as precision and recall rate. Compound images with smooth backgrounds and complex backgrounds containing text of varying size, colour and orientation are considered for testing. Experimental evidence shows that for both smooth and complex background images, the DWT-based segmentation provides approximately 2.3% higher recall and precision rates than the DCT-based segmentation, at the cost of an increase in block classification time.
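A minimal sketch of the block features described above (non-overlapping 8 × 8 blocks, a 2-D DCT, then per-block mean and standard deviation). The DCT matrix construction is standard; taking the statistics over coefficient magnitudes is an assumption, not confirmed by the abstract.

```python
import numpy as np

def dct2_matrix(n):
    """Orthonormal DCT-II basis matrix (rows are frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] *= np.sqrt(0.5)
    return C

def block_dct_features(img, bs=8):
    """Mean and std of 2-D DCT coefficient magnitudes for each
    non-overlapping bs x bs block; text blocks tend to spread energy
    across coefficients while picture blocks concentrate it at low ones."""
    img = np.asarray(img, float)
    C = dct2_matrix(bs)
    h, w = img.shape
    feats = []
    for y in range(0, h - bs + 1, bs):
        for x in range(0, w - bs + 1, bs):
            coef = C @ img[y:y+bs, x:x+bs] @ C.T   # separable 2-D DCT-II
            mag = np.abs(coef)
            feats.append((mag.mean(), mag.std()))
    return np.array(feats)
```

Each (mean, std) pair would then be fed to a simple text/picture classifier, e.g. a threshold or nearest-neighbor rule.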
Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger
2016-05-01
We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
Scaling Relations for the Thermal Structure of Segmented Oceanic Transform Faults
NASA Astrophysics Data System (ADS)
Wolfson-Schwehr, M.; Boettcher, M. S.; Behn, M. D.
2015-12-01
Mid-ocean ridge-transform faults (RTFs) are a natural laboratory for studying strike-slip earthquake behavior due to their relatively simple geometry, well-constrained slip rates, and quasi-periodic seismic cycles. However, deficiencies in our understanding of the limited size of the largest RTF earthquakes are due, in part, to not considering the effect of short intra-transform spreading centers (ITSCs) on fault thermal structure. We use COMSOL Multiphysics to run a series of 3D finite element simulations of segmented RTFs with a visco-plastic rheology. The models test a range of RTF segment lengths (L = 10-150 km), ITSC offset lengths (O = 1-30 km), and spreading rates (V = 2-14 cm/yr). The lithosphere and upper mantle are approximated as steady-state, incompressible flow. Coulomb failure incorporates brittle processes in the lithosphere, and a temperature-dependent flow law for dislocation creep of olivine activates ductile deformation in the mantle. ITSC offsets as small as 2 km affect the thermal structure underlying many segmented RTFs, reducing the area above the 600°C isotherm, A600, and thus the size of the largest expected earthquakes, Mc. We develop a scaling relation for the critical ITSC offset length, OC, which significantly reduces the thermal effect of adjacent fault segments of length L1 and L2. OC is defined as the ITSC offset that results in an area loss ratio of R = (Aunbroken - Acombined)/(Aunbroken - Adecoupled) = 63%, where Aunbroken = C600(L1 + L2)^1.5 V^-0.6 is A600 for an RTF of length L1 + L2; Adecoupled = C600(L1^1.5 + L2^1.5) V^-0.6 is the combined A600 of RTFs of lengths L1 and L2, respectively; and Acombined = Aunbroken exp(-O/OC) + Adecoupled [1 - exp(-O/OC)]. C600 is a constant. We use OC and kinematic fault parameters (L1, L2, O, and V) to develop a scaling relation for the approximate seismogenic area, Aseg, of each segment of an RTF system composed of two fault segments. Finally, we estimate the size of Mc on a fault segment based on Aseg.
We show that small (<1 km) offsets in the fault trace observed between MW 6 rupture patches on the Gofar and Discovery transform faults, located at ~4°S on the East Pacific Rise, are not sufficient to thermally decouple adjacent fault patches. Thus, additional factors, possibly including changes in fault zone material properties, must limit the size of Mc on these faults.
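The combined-area scaling relation above can be written directly in code. A sketch, with C600 and OC left as placeholder arguments since their fitted values are not given in the abstract:

```python
import math

def a600_combined(L1, L2, O, V, Oc, C600=1.0):
    """A600 (area above the 600 °C isotherm) for two RTF segments of
    lengths L1, L2 (km) offset by an ITSC of length O (km), at spreading
    rate V (cm/yr), following the scaling forms quoted in the abstract."""
    a_unbroken = C600 * (L1 + L2) ** 1.5 * V ** -0.6
    a_decoupled = C600 * (L1 ** 1.5 + L2 ** 1.5) * V ** -0.6
    w = math.exp(-O / Oc)                 # thermal coupling weight
    return w * a_unbroken + (1.0 - w) * a_decoupled
```

At O = 0 the expression reduces to the unbroken-fault area; as O grows past OC it decays toward the fully decoupled sum, which is always smaller because (L1 + L2)^1.5 > L1^1.5 + L2^1.5.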
Size and Base Composition of RNA in Supercoiled Plasmid DNA
Williams, Peter H.; Boyer, Herbert W.; Helinski, Donald R.
1973-01-01
The average size and base composition of the covalently integrated RNA segment in supercoiled ColE1 DNA synthesized in Escherichia coli in the presence of chloramphenicol (CM-ColE1 DNA) have been determined by two independent methods. The two approaches yielded similar results, indicating that the RNA segment in CM-ColE1 DNA contains GMP at the 5′ end and comprises on the average 25 to 26 ribonucleotides with a base composition of 10-11 G, 3 A, 5-6 C, and 6-7 U. PMID:4359488
Mandurino-Mirizzi, Alessandro; Crimi, Gabriele; Raineri, Claudia; Pica, Silvia; Ruffinazzi, Marta; Gianni, Umberto; Repetto, Alessandra; Ferlini, Marco; Marinoni, Barbara; Leonardi, Sergio; De Servi, Stefano; Oltrona Visconti, Luigi; De Ferrari, Gaetano M; Ferrario, Maurizio
2018-05-01
Elevated serum uric acid (eSUA) was associated with unfavorable outcome in patients with ST-segment elevation myocardial infarction (STEMI). However, the effect of eSUA on myocardial reperfusion injury and infarct size has been poorly investigated. Our aim was to correlate eSUA with infarct size, infarct size shrinkage, myocardial reperfusion grade and long-term mortality in STEMI patients undergoing primary percutaneous coronary intervention. We performed a post-hoc patient-level analysis of two randomized controlled trials testing strategies for protection against myocardial ischemia/reperfusion injury. Each patient underwent acute (3-5 days) and follow-up (4-6 months) cardiac magnetic resonance. Infarct size and infarct size shrinkage were the outcomes of interest. We assessed T2-weighted edema, myocardial blush grade (MBG), corrected Thrombolysis In Myocardial Infarction frame count, ST-segment resolution and long-term all-cause mortality. A total of 101 (86.1% anterior) STEMI patients were included; eSUA was found in 16 (15.8%) patients. Infarct size was larger in eSUA compared with non-eSUA patients (42.3 ± 22 vs. 29.1 ± 15 ml, P = 0.008). After adjusting for covariates, infarct size was 10.3 ml (95% confidence interval 1.2-19.3 ml, P = 0.001) larger in eSUA patients. Among patients with anterior myocardial infarction, the difference in delayed enhancement between groups was maintained (42.3 ± 22.4 vs. 29.9 ± 15.4 ml, respectively, P = 0.015). Infarct size shrinkage was similar between the groups. Compared with non-eSUA, eSUA patients had larger T2-weighted edema (53.8 vs. 41.2 ml, P = 0.031) and less favorable MBG (MBG < 2: 44.4 vs. 13.6%, P = 0.045). Corrected Thrombolysis In Myocardial Infarction frame count and ST-segment resolution did not significantly differ between the groups. At a median follow-up of 7.3 years, all-cause mortality was higher in the eSUA group (18.8 vs. 2.4%, P = 0.028).
eSUA may affect myocardial reperfusion in patients with STEMI undergoing percutaneous coronary intervention and is associated with larger infarct size and higher long-term mortality.
NASA Astrophysics Data System (ADS)
Mohammadi, Akram; Inadama, Naoko; Yoshida, Eiji; Nishikido, Fumihiko; Shimizu, Keiji; Yamaya, Taiga
2017-09-01
We have developed a four-layer depth of interaction (DOI) detector with single-side photon readout, in which segmented crystals with patterned reflector insertion are individually identified by an Anger-type calculation. The optical conditions between segmented crystals, where there is no reflector, affect crystal identification ability. The objective of this work was to improve the crystal identification performance of the four-layer DOI detector using crystals segmented with a recently developed laser processing technique to include laser processed boundaries (LPBs). The detector consisted of 2 × 2 × 4 mm3 LYSO crystals and a 4 × 4 array multianode photomultiplier tube (PMT) with 4.5 mm anode pitch. The 2D position map of the detector was calculated by the Anger calculation method. First, the influence of optical conditions on crystal identification was evaluated for a one-layer detector consisting of a 2 × 2 crystal array with three different optical conditions between the crystals: crystals stuck together using room temperature vulcanized (RTV) rubber, crystals with air coupling, and crystals segmented with LPBs. The crystal array with LPBs gave the shortest distance between crystal responses in the 2D position map compared with the crystal arrays coupled with RTV rubber or air, due to the large amount of cross-talk between crystals segmented with LPBs. These results were used to find optical conditions offering the optimum distance between crystal responses in the 2D position map for the four-layer DOI detector. Crystal identification performance for the four-layer DOI detector consisting of an 8 × 8 array of crystals segmented with LPBs was examined, and it was not acceptable for the crystals in the first layer. Crystal identification was improved for the first layer by changing the optical conditions between all 2 × 2 crystal arrays of the first layer to RTV coupling.
More improvement was observed by combining different optical conditions between all crystals of the first layer and some crystals of the second and the third layers of the segmented array.
GPU accelerated fuzzy connected image segmentation by using CUDA.
Zhuge, Ying; Cao, Yong; Miller, Robert W
2009-01-01
Image segmentation techniques using fuzzy connectedness principles have shown their effectiveness in segmenting a variety of objects in several large applications in recent years. However, one problem of these algorithms has been their excessive computational requirements when processing large image datasets. Nowadays, commodity graphics hardware provides high parallel computing power. In this paper, we present a parallel fuzzy connected image segmentation algorithm on Nvidia's Compute Unified Device Architecture (CUDA) platform for segmenting large medical image data sets. Our experiments based on three data sets of small, medium, and large size demonstrate the efficiency of the parallel algorithm, which achieves speed-up factors of 7.2x, 7.3x, and 14.4x, respectively, for the three data sets over the sequential CPU implementation of the fuzzy connected image segmentation algorithm.
Medical image segmentation using 3D MRI data
NASA Astrophysics Data System (ADS)
Voronin, V.; Marchuk, V.; Semenishchev, E.; Cen, Yigang; Agaian, S.
2017-05-01
Precise segmentation of three-dimensional (3D) magnetic resonance imaging (MRI) images can be a very useful computer aided diagnosis (CAD) tool in clinical routines. Accurate automatic extraction of a 3D component from MRI images is a challenging segmentation problem due to the small size of the objects of interest (e.g., blood vessels, bones) in each 2D MRA slice and the complex surrounding anatomical structures. Our objective is to develop a specific segmentation scheme for accurately extracting parts of bones from MRI images. In this paper, we use a segmentation algorithm based on a modified active contour method to extract the parts of bones from MRI data sets. The proposed method demonstrates good accuracy in comparison with existing segmentation approaches on real MRI data.
Gonté, Frédéric; Dupuy, Christophe; Luong, Bruno; Frank, Christoph; Brast, Roland; Sedghi, Baback
2009-11-10
The primary mirror of the future European Extremely Large Telescope will be equipped with 984 hexagonal segments. The alignment of the segments in piston, tip, and tilt within a few nanometers requires an optical phasing sensor. A test bench has been designed to study four different optical phasing sensor technologies. The core element of the test bench is an active segmented mirror composed of 61 flat hexagonal segments with a size of 17 mm side to side. Each of them can be controlled in piston, tip, and tilt by three piezoactuators with a precision better than 1 nm. The context of this development, the requirements, the design, and the integration of this system are explained. The first results on the final precision obtained in closed-loop control are also presented.
Kumar, Sunil; Gupta, Asha; Yadav, J P
2008-03-01
The present investigation deals with fluoride removal from aqueous solution by thermally activated neem (Azadirachta indica) leaves carbon (ANC) and thermally activated kikar (Acacia arabica) leaves carbon (AKC) adsorbents. In this study, neem leaves carbon and kikar leaves carbon prepared by heating the leaves at 400 degrees C in an electric furnace were found to be useful for the removal of fluoride. Adsorbents of 0.3 mm and 1.0 mm size were prepared from the neem and kikar leaves carbon with standard sieves. Batch experiments were done on a 5 ppm synthetic solution to study the influence of pH, adsorbent dose and contact time on adsorption efficiency. The optimum pH was found to be 6 for both adsorbents. The optimum dose was found to be 0.5 g/100 ml for ANC (activated neem leaves carbon) and 0.7 g/100 ml for AKC (activated kikar leaves carbon). The optimum contact time was found to be one hour for both adsorbents. It was also found that the 0.3 mm adsorbent size was more efficient than the 1.0 mm size. The adsorption process obeyed the Freundlich adsorption isotherm. The straight line of log(qe - q) vs. time at ambient temperature indicated the validity of the Lagergren equation and, consequently, the first-order nature of the process involved in the present study. Results indicate that besides intraparticle diffusion there may be other rate-controlling processes operating simultaneously. All optimized conditions were applied for removal of fluoride from four natural water samples.
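The Lagergren (pseudo-first-order) test described above plots log(qe - q) against time and reads the rate constant from the slope. A minimal sketch with synthetic data; the rate constant 0.05 and qe = 4.0 are illustrative values, not the paper's results:

```python
import numpy as np

def lagergren_rate_constant(t, q, qe):
    """Fit the pseudo-first-order model
    log10(qe - q) = log10(qe) - (k1 / 2.303) * t
    and return k1 (1/time) from the slope of log10(qe - q) vs t."""
    y = np.log10(qe - np.asarray(q, float))
    slope, _ = np.polyfit(np.asarray(t, float), y, 1)
    return -2.303 * slope

# synthetic uptake data generated with k1 = 0.05 and qe = 4.0 (illustrative)
t = np.array([10.0, 20.0, 30.0, 40.0, 50.0])
q = 4.0 * (1.0 - np.exp(-0.05 * t))
print(round(lagergren_rate_constant(t, q, 4.0), 3))  # 0.05
```

A straight line in the log10(qe - q) vs. time plot, as reported in the study, is what justifies this first-order fit.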
Ground truth crop proportion summaries for US segments, 1976-1979
NASA Technical Reports Server (NTRS)
Horvath, R. (Principal Investigator); Rice, D.; Wessling, T.
1981-01-01
The original ground truth data were collected, digitized, and registered to LANDSAT data for use in the LACIE and AgRISTARS projects. The numerous ground truth categories were consolidated into fewer classes of crops or crop conditions, and occurrences of these classes were counted for each segment. Tables are presented in which the individual entries are the percentage of total segment area assigned to a given class. The ground truth summaries were prepared from a 20% sample of the scene. An analysis indicates that this sample size provides sufficient accuracy for use of the data in initial segment screening.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Normark, W.R.; Morton, J.L.; Delaney, J.R.
1982-01-01
This report incorporates data from two cruises of the USGS vessel SP LEE: (1) L12-80-WF from 29 October to 13 November 1980, and (2) L11-81-WF from 4 to 15 September 1981. The 1980 cruise occurred long after the optimum weather window for this region. The natural result was that no photographic or sample stations could be attempted during nearly continuous gale- and storm-force winds, which twice forced the vessel to depart the work area for safety. A detailed bathymetric survey of a 35-km segment of the ridge axial zone was completed nonetheless, and the bathymetric map compiled from this survey was used as the base for our second cruise in 1981. The second visit to the area was blessed with fair weather, and most of the cruise effort was devoted to photography and sampling, including dredging and hydrocasts in the axial valley segment, which is the central part of the area surveyed in 1980.
Comparison between DCA - SSO - VDR and VMAT dose delivery techniques for 15 SRS/SRT patients
NASA Astrophysics Data System (ADS)
Tas, B.; Durmus, I. F.
2018-02-01
To evaluate dose delivery between the Dynamic Conformal Arc (DCA) - Segment Shape Optimization (SSO) - Variable Dose Rate (VDR) and Volumetric Modulated Arc Therapy (VMAT) techniques for fifteen SRS patients using a Versa HD® linear accelerator. Optimum treatment plans for the fifteen SRS / SRT patients were generated using the Monaco5.11® treatment planning system (TPS) with 1 coplanar and 3 non-coplanar fields for the VMAT technique; the plans were then reoptimized with the same optimization parameters for the DCA - SSO - VDR technique. The advantages of the DCA - SSO - VDR technique were fewer MUs and shorter beam-on time; its larger segments also decrease the dosimetric uncertainties of small-field quality assurance. The advantages of the VMAT technique were slightly better GI, CI, PCI, brain V12Gy and brain mean dose. The results show that the clinical objectives and plans for both techniques satisfied all organ-at-risk (OAR) dose constraints. Depending on the shape and localization of the target, either technique can be chosen for linear accelerator based SRS / SRT treatment.
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We studied and implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating the sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method using the median filter to estimate background illumination showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same accuracy. The CLAHE technique has a higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using the median filter to estimate background, quotient-based filtering, and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation.
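The statistical comparison above ranks illumination-correction methods by the coefficient of variation of the corrected color components. A minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def coefficient_of_variation(channel):
    """CV = standard deviation / mean of pixel intensities; a lower CV
    after illumination correction indicates a more uniform background."""
    channel = np.asarray(channel, float)
    return channel.std() / channel.mean()
```

Computing this on the corrected red and green channels of each image, as the study does, lets the correction methods be ranked by how much they flatten the illumination field.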
NASA Technical Reports Server (NTRS)
Kaufman, H. R.; Robinson, R. S.
1981-01-01
Using present technology as a starting point, performance predictions were made for large thrusters. The optimum beam diameter for maximum thruster efficiency was determined for a range of specific impulse. This optimum beam diameter varied greatly with specific impulse, from about 0.6 m at 3,000 seconds (and below) to about 4 m at 10,000 seconds with argon, and from about 0.6 m at 2,000 seconds (and below) to about 12 m at 10,000 seconds with xenon. These beam sizes would require much larger thrusters than those presently available, but would offer substantial complexity and cost reductions for large electric propulsion systems.
Okada, Satoshi; Onogi, Akio; Iijima, Ken; Hori, Kiyosumi; Iwata, Hiroyoshi; Yokoyama, Wakana; Suehiro, Miki; Yamasaki, Masanori
2018-01-01
Grain size is important for brewing-rice cultivars, but the genetic basis for this trait is still unclear. This paper aims to identify QTLs for grain size using novel chromosomal segment substitution lines (CSSLs) harboring chromosomal segments from Yamadanishiki, an excellent sake-brewing rice, in the genetic background of Koshihikari, a cooking cultivar. We developed a set of 49 CSSLs. Grain length (GL), grain width (GWh), grain thickness (GT), 100-grain weight (GWt) and days to heading (DTH) were evaluated, and a CSSL-QTL analysis was conducted. Eighteen QTLs for grain size and DTH were identified. Seven (qGL11, qGWh5, qGWh10, qGWt6-2, qGWt10-2, qDTH3, and qDTH6) that were detected in F2 and recombinant inbred lines (RILs) from Koshihikari/Yamadanishiki were validated, suggesting that they are important for large grain size and heading date in Yamadanishiki. Additionally, QTL reanalysis for GWt showed that qGWt10-2 was only detected in early-flowering RILs, while qGWt5 (in the same region as qGWh5) was only detected in late-flowering RILs, suggesting that these QTLs show different responses to the environment. Our study revealed that grain size in the Yamadanishiki cultivar is determined by a complex genetic mechanism. These findings could be useful for the breeding of both cooking and brewing rice. PMID:29875604
Drying step optimization to obtain large-size transparent magnesium-aluminate spinel samples
NASA Astrophysics Data System (ADS)
Petit, Johan; Lallemant, Lucile
2017-05-01
In transparent ceramics processing, the green body elaboration step is probably the most critical one. Among the known techniques, wet shaping processes are particularly interesting because they enable the particles to find an optimum position on their own. Nevertheless, the presence of water molecules leads to drying issues. During water removal, the water concentration gradient induces cracks that limit the sample size: laboratory samples are generally less damaged because of their small size, but upscaling the samples for industrial applications leads to an increasing cracking probability. Thanks to the optimization of the drying step, large-size spinel samples were obtained.
Particle size distribution of main-channel-bed sediments along the upper Mississippi River, USA
Remo, Jonathan; Heine, Ruben A.; Ickes, Brian
2016-01-01
In this study, we compared pre-lock-and-dam (ca. 1925) with a modern longitudinal survey of main-channel-bed sediments along a 740-km segment of the upper Mississippi River (UMR) between Davenport, IA, and Cairo, IL. This comparison was undertaken to gain a better understanding of how bed sediments are distributed longitudinally and to assess change since the completion of the UMR lock and dam navigation system and Missouri River dams (i.e., mid-twentieth century). The comparison of the historic and modern longitudinal bed sediment surveys showed similar bed sediment sizes and distributions along the study segment with the majority (> 90%) of bed sediment samples having a median diameter (D50) of fine to coarse sand. The fine tail (≤ D10) of the sediment size distributions was very fine to medium sand, and the coarse tail (≥ D90) of sediment-size distribution was coarse sand to gravel. Coarsest sediments in both surveys were found within or immediately downstream of bedrock-floored reaches. Statistical analysis revealed that the particle-size distributions between the survey samples were statistically identical, suggesting no overall difference in main-channel-bed sediment-size distribution between 1925 and present. This was a surprising result given the magnitude of river engineering undertaken along the study segment over the past ~ 90 years. The absence of substantial differences in main-channel-bed-sediment size suggests that flow competencies within the highly engineered navigation channel today are similar to conditions within the less-engineered historic channel.
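The D10, D50, and D90 statistics used above are percentiles of the grain-size distribution. A simplified count-based sketch (sediment surveys typically interpolate on the cumulative mass of sieve fractions instead, so this is only illustrative):

```python
import numpy as np

def grain_size_percentiles(diams):
    """Return (D10, D50, D90): the diameters below which 10%, 50% and
    90% of the sampled particle diameters fall (count-based version)."""
    return tuple(np.percentile(np.asarray(diams, float), [10, 50, 90]))
```

D50 is the median diameter reported for each sample; D10 and D90 characterize the fine and coarse tails of the distribution, as in the comparison above.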
The Study of Residential Areas Extraction Based on GF-3 Texture Image Segmentation
NASA Astrophysics Data System (ADS)
Shao, G.; Luo, H.; Tao, X.; Ling, Z.; Huang, Y.
2018-04-01
The study uses standard-stripe, dual-polarization SAR images from GF-3 as the basic data. Processes and methods for residential area extraction based on texture segmentation of GF-3 images are compared and analyzed. GF-3 image processing includes radiometric calibration, complex data conversion, multi-look processing and image filtering; a suitability analysis of the different filtering methods showed that the Kuan filter is efficient for extracting residential areas. We then calculated and analyzed texture feature vectors using the GLCM (Gray Level Co-occurrence Matrix), whose parameters include the moving window size, step size and angle; the results show that a window size of 11 × 11, a step of 1 and an angle of 0° are optimal for residential area extraction. Using the FNEA (Fractal Net Evolution Approach), we segmented the GLCM texture images and extracted the residential areas by threshold setting. The extraction result was verified and assessed with a confusion matrix: overall accuracy is 0.897 and kappa is 0.881. We also extracted the residential areas by SVM classification on the GF-3 images; its overall accuracy is 0.09 lower than that of the texture-segmentation-based method. We conclude that residential area extraction based on multi-scale segmentation of GF-3 SAR texture images is simple and highly accurate. Since it is difficult to obtain multi-spectral remote sensing images in southern China, which is cloudy and rainy throughout the year, this work has reference significance.
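The accuracy assessment above (overall accuracy 0.897, kappa 0.881) comes from a confusion matrix. A minimal sketch of both statistics (the matrix values are illustrative, not the study's data):

```python
import numpy as np

def accuracy_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    cm = np.asarray(cm, float)
    n = cm.sum()
    po = np.trace(cm) / n                                   # observed agreement
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2   # chance agreement
    return po, (po - pe) / (1.0 - pe)

po, kappa = accuracy_and_kappa([[45, 5], [5, 45]])
print(po, round(kappa, 3))  # 0.9 0.8
```

Kappa discounts the agreement expected by chance, which is why it is lower than the raw overall accuracy for the same matrix.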
Cortical bone fracture analysis using XFEM - case study.
Idkaidek, Ashraf; Jasiuk, Iwona
2017-04-01
We aim to achieve an accurate simulation of human cortical bone fracture using the extended finite element method (XFEM) within the commercial finite element software Abaqus. A two-dimensional unit cell model of cortical bone is built based on a microscopy image of the mid-diaphysis of the tibia of a 70-year-old human male donor. Each phase of this model (interstitial bone, cement lines, and osteons) is considered linear elastic and isotropic, with material properties obtained by nanoindentation, taken from the literature. The effects of the fracture analysis method (cohesive segment approach versus linear elastic fracture mechanics approach), finite element type, and boundary conditions (traction, displacement, and mixed) on cortical bone crack initiation and propagation are studied. In this study, cohesive segment damage evolution following a traction-separation law based on energy and displacement is used. In addition, the effects of increment size and mesh density on the analysis results are investigated. We find that both the cohesive segment and linear elastic fracture mechanics approaches within XFEM can effectively simulate cortical bone fracture. Mesh density and increment size can influence the results with either approach, and a finer mesh and/or smaller increment size does not always provide more accurate results. Both approaches provide close but not identical results, and crack propagation is slower with the cohesive segment approach. Also, using reduced-integration elements with the cohesive segment approach decreases crack propagation speed compared with full-integration elements. Copyright © 2016 John Wiley & Sons, Ltd.
Cheng, Bi-Hua; Chu, Tien-Min G.; Chang, Chawnshang; Kang, Hong-Yo; Huang, Ko-En
2013-01-01
Loss of large bone segments due to fracture resulting from trauma or tumor removal is a common clinical problem. The goal of this study was to evaluate the use of scaffolds containing testosterone, bone morphogenetic protein-2 (BMP-2), or a combination of both for treatment of critical-size segmental bone defects in mice. A 2.5-mm wide osteotomy was created on the left femur of wildtype and androgen receptor knockout (ARKO) mice. Testosterone, BMP-2, or both were delivered locally using a scaffold that bridged the fracture. Results of X-ray imaging showed that in both wildtype and ARKO mice, BMP-2 treatment induced callus formation within 14 days after initiation of the treatment. Testosterone treatment also induced callus formation within 14 days in wildtype but not in ARKO mice. Micro-computed tomography and histological examinations revealed that testosterone treatment caused similar degrees of callus formation as BMP-2 treatment in wildtype mice, but had no such effect in ARKO mice, suggesting that the androgen receptor is required for testosterone to initiate fracture healing. These results demonstrate that testosterone is as effective as BMP-2 in promoting the healing of critical-size segmental defects and that combination therapy with testosterone and BMP-2 is superior to single therapy. Results of this study may provide a foundation to develop a cost effective and efficient therapeutic modality for treatment of bone fractures with segmental defects. PMID:23940550
NASA Astrophysics Data System (ADS)
Deshmukh, Prasanna Gajanan; Mandal, Amaresh; Parihar, Padmakar S.; Nayak, Dayananda; Mishra, Deepta Sundar
2018-01-01
Segmented mirror telescopes (SMTs) are built from several small hexagonal mirrors, positioned and aligned by three actuators and six edge sensors per segment to maintain the shape of the primary mirror. The actuators are responsible for maintaining and tracking the mirror segments to the desired position in the presence of external disturbances introduced by wind, vibration, gravity, and temperature. The present paper describes our effort to develop a soft actuator and actuator controller for the prototype SMT at the Indian Institute of Astrophysics, Bangalore. The actuator designed, developed, and validated is a soft actuator based on a voice coil motor and flexural elements. It is designed for a travel range of ±1.5 mm and a force range of 25 N, along with an offloading mechanism to reduce power consumption. A precision controller using a programmable system on chip (PSoC 5LP) and a customized drive board has also been developed for this actuator. The closed-loop proportional-integral-derivative (PID) controller implemented in the PSoC gets position feedback from a high-resolution linear optical encoder. The optimum PID gains are derived using the relay tuning method. In the laboratory, we conducted several experiments to test the performance of the prototype soft actuator and controller, achieving RMS position errors of 5.73 nm in the steady state and 10.15 nm while tracking at a constant speed of 350 nm/s. We also present the outcome of various performance tests carried out with the off-loader in action and with the actuator subjected to dynamic wind loading.
Zhao, Yi-Nan; Fan, Jun-Jun; Li, Zhi-Quan; Liu, Yan-Wu; Wu, Yao-Ping; Liu, Jian
2017-02-01
Calcium phosphate cement (CPC) porous scaffolds are widely used as bone substitutes to repair bone defects, but the optimal pore size remains unclear. The current study aimed to evaluate the effect of different pore sizes on bone formation when repairing segmental bone defects in rabbits with CPC porous scaffolds. Three kinds of CPC porous scaffolds, 5 mm in diameter and 12 mm in length, were prepared with the same porosity but different pore sizes (Group A: 200-300 µm; Group B: 300-450 µm; Group C: 450-600 µm). Twelve-millimeter segmental bone defects were created in the middle of the radius and filled with the different kinds of CPC cylindrical scaffolds. After 4, 12, and 24 weeks, alkaline phosphatase (ALP) activity, histological assessment, and mechanical properties were evaluated in all three groups. After 4 weeks, ALP activity increased in all groups but was highest in Group A, with the smallest pore size. New bone formation within the scaffolds was not obvious in any group. After 12 weeks, new bone formation within the scaffolds was obvious in each group and highest in Group A. At 24 weeks, no significant difference in new bone formation was observed among the groups. Besides its osteoconductive effect, Group A, with the smallest pore size, also had the best mechanical properties in vivo at 12 weeks. We demonstrate that pore size has a significant effect on the osteoconductivity and mechanical properties of calcium phosphate cement porous scaffolds in vivo. A small pore size favors bone formation in the early stage and may be more suitable for repairing segmental bone defects in vivo. © 2016 International Center for Artificial Organs and Transplantation and Wiley Periodicals, Inc.
Size and composition-controlled fabrication of VO2 nanocrystals by terminated cluster growth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anders, Andre; Slack, Jonathan
2013-05-14
A physical vapor deposition-based route for the fabrication of VO2 nanoparticles is demonstrated, consisting of reactive sputtering and vapor condensation at elevated pressures. The oxidation of vanadium atoms is an efficient heterogeneous nucleation method, leading to high nanoparticle throughput. Fine control of the nanoparticle size and composition is obtained. Post-growth annealing leads to crystalline VO2 nanoparticles with optimum thermochromic and plasmonic properties.
C.J. Schwehm; P. Klinkhachorn; Charles W. McMillin; Henry A. Huber
1990-01-01
This paper describes an expert system computer program which will determine the optimum way to edge and trim a hardwood board so as to yield the highest dollar value based on the grade, size of each board, and current market prices. The program uses the Automated Hardwood Lumber Grading Program written by Klinkhachorn, et al. for determining the grade of each board...
Wavelet-based adaptive thresholding method for image segmentation
NASA Astrophysics Data System (ADS)
Chen, Zikuan; Tao, Yang; Chen, Xin; Griffis, Carl
2001-05-01
A nonuniform background distribution may cause a global thresholding method to fail to segment objects. One solution is using a local thresholding method that adapts to local surroundings. In this paper, we propose a novel local thresholding method for image segmentation, using multiscale threshold functions obtained by wavelet synthesis with weighted detail coefficients. In particular, the coarse-to-fine synthesis with attenuated detail coefficients produces a threshold function corresponding to a high-frequency-reduced signal. This wavelet-based local thresholding method adapts to both local size and local surroundings, and its implementation can take advantage of the fast wavelet algorithm. We applied this technique to physical contaminant detection for poultry meat inspection using x-ray imaging. Experiments showed that inclusion objects in deboned poultry could be extracted at multiple resolutions despite their irregular sizes and uneven backgrounds.
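A minimal 1-D sketch of the idea, using a Haar wavelet and assuming a uniform detail-attenuation weight (the paper's exact wavelet and weighting scheme are not specified here): synthesis with attenuated detail coefficients yields a smooth surface that tracks the nonuniform background and can serve as a local threshold.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    return a, d

def haar_reconstruct(a, d):
    """Inverse of haar_decompose."""
    x = np.empty(2 * a.size)
    x[0::2] = a + d
    x[1::2] = a - d
    return x

def threshold_surface(signal, levels=3, weight=0.1):
    """Wavelet synthesis with attenuated details -> smooth local threshold."""
    a = signal.astype(float)
    details = []
    for _ in range(levels):
        a, d = haar_decompose(a)
        details.append(d)
    for d in reversed(details):
        a = haar_reconstruct(a, weight * d)  # attenuate detail coefficients
    return a

# Nonuniform background: a ramp, plus one small bright object at index 40
sig = np.linspace(0, 10, 64) + (np.arange(64) == 40) * 5.0
thr = threshold_surface(sig, levels=3, weight=0.1)
mask = sig > thr + 1.0  # the object stands out against its local background
```

A global threshold would either miss the object or flag the bright end of the ramp; the wavelet-synthesized surface follows the ramp, so only the object is segmented.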
Spectral analysis of the 1976 aeromagnetic survey of Harrat Rahat, Kingdom of Saudi Arabia
Blank, H. Richard; Sadek, Hamdy S.
1983-01-01
Harrat Rahat, an extensive plateau of Cenozoic mafic lava on the Precambrian shield of western Saudi Arabia, has been studied for its water resource and geothermal potential. In support of these investigations, the thickness of the lava sequence at more than 300 points was estimated by spectral analysis of low-level aeromagnetic profiles utilizing the integral Fourier transform of field intensity along overlapping profile segments. The optimum length of segment for analysis was determined to be about 40 km or 600 field samples. Contributions from two discrete magnetic source ensembles could be resolved on almost all spectra computed. The depths to these ensembles correspond closely to the flight height (300 m), and, presumably, to the mean depth to basement near the center of each profile segment. The latter association was confirmed in all three cases where spectral estimates could be directly compared with basement depths measured in drill holes. The maximum thickness estimated for the lava section is 380 m and the mean about 150 m. Data from an isopach map prepared from these results suggest that thickness variations are strongly influenced by pre-harrat, north-northwest-trending topography probably consequent on Cenozoic faulting. The thickest zones show a rough correlation with three axially-disposed volcanic shields.
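The depth estimation described above rests on the classical spectral relation for an ensemble of magnetic sources, ln P(k) ≈ const − 2dk, so the mean source depth follows from the slope of the log power spectrum. A minimal Python sketch on synthetic data (the numbers are illustrative, not from the survey):

```python
import numpy as np

def depth_from_spectrum(k, power):
    """Mean source depth from the slope of ln(power) vs wavenumber:
    ln P(k) ~ const - 2*d*k, with k in radians per unit length."""
    slope, _ = np.polyfit(k, np.log(power), 1)
    return -slope / 2.0

# Synthetic power spectrum for sources at 0.3 km mean depth
k = np.linspace(0.1, 2.0, 50)        # rad/km
P = 4.0 * np.exp(-2 * 0.3 * k)
d = depth_from_spectrum(k, P)        # recovers ~0.3 km
```

In practice two source ensembles (flight height and basement) appear as two linear segments of the spectrum, each fitted separately.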
Optimum allocation for a dual-frame telephone survey.
Wolter, Kirk M; Tao, Xian; Montgomery, Robert; Smith, Philip J
2015-12-01
Careful design of a dual-frame random digit dial (RDD) telephone survey requires selecting from among many options that have varying impacts on cost, precision, and coverage in order to obtain the best possible implementation of the study goals. One such consideration is whether to screen cell-phone households in order to interview cell-phone-only (CPO) households and exclude dual-user households, or to take all interviews obtained via the cell-phone sample. We present a framework in which to consider the tradeoffs between these two options and a method to select the optimal design. We derive and discuss the optimum allocation of sample size between the two sampling frames and explore the choice of the optimum p, the mixing parameter for the dual-user domain. We illustrate our methods using the National Immunization Survey, sponsored by the Centers for Disease Control and Prevention.
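The flavor of the allocation problem can be sketched with a cost-constrained Neyman-type rule, n_h ∝ W_h·S_h/√c_h, scaled to spend a fixed budget. The frame weights, standard deviations, and per-interview costs below are invented for illustration; the paper's actual optimization also involves the mixing parameter p.

```python
import math

def optimum_allocation(strata, total_budget):
    """Cost-constrained Neyman allocation: n_h proportional to
    W_h * S_h / sqrt(c_h), scaled so that sum(n_h * c_h) = budget."""
    weights = {h: s["W"] * s["S"] / math.sqrt(s["cost"])
               for h, s in strata.items()}
    scale = total_budget / sum(weights[h] * strata[h]["cost"] for h in strata)
    return {h: weights[h] * scale for h in strata}

frames = {
    "landline": {"W": 0.45, "S": 1.0, "cost": 10.0},  # illustrative values
    "cell":     {"W": 0.55, "S": 1.2, "cost": 25.0},
}
n = optimum_allocation(frames, total_budget=100000.0)
```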
A methodology for selecting optimum organizations for space communities
NASA Technical Reports Server (NTRS)
Ragusa, J. M.
1978-01-01
This paper suggests that a methodology exists for selecting optimum organizations for future space communities of various sizes and purposes. Results of an exploratory study to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists are presented. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The principal finding of this research was that a four-level project type 'total matrix' model will optimize the effectiveness of Space Base technologists. An overall conclusion which can be reached from the research is that application of this methodology, or portions of it, may provide planning insights for the formal organizations which will be needed during the Space Industrialization Age.
Chintapalli, Ravi Kiran; Mirkhalaf, Mohammad; Dastjerdi, Ahmad Khayer; Barthelat, Francois
2014-09-01
Crocodiles, armadillos, turtles, fish and many other animal species have evolved flexible armored skins in the form of hard scales or osteoderms, which can be described as hard plates of finite size embedded in softer tissues. The individual hard segments provide protection from predators, while the relative motion of these segments provides the flexibility required for efficient locomotion. In this work, we duplicated these broad concepts in a bio-inspired segmented armor. Hexagonal segments of well-defined size and shape were carved within a thin glass plate using laser engraving. The engraved plate was then placed on a soft substrate which simulated soft tissues, and punctured with a sharp needle mounted on a miniature loading stage. The resistance of our segmented armor was significantly higher when smaller hexagons were used, and our bio-inspired segmented glass displayed an increase in puncture resistance of up to 70% compared to a continuous glass plate of the same thickness. Detailed structural analyses aided by finite elements revealed that this extraordinary improvement is due to the reduced span of individual segments, which decreases flexural stresses and delays fracture. This effect can, however, only be achieved if the plates are at least 1000 times stiffer than the underlying substrate, which is the case for natural armor systems. Our bio-inspired system also displayed many of the attributes of natural armors: it is flexible and robust, with 'multi-hit' capability. This new segmented glass therefore suggests interesting bio-inspired strategies and mechanisms which could be systematically exploited in high-performance flexible armors. This study also provides new insights into, and a better understanding of, the mechanics of natural armors such as scales and osteoderms.
A segmentation approach for a delineation of terrestrial ecoregions
NASA Astrophysics Data System (ADS)
Nowosad, J.; Stepinski, T.
2017-12-01
Terrestrial ecoregions are the result of regionalization of land into homogeneous units of similar ecological and physiographic features. Terrestrial Ecoregions of the World (TEW) is a commonly used global ecoregionalization based on expert knowledge and in situ observations. Ecological Land Units (ELUs) is a global classification of 250-meter cells into 4000 types on the basis of the categorical values of four environmental variables. ELUs are automatically calculated and reproducible, but they are not a regionalization, which makes them impractical for GIS-based spatial analysis and for comparison with TEW. We have regionalized terrestrial ecosystems on the basis of patterns of the same variables (land cover, soils, landform, and bioclimate) previously used in ELUs. Considering patterns of categorical variables makes segmentation, and thus regionalization, possible. The original raster datasets of the four variables are first transformed into regular grids of square blocks of cells called eco-sites. Eco-sites are elementary land units containing local patterns of physiographic characteristics and thus assumed to contain a single ecosystem. Next, eco-sites are locally aggregated using a procedure analogous to image segmentation. The procedure optimizes pattern homogeneity of all four environmental variables within each segment. The result is a regionalization of the landmass into land units characterized by a uniform pattern of land cover, soils, landforms, and climate, and, by inference, by a uniform ecosystem. Because several disjoint segments may have very similar characteristics, we cluster the segments to obtain a smaller set of segment types, which we identify with ecoregions. Our approach is automatic, reproducible, updatable, and customizable. It yields the first automatic delineation of ecoregions on the global scale. In the resulting vector database, each ecoregion/segment is described by numerous attributes, which makes it a valuable GIS resource for global ecological and conservation studies.
Feasibility and scalability of spring parameters in distraction enterogenesis in a murine model.
Huynh, Nhan; Dubrovsky, Genia; Rouch, Joshua D; Scott, Andrew; Stelzner, Matthias; Shekherdimian, Shant; Dunn, James C Y
2017-07-01
Distraction enterogenesis has been investigated as a novel treatment for short bowel syndrome (SBS). Given variable intestinal sizes, it is critical to determine safe, translatable spring characteristics in differently sized animal models before clinical use. Nitinol springs have been shown to lengthen intestines in rats and pigs. Here, we show that spring-mediated intestinal lengthening is scalable and feasible in a murine model. A 10-mm nitinol spring was compressed to 3 mm and placed in a 5-mm intestinal segment isolated from continuity in mice. A noncompressed spring placed in a similar fashion served as a control. Spring parameters were proportionally extrapolated from previous spring parameters to accommodate the smaller size of murine intestines. After 2-3 wk, the intestinal segments were examined for size and histology. The experimental group, with spring constants k = 0.2-1.4 N/m, showed intestinal lengthening from 5.0 ± 0.6 mm to 9.5 ± 0.8 mm (P < 0.0001), whereas control segments lengthened from 5.3 ± 0.5 mm to 6.4 ± 1.0 mm (P < 0.02). Diameter increased similarly in both groups. Perforation of the isolated segment was noted when k ≥ 0.8 N/m. Histologically, lengthened segments had increased muscularis thickness and crypt depth in comparison to normal intestine. Nitinol springs with k ≤ 0.4 N/m can safely yield nearly 2-fold distraction enterogenesis in length and diameter in a scalable mouse model. Not only does this study derive safe and translatable spring characteristics in a scalable murine model for patients with short bowel syndrome, it also demonstrates the feasibility of spring-mediated intestinal lengthening in a mouse, which can be used to study underlying mechanisms in the future. Copyright © 2017 Elsevier Inc. All rights reserved.
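For intuition about why the spring constant matters, a back-of-the-envelope Hooke's-law sketch (assuming ideal linear behavior, which a superelastic nitinol spring only approximates) gives the restoring force at full compression of the 10-mm spring to 3 mm:

```python
def spring_force_mN(k_N_per_m, compression_mm):
    """Restoring force of a linear spring, F = k * x, in millinewtons."""
    return k_N_per_m * (compression_mm / 1000.0) * 1000.0

# 10-mm spring compressed to 3 mm -> 7 mm of stored displacement
safe = spring_force_mN(0.4, 7.0)   # upper end of the reported safe range
risky = spring_force_mN(0.8, 7.0)  # perforation was seen at k >= 0.8 N/m
```

So the safe springs exert on the order of a few millinewtons on the murine intestinal wall, and doubling k doubles that force.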
A NDVI assisted remote sensing image adaptive scale segmentation method
NASA Astrophysics Data System (ADS)
Zhang, Hong; Shen, Jinxiang; Ma, Yanmei
2018-03-01
Multiscale segmentation can effectively delineate the boundaries of objects at different scales. However, for remote sensing images, which cover wide areas containing complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from remote sensing images. Many experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents an NDVI-assisted adaptive segmentation method for remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For regions consisting of different targets, different segmentation-scale boundaries can be created. The experimental results show that the NDVI-based adaptive segmentation method can effectively create object boundaries for the different ground objects in remote sensing images.
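The NDVI used to guide the segmentation is the standard band ratio (NIR − Red)/(NIR + Red); a minimal sketch on toy reflectance values (the threshold and data below are illustrative, not from the paper):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized difference vegetation index, in [-1, 1]."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 reflectance rasters
nir = np.array([[0.5, 0.6], [0.2, 0.1]])
red = np.array([[0.1, 0.1], [0.2, 0.3]])
v = ndvi(nir, red)
vegetation = v > 0.3  # simple similarity/threshold grouping of pixels
```

The adaptive method in the paper goes further by comparing NDVI similarity within candidate segments to choose a local segmentation scale.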
Computational study of energy filtering effects in one-dimensional composite nano-structures
NASA Astrophysics Data System (ADS)
Kim, Raseong; Lundstrom, Mark S.
2012-01-01
Possibilities to improve the Seebeck coefficient S versus electrical conductance G trade-off of diffusive composite nano-structures are explored using an electro-thermal simulation framework based on the non-equilibrium Green's function method for quantum electron transport and the lattice heat diffusion equation. We examine the role of the grain size d, potential barrier height ΦB, grain doping, and the lattice thermal conductivity κL using a one-dimensional model structure. For a uniform κL, simulation results show that the power factor of a composite structure may be improved over bulk with the optimum ΦB being about kBT, where kB and T are the Boltzmann constant and the temperature, respectively. An optimum ΦB occurs because the current flow near the Fermi level is not obstructed too much while S still improves due to barriers. The optimum grain size dopt is significantly longer than the momentum relaxation length λp so that G is not seriously degraded due to the barriers, and dopt is comparable to or somewhat larger than the energy relaxation length λE so that the carrier energy is not fully relaxed within the grain and |S| remains high. Simulation results also show that if κL in the barrier region is smaller than in the grain, S and power factor are further improved. In such cases, the optimum ΦB and dopt increase, and the power factor may improve even for ΦB (d) significantly higher (longer) than kBT (λE). We find that the results from this quantum mechanical approach are readily understood using a simple, semi-classical model.
Gourdine, J L; Sørensen, A C; Rydhmer, L
2012-01-01
Selection progress must be carefully balanced against the conservation of genetic variation in small populations of local breeds. Well-defined breeding programs with specified selection traits are rare in local pig breeds. Given the small population size, the focus is often on the management of genetic diversity. However, in local breeds, optimum contribution selection (OCS) can be applied to control the rate of inbreeding and to avoid reduced performance in traits with high market value. The aim of this study was to assess the extent to which a breeding program aiming for improved product quality in a small local breed would be feasible. We used stochastic simulations to compare 25 scenarios. The scenarios differed in population size, selection intensity of boars, type of selection (random selection, truncation selection based on BLUP breeding values, or optimum contribution selection based on BLUP breeding values), and heritability of the selection trait. It was assumed that the local breed is used in an extensive system for a high-meat-quality market. The simulations showed that in the smallest population (300 female reproducers), inbreeding increased by 0.8% when selection was performed at random. With OCS, genetic progress can be achieved that is almost as great as that with truncation selection based on BLUP breeding values (0.2 to 0.5 vs. 0.3 to 0.5 genetic SD, P < 0.05), but at a considerably decreased rate of inbreeding (0.7 to 1.2 vs. 2.3 to 5.7%, P < 0.01). This confirmation of the potential utility of OCS even in small populations is important in the context of sustainable management and the use of animal genetic resources.
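For context on the inbreeding rates discussed, the classical Wright approximation relates the per-generation rate of inbreeding under random mating to the numbers of male and female parents. A sketch with illustrative herd sizes (not the paper's simulated populations):

```python
def delta_F(n_sires, n_dams):
    """Approximate rate of inbreeding per generation under random mating
    with unequal numbers of male and female parents (Wright):
    dF = 1/(8*Nm) + 1/(8*Nf)."""
    return 1.0 / (8 * n_sires) + 1.0 / (8 * n_dams)

# e.g. 15 boars and 300 sows
dF = delta_F(15, 300)  # about 0.9% per generation
```

The formula shows why the limited number of boars, not the sows, dominates the inbreeding rate in such herds, and hence why selection intensity of boars is a key scenario parameter.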
Arabic OCR: toward a complete system
NASA Astrophysics Data System (ADS)
El-Bialy, Ahmed M.; Kandil, Ahmed H.; Hashish, Mohamed; Yamany, Sameh M.
1999-12-01
Latin and Chinese OCR systems have been studied extensively in the literature, yet little work has been done on Arabic character recognition. This is due to the technical challenges posed by Arabic text: because of its cursive nature, a powerful and stable text segmentation is needed, and features capturing the characteristics of the rich Arabic character representation are needed to build an Arabic OCR. In this paper a novel segmentation technique which is font- and size-independent is introduced. This technique can segment a cursive written text line even if the line suffers from small skew. The technique is not sensitive to the location of the centerline of the text line and can segment different font sizes and types (for different character sets) occurring on the same line. Feature extraction is considered one of the most important phases of a text reading system. Ideally, the features extracted from a character image should capture the essential characteristics of that character independently of font type and size; in such an ideal case, the classifier stores a single prototype per character. However, finding such an ideal set of features is practically challenging. In this paper, a set of features that reflects the topological aspects of Arabic characters is proposed. These features, integrated with a topological matching technique, yield an Arabic text reading system that is semi-omnifont.
Local site preference rationalizes disentangling by DNA topoisomerases
NASA Astrophysics Data System (ADS)
Liu, Zhirong; Zechiedrich, Lynn; Chan, Hue Sun
2010-03-01
To rationalize the disentangling action of type II topoisomerases, an improved wormlike DNA model was used to delineate the degree of unknotting and decatenating achievable by selective segment passage at specific juxtaposition geometries and to determine how these activities were affected by DNA circle size and solution ionic strength. We found that segment passage at hooked geometries can reduce knot populations as dramatically as seen in experiments. Selective segment passage also provided theoretical underpinning for an intriguing empirical scaling relation between unknotting and decatenating potentials.
Shahbazi, Mohammad-Ali; Hamidi, Mehrdad
2013-11-01
Today, developing an optimized nanoparticle (NP) preparation procedure is of paramount importance in all nanoparticulate drug delivery research, leading to more effective and clinically validated nanomedicines. In this study, a one-at-a-time experimental approach was used to evaluate the effect of various preparation factors on the size, loading, and drug release of hydrogel NPs prepared by ionotropic gelation between heparin and chitosan. The size, loading efficiency (LE), and drug release profile of the NPs were evaluated while the chitosan molecular weight, chitosan concentration, heparin addition time, heparin concentration, pH of the chitosan solution, temperature, and mixing rate were changed one at a time, with the other factors held at their optimum. The results showed that size and LE are highly influenced by chitosan concentration, reaching optima of 63 ± 0.57 and 75.19 ± 2.65, respectively, at a chitosan concentration of 0.75 mg/ml. A heparin addition time of 3 min led to 74.1 ± 0.79% LE, with no appreciable effect on size or release profile. In addition, pH 5.5 gave a minimum size of 63 ± 1.87, a maximum LE of 73.81 ± 3.13, and the slowest drug release, 63.71 ± 3.84% over one week. Although LE was not affected by temperature, size and release decreased to 63 ± 0 and 74.21 ± 1.99%, respectively, when the temperature increased from 25°C to 55°C. Also, increasing the mixing rate from 500 to 3500 rpm steadily enhanced LE from 58.3 ± 3.6 to 74.4 ± 2.59 and remarkably decreased size from 148 ± 4.88 to 63 ± 2.64.
In-situ polymerized PLOT columns III: divinylbenzene copolymers and dimethacrylate homopolymers
NASA Technical Reports Server (NTRS)
Shen, T. C.; Fong, M. M.
1994-01-01
Studies of divinylbenzene copolymers and dimethacrylate homopolymers indicate that the polymer pore size controls the separation of water and ammonia on porous-layer-open-tubular (PLOT) columns. To a lesser degree, the polarity of the polymers also affects the separation of a water-ammonia gas mixture. Our results demonstrate that the pore size can be regulated by controlling the cross-linking density or the chain length between the cross-linking functional groups. An optimum pore size will provide the best separation of water and ammonia.
Sekido, Kota; Kitaori, Noriyuki
2008-12-01
A small-sized generator of ozonated water was developed using an electro-conductive diamond. We studied the optimum conditions for producing ozonated water. As a result, we developed a small-sized generator of ozonated water driven by a dry-cell for use in the average household. This generator was easily able to produce ozonated water with an ozone concentration (over 4 mg/L) sufficient for disinfection. In addition, we verified the high disinfecting performance of the water produced in an actual hospital.
Circuit-level optimisation of a:Si TFT-based AMOLED pixel circuits for maximum hold current
NASA Astrophysics Data System (ADS)
Foroughi, Aidin; Mehrpoo, Mohammadreza; Ashtiani, Shahin J.
2013-11-01
Design of AMOLED pixel circuits has manifold constraints and trade-offs which provides incentive for circuit designers to seek optimal solutions for different objectives. In this article, we present a discussion on the viability of an optimal solution to achieve the maximum hold current. A compact formula for component sizing in a conventional 2T1C pixel is, therefore, derived. Compared to SPICE simulation results, for several pixel sizes, our predicted optimum sizing yields maximum currents with errors less than 0.4%.
NASA Astrophysics Data System (ADS)
Andriantahina, Farafidy; Liu, Xiaolin; Huang, Hao; Xiang, Jianhai
2012-03-01
To quantify the response to selection and the heritability of, and genetic correlations between, weight and size in Litopenaeus vannamei, the body weight (BW), total length (TL), body length (BL), first abdominal segment depth (FASD), third abdominal segment depth (TASD), first abdominal segment width (FASW), and partial carapace length (PCL) of 5-month-old parents and of offspring produced by a nested mating design were measured. Seventeen half-sib families and 42 full-sib families of L. vannamei were produced by artificial fertilization, with 2-4 dams per sire, and measured at around five months post-metamorphosis. The results show that heritabilities of various traits were high: 0.515 ± 0.030 for body weight and 0.394 ± 0.030 for total length. After one generation of selection, the selection response was 10.70% for offspring growth. In the 5th month, the realized heritability for weight was 0.296 in the offspring generation. Genetic correlations between body weight and body size were highly variable. The results indicate that external morphological parameters can be applied during breeder selection to enhance growth without sacrificing animals to determine body size and breeding ability, and that selective breeding can significantly improve growth and production simultaneously.
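In a nested sire/dam design like the one described, heritability is typically estimated from variance components, with the sire component estimating one quarter of the additive genetic variance. A sketch with made-up variance components (not the study's estimates):

```python
def sire_model_heritability(var_sire, var_dam, var_residual):
    """Narrow-sense heritability from a nested sire/dam design:
    the sire variance estimates 1/4 of the additive variance, so
    h2 = 4 * var_sire / (var_sire + var_dam + var_residual)."""
    var_p = var_sire + var_dam + var_residual
    return 4.0 * var_sire / var_p

h2 = sire_model_heritability(0.12, 0.10, 0.78)  # illustrative components
```

Realized heritability, as reported for the offspring generation, is instead the ratio of observed response to the applied selection differential.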
Prinyakupt, Jaroonrut; Pluempitiwiriyawej, Charnchai
2015-06-30
Blood smear microscopic images are routinely investigated by haematologists to diagnose most blood diseases. However, the task is quite tedious and time consuming. An automatic detection and classification of white blood cells within such images can accelerate the process tremendously. In this paper we propose a system to locate white blood cells within microscopic blood smear images, segment them into nucleus and cytoplasm regions, extract suitable features and finally, classify them into five types: basophil, eosinophil, neutrophil, lymphocyte and monocyte. Two sets of blood smear images were used in this study's experiments. Dataset 1, collected from Rangsit University, were normal peripheral blood slides under light microscope with 100× magnification; 555 images with 601 white blood cells were captured by a Nikon DS-Fi2 high-definition color camera and saved in JPG format of size 960 × 1,280 pixels at 15 pixels per 1 μm resolution. In dataset 2, 477 cropped white blood cell images were downloaded from CellaVision.com. They are in JPG format of size 360 × 363 pixels. The resolution is estimated to be 10 pixels per 1 μm. The proposed system comprises a pre-processing step, nucleus segmentation, cell segmentation, feature extraction, feature selection and classification. The main concept of the segmentation algorithm employed uses white blood cell's morphological properties and the calibrated size of a real cell relative to image resolution. The segmentation process combined thresholding, morphological operation and ellipse curve fitting. Consequently, several features were extracted from the segmented nucleus and cytoplasm regions. Prominent features were then chosen by a greedy search algorithm called sequential forward selection. Finally, with a set of selected prominent features, both linear and naïve Bayes classifiers were applied for performance comparison. This system was tested on normal peripheral blood smear slide images from two datasets. 
Two sets of comparisons were performed: segmentation and classification. The automatically segmented results were compared with those obtained manually by a haematologist. The proposed method was found to be consistent and coherent on both datasets, with Dice similarity of 98.9% and 91.6% for the average segmented nucleus and cell regions, respectively. Furthermore, the overall correct classification rate is about 98% and 94% for the linear and naïve Bayes models, respectively. The proposed system, based on normal white blood cell morphology and its characteristics, was applied to two different datasets. The calibrated segmentation process is fast, robust, efficient and coherent on both datasets, while the classification of normal white blood cells into five types shows high sensitivity for both linear and naïve Bayes models, with slightly better results from the linear classifier.
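The abstract above compares linear and naïve Bayes classifiers on selected cell features. As a rough illustration of the naïve Bayes side, a minimal Gaussian naïve Bayes can be written in pure Python; the feature values and class names below are invented for illustration, not taken from the paper:

```python
import math
from collections import defaultdict

def fit_gaussian_nb(samples):
    """samples: list of (feature_vector, label).
    Returns per-class priors and per-feature (mean, variance) estimates."""
    by_class = defaultdict(list)
    for x, y in samples:
        by_class[y].append(x)
    model = {}
    total = len(samples)
    for label, rows in by_class.items():
        n = len(rows)
        stats = []
        for j in range(len(rows[0])):
            col = [r[j] for r in rows]
            mean = sum(col) / n
            var = sum((v - mean) ** 2 for v in col) / n + 1e-9  # avoid zero variance
            stats.append((mean, var))
        model[label] = (n / total, stats)
    return model

def predict(model, x):
    """Pick the class maximizing log prior + sum of Gaussian log-densities."""
    best, best_ll = None, float("-inf")
    for label, (prior, stats) in model.items():
        ll = math.log(prior)
        for v, (mean, var) in zip(x, stats):
            ll += -0.5 * math.log(2 * math.pi * var) - (v - mean) ** 2 / (2 * var)
        if ll > best_ll:
            best, best_ll = label, ll
    return best
```

In a real pipeline the feature vectors would be the selected nucleus/cytoplasm descriptors rather than the two made-up dimensions used here.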
Accurate Detection of Dysmorphic Nuclei Using Dynamic Programming and Supervised Classification.
Verschuuren, Marlies; De Vylder, Jonas; Catrysse, Hannes; Robijns, Joke; Philips, Wilfried; De Vos, Winnok H
2017-01-01
A vast array of pathologies is typified by the presence of nuclei with an abnormal morphology. Dysmorphic nuclear phenotypes feature dramatic size changes or foldings, but also entail much subtler deviations such as nuclear protrusions called blebs. Due to their unpredictable size, shape and intensity, dysmorphic nuclei are often not accurately detected in standard image analysis routines. To enable accurate detection of dysmorphic nuclei in confocal and widefield fluorescence microscopy images, we have developed an automated segmentation algorithm, called Blebbed Nuclei Detector (BleND), which relies on two-pass thresholding for initial nuclear contour detection, and an optimal path finding algorithm, based on dynamic programming, for refining these contours. Using a robust error metric, we show that our method matches manual segmentation in terms of precision and outperforms state-of-the-art nuclear segmentation methods. Its high performance allowed for building and integrating a robust classifier that recognizes dysmorphic nuclei with an accuracy above 95%. The combined segmentation-classification routine is bound to facilitate nucleus-based diagnostics and enable real-time recognition of dysmorphic nuclei in intelligent microscopy workflows.
Video segmentation using keywords
NASA Astrophysics Data System (ADS)
Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet
2018-04-01
In the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieved promising results, but they still depend heavily on annotated frames to distinguish between background and foreground, and creating these frames accurately takes considerable time and effort. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions in the first frame containing objects whose labels match the given keywords. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which show that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. Combining our approach with other methods may further improve this result in the future.
Tensor scale-based fuzzy connectedness image segmentation
NASA Astrophysics Data System (ADS)
Saha, Punam K.; Udupa, Jayaram K.
2003-05-01
Tangible solutions to image segmentation are vital in many medical imaging applications. Toward this goal, a framework based on fuzzy connectedness was developed in our laboratory. A fundamental notion called "affinity" - a local fuzzy hanging-togetherness relation on voxels - determines the effectiveness of this segmentation framework in real applications. In this paper, we introduce the notion of "tensor scale" - a recently developed local morphometric parameter - into the affinity definition and study its effectiveness. Although our previous notion of "local scale" using the spherical model successfully incorporated local structure size into affinity and resulted in measurable improvements in segmentation results, a major limitation of that approach was that it ignored local structural orientation and anisotropy. The current approach of using tensor scale in affinity computation allows effective utilization of local size, orientation, and anisotropy in a unified manner. Tensor scale is used for computing both the homogeneity- and object-feature-based components of affinity. Preliminary results of the proposed method on several medical images and computer-generated phantoms of realistic shapes are presented. Further extensions of this work are discussed.
Size-dependent trophic patterns of pallid sturgeon and shovelnose sturgeon in a large river system
French, William E.; Graeb, Brian D. S.; Bertrand, Katie N.; Chipps, Steven R.; Klumb, Robert A.
2013-01-01
This study compared patterns of δ15N and δ13C enrichment of pallid sturgeon Scaphirhynchus albus and shovelnose sturgeon S. platorynchus in the Missouri River, United States, to infer their trophic position in a large river system. We examined enrichment and energy flow for pallid sturgeon in three segments of the Missouri River (Montana/North Dakota, Nebraska/South Dakota, and Nebraska/Iowa) and made comparisons between species in the two downstream segments (Nebraska/South Dakota and Nebraska/Iowa). Patterns in isotopic composition for pallid sturgeon were consistent with gut content analyses indicating an ontogenetic diet shift from invertebrates to fish prey at sizes of >500-mm fork length (FL) in all three segments of the Missouri River. Isotopic patterns revealed shovelnose sturgeon did not experience an ontogenetic shift in diet and used similar prey resources as small (<500-mm FL) pallid sturgeon in the two downstream segments. We found stable isotope analysis to be an effective tool for evaluating the trophic position of sturgeons within a large river food web.
Cavity contour segmentation in chest radiographs using supervised learning and dynamic programming
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maduskar, Pragnya, E-mail: pragnya.maduskar@radboudumc.nl; Hogeweg, Laurens; Sánchez, Clara I.
Purpose: Efficacy of tuberculosis (TB) treatment is often monitored using chest radiography. Monitoring the size of cavities in pulmonary tuberculosis is important, as the size predicts severity of the disease and its persistence under therapy predicts relapse. The authors present a method for automatic cavity segmentation in chest radiographs. Methods: A two-stage method is proposed to segment the cavity borders, given a user-defined seed point close to the center of the cavity. First, a supervised learning approach is employed to train a pixel classifier using texture and radial features to identify the border pixels of the cavity. A likelihood value of belonging to the cavity border is assigned to each pixel by the classifier. The authors experimented with four different classifiers: k-nearest neighbor (kNN), linear discriminant analysis (LDA), GentleBoost (GB), and random forest (RF). Next, the constructed likelihood map was used as an input cost image in the polar-transformed image space for dynamic programming to trace the optimal maximum-cost path. This constructed path corresponds to the segmented cavity contour in image space. Results: The method was evaluated on 100 chest radiographs (CXRs) containing 126 cavities. The reference segmentation was manually delineated by an experienced chest radiologist. An independent observer (a chest radiologist) also delineated all cavities to estimate interobserver variability. The Jaccard overlap measure Ω was computed between the reference segmentation and the automatic segmentation, and between the reference segmentation and the independent observer's segmentation, for all cavities. A median overlap Ω of 0.81 (0.76 ± 0.16) was achieved between the reference and the automatic segmentation, and 0.85 (0.82 ± 0.11) between the segmentations by the two radiologists.
The mean contour distance and Hausdorff distance between the reference and the automatic segmentation were 2.48 ± 2.19 and 8.32 ± 5.66 mm, respectively, whereas these distances were 1.66 ± 1.29 and 5.75 ± 4.88 mm between the segmentations by the reference reader and the independent observer. The automatic segmentations were also visually assessed by two trained CXR readers as “excellent,” “adequate,” or “insufficient.” The readers had good agreement in assessing the cavity outlines, and 84% of the segmentations were rated as “excellent” or “adequate” by both readers. Conclusions: The proposed cavity segmentation technique produced results with a good degree of overlap with manual expert segmentations. The evaluation measures demonstrated that the results approached those of the experienced chest radiologists in terms of overlap and contour distance. Automatic cavity segmentation can be employed in TB clinics for treatment monitoring, especially in resource-limited settings where radiologists are not available.
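The dynamic-programming stage described above traces a maximum-cost path through the polar-transformed likelihood map. A simplified sketch (a plain cost grid with angle columns and radius rows, transitions limited to adjacent radii, and no wrap-around continuity constraint as a real closed contour would need) might look like:

```python
def best_polar_path(cost):
    """cost[r][a]: border likelihood at radius index r, angle index a.
    Returns (total_cost, path) where path[a] is the radius chosen for each
    angle column; consecutive radii may differ by at most 1."""
    n_r, n_a = len(cost), len(cost[0])
    dp = [row[0] for row in cost]               # best path cost ending at radius r
    back = [[0] * n_r for _ in range(n_a)]      # backpointers per angle column
    for a in range(1, n_a):
        new = [0.0] * n_r
        for r in range(n_r):
            # best predecessor among the adjacent radii in the previous column
            cands = [(dp[p], p) for p in (r - 1, r, r + 1) if 0 <= p < n_r]
            best_c, best_p = max(cands)
            new[r] = best_c + cost[r][a]
            back[a][r] = best_p
        dp = new
    # backtrack from the best final radius
    r = max(range(n_r), key=lambda i: dp[i])
    total = dp[r]
    path = [r]
    for a in range(n_a - 1, 0, -1):
        r = back[a][r]
        path.append(r)
    path.reverse()
    return total, path
```

Mapping the resulting per-angle radii back to Cartesian coordinates yields the segmented contour.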
Welter, S; Stöcker, C; Dicken, V; Kühl, H; Krass, S; Stamatis, G
2012-03-01
Segmental resection in stage I non-small cell lung cancer (NSCLC) has been well described and is considered to have survival rates similar to lobectomy, but with increased rates of local tumour recurrence due to inadequate parenchymal margins. In consequence, today segmentectomy is only performed when the tumour is smaller than 2 cm. Three-dimensional reconstructions of bronchopulmonary segments were generated from 11 thin-slice CT scans, and virtual spherical tumours were placed over the segments, respecting all segmental borders. As a next step, virtual parenchymal safety margins of 2 cm and 3 cm were subtracted and the size of the remaining tumour calculated. The maximum tumour diameters with a 30-mm parenchymal safety margin ranged from 26.1 mm in right-sided segments 7 + 8 to 59.8 mm in the left apical segments 1-3. Using a three-dimensional reconstruction of lung CT scans, we demonstrated that segmentectomy or resection of segmental groups should be feasible with adequate margins, even for larger tumours in selected cases.
New Stopping Criteria for Segmenting DNA Sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wentian
2001-06-18
We propose a solution to the problem of choosing a stopping criterion when segmenting inhomogeneous DNA sequences with complex statistical patterns. This new stopping criterion is based on the Bayesian information criterion within the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units were identified, and a more reasonable number of domains was obtained. We also introduce a measure called segmentation strength, which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.
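A BIC-based stopping criterion of this kind can be illustrated on a toy Bernoulli model of a 0/1 sequence: accept a split only if the log-likelihood gain exceeds half the number of added parameters times log n. This is a generic sketch of the idea, not the paper's exact formulation:

```python
import math

def log_lik(seq):
    """Maximized Bernoulli log-likelihood of a 0/1 sequence."""
    n, k = len(seq), sum(seq)
    ll = 0.0
    for count in (k, n - k):
        if count:
            ll += count * math.log(count / n)
    return ll

def split_improves_bic(seq, i):
    """Accept the split at position i only if the likelihood gain beats the
    BIC penalty; the split adds 2 parameters (an extra rate + the breakpoint)."""
    gain = log_lik(seq[:i]) + log_lik(seq[i:]) - log_lik(seq)
    penalty = 0.5 * 2 * math.log(len(seq))
    return gain > penalty

def best_split(seq):
    """Return the best accepted split point, or None to stop segmenting."""
    best_i, best_gain = None, 0.0
    for i in range(1, len(seq)):
        gain = log_lik(seq[:i]) + log_lik(seq[i:]) - log_lik(seq)
        if gain > best_gain:
            best_i, best_gain = i, gain
    if best_i is not None and split_improves_bic(seq, best_i):
        return best_i
    return None
```

Applied recursively to each accepted half, this yields a segmentation that stops automatically once no split passes the BIC test.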
Segmentation of left atrial intracardiac ultrasound images for image guided cardiac ablation therapy
NASA Astrophysics Data System (ADS)
Rettmann, M. E.; Stephens, T.; Holmes, D. R.; Linte, C.; Packer, D. L.; Robb, R. A.
2013-03-01
Intracardiac echocardiography (ICE), a technique in which structures of the heart are imaged using a catheter navigated inside the cardiac chambers, is an important imaging technique for guidance in cardiac ablation therapy. Automatic segmentation of these images is valuable for guidance and targeting of treatment sites. In this paper, we describe an approach to segment ICE images by generating an empirical model of blood pool and tissue intensities. Normal, Weibull, Gamma, and Generalized Extreme Value (GEV) distributions are fit to histograms of tissue and blood pool pixels from a series of ICE scans. A total of 40 images from 4 separate studies were evaluated. The model was trained and tested using two approaches. In the first approach, the model was trained on all images from 3 studies and subsequently tested on the 40 images from the 4th study. This procedure was repeated 4 times using a leave-one-out strategy. This is termed the between-subjects approach. In the second approach, the model was trained on 10 randomly selected images from a single study and tested on the remaining 30 images in that study. This is termed the within-subjects approach. For both approaches, the model was used to automatically segment ICE images into blood and tissue regions. Each pixel was classified using the Generalized Likelihood Ratio Test across neighborhood sizes ranging from 1 to 49. Automatic segmentation results were compared against manual segmentations for all images. In the between-subjects approach, the GEV distribution with a neighborhood size of 17 was found to be the most accurate, with a misclassification rate of approximately 17%. In the within-subjects approach, the GEV distribution with a neighborhood size of 19 was found to be the most accurate, with a misclassification rate of approximately 15%. As expected, for both methods the majority of misclassified pixels were located near the boundaries between tissue and blood pool regions.
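The classification step above applies a likelihood ratio test over a pixel neighborhood. A simplified sketch using Normal models (the study found GEV fits best, but the decision rule has the same shape; all parameter values here are illustrative) could be:

```python
import math

def gauss_logpdf(x, mean, var):
    """Log-density of a Normal(mean, var) at x."""
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def classify_neighborhood(values, blood, tissue):
    """Label a pixel from its neighborhood intensities via a log-likelihood
    ratio test between fitted blood-pool and tissue models.
    blood, tissue: (mean, variance) pairs of the two intensity models."""
    llr = sum(gauss_logpdf(v, *blood) - gauss_logpdf(v, *tissue) for v in values)
    return "blood" if llr > 0 else "tissue"
```

In the paper's setting the two models would instead be GEV fits to the training histograms, and the neighborhood size would be swept from 1 to 49 to find the best-performing window.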
Influence parameters of impact grinding mills
NASA Technical Reports Server (NTRS)
Hoeffl, K.; Husemann, K.; Goldacker, H.
1984-01-01
Significant parameters for impact grinding mills were investigated. Final particle size was used to evaluate grinding results. Adjustment of the parameters toward increased charge load results in improved efficiency; however, it was not possible to define a single, unified set of optimum grinding conditions.
Homopolyrotaxanes and Homopolyrotaxane Networks of PEO
NASA Technical Reports Server (NTRS)
Pugh, Coleen; Mattice, Wayne
2005-01-01
In order to identify the optimum size of macrocrown ether for threading, we first investigated the size and shape of simple crown ethers in the melt at 373 K, and their extent of threading with PEO in the melt using coarse-grained Monte Carlo simulations on the 2nnd (second nearest neighbor diamond) lattice, which is a high coordination lattice whose coarse-grained chains can be reverse mapped into fully atomistic models in continuous space.
The Soviet Population Policy Debate: Actors and Issues,
1986-12-01
the usual fertility and mortality trends, he gave estimates of their effect on the size of the working age population (20-59 years of age) in...toward limitation of family size. [62] Litvinova, a vocal member of the differentiated-policy school, published again in the journal of her institute...Ryabushkin regretted that population had not been incorporated in the system of concepts and categories expressing the optimum planning mechanism. [15]
Han, Lin; Zhou, Jing; Sun, Yubing; Zhang, Yu; Han, Jung; Fu, Jianping; Fan, Rong
2014-11-01
Single-crystalline nanoporous gallium nitride (GaN) thin films were fabricated with the pore size readily tunable from 20 to 100 nm. Uniform adhesion and spreading of human mesenchymal stem cells (hMSCs) seeded on these thin films peak on the surface with a pore size of 30 nm, and substantial cell elongation emerges as the pore size increases to ∼80 nm. The osteogenic differentiation of hMSCs occurs preferentially on the films with 30 nm nanopores, correlating with the optimum condition for cell spreading; this suggests that adhesion, spreading, and stem cell differentiation are interlinked and might be coregulated by nanotopography.
Gebreyesus, Grum; Lund, Mogens S; Buitenhuis, Bart; Bovenhuis, Henk; Poulsen, Nina A; Janss, Luc G
2017-12-05
Accurate genomic prediction requires a large reference population, which is problematic for traits that are expensive to measure. Traits related to milk protein composition are not routinely recorded due to costly procedures and are considered to be controlled by a few quantitative trait loci of large effect. The amount of variation explained may vary between regions leading to heterogeneous (co)variance patterns across the genome. Genomic prediction models that can efficiently take such heterogeneity of (co)variances into account can result in improved prediction reliability. In this study, we developed and implemented novel univariate and bivariate Bayesian prediction models, based on estimates of heterogeneous (co)variances for genome segments (BayesAS). Available data consisted of milk protein composition traits measured on cows and de-regressed proofs of total protein yield derived for bulls. Single-nucleotide polymorphisms (SNPs), from 50K SNP arrays, were grouped into non-overlapping genome segments. A segment was defined as one SNP, or a group of 50, 100, or 200 adjacent SNPs, or one chromosome, or the whole genome. Traditional univariate and bivariate genomic best linear unbiased prediction (GBLUP) models were also run for comparison. Reliabilities were calculated through a resampling strategy and using deterministic formula. BayesAS models improved prediction reliability for most of the traits compared to GBLUP models and this gain depended on segment size and genetic architecture of the traits. The gain in prediction reliability was especially marked for the protein composition traits β-CN, κ-CN and β-LG, for which prediction reliabilities were improved by 49 percentage points on average using the MT-BayesAS model with a 100-SNP segment size compared to the bivariate GBLUP. Prediction reliabilities were highest with the BayesAS model that uses a 100-SNP segment size. 
The bivariate versions of our BayesAS models resulted in extra gains of up to 6% in prediction reliability compared to the univariate versions. Substantial improvement in prediction reliability was possible for most of the traits related to milk protein composition using our novel BayesAS models. Grouping adjacent SNPs into segments provided enhanced information to estimate parameters and allowing the segments to have different (co)variances helped disentangle heterogeneous (co)variances across the genome.
NASA Astrophysics Data System (ADS)
Das, Sukanta Kumar; Shukla, Ashish Kumar
2011-04-01
Single-frequency users of a satellite-based augmentation system (SBAS) rely on ionospheric models to mitigate the delay due to the ionosphere. The ionosphere is the major source of range and range-rate errors for users of the Global Positioning System (GPS) who require high-accuracy positioning. The purpose of the present study is to develop a tomography model to reconstruct the total electron content (TEC) over the low-latitude Indian region, which lies in the equatorial ionospheric anomaly belt. In the present study, TEC data collected from six TEC collection stations along a longitudinal belt of around 77 degrees are used. The main objective of the study is to find the optimum pixel size that supports a better reconstruction of the electron density, and hence the TEC, over the low-latitude Indian region. The performance of two reconstruction algorithms, the Algebraic Reconstruction Technique (ART) and the Multiplicative Algebraic Reconstruction Technique (MART), is analyzed for pixel sizes varying from 1 to 6 degrees in latitude. It is found from the analysis that the optimum pixel size is 5° × 50 km over the Indian region using both the ART and MART algorithms.
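ART, one of the two algorithms compared above, sweeps through the ray equations and projects the current electron-density estimate onto each one in turn (the Kaczmarz method). A minimal sketch on a generic linear system A·x = b (MART differs in using multiplicative rather than additive updates):

```python
def art_sweep(A, x, b, relax=1.0):
    """One sweep of the Algebraic Reconstruction Technique (Kaczmarz):
    for each ray i, project the estimate x onto the hyperplane A[i]·x = b[i].
    A: list of rows, x: current estimate, b: measured line integrals."""
    for row, bi in zip(A, b):
        norm = sum(a * a for a in row)
        if norm == 0:
            continue  # ray touches no pixels
        resid = (bi - sum(a * xi for a, xi in zip(row, x))) / norm
        x = [xi + relax * resid * a for a, xi in zip(row, x)]
    return x
```

In the tomography setting each row of A holds the path lengths of one satellite-to-receiver ray through the latitude/altitude pixels, and repeated sweeps drive x toward electron densities consistent with the measured TEC.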
Masoumi, Hamid Reza Fard; Basri, Mahiran; Samiun, Wan Sarah; Izadiyan, Zahra; Lim, Chaw Jiang
2015-01-01
Aripiprazole is considered as a third-generation antipsychotic drug with excellent therapeutic efficacy in controlling schizophrenia symptoms and was the first atypical anti-psychotic agent to be approved by the US Food and Drug Administration. Formulation of nanoemulsion-containing aripiprazole was carried out using high shear and high pressure homogenizers. Mixture experimental design was selected to optimize the composition of nanoemulsion. A very small droplet size of emulsion can provide an effective encapsulation for delivery system in the body. The effects of palm kernel oil ester (3-6 wt%), lecithin (2-3 wt%), Tween 80 (0.5-1 wt%), glycerol (1.5-3 wt%), and water (87-93 wt%) on the droplet size of aripiprazole nanoemulsions were investigated. The mathematical model showed that the optimum formulation for preparation of aripiprazole nanoemulsion having the desirable criteria was 3.00% of palm kernel oil ester, 2.00% of lecithin, 1.00% of Tween 80, 2.25% of glycerol, and 91.75% of water. Under optimum formulation, the corresponding predicted response value for droplet size was 64.24 nm, which showed an excellent agreement with the actual value (62.23 nm) with residual standard error <3.2%.
NASA Astrophysics Data System (ADS)
Park, Jong Ho; Ahn, Byung Tae
2003-01-01
A failure model for electromigration, based on the "failure unit model", is presented for the prediction of lifetime in metal lines. The failure unit model, which consists of failure units in parallel and series, can predict both the median time to failure (MTTF) and the deviation in the time to failure (DTTF) in Al metal lines, but previously only qualitatively. In our model, the probability functions of the failure unit in both single-grain segments and polygrain segments are considered, instead of in polygrain segments alone. Based on our model, we calculated MTTF, DTTF, and activation energy for different median grain sizes, grain size distributions, linewidths, line lengths, current densities, and temperatures. Comparisons between our results and published experimental data showed good agreement, and our model could explain previously unexplained phenomena. Our advanced failure unit model might be further applied to other electromigration characteristics of metal lines.
GPU-based relative fuzzy connectedness image segmentation.
Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W
2013-01-01
Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
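In the fuzzy connectedness framework above, the connectedness of a voxel to a seed is the strength of the best path, where a path's strength is its weakest affinity link. On a small illustrative graph (not the paper's GPU implementation) this max-min strength can be computed with a Dijkstra-like sweep:

```python
import heapq

def connectedness(affinity, seed):
    """affinity: dict node -> list of (neighbor, a) with affinities a in [0, 1].
    Returns, for every reachable node, its fuzzy connectedness to the seed:
    the maximum over paths of the minimum affinity along the path."""
    strength = {seed: 1.0}
    heap = [(-1.0, seed)]                      # max-heap via negated strengths
    while heap:
        s, u = heapq.heappop(heap)
        s = -s
        if s < strength.get(u, 0.0):
            continue                           # stale heap entry
        for v, a in affinity.get(u, []):
            cand = min(s, a)                   # path strength = weakest link
            if cand > strength.get(v, 0.0):
                strength[v] = cand
                heapq.heappush(heap, (-cand, v))
    return strength
```

RFC then labels each voxel with the seed (object or background) to which its connectedness is strictly greater; the GPU version parallelizes the propagation of these strengths.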
Cryopreservation of Living Organs
NASA Astrophysics Data System (ADS)
Tanasawa, Ichiro; Nagata, Shinichi; Kimura, Naohiro
Cryopreservation is considered the most promising way of preserving living organs or tissues for long periods of time without causing any damage to their biological functions. However, cryopreservation has succeeded only for simple, small tissues such as spermatozoa, ova, erythrocytes, bone marrow and cornea; cryopreservation of more complex, larger organs is not yet successful. The authors have attempted to establish a technique for the cryopreservation of larger living organs. An experiment was carried out using daphnia (water fleas). The optimum rates of freezing and thawing were determined, together with the optimum selection of cryoprotectant. A high recovery rate was achieved under these conditions.
Image segmentation by hierarchical agglomeration of polygons using ecological statistics
Prasad, Lakshman; Swaminarayan, Sriram
2013-04-23
A method for rapid hierarchical image segmentation based on perceptually driven contour completion and scene statistics is disclosed. The method begins with an initial fine-scale segmentation of an image, such as obtained by perceptual completion of partial contours into polygonal regions using region-contour correspondences established by Delaunay triangulation of edge pixels as implemented in VISTA. The resulting polygons are analyzed with respect to their size and color/intensity distributions and the structural properties of their boundaries. Statistical estimates of granularity of size, similarity of color, texture, and saliency of intervening boundaries are computed and formulated into logical (Boolean) predicates. The combined satisfiability of these Boolean predicates by a pair of adjacent polygons at a given segmentation level qualifies them for merging into a larger polygon representing a coarser, larger-scale feature of the pixel image and collectively obtains the next level of polygonal segments in a hierarchy of fine-to-coarse segmentations. The iterative application of this process precipitates textured regions as polygons with highly convolved boundaries and helps distinguish them from objects which typically have more regular boundaries. The method yields a multiscale decomposition of an image into constituent features that enjoy a hierarchical relationship with features at finer and coarser scales. This provides a traversable graph structure from which feature content and context in terms of other features can be derived, aiding in automated image understanding tasks. The method disclosed is highly efficient and can be used to decompose and analyze large images.
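The merge decision described above combines Boolean predicates on size granularity, color similarity and boundary saliency. A deliberately toy version of such a predicate, with invented region fields and thresholds, might read:

```python
def should_merge(p, q, boundary_saliency, color_tol=12.0, size_ratio=4.0):
    """Toy Boolean merge predicate for two adjacent regions.
    p, q: dicts with 'size' (pixel count) and 'color' (mean intensity).
    boundary_saliency: strength of the shared boundary in [0, 1].
    Thresholds are illustrative, not from the patent."""
    similar_color = abs(p["color"] - q["color"]) <= color_tol
    comparable_size = max(p["size"], q["size"]) / min(p["size"], q["size"]) <= size_ratio
    weak_boundary = boundary_saliency < 0.5
    return similar_color and (comparable_size or weak_boundary)
```

At each level of the hierarchy, every adjacent polygon pair satisfying the combined predicate would be merged, and the statistics recomputed on the coarser segmentation before the next pass.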
Simulation and optimum design of hybrid solar-wind and solar-wind-diesel power generation systems
NASA Astrophysics Data System (ADS)
Zhou, Wei
Solar and wind energy systems are considered promising power generating sources due to their availability and topological advantages for local power generation. However, a drawback common to the solar and wind options is their unpredictable nature and dependence on weather: used alone, each system would have to be oversized to be made completely reliable. Fortunately, the problems caused by the variable nature of these resources can be partially overcome by integrating the two in a proper combination to form a hybrid system. With the increased complexity compared to single-source systems, however, optimum design of a hybrid system becomes more complicated, and an optimal sizing method is necessary to utilize the renewable energy resources efficiently and economically. This thesis developed an optimal sizing method to find the global optimum configuration of stand-alone hybrid (both solar-wind and solar-wind-diesel) power generation systems. Using a Genetic Algorithm (GA), the optimal sizing method calculates the system configuration that guarantees the lowest investment with full use of the PV array, wind turbine and battery bank. For the hybrid solar-wind system, the optimal sizing method is based on the Loss of Power Supply Probability (LPSP) and Annualized Cost of System (ACS) concepts. The optimization procedure aims to find the configuration that yields the best compromise between the two considered objectives: LPSP and ACS. The decision variables optimized in the process are the PV module capacity, wind turbine capacity, battery capacity, PV module slope angle and wind turbine installation height. For the hybrid solar-wind-diesel system, minimization of the system cost is achieved not only by selecting an appropriate system configuration, but also by finding a suitable control strategy (starting and stopping points) for the diesel generator.
The optimal sizing method was developed to find the system optimum configuration and settings that achieve the custom-required Renewable Energy Fraction (fRE) of the system with minimum Annualized Cost of System (ACS). Due to the need for optimum design of the hybrid systems, an analysis of local weather conditions (solar radiation and wind speed) was carried out for the potential installation site, and mathematical simulation of the hybrid systems' components, including the PV array, wind turbine and battery bank, was also carried out. By statistically analyzing long-term hourly solar radiation and wind speed data, the Hong Kong area was found to have favorable solar and wind power resources compared with other areas, which supports practical applications in the Hong Kong and Guangdong area. Simulation of PV array performance includes three main parts: modeling the maximum power output of the PV array, calculating the total solar radiation on a tilted surface of any orientation, and predicting PV module temperature. Five parameters are introduced to account for the complex dependence of PV array performance on solar radiation intensity and PV module temperature. The developed simulation model was validated using field-measured data from an existing building-integrated photovoltaic (BIPV) system in Hong Kong, and good simulation performance was achieved. Lead-acid batteries used in hybrid systems operate under very specific conditions, which often make it difficult to predict when energy will be extracted from or supplied to the battery. In this thesis, lead-acid battery performance is simulated by three different characteristics: battery state of charge (SOC), battery floating charge voltage and the expected battery lifetime. Good agreement was found between the predicted values and the field-measured data of a hybrid solar-wind project.
Finally, a 19.8 kW hybrid solar-wind power generation project, designed by the optimal sizing method and set up to supply power for a telecommunication relay station on a remote island of Guangdong province, was studied. Simulation and experimental results on the operating performance and characteristics of the project demonstrate the feasibility and accuracy of the recommended optimal sizing method developed in this thesis.
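The LPSP/ACS trade-off at the core of the sizing method can be illustrated with a toy calculation. All unit costs, the 24-hour resource and load profiles, and the reliability target below are invented placeholders, and a simple grid search stands in for the thesis's Genetic Algorithm:

```python
# Toy illustration of LPSP-constrained, cost-minimizing sizing.
# All unit costs and the 24-hour resource/load profiles are assumed
# placeholders; the thesis itself uses a Genetic Algorithm, while this
# sketch simply enumerates a small grid of candidate configurations.

def lpsp(pv_kw, wind_kw, batt_kwh, solar, wind, load):
    """Loss of Power Supply Probability over one hourly profile."""
    soc, unmet, total = batt_kwh, 0.0, 0.0  # battery starts full
    for s, w, l in zip(solar, wind, load):
        net = pv_kw * s + wind_kw * w - l   # hourly energy balance (kWh)
        if net >= 0:
            soc = min(batt_kwh, soc + net)  # charge, capped at capacity
        else:
            draw = min(soc, -net)           # discharge what the battery has
            soc -= draw
            unmet += -net - draw            # remainder is unserved load
        total += l
    return unmet / total

def acs(pv_kw, wind_kw, batt_kwh):
    """Annualized Cost of System with hypothetical unit costs."""
    return 200 * pv_kw + 150 * wind_kw + 50 * batt_kwh

solar = [0, 0, 0, 0, 0, 0.1, 0.3, 0.5, 0.7, 0.8, 0.9, 1.0,
         1.0, 0.9, 0.8, 0.6, 0.4, 0.2, 0.1, 0, 0, 0, 0, 0]
wind = [0.4] * 24      # flat wind capacity factor
load = [1.0] * 24      # flat 1 kWh/h demand

best = None
for pv in range(11):
    for wt in range(11):
        for b in range(0, 21, 5):
            if lpsp(pv, wt, b, solar, wind, load) <= 0.01:  # reliability target
                cost = acs(pv, wt, b)
                if best is None or cost < best[0]:
                    best = (cost, pv, wt, b)

print(best)  # cheapest (cost, pv_kw, wind_kw, batt_kwh) meeting the LPSP target
```

A GA explores the same feasible set far more efficiently when the grid is replaced by continuous decision variables such as slope angle and hub height.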
Ripple-aware optical proximity correction fragmentation for back-end-of-line designs
NASA Astrophysics Data System (ADS)
Wang, Jingyu; Wilkinson, William
2018-01-01
Accurate characterization of image rippling is critical for early detection of back-end-of-line (BEOL) patterning weak points, as most defects are strongly associated with excessive rippling that is not effectively compensated by optical proximity correction (OPC). We correlate image contours with design shapes to account for the design geometry-dependent rippling signature, and explore best practices of OPC fragmentation for BEOL geometries. Specifically, we predict the optimum contour as allowed by the lithographic process and illumination conditions and locate ripple peaks, valleys, and inflection points. This allows us to identify potential process weak points and segment the mask accordingly to achieve the best correction results.
Project W-320, 241-C-106 sluicing electrical calculations, Volume 2
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bailey, J.W.
1998-08-07
This supporting document has been prepared to make the FDNW calculations for Project W-320 readily retrievable. These calculations are required: to determine the power requirements needed to power electrical heat tracing segments contained within three manufactured insulated tubing assemblies; to verify thermal adequacy of the tubing assembly selection by others; to size the heat tracing feeder and branch circuit conductors and conduits; to size protective circuit breakers and fuses; and to accomplish thermal design for two electrical heat tracing segments, one at C-106 tank riser 7 (CCTV) and one at the exhaust hatchway (condensate drain). Contents include: C-Farm electrical heat tracing; cable ampacity, lighting, conduit fill and voltage drop; and control circuit sizing and voltage drop analysis for the seismic shutdown system.
Fate of return activated sludge after ozonation: an optimization study for sludge disintegration.
Demir, Ozlem; Filibeli, Ayse
2012-09-01
The effects of ozonation on sludge disintegration should be investigated before ozone is applied during biological treatment to minimize excess sludge production. In this study, changes in sludge and supernatant after ozonation of return activated sludge were investigated for seven different ozone doses. Based on the disintegration degree, the optimum ozone dose, which avoids inhibition of ozonation while limiting ozone cost, was determined as 0.05 g O3/g TS. Suspended solids and volatile suspended solids concentrations of the sludge decreased by 77.8% and 71.6%, respectively, at the optimum dose. Ozonation significantly decomposed sludge flocs. The release of cell contents was demonstrated by the increase in supernatant total nitrogen (TN) and total phosphorus (TP): TN increased from 7 mg/L to 151 mg/L and TP from 8.8 to 33 mg/L at the optimum dose. The dewaterability and filterability characteristics of the ozonated sludge were also examined. Capillary suction time increased with increasing ozone dosage, but specific resistance to filtration increased to a specific value and then decreased dramatically. The particle size distribution changed significantly as a result of floc disruption at the optimum dose of 0.05 g O3/g TS.
Guaranteed Discrete Energy Optimization on Large Protein Design Problems.
Simoncini, David; Allouche, David; de Givry, Simon; Delmas, Céline; Barbe, Sophie; Schiex, Thomas
2015-12-08
In Computational Protein Design (CPD), assuming a rigid backbone and an amino-acid rotamer library, the problem of finding a sequence with an optimal conformation is NP-hard. In this paper, using Dunbrack's rotamer library and the Talaris2014 decomposable energy function, we use an exact deterministic method combining branch and bound, arc consistency, and tree decomposition to provably identify the global minimum energy sequence-conformation on full-redesign problems, defining search spaces of size up to 10^234. This is achieved on a single core of a standard computing server, requiring a maximum of 66 GB of RAM. A variant of the algorithm is able to exhaustively enumerate all sequence-conformations within an energy threshold of the optimum. These proven optimal solutions are then used to evaluate the frequencies and amplitudes, in energy and sequence, at which an existing CPD-dedicated simulated annealing implementation may miss the optimum on these full-redesign problems. The probability of finding the optimum drops close to 0 very quickly. In the worst case, despite 1,000 repeats, the annealing algorithm remained more than 1 Rosetta unit away from the optimum, yielding designed sequences that differed from the optimal sequence by more than 30% of their amino acids.
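The contrast between a proven optimum and a stochastic search can be illustrated on a toy discrete problem. This is not the paper's CPD machinery: the pairwise energies below are random placeholders standing in for a decomposable energy function, and the problem is small enough that exhaustive enumeration is feasible:

```python
import itertools
import math
import random

# Toy illustration: exhaustive enumeration provably finds the global
# minimum of a pairwise-decomposable energy, while a short simulated
# annealing run may miss it.  Energies are random placeholders.

random.seed(1)
n_pos, n_choices = 6, 4  # 6 positions, 4 "rotamers" each -> 4^6 states
pair_e = {(i, j): [[random.gauss(0, 1) for _ in range(n_choices)]
                   for _ in range(n_choices)]
          for i in range(n_pos) for j in range(i + 1, n_pos)}

def energy(conf):
    """Sum of pairwise energy terms for one conformation."""
    return sum(pair_e[(i, j)][conf[i]][conf[j]]
               for i in range(n_pos) for j in range(i + 1, n_pos))

# Exhaustive search: a guaranteed global optimum at this toy size.
best_conf = min(itertools.product(range(n_choices), repeat=n_pos), key=energy)
e_opt = energy(best_conf)

# One short simulated-annealing run: may or may not reach e_opt.
conf = [random.randrange(n_choices) for _ in range(n_pos)]
e = energy(conf)
for step in range(200):
    t = 2.0 * (1 - step / 200) + 0.01          # cooling schedule
    i, c = random.randrange(n_pos), random.randrange(n_choices)
    trial = conf.copy()
    trial[i] = c
    de = energy(trial) - e
    if de < 0 or random.random() < math.exp(-de / t):  # Metropolis rule
        conf, e = trial, e + de

print(round(e_opt, 3), round(e, 3))  # SA energy is >= the proven optimum
```

On the paper's real problems the enumeration side is replaced by branch and bound with arc consistency and tree decomposition, which is what makes search spaces of 10^234 tractable.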
Optimum structure of Whipple shield against hypervelocity impact
NASA Astrophysics Data System (ADS)
Lee, M.
2014-05-01
Hypervelocity impact of a spherical aluminum projectile onto two spaced aluminum plates (Whipple shield) was simulated to estimate the optimum structure. A Smooth Particle Hydrodynamics (SPH) code with a unique migration scheme from a rectangular to an axisymmetric coordinate system was used. The ratio of front plate thickness to sphere diameter varied from 0.06 to 0.48, and the impact velocity considered was 6.7 km/s. To validate the early-stage simulation, the shapes of the simulated debris clouds were first compared with previously published experimental pictures, showing good agreement. Next, the debris cloud expansion angle was predicted; it shows a maximum value of 23 degrees at a front bumper thickness to sphere diameter ratio of 0.23. The critical sphere diameter causing failure of the rear wall was also examined while keeping the total thickness of the two plates constant. There exists an optimum thickness ratio of front bumper to rear wall, identified as a function of the size combination of the impacting body and the front and rear plates. This study, correlating debris cloud expansion with the optimum thickness ratio, provides good insight into hypervelocity impact on spaced target systems.
Kim, Eunjong; Lee, Dong-Hyun; Won, Seunggun; Ahn, Heekwon
2016-01-01
Moisture content influences physiological characteristics of microbes and physical structure of solid matrices during composting of animal manure. If moisture content is maintained at a proper level, aerobic microorganisms show more active oxygen consumption during composting due to increased microbial activity. In this study, optimum moisture levels for composting of two bedding materials (sawdust, rice hull) and two different mixtures of bedding and beef manure (BS, Beef cattle manure+sawdust; BR, Beef cattle manure+rice hull) were determined based on oxygen uptake rate measured by a pressure sensor method. A broad range of oxygen uptake rates (0.3 to 33.3 mg O2/g VS d) were monitored as a function of moisture level and composting feedstock type. The maximum oxygen consumption of each material was observed near the saturated condition, which ranged from 75% to 98% of water holding capacity. The optimum moisture content of BS and BR were 70% and 57% on a wet basis, respectively. Although BS’s optimum moisture content was near saturated state, its free air space kept a favorable level (above 30%) for aerobic composting due to the sawdust’s coarse particle size and bulking effect. PMID:26954138
Segmental maxillary distraction with a novel device for closure of a wide alveolar cleft
Bousdras, Vasilios A.; Liyanage, Chandra; Mars, Michael; Ayliffe, Peter R
2014-01-01
Treatment of a wide alveolar cleft with initial application of segmental distraction osteogenesis is reported, in order to minimise cleft size prior to secondary alveolar bone grafting. The lesser maxillary segment was mobilised with osteotomy at Le Fort I level and, a novel distractor, facilitated horizontal movement of the dental/alveolar segment along the curvature of the maxillary dental arch. Following a latency period of 4 days distraction was applied for 7 days at a rate of 0.5 mm twice daily. Radiographic, ultrasonographic and clinical assessment revealed new bone and soft tissue formation 8 weeks after completion of the distraction phase. Overall the maxillary segment did move minimising the width of the cleft, which allowed successful closure with a secondary alveolar bone graft. PMID:24987601
Shultz, Rebecca; Jenkyn, Thomas
2012-01-01
Measuring individual foot joint motions requires a multi-segment foot model, even when the subject is wearing a shoe. Each foot segment must be tracked with at least three skin-mounted markers, but for these markers to be visible to an optical motion capture system, holes or 'windows' must be cut into the structure of the shoe. The holes must be large enough to avoid interfering with the markers, but small enough not to compromise the shoe's structural integrity. The objective of this study was to determine the maximum size of hole that could be cut into a running shoe upper without significantly compromising its structural integrity or changing the kinematics of the foot within the shoe. Three shoe designs were tested: (1) neutral cushioning, (2) motion control and (3) stability shoes. Holes were cut progressively larger, with four sizes tested in all. Four foot joint motions were measured: (1) hindfoot with respect to midfoot in the frontal plane, (2) forefoot twist with respect to midfoot in the frontal plane, (3) the height-to-length ratio of the medial longitudinal arch and (4) the hallux angle with respect to the first metatarsal in the sagittal plane. A single subject performed level walking at her preferred pace in each of the three shoes, with ten repetitions for each hole size. The largest hole that did not disrupt shoe integrity was an oval of 1.7 cm × 2.5 cm. The smallest shoe deformations were seen with the motion control shoe. The least change in foot joint motion was forefoot twist in both the neutral and stability shoes for any hole size. This study demonstrates that with holes smaller than this size, optical motion capture with a cluster-based multi-segment foot model is feasible for measuring foot-in-shoe kinematics in vivo.
ERIC Educational Resources Information Center
Smolansky, Bettie M.
The question of whether the market for administrators is segmented by institutional types (i.e., region, affiliation, size, mission, and resource level) was investigated. One facet of the research was the applicability of segmentation theory to the occupational labor market for college managers. Principal data were provided by career histories of…
Valley segments, stream reaches, and channel units [Chapter 2
Peter A. Bisson; David R. Montgomery; John M. Buffington
2006-01-01
Valley segments, stream reaches, and channel units are three hierarchically nested subdivisions of the drainage network (Frissell et al. 1986), falling in size between landscapes and watersheds (see Chapter 1) and individual point measurements made along the stream network (Table 2.1; also see Chapters 3 and 4). These three subdivisions compose the habitat for large,...
Packaging of electronic modules
NASA Technical Reports Server (NTRS)
Katzin, L.
1966-01-01
Study of design approaches that are taken toward optimizing the packaging of electronic modules with respect to size, shape, component orientation, interconnections, and structural support. The study does not present a solution to specific packaging problems, but rather the factors to be considered to achieve optimum packaging designs.
NASA Astrophysics Data System (ADS)
Sun, Zhiming; Hu, Zhibo; Yan, Yang; Zheng, Shuilin
2014-09-01
TiO2/purified diatomite composite materials were prepared through a modified hydrolysis-deposition method at low temperature, using titanium tetrachloride as the precursor, combined with a calcination crystallization process. The microstructure and crystalline phases of composites prepared under different conditions were characterized by high-resolution scanning electron microscopy (SEM) and X-ray diffraction (XRD), respectively. The photocatalytic performance of the composites was evaluated using Rhodamine B as the target pollutant under UV irradiation, and the optimum preparation conditions were obtained. The TiO2 crystal form in composites prepared under optimum conditions was anatase, with a grain size of 34.12 nm. The relationships between structure and properties of the composite materials were analyzed and discussed. The results indicate that the TiO2 nanoparticles were uniformly dispersed on the surface of the diatoms, and that the photocatalytic performance of the composites was mainly determined by the dispersity and grain size of the loaded TiO2 nanoparticles.
Aniesrani Delfiya, D S; Thangavel, K; Amirtham, D
2016-04-01
In this study, acetone was used as a desolvating agent to prepare curcumin-loaded egg albumin nanoparticles. Response surface methodology was employed to analyze the influence of the process parameters, namely concentration (5-15% w/v) and pH (5-7) of the egg albumin solution, on solubility, curcumin loading and entrapment efficiency, nanoparticle yield and particle size. The optimum processing conditions obtained from response surface analysis were an egg albumin solution concentration of 8.85% w/v and pH of 5. At this optimum condition, a solubility of 33.57%, curcumin loading of 4.125%, curcumin entrapment efficiency of 55.23%, yield of 72.85% and particle size of 232.6 nm were obtained, and these values agreed with those predicted by the polynomial model equations. Thus, the model equations generated for each response were validated and can be used to predict the response values at any concentration and pH.
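The response-surface idea can be sketched numerically: fit a second-order polynomial in the two factors to observed responses, then predict at any setting. The data points and the underlying surface below are synthetic placeholders, not the study's measurements:

```python
import numpy as np

# Sketch of second-order response surface fitting for two factors,
#   y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2,
# with (concentration, pH) -> response data.  The sample points and the
# assumed "true" surface are placeholders for illustration only.

rng = np.random.default_rng(42)
conc = rng.uniform(5, 15, 30)   # % w/v
ph = rng.uniform(5, 7, 30)

def true_surface(c, p):
    """Assumed quadratic response, peaked near (9, 5)."""
    return 80 - 0.5 * (c - 9) ** 2 - 10 * (p - 5) ** 2

y = true_surface(conc, ph)

# Least-squares fit of the six polynomial coefficients.
X = np.column_stack([np.ones_like(conc), conc, ph,
                     conc ** 2, ph ** 2, conc * ph])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)

def predict(c, p):
    """Predicted response at any (concentration, pH) setting."""
    return np.array([1, c, p, c ** 2, p ** 2, c * p]) @ beta

print(round(float(predict(8.85, 5.0)), 2))  # close to true_surface(8.85, 5.0)
```

With real data the fit would carry residual error, and the reported optimum would come from maximizing the fitted polynomial over the experimental region.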
Yu, Lei; Li, Haibo; Wan, Weishi; Wei, Zheng; Grzelakowski, Krzysztof P; Tromp, Rudolf M; Tang, Wen-Xin
2017-12-01
The effects of space charge, aberrations and relativity on temporal compression are investigated for a compact spherical electrostatic capacitor (α-SDA). By employing three-dimensional (3D) field simulation and a 3D space charge model based on the numerical General Particle Tracer and SIMION, we map the compression efficiency for a wide range of initial beam sizes and single-pulse electron numbers, and determine the optimum conditions of electron pulses for the most effective compression. The results demonstrate that both space charge effects and aberrations prevent the compression of electron pulses into the sub-ps region if the electron number and the beam size are not properly optimized. Our results suggest that the α-SDA is an effective compression approach for electron pulses under the optimum conditions. It may serve as a potential key component in designing future time-resolved electron sources for electron diffraction and spectroscopy experiments.
Madi, Haifa A; Dinah, Christiana; Rees, Jon; Steel, David H W
2015-01-01
Analysis of pre-operative spectral domain optical coherence tomography (SD-OCT) characteristics of full-thickness macular holes (FTMH) and their effect on optimum management. We retrospectively reviewed SD-OCT characteristics of a consecutive cohort of patients waitlisted for FTMH surgery and categorized them by current evidence-based treatments. Of the 106 holes analysed, 36 were small, 40 medium and 30 large. Initially, 33 holes had vitreomacular adhesion (VMA). Forty-one holes were analysed for change in characteristics, with a median interval of 8 weeks between scans. The number of small or medium holes decreased from 20 to 6, and the number of large holes doubled. The number of holes with VMA halved. Smaller hole size (p = 0.014) and being phakic (p = 0.048) were associated with a larger increase in size. The strongest predictor of hole progression into a different surgical management category was the presence of VMA. FTMH characteristics can change significantly pre-operatively and affect optimal treatment choice.
Branquinho, Luis C.; Carrião, Marcus S.; Costa, Anderson S.; Zufelato, Nicholas; Sousa, Marcelo H.; Miotto, Ronei; Ivkov, Robert; Bakuzis, Andris F.
2013-01-01
Nanostructured magnetic systems have many applications, including potential use in cancer therapy deriving from their ability to heat in alternating magnetic fields. In this work we explore the influence of particle chain formation on the normalized heating properties, or specific loss power (SLP), of both low-anisotropy (spherical) and high-anisotropy (parallelepiped) ferrite-based magnetic fluids. Analysis of ferromagnetic resonance (FMR) data shows that high particle concentrations correlate with increasing chain length, producing decreasing SLP. Monte Carlo simulations corroborate the FMR results. We propose a theoretical model describing dipole interactions, valid for the linear response regime, to explain the observed trends. This model predicts optimum particle sizes for hyperthermia to be about 30% smaller than those previously predicted, depending on the nanoparticle parameters and chain size. The optimum chain length also depended on nanoparticle surface-to-surface distance. Our results may have important implications for cancer treatment and could motivate new strategies to optimize magnetic hyperthermia. PMID:24096272
NASA Astrophysics Data System (ADS)
Prasetya, A.; Mawadati, A.; Putri, A. M. R.; Petrus, H. T. B. M.
2018-01-01
Comminution is one of the crucial steps in gold ore processing, used to liberate the valuable minerals from the gangue mineral. This research was done to find the particle size distribution of gold ore after treatment by comminution in a rod mill with various rod numbers and rotational speeds, resulting in one optimum milling condition. As an initial step, Sumbawa gold ore was crushed and then sieved to pass the 2.5 mesh and be retained on the 5 mesh (this condition was taken to mimic real application in artisanal gold mining). After inserting the prepared sample into the rod mill, the effects of rod number and rotational speed were observed by varying the rod number between 7 and 10 while the rotational speed was varied over 60, 85, and 110 rpm. To estimate the particle distribution under every condition, comminution kinetics were applied by taking samples at 15, 30, 60, and 120 minutes for size distribution analysis. The change in particle distribution of the top and bottom products as a time series was then fitted using the Rosin-Rammler distribution equation. The results show that the homogeneity of particle size and the particle size distribution are affected by rod number and rotational speed. The particle size distribution becomes more homogeneous with increasing milling time, regardless of rod number and rotational speed. The mean particle size does not change significantly after 60 minutes of milling. Experimental results showed that the optimum condition was achieved at a rotational speed of 85 rpm with 7 rods.
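The Rosin-Rammler cumulative passing distribution, F(d) = 1 - exp(-(d/d_c)^n), can be fitted to sieve data by linearization, since ln(-ln(1-F)) = n·ln(d) - n·ln(d_c) is a straight line in ln(d). A minimal sketch with synthetic sieve data (assumed values, not the study's measurements):

```python
import numpy as np

# Fit the Rosin-Rammler size distribution F(d) = 1 - exp(-(d/d_c)**n)
# by linearization.  d_c is the characteristic size and n the
# uniformity index; the sieve data here are synthetic placeholders.

d = np.array([0.1, 0.2, 0.5, 1.0, 2.0])        # sieve sizes, mm
true_dc, true_n = 0.8, 1.5
F = 1 - np.exp(-(d / true_dc) ** true_n)       # cumulative fraction passing

# ln(-ln(1-F)) = n*ln(d) - n*ln(d_c)  ->  straight line in ln(d)
y = np.log(-np.log(1 - F))
slope, intercept = np.polyfit(np.log(d), y, 1)
n_fit = slope
dc_fit = np.exp(-intercept / slope)

print(round(n_fit, 3), round(dc_fit, 3))       # recovers 1.5 and 0.8
```

With real sieve data the points scatter around the line, and the fitted n tracks the homogeneity trend the study reports: larger n means a narrower, more homogeneous distribution.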
Rasta, Seyed Hossein; Partovi, Mahsa Eisazadeh; Seyedarabi, Hadi; Javadzadeh, Alireza
2015-01-01
To investigate the effect of preprocessing techniques, including contrast enhancement and illumination correction, on retinal image quality, a comparative study was carried out. We implemented several illumination correction and contrast enhancement techniques on color retinal images to find the best technique for optimum image enhancement. To compare and choose the best illumination correction technique, we analyzed the corrected red and green components of the color retinal images statistically and visually. The two contrast enhancement techniques were analyzed using a vessel segmentation algorithm by calculating sensitivity and specificity. The statistical evaluation of the illumination correction techniques was carried out by calculating coefficients of variation. The dividing method, using a median filter to estimate background illumination, showed the lowest coefficient of variation in the red component. The quotient and homomorphic filtering methods, after the dividing method, also presented good results based on their low coefficients of variation. Contrast limited adaptive histogram equalization (CLAHE) increased the sensitivity of the vessel segmentation algorithm by up to 5% at the same level of accuracy, and showed higher sensitivity than the polynomial transformation operator as a contrast enhancement technique for vessel segmentation. Three techniques, the dividing method using a median filter to estimate background, the quotient-based method and homomorphic filtering, were found to be effective illumination correction techniques based on the statistical evaluation. Applying a local contrast enhancement technique such as CLAHE to fundus images showed good potential for enhancing vasculature segmentation. PMID:25709940
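The "dividing method" can be sketched directly: estimate the background illumination with a median filter and divide the channel by it. The synthetic gradient image and the 5×5 window below are assumptions for illustration, not the paper's data or parameters:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

# Sketch of the dividing method for illumination correction: estimate
# background illumination with a median filter, then divide the image
# channel by it.  Test image and window size are assumed placeholders.

def divide_correct(channel, win=5):
    """Divide a channel by its median-filtered background estimate."""
    pad = win // 2
    padded = np.pad(channel, pad, mode='edge')
    windows = sliding_window_view(padded, (win, win))   # (H, W, win, win)
    background = np.median(windows, axis=(2, 3))
    corrected = channel / np.maximum(background, 1e-6)  # avoid divide-by-zero
    return corrected, background

# A flat "retina" under a left-to-right illumination gradient:
gradient = np.linspace(0.5, 1.5, 32)
channel = np.ones((32, 32)) * gradient   # each row carries the gradient

corrected, bg = divide_correct(channel)

# The coefficient of variation (the paper's evaluation statistic)
# drops once the illumination gradient is divided out:
cv_before = channel.std() / channel.mean()
cv_after = corrected.std() / corrected.mean()
print(round(cv_before, 3), round(cv_after, 3))
```

On real fundus images the background varies smoothly rather than linearly, so a much larger median window than 5×5 would typically be used for the background estimate.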
Pancreas and cyst segmentation
NASA Astrophysics Data System (ADS)
Dmitriev, Konstantin; Gutenko, Ievgeniia; Nadeem, Saad; Kaufman, Arie
2016-03-01
Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low contrast boundaries, variability in shape, location and the stage of the pancreatic cancer. We present a semi-automatic segmentation algorithm for pancreata with cysts. In contrast to existing automatic segmentation approaches for healthy pancreas segmentation which are amenable to atlas/statistical shape approaches, a pancreas with cysts can have even higher variability with respect to the shape of the pancreas due to the size and shape of the cyst(s). Hence, fine results are better attained with semi-automatic steerable approaches. We use a novel combination of random walker and region growing approaches to delineate the boundaries of the pancreas and cysts with respective best Dice coefficients of 85.1% and 86.7%, and respective best volumetric overlap errors of 26.0% and 23.5%. Results show that the proposed algorithm for pancreas and pancreatic cyst segmentation is accurate and stable.
A fully convolutional networks (FCN) based image segmentation algorithm in binocular imaging system
NASA Astrophysics Data System (ADS)
Long, Zourong; Wei, Biao; Feng, Peng; Yu, Pengwei; Liu, Yuanyuan
2018-01-01
This paper proposes an image segmentation algorithm based on fully convolutional networks (FCN) for a binocular imaging system under various circumstances. The task is addressed as semantic segmentation: the FCN classifies pixels so as to achieve segmentation at the semantic level. Unlike classical convolutional neural networks (CNN), an FCN uses convolution layers instead of fully connected layers, so it can accept images of arbitrary size. In this paper, we combine the convolutional neural network with scale-invariant feature matching to solve the problem of visual positioning under different scenarios. All high-resolution images were captured with our calibrated binocular imaging system, and several groups of test data were collected to verify the method. The experimental results show that the binocular images are effectively segmented without over-segmentation. With these segmented images, feature matching via the SURF method is implemented to obtain regional information for further image processing. The final positioning procedure shows that the results are acceptable in the range of 1.4-1.6 m, with a distance error of less than 10 mm.
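The key property the paper relies on, that a network built only from convolutions accepts inputs of arbitrary size, can be shown with a plain 2-D convolution: the same layer applies unchanged to different image sizes, with the output map scaling accordingly. The kernel values here are random placeholders, not trained FCN weights:

```python
import numpy as np

# A convolution layer has no fixed input size: the same kernel slides
# over any image, unlike a fully connected layer whose weight matrix
# pins the input dimension.  Kernel values are random placeholders.

rng = np.random.default_rng(0)
kernel = rng.standard_normal((3, 3))

def conv2d_valid(img, k):
    """Plain 'valid' 2-D convolution (no padding, stride 1)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

# The same "layer" applies unchanged to two different image sizes:
small = conv2d_valid(rng.standard_normal((16, 16)), kernel)
large = conv2d_valid(rng.standard_normal((64, 48)), kernel)
print(small.shape, large.shape)  # (14, 14) (62, 46)
```

A real FCN stacks many such layers (with padding and learned weights) and ends in a 1×1 convolution per class, producing a per-pixel label map whose size follows the input.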
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaech, J.L.
The use of a pooling technique in leak testing Plutonium Recycle Test Reactor fuel elements to reduce the number of tests is discussed. Since the proportion of defectives in this case is small, application of the method would suggest that the group size be large. It was suggested that additional savings might be introduced by subgrouping the originally grouped items in the event of a positive result, rather than testing them individually. An investigation was made to determine optimum subgrouping sizes. (M.C.G.)
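The arithmetic behind pooled testing can be sketched for the one-stage (Dorfman) case: one pooled test clears a group of k items, and a positive pool is retested item by item. The defect probability and candidate group sizes below are assumed for illustration; the report's subgrouping scheme adds further retest stages on top of this:

```python
# One-stage pooled (group) testing: expected tests per item as a
# function of group size k and defect probability p.  The value of p
# and the candidate sizes are assumed placeholders for illustration.

def expected_tests_per_item(p, k):
    """1/k pooled tests per item, plus one individual test per item
    whenever the group of k contains at least one defect."""
    return 1.0 / k + 1.0 - (1.0 - p) ** k

p = 0.01  # assumed small proportion of defective (leaking) elements
best_k = min(range(2, 101), key=lambda k: expected_tests_per_item(p, k))
print(best_k, round(expected_tests_per_item(p, best_k), 4))
```

Consistent with the report's observation, a small defect proportion pushes the optimum toward large groups (roughly k ≈ 1/√p), and splitting a positive group into subgroups rather than testing items individually can save further tests.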
Stone, Gregg W; Martin, Jack L; de Boer, Menko-Jan; Margheri, Massimo; Bramucci, Ezio; Blankenship, James C; Metzger, D Christopher; Gibbons, Raymond J; Lindsay, Barbara S; Weiner, Bonnie H; Lansky, Alexandra J; Krucoff, Mitchell W; Fahy, Martin; Boscardin, W John
2009-10-01
Myocardial salvage is often suboptimal after percutaneous coronary intervention in ST-segment elevation myocardial infarction. Post-hoc subgroup analysis from a previous trial (AMIHOT I) suggested that intracoronary delivery of supersaturated oxygen (SSO2) may reduce infarct size in patients with large ST-segment elevation myocardial infarction treated early. A prospective, multicenter trial was performed in which 301 patients with anterior ST-segment elevation myocardial infarction undergoing percutaneous coronary intervention within 6 hours of symptom onset were randomized to a 90-minute intracoronary SSO2 infusion in the left anterior descending artery infarct territory (n=222) or control (n=79). The primary efficacy measure was infarct size in the intention-to-treat population (powered for superiority), and the primary safety measure was composite major adverse cardiovascular events at 30 days in the intention-to-treat and per-protocol populations (powered for noninferiority), with Bayesian hierarchical modeling used to allow partial pooling of evidence from AMIHOT I. Among 281 randomized patients with Tc-99m sestamibi single-photon emission computed tomography data in AMIHOT II, median (interquartile range) infarct size was 26.5% (8.5%, 44%) with control compared with 20% (6%, 37%) after SSO2. The pooled adjusted infarct size was 25% (7%, 42%) with control compared with 18.5% (3.5%, 34.5%) after SSO2 (P(Wilcoxon)=0.02; Bayesian posterior probability of superiority, 96.9%). The Bayesian pooled 30-day mean (±SE) rates of major adverse cardiovascular events were 5.0±1.4% for control and 5.9±1.4% for SSO2 by intention-to-treat, and 5.1±1.5% for control and 4.7±1.5% for SSO2 by per-protocol analysis (posterior probability of noninferiority, 99.5% and 99.9%, respectively).
Among patients with anterior ST-segment elevation myocardial infarction undergoing percutaneous coronary intervention within 6 hours of symptom onset, infusion of SSO2 into the left anterior descending artery infarct territory results in a significant reduction in infarct size with noninferior rates of major adverse cardiovascular events at 30 days. Clinical Trial Registration: clinicaltrials.gov identifier NCT00175058.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yanping Guo; Abhishek Yadav; Tanju Karanfil
Adsorption of trichloroethylene (TCE) and atrazine, two synthetic organic contaminants (SOCs) having different optimum adsorption pore regions, by four activated carbons and an activated carbon fiber (ACF) was examined. Adsorbents included two coconut-shell based granular activated carbons (GACs), two coal-based GACs (F400 and HD4000) and a phenol formaldehyde-based activated carbon fiber. The selected adsorbents had a wide range of pore size distributions but similar surface acidity and hydrophobicity. Single-solute and preloading (with dissolved organic matter (DOM)) isotherms were performed. Single-solute adsorption results showed that (i) adsorbents having higher amounts of pores with sizes around the dimensions of the adsorbate molecules exhibited higher uptakes, (ii) some pore structure characteristics not completely captured by pore size distribution analysis also affected adsorption, and (iii) the BET surface area and total pore volume were not the primary factors controlling the adsorption of SOCs. The preloading isotherm results showed that for TCE, adsorbing primarily in pores <10 Å, the highly microporous ACF and GACs, acting like molecular sieves, exhibited the highest uptakes. For atrazine, with an optimum adsorption pore region of 10-20 Å, which overlaps with the adsorption region of some DOM components, the GACs with a broad pore size distribution and high pore volumes in the 10-20 Å region showed the least impact of DOM on adsorption.
Design and Analysis of Mirror Modules for IXO and Beyond
NASA Technical Reports Server (NTRS)
McClelland, Ryan S.; Powell, Cory; Saha, Timo T.; Zhang, William W.
2011-01-01
Advancements in X-ray astronomy demand thin, light, and closely packed thin optics which lend themselves to segmentation of the annular mirrors and, in turn, a modular approach to the mirror design. The functionality requirements of such a mirror module are well understood. A baseline modular concept for the proposed International X-Ray Observatory (IXO) Flight Mirror Assembly (FMA) consisting of 14,000 glass mirror segments divided into 60 modules was developed and extensively analyzed. Through this development, our understanding of module loads, mirror stress, thermal performance, and gravity distortion have greatly progressed. The latest progress in each of these areas is discussed herein. Gravity distortion during horizontal X-ray testing and on-orbit thermal performance have proved especially difficult design challenges. In light of these challenges, fundamental trades in modular X-ray mirror design have been performed. Future directions in module X-ray mirror design are explored including the development of a 1.8 m diameter FMA utilizing smaller mirror modules. The effect of module size on mirror stress, module self-weight distortion, thermal control, and range of segment sizes required is explored with advantages demonstrated from smaller module size in most cases.
Code of Federal Regulations, 2012 CFR
2012-10-01
.... Maximum net productivity is the greatest net annual increment in population numbers or biomass resulting... term species includes any population stock. (b) Optimum Sustainable Population or OSP means a population size which falls within a range from the population level of a given species or stock which is the...
Influence of riparian and watershed alterations on sandbars in a Great Plains river
Fischer, Jeffrey M.; Paukert, Craig P.; Daniels, M.L.
2014-01-01
Anthropogenic alterations have caused sandbar habitats in rivers and the biota dependent on them to decline. Restoring large river sandbars may be needed as these habitats are important components of river ecosystems and provide essential habitat to terrestrial and aquatic organisms. We quantified factors within the riparian zone of the Kansas River, USA, and within its tributaries that influenced sandbar size and density using aerial photographs and land use/land cover (LULC) data. We developed, a priori, 16 linear regression models focused on LULC at the local, adjacent upstream river bend, and the segment (18–44 km upstream) scales and used an information theoretic approach to determine what alterations best predicted the size and density of sandbars. Variation in sandbar density was best explained by the LULC within contributing tributaries at the segment scale, which indicated reduced sandbar density with increased forest cover within tributary watersheds. Similarly, LULC within contributing tributary watersheds at the segment scale best explained variation in sandbar size. These models indicated that sandbar size increased with agriculture and forest and decreased with urban cover within tributary watersheds. Our findings suggest that sediment supply and delivery from upstream tributary watersheds may be influential on sandbars within the Kansas River and that preserving natural grassland and reducing woody encroachment within tributary watersheds in Great Plains rivers may help improve sediment delivery to help restore natural river function.
Optimal Design of Functionally Graded Metallic Foam Insulations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.; Sankar, Bhavani; Venkataraman, Satchi; Zhu, Huadong
2002-01-01
The focus of our work has been on developing insight into the physics that governs the optimum design of thermal insulation for use in thermal protection systems of launch vehicles. Of particular interest was obtaining optimality criteria for designing foam insulations whose density (or porosity) distributions through the thickness give optimum thermal performance. We investigated the optimum design of functionally graded thermal insulation for steady-state heat transfer through the foam. We showed that heat transfer in the foam has competing modes of radiation and conduction. The problem assumed a fixed inside temperature of 400 K and varied the aerodynamic surface heating on the outside surface from 0.2 to 1.0 MW/sq m. The thermal insulation develops a high temperature gradient through the thickness. Investigation of the model developed for heat conduction in foams showed that at high temperatures (as on the outside wall) intracellular radiation dominates the heat transfer in the foam. Minimizing radiation requires reducing the pore size, which increases the density of the foam. At low temperatures (as on the inside wall), intracellular conduction (of the metal and air) dominates the heat transfer. Minimizing conduction requires increasing the pore size. This indicated that for every temperature there was an optimum value of density that minimized the heat transfer coefficient. Two optimization studies were performed: one to minimize the heat transmitted through a fixed-thickness insulation by varying density profiles, and the second to obtain the minimum mass insulation for a specified thickness. Analytical optimality criteria were derived for the cases considered. The optimality condition for minimum heat transfer required that at each temperature we find the density that minimizes the heat transfer coefficient.
Once a relationship between the optimum heat transfer coefficient and the temperature was found, the design problem reduced to the solution of a simple nonlinear differential equation. Preliminary results of this work were presented at the American Society of Composites meeting, and the final version was submitted for publication in the AIAA Journal. In addition to minimizing the transmitted heat, we investigated the optimum design for minimum weight given an acceptable level of heat transmission through the insulation. The optimality criterion developed was different from that obtained for minimizing the heat transfer coefficient. For the minimum mass design, we had to find, for a given temperature, the optimum density that minimized the logarithmic derivative of the insulation thermal conductivity with respect to its density. The logarithmic derivative is defined as the ratio of the relative change in the dependent response (thermal conductivity) to the relative change in the independent variable (density). The results have been documented in a conference paper that will be presented at the upcoming AIAA conference.
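The two optimality criteria described in this abstract can be stated compactly. This is a reconstruction from the prose, with the symbols k (effective heat transfer coefficient), ρ (foam density), and T (local temperature) assumed rather than taken from the paper:

```latex
% Minimum transmitted heat: at each local temperature T, choose the
% density that minimizes the effective heat transfer coefficient k.
\rho^{*}(T) = \arg\min_{\rho}\, k(\rho, T)
\quad\Longleftrightarrow\quad
\left.\frac{\partial k}{\partial \rho}\right|_{T} = 0

% Minimum mass for a given transmitted heat: choose the density that
% minimizes the logarithmic derivative of k with respect to density.
\rho^{*}(T) = \arg\min_{\rho}\, \frac{\partial \ln k}{\partial \ln \rho}
            = \arg\min_{\rho}\, \frac{\rho}{k}\,\frac{\partial k}{\partial \rho}
```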
In Search of Conversational Grain Size: Modelling Semantic Structure Using Moving Stanza Windows
ERIC Educational Resources Information Center
Siebert-Evenstone, Amanda L.; Irgens, Golnaz Arastoopour; Collier, Wesley; Swiecki, Zachari; Ruis, Andrew R.; Shaffer, David Williamson
2017-01-01
Analyses of learning based on student discourse need to account not only for the content of the utterances but also for the ways in which students make connections across turns of talk. This requires segmentation of discourse data to define when connections are likely to be meaningful. In this paper, we present an approach to segmenting data for…
Chen, Xiaoping; Song, Fengyu; Jhamb, Deepali; Li, Jiliang; Bottino, Marco C.; Palakal, Mathew J.; Stocum, David L.
2015-01-01
We tested the ability of the axolotl (Ambystoma mexicanum) fibula to regenerate across segment defects of different size in the absence of intervention or after implant of a unique 8-braid pig small intestine submucosa (SIS) scaffold, with or without incorporated growth factor combinations or tissue protein extract. Fractures and defects of 10% and 20% of the total limb length regenerated well without any intervention, but 40% and 50% defects failed to regenerate after either simple removal of bone or implanting SIS scaffold alone. By contrast, scaffold soaked in the growth factor combination BMP-4/HGF or in protein extract of intact limb tissue promoted partial or extensive induction of cartilage and bone across 50% segment defects in 30%-33% of cases. These results show that BMP-4/HGF and intact tissue protein extract can promote the events required to induce cartilage and bone formation across a segment defect larger than critical size and that the long bones of axolotl limbs are an inexpensive model to screen soluble factors and natural and synthetic scaffolds for their efficacy in stimulating this process. PMID:26098852
NASA Astrophysics Data System (ADS)
Teutsch, Michael; Saur, Günter
2011-11-01
Spaceborne SAR imagery offers high capability for wide-ranging maritime surveillance, especially in situations where AIS (Automatic Identification System) data are not available. Maritime objects therefore have to be detected, and optional information such as size, orientation, or object/ship class is desired. In recent research work, we proposed a SAR processing chain consisting of pre-processing, detection, segmentation, and classification for single-polarimetric (HH) TerraSAR-X StripMap images to finally assign detection hypotheses to the class "clutter", "non-ship", "unstructured ship", "ship structure 1" (bulk carrier appearance), or "ship structure 2" (oil tanker appearance). In this work, we extend the existing processing chain and are now able to handle full-polarimetric (HH, HV, VH, VV) TerraSAR-X data. With the possibility of better noise suppression using the different polarizations, we slightly improve both the segmentation and the classification process. In several experiments we demonstrate the potential benefit for segmentation and classification. Precision of size and orientation estimation as well as correct classification rates are calculated individually for single- and quad-polarization and compared to each other.
Chain and microphase-separated structures of ultrathin polyurethane films
NASA Astrophysics Data System (ADS)
Kojio, Ken; Uchiba, Yusuke; Yamamoto, Yasunori; Motokucho, Suguru; Furukawa, Mutsuhisa
2009-08-01
Measurements are presented showing how the chain and microphase-separated structures of ultrathin polyurethane (PU) films are controlled by film thickness. The film thickness was varied via the solution concentration used for spin coating. The systems are PUs prepared from commercial raw materials. Fourier-transform infrared spectroscopic measurements revealed that the degree of hydrogen bonding among hard segment chains decreased and increased with decreasing film thickness for the strong and weak microphase separation systems, respectively. The microphase-separated structure, formed from hard segment domains and a surrounding soft segment matrix, was observed by atomic force microscopy. The size of the hard segment domains decreased with decreasing film thickness, and a specific orientation of the hard segment chains was indicated for both systems. These results are attributed to the decreasing space available for formation of the microphase-separated structure.
NASA Technical Reports Server (NTRS)
Sielken, R. L., Jr. (Principal Investigator)
1981-01-01
Several methods of estimating individual crop acreages using a mixture of completely identified and partially identified (generic) segments from a single growing year are derived and discussed. A small Monte Carlo study of eight estimators is presented. The relative empirical behavior of these estimators is discussed, as are the effects of segment sample size and amount of partial identification. The principal recommendations are (1) not to exclude, but rather to incorporate, partially identified sample segments into the estimation procedure; (2) to avoid having a large percentage (say 80%) of only partially identified segments in the sample; and (3) to use the maximum likelihood estimator, although the weighted least squares estimator and the least squares ratio estimator both perform almost as well. Sets of spring small grains (North Dakota) data were used.
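The idea of incorporating partially identified segments into a maximum likelihood estimate can be illustrated with a toy EM iteration. This is a hypothetical three-crop example, not the paper's estimators: segments fully identified as crop A, B, or C are counted directly, while "generic" segments known only to be A or B are allocated in proportion to the current estimates.

```python
def acreage_em(n_a, n_b, n_c, n_ab, iters=200):
    """Toy maximum likelihood (EM) estimate of crop proportions.

    n_a, n_b, n_c: counts of segments completely identified as A, B, C.
    n_ab: count of generic segments known only to be "A or B".
    Returns the estimated proportions [p_A, p_B, p_C].
    """
    total = n_a + n_b + n_c + n_ab
    p = [1 / 3, 1 / 3, 1 / 3]
    for _ in range(iters):
        # E-step: split the generic "A or B" segments in proportion
        # to the current estimates of A and B.
        share_a = p[0] / (p[0] + p[1])
        e_a = n_a + n_ab * share_a
        e_b = n_b + n_ab * (1 - share_a)
        # M-step: re-estimate the proportions from expected counts.
        p = [e_a / total, e_b / total, n_c / total]
    return p
```

Excluding the generic segments here would estimate crop A from the identified counts alone; incorporating them uses all of the sample, which is the direction of the paper's first recommendation.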
Davies, Emlyn J.; Buscombe, Daniel D.; Graham, George W.; Nimmo-Smith, W. Alex M.
2015-01-01
Substantial information can be gained from digital in-line holography of marine particles, eliminating depth-of-field and focusing errors associated with standard lens-based imaging methods. However, for the technique to reach its full potential in oceanographic research, fully unsupervised (automated) methods are required for focusing, segmentation, sizing and classification of particles. These computational challenges are the subject of this paper, in which we draw upon data collected using a variety of holographic systems developed at Plymouth University, UK, from a significant range of particle types, sizes and shapes. A new method for noise reduction in reconstructed planes is found to be successful in aiding particle segmentation and sizing. The performance of an automated routine for deriving particle characteristics (and subsequent size distributions) is evaluated against equivalent size metrics obtained by a trained operative measuring grain axes on screen. The unsupervised method is found to be reliable, despite some errors resulting from over-segmentation of particles. A simple unsupervised particle classification system is developed, and is capable of successfully differentiating sand grains, bubbles and diatoms from within the surf-zone. Avoiding miscounting bubbles and biological particles as sand grains enables more accurate estimates of sand concentrations, and is especially important in deployments of particle monitoring instrumentation in aerated water. Perhaps the greatest potential for further development in the computational aspects of particle holography is in the area of unsupervised particle classification. The simple method proposed here provides a foundation upon which further development could lead to reliable identification of more complex particle populations, such as those containing phytoplankton, zooplankton, flocculated cohesive sediments and oil droplets.
Compaction of quasi-one-dimensional elastoplastic materials.
Shaebani, M Reza; Najafi, Javad; Farnudi, Ali; Bonn, Daniel; Habibi, Mehdi
2017-06-06
Insight into crumpling or compaction of one-dimensional objects is important for understanding biopolymer packaging and designing innovative technological devices. By compacting various types of wires in rigid confinements and characterizing the morphology of the resulting crumpled structures, here, we report how friction, plasticity and torsion enhance disorder, leading to a transition from coiled to folded morphologies. In the latter case, where folding dominates the crumpling process, we find that reducing the relative wire thickness counter-intuitively causes the maximum packing density to decrease. The segment size distribution gradually becomes more asymmetric during compaction, reflecting an increase of spatial correlations. We introduce a self-avoiding random walk model and verify that the cumulative injected wire length follows a universal dependence on segment size, allowing for the prediction of the efficiency of compaction as a function of material properties, container size and injection force.
Functional significance of the taper of vertebrate cone photoreceptors
Hárosi, Ferenc I.
2012-01-01
Vertebrate photoreceptors are commonly distinguished based on the shape of their outer segments: those of cones taper, whereas the ones from rods do not. The functional advantages of cone taper, a common occurrence in vertebrate retinas, remain elusive. In this study, we investigate this topic using theoretical analyses aimed at revealing structure–function relationships in photoreceptors. Geometrical optics combined with spectrophotometric and morphological data are used to support the analyses and to test predictions. Three functions are considered for correlations between taper and functionality. The first function proposes that outer segment taper serves to compensate for self-screening of the visual pigment contained within. The second function links outer segment taper to compensation for a signal-to-noise ratio decline along the longitudinal dimension. Both functions are supported by the data: real cones taper more than required for these compensatory roles. The third function relates outer segment taper to the optical properties of the inner compartment whereby the primary determinant is the inner segment’s ability to concentrate light via its ellipsoid. In support of this idea, the rod/cone ratios of primarily diurnal animals are predicted based on a principle of equal light flux gathering between photoreceptors. In addition, ellipsoid concentration factor, a measure of ellipsoid ability to concentrate light onto the outer segment, correlates positively with outer segment taper expressed as a ratio of characteristic lengths, where critical taper is the yardstick. Depending on a light-funneling property and the presence of focusing organelles such as oil droplets, cone outer segments can be reduced in size to various degrees. We conclude that outer segment taper is but one component of a miniaturization process that reduces metabolic costs while improving signal detection. 
Compromise solutions in the various retinas and retinal regions occur between ellipsoid size and acuity, on the one hand, and faster response time and reduced light sensitivity, on the other. PMID:22250013
Two-stage atlas subset selection in multi-atlas based image segmentation.
Zhao, Tingting; Ruan, Dan
2015-06-01
Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
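The two-stage selection itself can be sketched in a few lines. This is a minimal illustration under assumptions, not the authors' implementation: the scoring functions are placeholders standing in for the low-cost and full-fledged registration-based relevance metrics described in the abstract.

```python
def two_stage_select(atlases, cheap_score, refined_score, m_augmented, k_fusion):
    """Two-stage atlas subset selection (sketch).

    Stage 1: rank all atlases by a cheap preliminary relevance metric
    and keep the top m_augmented (much smaller than len(atlases)).
    Stage 2: apply the expensive refined metric only to the augmented
    subset and keep the top k_fusion atlases for label fusion.
    """
    stage1 = sorted(atlases, key=cheap_score, reverse=True)[:m_augmented]
    stage2 = sorted(stage1, key=refined_score, reverse=True)[:k_fusion]
    return stage2
```

The computational saving comes from evaluating the refined metric on only m_augmented atlases instead of all of them; the paper's inference model sizes m_augmented so the truly relevant atlases survive stage 1 with high probability.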
Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao
2016-06-01
An adaptive inertia weight particle swarm algorithm is proposed in this study to address the local-optimum problem of traditional particle swarm optimization when estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator so that the particle swarm is optimized globally and avoids falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared to the improved entropy-minimization algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that with the improved entropy-minimization algorithm. This algorithm can be applied to the correction of MR image bias fields.
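An adaptive inertia weight scheme of this general kind can be sketched as follows. This is a minimal illustration, not the paper's implementation: the premature-convergence indicator used here (the normalized spread of particle fitness values), the bounds, and the acceleration coefficients are all assumptions.

```python
import random

def adaptive_pso(f, dim, n_particles=20, iters=100, seed=0):
    """Minimize f over [-5, 5]^dim with an adaptive inertia weight.

    When the swarm clusters (small fitness spread, a sign of premature
    convergence), the inertia weight w is raised to encourage global
    search; when the swarm is dispersed, w is lowered to refine locally.
    """
    rng = random.Random(seed)
    lo, hi = -5.0, 5.0
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        vals = [f(p) for p in pos]
        mean_v = sum(vals) / len(vals)
        # Premature-convergence indicator: normalized fitness spread.
        spread = sum(abs(v - mean_v) for v in vals) / (len(vals) * (abs(mean_v) + 1e-12))
        # Adaptive inertia: small spread -> larger w (more exploration).
        w = 0.4 + 0.5 / (1.0 + spread)
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val
```

In the paper's setting, f would be the (entropy-based) cost of a bias field parameterized by Legendre polynomial coefficients; here a simple test function stands in for it.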
NASA Technical Reports Server (NTRS)
Ko, William L.; Olona, Timothy; Muramoto, Kyle M.
1990-01-01
Different finite element models previously set up for thermal analysis of the space shuttle orbiter structure are discussed and their shortcomings identified. Element density criteria are established for finite element thermal modeling of space shuttle orbiter-type large, hypersonic aircraft structures. These criteria are based on rigorous studies of solution accuracy using different finite element models, with different element densities, set up for one cell of the orbiter wing. Also, a method for optimizing the central processing unit (CPU) time of transient thermal analysis is discussed. Based on the newly established element density criteria, the orbiter wing midspan segment was modeled to examine thermal analysis solution accuracy and the extent of the computation CPU time required. The results showed that the distributions of structural temperatures and thermal stresses obtained from this wing segment model were satisfactory and that the computation CPU time was at an acceptable level. The studies offer hope that modeling large, hypersonic aircraft structures using high-density elements for transient thermal analysis is possible if a CPU optimization technique is used.
NASA Astrophysics Data System (ADS)
Park, Gilsoon; Hong, Jinwoo; Lee, Jong-Min
2018-03-01
In the human brain, the corpus callosum (CC) is the largest white matter structure, connecting the right and left hemispheres. Structural features such as the shape and size of the CC in the midsagittal plane are of great significance for analyzing various neurological diseases, for example Alzheimer's disease, autism, and epilepsy. For quantitative and qualitative studies of the CC in brain MR images, robust segmentation of the CC is important. In this paper, we present a novel method for CC segmentation. Our approach is based on deep neural networks and prior information generated from multi-atlas images. Deep neural networks have recently shown good performance in various image processing fields, and convolutional neural networks (CNNs) in particular have shown outstanding performance for classification and segmentation in medical imaging. We used convolutional neural networks for CC segmentation. Multi-atlas based segmentation models have been widely used in medical image segmentation because an atlas carries powerful information about the target structure to be segmented, consisting of MR images and corresponding manual segmentations of that structure. We incorporated prior information, such as the location and intensity distribution of the target structure (i.e., the CC), derived from multi-atlas images into the CNN training process to further improve training. The CNN with prior information showed better segmentation performance than the CNN without it.
Origin of amphibian and avian chromosomes by fission, fusion, and retention of ancestral chromosomes
Voss, Stephen R.; Kump, D. Kevin; Putta, Srikrishna; Pauly, Nathan; Reynolds, Anna; Henry, Rema J.; Basa, Saritha; Walker, John A.; Smith, Jeramiah J.
2011-01-01
Amphibian genomes differ greatly in DNA content and chromosome size, morphology, and number. Investigations of this diversity are needed to identify mechanisms that have shaped the evolution of vertebrate genomes. We used comparative mapping to investigate the organization of genes in the Mexican axolotl (Ambystoma mexicanum), a species that presents relatively few chromosomes (n = 14) and a gigantic genome (>20 pg/N). We show extensive conservation of synteny between Ambystoma, chicken, and human, and a positive correlation between the length of conserved segments and genome size. Ambystoma segments are estimated to be four to 51 times longer than homologous human and chicken segments. Strikingly, genes demarking the structures of 28 chicken chromosomes are ordered among linkage groups defining the Ambystoma genome, and we show that these same chromosomal segments are also conserved in a distantly related anuran amphibian (Xenopus tropicalis). Using linkage relationships from the amphibian maps, we predict that three chicken chromosomes originated by fusion, nine to 14 originated by fission, and 12–17 evolved directly from ancestral tetrapod chromosomes. We further show that some ancestral segments were fused prior to the divergence of salamanders and anurans, while others fused independently and randomly as chromosome numbers were reduced in lineages leading to Ambystoma and Xenopus. The maintenance of gene order relationships between chromosomal segments that have greatly expanded and contracted in salamander and chicken genomes, respectively, suggests selection to maintain synteny relationships and/or extremely low rates of chromosomal rearrangement. Overall, the results demonstrate the value of data from diverse, amphibian genomes in studies of vertebrate genome evolution. PMID:21482624
Zhang, Bo; Edwards, Brian J
2015-06-07
A combination of self-consistent field theory and density functional theory was used to examine the effect of particle size on the stable, 3-dimensional equilibrium morphologies formed by diblock copolymers with a tethered nanoparticle attached either between the two blocks or at the end of one of the blocks. Particle size was varied between one and four tenths of the radius of gyration of the diblock polymer chain for neutral particles as well as those either favoring or disfavoring segments of the copolymer blocks. Phase diagrams were constructed and analyzed in terms of thermodynamic diagrams to understand the physics associated with the molecular-level self-assembly processes. Typical morphologies were observed, such as lamellar, spheroidal, cylindrical, gyroidal, and perforated lamellar, with the primary concentration region of the tethered particles being influenced heavily by particle size and tethering location, strength of the particle-segment energetic interactions, chain length, and copolymer radius of gyration. The effect of the simulation box size on the observed morphology and system thermodynamics was also investigated, indicating possible effects of confinement upon the system self-assembly processes.
NASA Technical Reports Server (NTRS)
Wilson, D. A.
1976-01-01
Specific requirements for a wash/rinse capability to support Spacelab biological experimentation and to identify various concepts for achieving this capability were determined. This included the examination of current state-of-the-art and emerging technology designs that would meet the wash/rinse requirements. Once several concepts were identified, including the disposable utensils, tools and gloves or other possible alternatives, a tradeoff analysis involving system cost, weight, volume utilization, functional performance, maintainability, reliability, power utilization, safety, complexity, etc., was performed so as to determine an optimum approach for achieving a wash/rinse capability to support future space flights. Missions of varying crew size and durations were considered.
Arboreal nests of Phenacomys longicaudus in Oregon.
A.M. Gillesberg; A.B. Carey
1991-01-01
Searching felled trees proved effective for finding nests of Phenacomys longicaudus; 117 nests were found in 50 trees. Nests were located throughout the live crowns, but were concentrated in the lower two-thirds of the canopy. Abundance of nests increased with tree size; old-growth forests provide optimum habitat.
Storage Optimization of Educational System Data
ERIC Educational Resources Information Center
Boja, Catalin
2006-01-01
Methods used to minimize the size of data files are described. Indicators for measuring the size of files and databases are defined. The storage optimization process is based on selecting, from a multitude of data storage models, the one that satisfies the proposed problem objective: maximization or minimization of the optimum criterion that is…
Interactive vs. automatic ultrasound image segmentation methods for staging hepatic lipidosis.
Weijers, Gert; Starke, Alexander; Haudum, Alois; Thijssen, Johan M; Rehage, Jürgen; De Korte, Chris L
2010-07-01
The aim of this study was to test the hypothesis that automatic segmentation of vessels in ultrasound (US) images can produce similar or better results in grading fatty livers than interactive segmentation. A study was performed in postpartum dairy cows (N=151), as an animal model of human fatty liver disease, to test this hypothesis. Five transcutaneous and five intraoperative US liver images were acquired in each animal and a liver biopsy was taken. In liver tissue samples, triacylglycerol (TAG) was measured by biochemical analysis and hepatic diseases other than hepatic lipidosis were excluded by histopathologic examination. Ultrasonic tissue characterization (UTC) parameters--mean echo level, standard deviation (SD) of echo level, signal-to-noise ratio (SNR), residual attenuation coefficient (ResAtt), and axial and lateral speckle size--were derived using a computer-aided US (CAUS) protocol and software package. First, the liver tissue was interactively segmented by two observers. With increasing fat content, fewer hepatic vessels were visible in the ultrasound images and, therefore, a smaller proportion of the liver needed to be excluded from these images. Automatic segmentation algorithms were implemented, and it was investigated whether better results could be achieved than with the subjective and time-consuming interactive segmentation procedure. The automatic segmentation algorithms were based on both fixed and adaptive thresholding techniques in combination with a 'speckle'-shaped moving-window exclusion technique. All data were analyzed with and without the postprocessing contained in CAUS and with different automated segmentation techniques. This enabled us to study the effect of the applied postprocessing steps on single and multiple linear regressions of the various UTC parameters with TAG. Improved correlations for all US parameters were found by using automatic segmentation techniques.
Stepwise multiple linear-regression formulas were derived and used to predict TAG level in the liver. Receiver-operating-characteristic (ROC) analysis was applied to assess the performance and area under the curve (AUC) of predicting TAG and to compare the sensitivity and specificity of the methods. Best speckle-size estimates and overall performance (R2 = 0.71, AUC = 0.94) were achieved by using an SNR-based adaptive automatic-segmentation method (used TAG threshold: 50 mg/g liver wet weight). Automatic segmentation is thus feasible and profitable.
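The adaptive-thresholding idea behind vessel exclusion can be conveyed with a small sketch. This is an illustration, not the CAUS implementation: the cutoff is derived from the ROI's own echo statistics, and the scaling factor `k` is a hypothetical tuning parameter.

```python
# Illustrative sketch (not the authors' CAUS implementation): exclude
# low-echo vessel pixels from a liver ROI using an adaptive threshold
# derived from the ROI's own echo statistics.

def adaptive_vessel_mask(roi, k=1.5):
    """Return True for pixels kept as parenchyma, False for excluded vessels.

    roi : 2-D list of echo levels; k (hypothetical tuning parameter)
    scales how far below the mean the cutoff sits.
    """
    pixels = [p for row in roi for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    sd = (sum((p - mean) ** 2 for p in pixels) / n) ** 0.5
    cutoff = mean - k * sd  # darker pixels are treated as vessel lumen
    return [[p >= cutoff for p in row] for row in roi]

# Toy ROI: bright parenchyma (~60) with a dark vessel branch (~10-12).
roi = [[60, 62, 12, 61],
       [59, 10, 11, 63],
       [64, 61, 60, 58]]
mask = adaptive_vessel_mask(roi)
kept = sum(sum(row) for row in mask)  # parenchyma pixels retained
```

Because the cutoff follows the ROI statistics, a fattier (darker-vessel, brighter-tissue) liver adjusts its own threshold, which is the appeal of adaptive over fixed thresholding.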
Size dependence of energetic properties in nanowire-based energetic materials
NASA Astrophysics Data System (ADS)
Menon, L.; Aurongzeb, D.; Patibandla, S.; Bhargava Ram, K.; Richter, C.; Sacco, A.
2006-08-01
We prepared nanowire-array-based thin-film energetic nanocomposites of Al-Fe2O3. The ignition properties as a function of wire dimensions and interwire spacing have been investigated. We show significant variations in ignition behavior, which we relate to the kinetic and heat-transfer dynamics of the various configurations studied. Our results indicate the possibility of nanoscale control of reaction parameters such as flame temperature and burn rate in such composites through optimized configurations (optimum wire size, interwire spacing, film thickness, etc.).
Eissenberg, David M.; Liu, Yin-An
1980-01-01
This invention relates to an improved device and method for the high-gradient magnetic beneficiation of dry pulverized coal, for the purpose of removing sulfur and ash from the coal, whereby the product is a dry, environmentally acceptable, low-sulfur fuel. The process involves upwardly directed recirculating air fluidization of selectively sized powdered coal in a separator having sections of increasing diameter in the direction of air flow, with magnetic field and flow rates chosen for optimum separation depending upon particulate size.
[C57BL/6 mice open field behaviour qualitatively depends on arena size].
Lebedev, I V; Pleskacheva, M G; Anokhin, K V
2012-01-01
Open field behavior is well known to depend on the physical characteristics of the apparatus. However, many such effects are poorly described, especially with modern methods of behavioral recording and analysis. Previous experiments on the effect of arena size on behavior are few and contradictory. We compared the behavioral scores of four groups of C57BL/6 mice in round open field arenas of four different sizes (diameter 35, 75, 150 and 220 cm). Behavior was recorded and analyzed using the Noldus EthoVision, WinTrack and SegmentAnalyzer software. A significant effect of arena size was found: traveled distance and velocity increased, but not in proportion to the increase in arena size. Moreover, a significant effect on segment characteristics of the trajectory was revealed. Detailed behavioral analysis revealed drastic differences in trajectory structure and number of rears between the smaller (35 and 75 cm) and bigger (150 and 220 cm) arenas. We conclude that the character of exploration in smaller and bigger arenas depends on the relative size of the central open zone of the arena. Its extension apparently increases the motivational heterogeneity of the space, which requires a different exploration strategy than in smaller arenas.
Mahdavi, Mahnaz; Ahmad, Mansor Bin; Haron, Md Jelas; Namvar, Farideh; Nadi, Behzad; Rahman, Mohamad Zaki Ab; Amin, Jamileh
2013-06-27
Superparamagnetic iron oxide nanoparticles (MNPs) with appropriate surface chemistry exhibit many interesting properties that can be exploited in a variety of biomedical applications such as magnetic resonance imaging contrast enhancement, tissue repair, hyperthermia, drug delivery and cell separation. These applications require that MNPs such as iron oxide (Fe₃O₄) magnetic nanoparticles (Fe₃O₄ MNPs) have high magnetization values and a particle size smaller than 100 nm. This paper reports the experimental details for the preparation of monodisperse oleic acid (OA)-coated Fe₃O₄ MNPs by a chemical co-precipitation method, determining the optimum pH, initial temperature and stirring speed needed to obtain MNPs with the small particle size and narrow size distribution required for biomedical applications. The obtained nanoparticles were characterized by Fourier transform infrared spectroscopy (FTIR), transmission electron microscopy (TEM), scanning electron microscopy (SEM), energy dispersive X-ray fluorescence spectrometry (EDXRF), thermogravimetric analysis (TGA), X-ray powder diffraction (XRD), and vibrating sample magnetometry (VSM). The results show that the particle size as well as the magnetization of the MNPs was very much dependent on the pH, the initial temperature of the Fe²⁺ and Fe³⁺ solutions and the stirring speed. Monodisperse Fe₃O₄ MNPs coated with oleic acid, with a size of 7.8 ± 1.9 nm, were successfully prepared at the optimum pH of 11, initial temperature of 45°C and stirring rate of 800 rpm. FTIR and XRD data reveal that the oleic acid molecules were adsorbed on the magnetic nanoparticles by chemisorption. TEM analyses show that the oleic acid provided the Fe₃O₄ particles with better dispersibility. The synthesized Fe₃O₄ nanoparticles exhibited superparamagnetic behavior, and the saturation magnetization of the Fe₃O₄ nanoparticles increased with particle size.
Weakly Supervised Segmentation-Aided Classification of Urban Scenes from 3d LIDAR Point Clouds
NASA Astrophysics Data System (ADS)
Guinard, S.; Landrieu, L.
2017-05-01
We consider the problem of the semantic classification of 3D LiDAR point clouds obtained from urban scenes when the training set is limited. We propose a non-parametric segmentation model for urban scenes composed of anthropic objects of simple shapes, partitioning the scene into geometrically homogeneous segments whose size is determined by the local complexity. This segmentation can be integrated into a conditional random field (CRF) classifier in order to capture the high-level structure of the scene. For each cluster, this allows us to aggregate the noisy predictions of a weakly supervised classifier to produce a higher-confidence data term. We demonstrate the improvement provided by our method on two publicly available large-scale data sets.
Segmental Refinement: A Multigrid Technique for Data Locality
Adams, Mark F.; Brown, Jed; Knepley, Matt; ...
2016-08-04
In this paper, we investigate a domain decomposed multigrid technique, termed segmental refinement, for solving general nonlinear elliptic boundary value problems. We extend the method first proposed in 1994 by analytically and experimentally investigating its complexity. We confirm that communication of traditional parallel multigrid is eliminated on fine grids, with modest amounts of extra work and storage, while maintaining the asymptotic exactness of full multigrid. We observe an accuracy dependence on the segmental refinement subdomain size, which was not considered in the original analysis. Finally, we present a communication complexity analysis that quantifies the communication costs ameliorated by segmental refinement and report performance results with up to 64K cores on a Cray XC30.
Optimum strata boundaries and sample sizes in health surveys using auxiliary variables.
Reddy, Karuna Garan; Khan, Mohammad G M; Khan, Sabiha
2018-01-01
Using convenient stratification criteria such as geographical regions or other natural conditions like age, gender, etc., is not beneficial in order to maximize the precision of the estimates of variables of interest. Thus, one has to look for an efficient stratification design to divide the whole population into homogeneous strata that achieve higher precision in the estimation. In this paper, a procedure for determining Optimum Stratum Boundaries (OSB) and Optimum Sample Sizes (OSS) for each stratum of a variable of interest in health surveys is developed. The determination of OSB and OSS based on the study variable is not feasible in practice, since the study variable is not available prior to the survey. Since many variables in health surveys are generally skewed, the proposed technique considers the readily available auxiliary variables to determine the OSB and OSS. This stratification problem is formulated as a Mathematical Programming Problem (MPP) that seeks minimization of the variance of the estimated population parameter under Neyman allocation. It is then solved for the OSB by using a dynamic programming (DP) technique. A numerical example with a real data set of a population, aiming to estimate the Haemoglobin content in women in a national Iron Deficiency Anaemia survey, is presented to illustrate the procedure developed in this paper. Upon comparison with other methods available in the literature, results reveal that the proposed approach yields a substantial gain in efficiency over the other methods. A simulation study also reveals similar results.
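The objective being minimized can be made concrete with a small sketch. This is a hedged illustration of the idea, not the paper's DP algorithm: under Neyman allocation with a fixed total sample size n, the stratified estimator's variance is proportional to (Σ_h W_h S_h)² / n (finite-population correction ignored), and a tiny brute-force search over one boundary on a sorted auxiliary variable shows why boundary placement matters for skewed data.

```python
# Hedged illustration of the stratification objective: for fixed total
# sample size n, Neyman allocation gives Var ≈ (sum_h W_h * S_h)^2 / n.
# The paper solves for boundaries by dynamic programming; brute force
# over a single cut point conveys the idea on toy data.

def neyman_variance(strata, n=100):
    """(sum_h W_h * S_h)^2 / n for a list of strata (each a list of values)."""
    total = sum(len(s) for s in strata)
    acc = 0.0
    for s in strata:
        w = len(s) / total
        mean = sum(s) / len(s)
        sd = (sum((x - mean) ** 2 for x in s) / len(s)) ** 0.5
        acc += w * sd
    return acc ** 2 / n

def best_two_strata(values):
    """Best single cut point (index on sorted data) by exhaustive search."""
    xs = sorted(values)
    best = None
    for cut in range(2, len(xs) - 1):  # keep at least 2 units per stratum
        v = neyman_variance([xs[:cut], xs[cut:]])
        if best is None or v < best[1]:
            best = (cut, v)
    return best

# Skewed toy data: a cut isolating the heavy tail beats a median split.
data = [1, 1, 2, 2, 3, 3, 4, 20, 25, 30]
cut, v_opt = best_two_strata(data)
v_arbitrary = neyman_variance([sorted(data)[:5], sorted(data)[5:]])
```

Replacing the exhaustive search with DP over many boundaries is what makes the paper's multi-stratum problem tractable.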
NASA Astrophysics Data System (ADS)
Mozaffari, Ahmad; Vajedi, Mahyar; Azad, Nasser L.
2015-06-01
The main proposition of the current investigation is to develop a computational intelligence-based framework which can be used for the real-time estimation of the optimum battery state-of-charge (SOC) trajectory in plug-in hybrid electric vehicles (PHEVs). The estimated SOC trajectory can then be employed for intelligent power management to significantly improve the fuel economy of the vehicle. The devised intelligent SOC trajectory builder takes advantage of an upcoming-route information preview to achieve the lowest possible total cost of electricity and fossil fuel. To reduce the complexity of real-time optimization, the authors propose an immune system-based clustering approach which allows categorizing the route information into a predefined number of segments. The intelligent real-time optimizer is also inspired by the interactions in biological immune systems, and is called the artificial immune algorithm (AIA). The objective function of the optimizer is derived from a computationally efficient artificial neural network (ANN) which is trained by a database obtained from a high-fidelity model of the vehicle built in the Autonomie software. The simulation results demonstrate that the integration of the immune-inspired clustering tool, the AIA and the ANN results in a powerful framework which can generate a near global optimum SOC trajectory for the baseline vehicle, that is, the Toyota Prius PHEV. The outcomes of the current investigation prove that by taking advantage of intelligent approaches, it is possible to design a computationally efficient and powerful SOC trajectory builder for the intelligent power management of PHEVs.
Lung tumor segmentation in PET images using graph cuts.
Ballangan, Cherry; Wang, Xiuying; Fulham, Michael; Eberl, Stefan; Feng, David Dagan
2013-03-01
The aim of segmentation of tumor regions in positron emission tomography (PET) is to provide more accurate measurements of tumor size and extension into adjacent structures than is possible with visual assessment alone, and hence to improve patient management decisions. We propose a segmentation energy function for the graph cuts technique to improve lung tumor segmentation with PET. Our segmentation energy is based on an analysis of the tumor voxels in PET images combined with a standardized uptake value (SUV) cost function and a monotonic downhill SUV feature. The monotonic downhill feature avoids segmentation leakage into surrounding tissues with similar or higher PET tracer uptake than the tumor, and the SUV cost function improves the boundary definition and also addresses situations where the lung tumor is heterogeneous. We evaluated the method in 42 clinical PET volumes from patients with non-small cell lung cancer (NSCLC). Our method improves segmentation and performs better than region growing approaches, the watershed technique, fuzzy c-means, region-based active contours and tumor-customized downhill.
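The "monotonic downhill" idea can be sketched in one dimension. This is a hypothetical simplification, not the authors' exact feature: starting from the hottest voxel, the region extends only while SUV keeps decreasing, so a rebound in uptake (adjacent hot tissue) stops the leak.

```python
# 1-D toy sketch of the "monotonic downhill" idea (hypothetical
# simplification, not the paper's exact feature): grow outward from the
# SUV peak only while uptake keeps decreasing.

def downhill_extent(suv_profile):
    """Indices along a profile belonging to the downhill run on each
    side of the SUV peak."""
    peak = max(range(len(suv_profile)), key=lambda i: suv_profile[i])
    lo = peak
    while lo > 0 and suv_profile[lo - 1] <= suv_profile[lo]:
        lo -= 1
    hi = peak
    while hi < len(suv_profile) - 1 and suv_profile[hi + 1] <= suv_profile[hi]:
        hi += 1
    return list(range(lo, hi + 1))

# Tumor peak at index 3; uptake rebounds at index 6 (e.g. a second hot
# structure), so growth stops at index 5 instead of leaking into it.
profile = [1.0, 2.5, 6.0, 9.5, 5.0, 2.0, 7.0, 8.0]
region = downhill_extent(profile)
```

In the paper this constraint is folded into the graph-cut energy rather than applied as a hard stop, but the leak-prevention role is the same.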
Kawamoto, Alan H; Liu, Qiang; Kello, Christopher T
2015-01-01
Speech production and reading aloud studies have much in common, especially the last stages involved in producing a response. We focus on the minimal planning unit (MPU) in articulation. Although most researchers now assume that the MPU is the syllable, we argue that it is at least as small as the segment based on negative response latencies (i.e., response initiation before presentation of the complete target) and longer initial segment durations in a reading aloud task where the initial segment is primed. We also discuss why such evidence was not found in earlier studies. Next, we rebut arguments that the segment cannot be the MPU by appealing to flexible planning scope whereby planning units of different sizes can be used due to individual differences, as well as stimulus and experimental design differences. We also discuss why negative response latencies do not arise in some situations and why anticipatory coarticulation does not preclude the segment MPU. Finally, we argue that the segment MPU is also important because it provides an alternative explanation of results implicated in the serial vs. parallel processing debate.
A low-cost three-dimensional laser surface scanning approach for defining body segment parameters.
Pandis, Petros; Bull, Anthony Mj
2017-11-01
Body segment parameters are used in many different applications in ergonomics as well as in dynamic modelling of the musculoskeletal system. Body segment parameters can be defined using different methods, including techniques that involve time-consuming manual measurements of the human body, used in conjunction with models or equations. In this study, a scanning technique for measuring subject-specific body segment parameters in an easy, fast, accurate and low-cost way was developed and validated. The scanner can obtain the body segment parameters in a single scanning operation, which takes between 8 and 10 s. The results obtained with the system show a standard deviation of 2.5% in volumetric measurements of the upper limb of a mannequin and 3.1% difference between scanning volume and actual volume. Finally, the maximum mean error for the moment of inertia by scanning a standard-sized homogeneous object was 2.2%. This study shows that a low-cost system can provide quick and accurate subject-specific body segment parameter estimates.
Initialisation of 3D level set for hippocampus segmentation from volumetric brain MR images
NASA Astrophysics Data System (ADS)
Hajiesmaeili, Maryam; Dehmeshki, Jamshid; Bagheri Nakhjavanlo, Bashir; Ellis, Tim
2014-04-01
Shrinkage of the hippocampus is a primary biomarker for Alzheimer's disease and can be measured through accurate segmentation of brain MR images. This paper describes the problem of initialising a 3D level set algorithm for hippocampus segmentation, which must cope with some challenging characteristics, such as small size, a wide range of intensities, narrow width, and shape variation. In addition, MR images require bias correction to account for the additional inhomogeneity associated with the scanner technology. Due to these inhomogeneities, using a single initialisation seed region inside the hippocampus is prone to failure. Alternative initialisation strategies are explored, such as using multiple initialisations in different sections (the head, body and tail) of the hippocampus. The Dice metric is used to validate our segmentation results against ground truth for a dataset of 25 MR images. Experimental results indicate significant improvement in segmentation performance using the multiple-initialisation techniques, yielding more accurate segmentation results for the hippocampus.
Yu, Huimin; Zhao, Xiuhua; Zu, Yuangang; Zhang, Xinjuan; Zu, Baishi; Zhang, Xiaonan
2012-01-01
The particle sizes of pharmaceutical substances are important for their bioavailability. Bioavailability can be improved by reducing the particle size of the drug. In this study, artemisinin was micronized by the rapid expansion of supercritical solutions (RESS). The particle size of the unprocessed white needle-like artemisinin particles was 30 to 1200 μm. The optimum micronization conditions were determined as follows: extraction temperature of 62 °C, extraction pressure of 25 MPa, precipitation temperature of 45 °C and nozzle diameter of 1000 μm. Under the optimum conditions, micronized artemisinin with a mean particle size (MPS) of 550 nm is obtained. By analysis of variance (ANOVA), extraction temperature and pressure have significant effects on the MPS of the micronized artemisinin. The particle size of micronized artemisinin decreased with increasing extraction temperature and pressure. Moreover, SEM, LC-MS, FTIR, DSC and XRD allowed comparison between the initial crystalline state and the micronized particles obtained after the RESS process. The results showed that the RESS process did not induce degradation of artemisinin and that the processed artemisinin particles have lower crystallinity and a lower melting point. The bulk density of artemisinin was determined before and after the RESS process, and the results showed that it passes from an initial density of 0.554 to 0.128 g·cm−3 after processing. The decrease in bulk density of the micronized powder can improve the flowability of drug particles when they are used in medicinal preparations. These results suggest that micronized artemisinin powder has great potential in drug delivery systems. PMID:22606030
Muscle segmentation in time series images of Drosophila metamorphosis.
Yadav, Kuleesha; Lin, Feng; Wasser, Martin
2015-01-01
In order to study genes associated with muscular disorders, we characterize the phenotypic changes in Drosophila muscle cells during metamorphosis caused by genetic perturbations. We collect in vivo images of muscle fibers during the remodeling of larval to adult muscles. In this paper, we focus on the new image processing pipeline designed to quantify the changes in shape and size of muscles. We propose a new two-step approach to muscle segmentation in time series images. First, we implement a watershed algorithm to divide the image into edge-preserving regions, and then we classify these regions into muscle and non-muscle classes on the basis of shape and intensity. The advantage of our method is two-fold: first, better results are obtained because the classification of regions is constrained by the shape of the muscle cell at the previous time point; and second, minimal user intervention results in faster processing time. The segmentation results are used to compare the changes in cell size between controls and reduction of the autophagy-related gene Atg9 during Drosophila metamorphosis.
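The classification step (step two) can be sketched under stated assumptions: watershed regions arrive as lists of (pixel, intensity) samples, and a region is kept as muscle if it is bright enough and overlaps the muscle mask from the previous time point. The feature choices and thresholds here are illustrative, not the paper's exact criteria.

```python
# Sketch of the region-classification step; the intensity and overlap
# criteria are illustrative stand-ins for the paper's shape/intensity
# features and previous-time-point constraint.

def classify_regions(regions, prev_mask, min_mean=100, min_overlap=0.5):
    """regions: dict name -> list of ((x, y), intensity);
    prev_mask: set of (x, y) pixels labeled muscle at the previous
    time point. Returns names of regions classified as muscle."""
    muscle = []
    for name, samples in regions.items():
        mean = sum(i for _, i in samples) / len(samples)
        overlap = sum(1 for p, _ in samples if p in prev_mask) / len(samples)
        if mean >= min_mean and overlap >= min_overlap:
            muscle.append(name)
    return muscle

prev = {(0, 0), (0, 1), (1, 0), (1, 1)}
regions = {
    "A": [((0, 0), 120), ((0, 1), 130), ((2, 2), 125)],   # bright, overlaps
    "B": [((5, 5), 140), ((5, 6), 150)],                  # bright, no overlap
    "C": [((1, 0), 40), ((1, 1), 50)],                    # overlaps, too dark
}
kept = classify_regions(regions, prev)
```

Carrying the previous frame's mask forward is what gives the pipeline its temporal consistency with little user input.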
Outdoor recreation activity trends by volume segments: U.S. and Northeast market analyses, 1982-1989
Rodney B. Warnick
1992-01-01
The purpose of this review was to examine volume segmentation within three selected outdoor recreational activities -- swimming, hunting and downhill skiing over an eight-year period, from 1982 through 1989 at the national level and within the Northeast Region of the U.S.; and to determine if trend patterns existed within any of these activities when the market size...
Prigge, Vanessa; Melchinger, Albrecht E; Dhillon, Baldev S; Frisch, Matthias
2009-06-01
Expenses for marker assays are the major costs in marker-assisted backcrossing programs for the transfer of target genes from a donor into the genetic background of a recipient genotype. Our objectives were to (1) investigate the effect of employing sequentially increasing marker densities over backcross generations on the recurrent parent genome (RPG) recovery and the number of marker data points (MDP) required, and (2) determine optimum designs for attaining RPG thresholds of 93-98% with a minimum number of MDP. We simulated the introgression of one dominant target gene for genome models of sugar beet (Beta vulgaris L.) and maize (Zea mays L.) with varying marker distances of 5-80 cM and population sizes of 30-250 plants across BC(1) to BC(3) generations. Employing less dense maps in early backcross generations resulted in savings of over 50% in the number of required MDP compared with using a constant set of markers and was accompanied only by small reductions in the attained RPG values. The optimum designs were characterized by increasing marker densities and increasing population sizes in advanced generations for both genome models. We conclude that increasing simultaneously the marker density and the population size from early to advanced backcross generations results in gene introgression with a minimum number of required MDP.
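A minimal Monte Carlo sketch conveys the simulated scheme. All parameters here (50 loci, per-locus switch probability, marker spacings, population sizes) are illustrative stand-ins, not the paper's genome models: a chromosome is a list of loci that are donor (0) or recurrent-parent (1), each backcross generates gametes by recombination against the all-recurrent parent, and the plant with the highest recurrent score at the genotyped markers is advanced.

```python
# Illustrative Monte Carlo of sequentially increasing marker density in
# marker-assisted backcrossing (parameters are hypothetical, not the
# paper's sugar beet / maize genome models).

import random

def gamete(chrom, recomb=0.1):
    """Gamete from a plant crossed to the recurrent parent: read the
    current strand at each locus, switching strands with prob. `recomb`."""
    out, strand = [], random.randrange(2)
    for allele in chrom:
        if random.random() < recomb:
            strand = 1 - strand
        out.append(allele if strand == 0 else 1)  # strand 1 = recurrent
    return out

def backcross(chrom, pop_size, markers, recomb=0.1):
    """Advance one generation, selecting on recurrent alleles at `markers`."""
    offspring = [gamete(chrom, recomb) for _ in range(pop_size)]
    return max(offspring, key=lambda c: sum(c[m] for m in markers))

random.seed(1)
plant = [0] * 50                      # donor chromosome entering BC1
# Sparse markers + small population early; dense markers + large later.
for pop, spacing in [(30, 10), (100, 5), (250, 2)]:
    markers = list(range(0, 50, spacing))
    plant = backcross(plant, pop, markers)
rpg = sum(plant) / len(plant)         # recurrent parent genome fraction
```

Counting genotyped marker-by-plant cells per generation in such a simulation is how the number of marker data points (MDP) saved by the sequential design can be tallied.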
Application of fully stressed design procedures to redundant and non-isotropic structures
NASA Technical Reports Server (NTRS)
Adelman, H. M.; Haftka, R. T.; Tsach, U.
1980-01-01
An evaluation is presented of fully stressed design procedures for sizing highly redundant structures, including structures made of composite materials. The evaluation is carried out by sizing three structures: a simple box beam of either composite or metal construction; a low aspect ratio titanium wing; and a titanium arrow wing for a conceptual supersonic cruise aircraft. All three structures are sized by ordinary fully stressed design (FSD) and thermal fully stressed design (TFSD) for combined mechanical and thermal loads. Where possible, designs are checked by applying rigorous mathematical programming techniques to the structures. It is found that FSD and TFSD produce optimum designs for the metal box beam, but produce highly non-optimum designs for the composite box beam. Results from the delta wing and arrow wing indicate that FSD and TFSD exhibit slow convergence for highly redundant metal structures. Further, TFSD exhibits slow oscillatory convergence behavior for the arrow wing at very high temperatures. In all cases where FSD and TFSD perform poorly, either in obtaining non-optimum designs or in converging slowly, the assumptions on which the algorithms are based are grossly violated. The use of scaling, however, is found to be very effective in obtaining fast convergence and efficiently produces safe designs even in those cases where FSD and TFSD alone are ineffective.
Xiong, Chengjie; Luo, Jingqin; Morris, John C; Bateman, Randall
2018-01-01
Modern clinical trials on Alzheimer disease (AD) focus on the early symptomatic stage or even the preclinical stage. Subtle disease progression at the early stages, however, poses a major challenge in designing such clinical trials. We propose a multivariate mixed model on repeated measures to model the disease progression over time on multiple efficacy outcomes, and derive the optimum weights to combine multiple outcome measures by minimizing the sample sizes to adequately power the clinical trials. A cross-validation simulation study is conducted to assess the accuracy for the estimated weights as well as the improvement in reducing the sample sizes for such trials. The proposed methodology is applied to the multiple cognitive tests from the ongoing observational study of the Dominantly Inherited Alzheimer Network (DIAN) to power future clinical trials in the DIAN with a cognitive endpoint. Our results show that the optimum weights to combine multiple outcome measures can be accurately estimated, and that compared to the individual outcomes, the combined efficacy outcome with these weights significantly reduces the sample size required to adequately power clinical trials. When applied to the clinical trial in the DIAN, the estimated linear combination of six cognitive tests can adequately power the clinical trial. PMID:29546251
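The weighting principle can be sketched with a standard result for linear combinations; this is offered as an illustration of the idea, not the authors' MMRM derivation, and the covariance and effect numbers are hypothetical. For a combined endpoint w'Y with treatment-effect vector d and covariance S, the required sample size scales with (w'Sw)/(w'd)², which is minimized by w ∝ S⁻¹d.

```python
# Sketch of optimum outcome weighting (standard linear-combination
# result; numbers are hypothetical, not DIAN data): sample size scales
# with (w'Sw) / (w'd)^2, minimized by w proportional to S^{-1} d.

def inv2(S):
    """Inverse of a 2x2 matrix given as nested lists."""
    (a, b), (c, d) = S
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def mat_vec(S, v):
    return [sum(S[i][j] * v[j] for j in range(len(v))) for i in range(len(S))]

def size_factor(w, S, d):
    """Quantity proportional to the required sample size for weights w."""
    wSw = sum(w[i] * sum(S[i][j] * w[j] for j in range(2)) for i in range(2))
    wd = sum(w[i] * d[i] for i in range(2))
    return wSw / wd ** 2

S = [[1.0, 0.3], [0.3, 2.0]]   # hypothetical covariance of two endpoints
d = [0.5, 0.4]                 # hypothetical progression effects
w_opt = mat_vec(inv2(S), d)    # optimum weights (up to scale)
equal = [0.5, 0.5]             # naive equal weighting for comparison
```

The gap between `size_factor(equal, S, d)` and `size_factor(w_opt, S, d)` is the sample-size saving the combined endpoint buys.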
Fard Masoumi, Hamid Reza; Basri, Mahiran; Sarah Samiun, Wan; Izadiyan, Zahra; Lim, Chaw Jiang
2015-01-01
Aripiprazole is considered as a third-generation antipsychotic drug with excellent therapeutic efficacy in controlling schizophrenia symptoms and was the first atypical anti-psychotic agent to be approved by the US Food and Drug Administration. Formulation of nanoemulsion-containing aripiprazole was carried out using high shear and high pressure homogenizers. Mixture experimental design was selected to optimize the composition of nanoemulsion. A very small droplet size of emulsion can provide an effective encapsulation for delivery system in the body. The effects of palm kernel oil ester (3–6 wt%), lecithin (2–3 wt%), Tween 80 (0.5–1 wt%), glycerol (1.5–3 wt%), and water (87–93 wt%) on the droplet size of aripiprazole nanoemulsions were investigated. The mathematical model showed that the optimum formulation for preparation of aripiprazole nanoemulsion having the desirable criteria was 3.00% of palm kernel oil ester, 2.00% of lecithin, 1.00% of Tween 80, 2.25% of glycerol, and 91.75% of water. Under optimum formulation, the corresponding predicted response value for droplet size was 64.24 nm, which showed an excellent agreement with the actual value (62.23 nm) with residual standard error <3.2%. PMID:26508853
1980-12-01
This discussion is followed by a presentation of the Kernel primitive operations upon these objects. All Kernel objects shall be referenced by a common ... set of sizes. All process segments, regardless of domain, shall be manipulated by the same set of Kernel segment primitives. User domain segments
Polarization sensitive corneal and anterior segment swept-source optical coherence tomography
NASA Astrophysics Data System (ADS)
Lim, Yiheng; Yamanari, Masahiro; Yasuno, Yoshiaki
2010-02-01
We develop a compact polarization-sensitive corneal and anterior segment swept-source optical coherence tomography (PS-CAS-OCT) system to evaluate the usefulness of PS-OCT and to enable large-scale studies of the tissue properties of normal and diseased eyes. PS-OCT provides better tissue discrimination than conventional OCT by visualizing the fibrous tissues in the anterior eye segment. Our polarization-sensitive interferometer is reduced in size to a 19-inch box for portability, and the probe is integrated into a position-adjustable scanning head for usability.
Parallelized seeded region growing using CUDA.
Park, Seongjin; Lee, Jeongjin; Lee, Hyunna; Shin, Juneseuk; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung
2014-01-01
This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming a theoretical weakness of the SRG algorithm: its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, on quad-core CPUs, and in shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, advocating that it can substantially assist segmentation during massive CT screening tests.
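A serial reference sketch of seeded region growing makes the scaling weakness concrete: the queue is processed one pixel at a time, so cost grows with region size, which is exactly what the CUDA version parallelizes. The tolerance and 4-connectivity here are illustrative choices.

```python
# Serial SRG baseline (the sequential behavior the CUDA version
# parallelizes); tolerance and 4-connectivity are illustrative.

from collections import deque

def seeded_region_growing(img, seed, tol=10):
    """Grow from `seed`, absorbing 4-neighbors whose intensity lies
    within `tol` of the seed intensity. `img` is a 2-D list."""
    h, w = len(img), len(img[0])
    base = img[seed[0]][seed[1]]
    region, queue = {seed}, deque([seed])
    while queue:                       # one pixel per iteration: cost is
        y, x = queue.popleft()         # proportional to the region size
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region \
                    and abs(img[ny][nx] - base) <= tol:
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

img = [[100, 102,  50,  51],
       [101, 103,  52,  50],
       [ 99,  98,  97,  53]]
region = seeded_region_growing(img, (0, 0))
```

A GPU formulation typically replaces the queue with repeated parallel sweeps in which every frontier pixel tests its neighbors simultaneously.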
Haeck, Joost D E; Verouden, Niels J W; Kuijt, Wichert J; Koch, Karel T; Van Straalen, Jan P; Fischer, Johan; Groenink, Maarten; Bilodeau, Luc; Tijssen, Jan G P; Krucoff, Mitchell W; De Winter, Robbert J
2010-04-15
The purpose of the present study was to determine the prognostic value of N-terminal pro-brain natriuretic peptide (NT-pro-BNP), among other serum biomarkers, for cardiac magnetic resonance (CMR) imaging parameters of cardiac function and infarct size in patients with ST-segment elevation myocardial infarction undergoing primary percutaneous coronary intervention. We measured NT-pro-BNP, cardiac troponin T, creatine kinase-MB fraction, high-sensitivity C-reactive protein, and creatinine on the patients' arrival at the catheterization laboratory in 206 patients with ST-segment elevation myocardial infarction. The NT-pro-BNP levels were divided into quartiles and correlated with left ventricular function and infarct size measured by CMR imaging at 4 to 6 months. Compared to the lower quartiles, patients with nonanterior wall myocardial infarction in the highest quartile of NT-pro-BNP (≥260 pg/ml) more often had a greater left ventricular end-systolic volume (68 vs 39 ml/m(2), p <0.001), a lower left ventricular ejection fraction (42% vs 54%, p <0.001), a larger infarct size (9 vs 4 g/m(2), p = 0.002), and a larger number of transmural segments (11% of segments vs 3% of segments, p <0.001). Multivariate analysis revealed that an NT-pro-BNP level of ≥260 pg/ml was the strongest independent predictor of left ventricular ejection fraction in patients with nonanterior wall myocardial infarction compared to the other serum biomarkers (beta = -5.8; p = 0.019). In conclusion, in patients with nonanterior wall myocardial infarction undergoing primary percutaneous coronary intervention, an admission NT-pro-BNP level of ≥260 pg/ml was a strong, independent predictor of left ventricular function assessed by CMR imaging at follow-up. Our findings suggest that NT-pro-BNP, a widely available biomarker, might be helpful in the early risk stratification of patients with nonanterior wall myocardial infarction.
Program manual for ASTOP, an Arbitrary space trajectory optimization program
NASA Technical Reports Server (NTRS)
Horsewood, J. L.
1974-01-01
The ASTOP program (an Arbitrary Space Trajectory Optimization Program), designed to generate optimum low-thrust trajectories in an N-body field while satisfying selected hardware and operational constraints, is presented. The trajectory is divided into a number of segments or arcs over which the control is held constant. This constant control over each arc is optimized using a parameter optimization scheme based on gradient techniques. A modified Encke formulation of the equations of motion is employed. The program provides a wide range of constraint, end-condition, and performance-index options. The basic approach is conducive to future expansion of features, such as the incorporation of new constraints and the addition of new end conditions.
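The parameterization described above, control held constant over each arc and then optimized as a finite parameter vector by gradient techniques, can be sketched on a toy 1-D "trajectory". Everything here is illustrative (Euler integration of x'' = u, a quadratic terminal cost, finite-difference gradient descent), not ASTOP's Encke formulation or its optimizer.

```python
# Toy sketch of piecewise-constant control optimization: pick a constant
# acceleration per segment so the final state hits a target. Dynamics and
# gradient scheme are illustrative, not ASTOP's.

def simulate(controls, dt=1.0):
    """Euler-integrate x'' = u with u constant over each segment."""
    x, v = 0.0, 0.0
    for u in controls:
        v += u * dt
        x += v * dt
    return x, v

def cost(controls, target=(10.0, 0.0)):
    """Squared miss in final position and velocity."""
    x, v = simulate(controls)
    return (x - target[0]) ** 2 + (v - target[1]) ** 2

def optimize(n_segments=4, steps=500, lr=0.02, eps=1e-6):
    """Finite-difference gradient descent on the segment controls."""
    u = [0.0] * n_segments
    for _ in range(steps):
        grad = []
        for i in range(n_segments):
            up = u[:]
            up[i] += eps
            grad.append((cost(up) - cost(u)) / eps)
        u = [ui - lr * g for ui, g in zip(u, grad)]
    return u

u = optimize()                # one constant control value per arc
final_cost = cost(u)
```

Treating each arc's control as a plain parameter is what lets a general-purpose gradient scheme handle an otherwise infinite-dimensional control problem.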
Relaxation dynamics of internal segments of DNA chains in nanochannels
NASA Astrophysics Data System (ADS)
Jain, Aashish; Muralidhar, Abhiram; Dorfman, Kevin; Dorfman Group Team
We will present relaxation dynamics of internal segments of a DNA chain confined in a nanochannel. The results have direct application in genome mapping technology, where long DNA molecules containing sequence-specific fluorescent probes are passed through an array of nanochannels to linearize them, and the distances between these probes (the so-called ``DNA barcode'') are then measured. The relaxation dynamics of internal segments set the experimental error due to dynamic fluctuations. We developed a multi-scale simulation algorithm that combines Pruned-Enriched Rosenbluth Method (PERM) simulations of a discrete wormlike chain model with hard spheres and Brownian dynamics (BD) simulations of a bead-spring chain. Realistic parameters such as the bead friction coefficient and spring force-law parameters are obtained from PERM simulations and then mapped onto the bead-spring model. The BD simulations are carried out to obtain the extension autocorrelation functions of various segments, which furnish their relaxation times. Interestingly, we find that (i) corner segments relax faster than center segments and (ii) relaxation times of corner segments do not depend on the contour length of the DNA chain, whereas the relaxation times of center segments increase linearly with DNA chain size.
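The relaxation time in such studies is extracted from the decay of the extension autocorrelation function. A minimal stdlib sketch of that step, using a synthetic Ornstein-Uhlenbeck signal as a stand-in for the BD extension series (the signal, time step, and fit window are all illustrative assumptions, not the paper's data):

```python
import math, random

random.seed(1)

def ou_series(tau, dt, n):
    """Discretized Ornstein-Uhlenbeck process: a made-up stand-in for
    the fluctuating extension of one internal DNA segment."""
    a = math.exp(-dt / tau)
    s = math.sqrt(1.0 - a * a)   # unit stationary variance
    x, out = 0.0, []
    for _ in range(n):
        x = a * x + s * random.gauss(0.0, 1.0)
        out.append(x)
    return out

def autocorr(x, max_lag):
    """Normalized autocorrelation function C(k) for lags 0..max_lag."""
    m = sum(x) / len(x)
    var = sum((v - m) ** 2 for v in x) / len(x)
    return [sum((x[i] - m) * (x[i + k] - m) for i in range(len(x) - k))
            / ((len(x) - k) * var) for k in range(max_lag + 1)]

def relaxation_time(x, dt, fit_lags=20):
    """Relaxation time assuming C(k) ~ exp(-k*dt/tau): least-squares
    slope of log C versus lag time, tau = -1/slope."""
    pts = [(k * dt, math.log(c))
           for k, c in enumerate(autocorr(x, fit_lags)) if c > 0.05]
    n = len(pts)
    sx = sum(t for t, _ in pts); sy = sum(y for _, y in pts)
    sxx = sum(t * t for t, _ in pts); sxy = sum(t * y for t, y in pts)
    return -(n * sxx - sx * sx) / (n * sxy - sx * sy)

series = ou_series(tau=5.0, dt=0.1, n=100000)
tau_est = relaxation_time(series, dt=0.1)
```

The estimate recovers the imposed relaxation time to within statistical noise; in the actual study the input series would come from the bead-spring BD trajectories.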
NASA Astrophysics Data System (ADS)
Shim, Hackjoon; Lee, Soochan; Kim, Bohyeong; Tao, Cheng; Chang, Samuel; Yun, Il Dong; Lee, Sang Uk; Kwoh, Kent; Bae, Kyongtae
2008-03-01
Knee osteoarthritis is the most common debilitating health condition affecting the elderly population. MR imaging of the knee is highly sensitive for diagnosis and evaluation of the extent of knee osteoarthritis. Quantitative analysis of the progression of osteoarthritis is commonly based on segmentation and measurement of articular cartilage from knee MR images. Segmentation of the knee articular cartilage, however, is extremely laborious and technically demanding, because the cartilage has a complex geometry and is thin and small in size. To improve the precision and efficiency of cartilage segmentation, we have applied a semi-automated segmentation method based on an s/t graph cut algorithm. The cost function was defined by integrating regional and boundary cues. While regional cues can encode any intensity distributions of the two regions, "object" (cartilage) and "background" (the rest), boundary cues are based on the intensity differences between neighboring pixels. For three-dimensional (3-D) segmentation, hard constraints are also specified in a 3-D manner, facilitating user interaction. When our proposed semi-automated method was tested on clinical patients' MR images (160 slices, 0.7 mm slice thickness), a considerable amount of segmentation time was saved with improved efficiency, compared to a manual segmentation approach.
Perry, Russell W.; Jones, Edward; Scoppettone, G. Gary
2015-07-14
Increasing or decreasing the total carrying capacity of all stream segments resulted in changes in equilibrium population size that were directly proportional to the change in capacity. However, changes in carrying capacity to some stream segments but not others could result in disproportionate changes in equilibrium population sizes by altering density-dependent movement and survival in the stream network. These simulations show how our IBM can provide a useful management tool for understanding the effect of restoration actions or reintroductions on carrying capacity, and, in turn, how these changes affect Moapa dace abundance. Such tools are critical for devising management strategies to achieve recovery goals.
Iris recognition: on the segmentation of degraded images acquired in the visible wavelength.
Proença, Hugo
2010-08-01
Iris recognition imaging constraints are receiving increasing attention. There are several proposals to develop systems that operate in the visible wavelength and in less constrained environments. These imaging conditions engender acquired noisy artifacts that lead to severely degraded images, making iris segmentation a major issue. Having observed that existing iris segmentation methods tend to fail in these challenging conditions, we present a segmentation method that can handle degraded images acquired in less constrained conditions. We offer the following contributions: 1) to consider the sclera the most easily distinguishable part of the eye in degraded images, 2) to propose a new type of feature that measures the proportion of sclera in each direction and is fundamental in segmenting the iris, and 3) to run the entire procedure in deterministically linear time in respect to the size of the image, making the procedure suitable for real-time applications.
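The "proportion of sclera in each direction" feature can be computed in time linear in the image size using prefix sums. The sketch below is a simplification (row/column directions only, a toy binary mask) of that idea, not the paper's exact feature definition:

```python
def sclera_proportions(mask):
    """For each pixel, the proportion of sclera pixels strictly to its
    left, right, top, and bottom in the same row/column, via prefix
    sums: O(h*w) total. `mask` is a 2-D list of 0/1 sclera labels."""
    h, w = len(mask), len(mask[0])
    # row and column prefix sums
    row = [[0] * (w + 1) for _ in range(h)]
    col = [[0] * (h + 1) for _ in range(w)]
    for i in range(h):
        for j in range(w):
            row[i][j + 1] = row[i][j] + mask[i][j]
    for j in range(w):
        for i in range(h):
            col[j][i + 1] = col[j][i] + mask[i][j]
    feats = [[None] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            left = row[i][j] / j if j else 0.0
            right = (row[i][w] - row[i][j + 1]) / (w - 1 - j) if j < w - 1 else 0.0
            up = col[j][i] / i if i else 0.0
            down = (col[j][h] - col[j][i + 1]) / (h - 1 - i) if i < h - 1 else 0.0
            feats[i][j] = (left, right, up, down)
    return feats

# toy mask: "sclera" on the left half of a 4x4 image
mask = [[1, 1, 0, 0]] * 4
f = sclera_proportions(mask)
```

Because every per-pixel lookup is O(1) after the prefix pass, the whole feature map stays linear in the image size, matching the real-time constraint the paper emphasizes.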
Ernst, Anne G.; Baldigo, Barry P.; Calef, Fred J.; Freehafer, Douglas A.; Kremens, Robert L.
2015-10-09
The locations and sizes of potential cold-water refuges for trout were examined in 2005 along a 27-kilometer segment of the Indian and Hudson Rivers in northern New York to evaluate the extent of refuges, the effects of routine flow releases from an impoundment, and how these refuges and releases might influence trout survival in reaches that otherwise would be thermally stressed. This river segment supports small populations of brook trout (Salvelinus fontinalis), brown trout (Salmo trutta), and rainbow trout (Oncorhynchus mykiss) and also receives regular releases of reservoir-surface waters to support rafting during the summer, when water temperatures in both the reservoir and the river frequently exceed thermal thresholds for trout survival. Airborne thermal infrared imaging was supplemented with continuous, in-stream temperature loggers to identify potential refuges that may be associated with tributary inflows or groundwater seeps and to define the extent to which the release flows decrease the size of existing refuges. In general, the release flows overwhelmed the refuge areas and greatly decreased the size and number of the areas. Mean water temperatures were unaffected by the releases, but small-scale heterogeneity was diminished. At a larger scale, water temperatures in the upper and lower segments of the reach were consistently warmer than in the middle segment, even during passage of release waters. The inability of remote thermal infrared images to consistently distinguish land from water (in shaded areas) and to detect groundwater seeps (away from the shallow edges of the stream) limited data analysis and the ability to identify potential thermal refuge areas.
NASA Astrophysics Data System (ADS)
Agüera, Francisco; Aguilar, Fernando J.; Aguilar, Manuel A.
The area occupied by plastic-covered greenhouses has undergone rapid growth in recent years, currently exceeding 500,000 ha worldwide. Due to the vast amount of input (water, fertilisers, fuel, etc.) required, and output of different agricultural wastes (vegetable, plastic, chemical, etc.), the environmental impact of this type of production system can be serious if not accompanied by sound and sustainable territorial planning. For this, the new generation of satellites which provide very high resolution imagery, such as QuickBird and IKONOS, can be useful. In this study, one QuickBird and one IKONOS satellite image have been used to cover the same area under similar circumstances. The aim of this work was an exhaustive comparison of QuickBird vs. IKONOS images in land-cover detection. In terms of plastic greenhouse mapping, comparative tests were designed and implemented, each with separate objectives. Firstly, the Maximum Likelihood Classification (MLC) was applied using five different approaches combining R, G, B, NIR, and panchromatic bands. The combinations of bands used significantly influenced some of the classification quality indexes used in this work. Furthermore, the classification quality of the QuickBird image was higher in all cases than that of the IKONOS image. Secondly, texture features derived from the panchromatic images at different window sizes and with different grey levels were added as a fifth band to the R, G, B, NIR images to carry out the MLC. The inclusion of texture information in the classification did not improve the classification quality. For classifications with texture information, the best accuracies were found in both images for the mean and angular second moment texture parameters. The optimum window size for these texture parameters was 3×3 for IKONOS images, while for QuickBird images it depended on the quality index studied but was around 15×15. With regard to the grey level, the optimum was 128.
Thus, the optimum texture parameter depended on the main objective of the image classification. If the main classification goal is to minimize the number of pixels wrongly classified, the mean texture parameter should be used, whereas if the main classification goal is to minimize the unclassified pixels the angular second moment texture parameter should be used. On the whole, both QuickBird and IKONOS images offered promising results in classifying plastic greenhouses.
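As a sketch of the two texture features that performed best, the window mean and the angular second moment (ASM) of a grey-level co-occurrence matrix can be computed per pixel window; the horizontal offset, window size, and quantization below are illustrative choices, not the study's exact settings:

```python
def window_texture(img, i, j, half, levels):
    """Mean and angular second moment (ASM = sum of squared
    co-occurrence probabilities) for a horizontal (1,0) offset in a
    (2*half+1)^2 window centered at (i, j), after quantizing 8-bit
    values down to `levels` grey levels."""
    h, w = len(img), len(img[0])
    r0, r1 = max(0, i - half), min(h, i + half + 1)
    c0, c1 = max(0, j - half), min(w, j + half + 1)
    vals, pairs = [], {}
    for r in range(r0, r1):
        for c in range(c0, c1):
            vals.append(img[r][c])
            if c + 1 < c1:  # horizontal neighbor still inside the window
                key = (img[r][c] * levels // 256, img[r][c + 1] * levels // 256)
                pairs[key] = pairs.get(key, 0) + 1
    mean = sum(vals) / len(vals)
    total = sum(pairs.values())
    asm = sum((n / total) ** 2 for n in pairs.values())
    return mean, asm

# perfectly uniform window: a single co-occurrence pair dominates, ASM = 1
flat = [[100] * 7 for _ in range(7)]
m, a = window_texture(flat, 3, 3, 3, 128)
```

ASM approaches 1 for homogeneous texture (as inside a greenhouse roof) and drops toward 0 for disordered texture, which is why it helps separate plastic cover from surrounding land uses.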
Dietrich, Timo; Rundle-Thiele, Sharyn; Leo, Cheryl; Connor, Jason
2015-04-01
According to commercial marketing theory, a market orientation leads to improved performance. Drawing on the social marketing principles of segmentation and audience research, the current study seeks to identify segments and examine their responses to a school-based alcohol social marketing program. A sample of 371 year-10 students (aged 14-16 years; 51.4% boys) participated in a prospective (pre-post) multisite alcohol social marketing program. The Game On: Know Alcohol (GO:KA) program included six student-centered, interactive lessons to teach adolescents about alcohol and strategies to abstain from or moderate drinking. A repeated-measures design was used. Baseline demographics, drinking attitudes, drinking intentions, and alcohol knowledge were cluster-analyzed to identify segments. Change on key program outcome measures and satisfaction with program components were assessed by segment. Three segments were identified: (1) Skeptics, (2) Risky Males, and (3) Good Females. Segments 2 and 3 showed the greatest change in drinking attitudes and intentions. Good Females reported the highest satisfaction with all program components and Skeptics the lowest. Three segments, each differing on psychographic and demographic variables, exhibited different change patterns following participation in GO:KA. Post hoc analysis identified that satisfaction with program components differed by segment, offering opportunities for further research. © 2015, American School Health Association.
Boundary overlap for medical image segmentation evaluation
NASA Astrophysics Data System (ADS)
Yeghiazaryan, Varduhi; Voiculescu, Irina
2017-03-01
All medical image segmentation algorithms need to be validated and compared, and yet no evaluation framework is widely accepted within the imaging community. Collections of segmentation results often need to be compared and ranked by their effectiveness. Evaluation measures which are popular in the literature are based on region overlap or boundary distance. None of these are consistent in the way they rank segmentation results: they tend to be sensitive to one or another type of segmentation error (size, location, shape) but no single measure covers all error types. We introduce a new family of measures, with hybrid characteristics. These measures quantify similarity/difference of segmented regions by considering their overlap around the region boundaries. This family is more sensitive than other measures in the literature to combinations of segmentation error types. We compare measure performance on collections of segmentation results sourced from carefully compiled 2D synthetic data, and also on 3D medical image volumes. We show that our new measure: (1) penalises errors successfully, especially those around region boundaries; (2) gives a low similarity score when existing measures disagree, thus avoiding overly inflated scores; and (3) scores segmentation results over a wider range of values. We consider a representative measure from this family and the effect of its only free parameter on error sensitivity, typical value range, and running time.
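One simple member of such a hybrid family is a Dice coefficient evaluated only inside a band around the region boundaries, so that boundary errors dominate the score. The sketch below is in that spirit but is not the authors' exact measure:

```python
def boundary(mask):
    """Pixels of `mask` that touch the background 4-connectedly."""
    h, w = len(mask), len(mask[0])
    b = set()
    for i in range(h):
        for j in range(w):
            if not mask[i][j]:
                continue
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if not (0 <= ni < h and 0 <= nj < w) or not mask[ni][nj]:
                    b.add((i, j))
                    break
    return b

def boundary_band(mask, r):
    """All pixels within Chebyshev distance r of the region boundary."""
    h, w = len(mask), len(mask[0])
    band = set()
    for (i, j) in boundary(mask):
        for di in range(-r, r + 1):
            for dj in range(-r, r + 1):
                if 0 <= i + di < h and 0 <= j + dj < w:
                    band.add((i + di, j + dj))
    return band

def boundary_dice(seg, ref, r=1):
    """Dice overlap restricted to the band around the two boundaries;
    r is the single free parameter trading error sensitivity for cost."""
    band = boundary_band(seg, r) | boundary_band(ref, r)
    s = {p for p in band if seg[p[0]][p[1]]}
    t = {p for p in band if ref[p[0]][p[1]]}
    return 2 * len(s & t) / (len(s) + len(t))

sq = [[1 if 2 <= i <= 5 and 2 <= j <= 5 else 0 for j in range(8)] for i in range(8)]
shifted = [[1 if 3 <= i <= 6 and 2 <= j <= 5 else 0 for j in range(8)] for i in range(8)]
```

Identical masks score exactly 1, while a one-pixel shift is penalized more heavily than plain region Dice would penalize it, because only the boundary band contributes.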
The Expansion Segments of 28S Ribosomal RNA Extensively Match Human Messenger RNAs
Parker, Michael S.; Balasubramaniam, Ambikaipakan; Sallee, Floyd R.; Parker, Steven L.
2018-01-01
Eukaryote ribosomal RNAs (rRNAs) have expanded in the course of phylogeny by addition of nucleotides in specific insertion areas, the expansion segments. These number about 40 in the larger (25–28S) rRNA (up to 2,400 nucleotides), and about 12 in the smaller (18S) rRNA (<700 nucleotides). Expansion of the larger rRNA shows a clear phylogenetic increase, with a dramatic rise in mammals and especially in hominids. Substantial portions of expansion segments in this RNA are not bound to ribosomal proteins, and may engage extraneous interactants, including messenger RNAs (mRNAs). Studies on the ribosome-mRNA interaction have focused on proteins of the smaller ribosomal subunit, with some examination of 18S rRNA. However, the expansion segments of human 28S rRNA show much higher density and numbers of mRNA matches than those of 18S rRNA, and also a higher density and match numbers than its own core parts. We have studied this matching, focusing on frequent and potentially stable matches of 7–15 nucleotides. The expansion segments of 28S rRNA average more than 50 matches per mRNA even if only 5% of their sequence is assumed to be available for such interaction. Large expansion segments 7, 15, and 27 of 28S rRNA also have copious long (≥10-nucleotide) matches to most human mRNAs, with frequencies much higher than in other 28S rRNA parts. Expansion segments 7 and 27 and especially segment 15 of 28S rRNA show large size increase in mammals compared to other metazoans, which could reflect a gain of function related to interaction with non-ribosomal partners. The 28S rRNA expansion segment 15 shows very high increments in size, guanosine, and cytidine nucleotide content and mRNA matching in mammals, and especially in hominids. With these segments (but not with other 28S rRNA or any 18S rRNA expansion segments) the density and number of matches are much higher in 5′-terminal than in 3′-terminal untranslated mRNA regions, which may relate to mRNA mobilization via 5′ termini.
Matches in the expansion segments 7, 15, and 27 of human 28S rRNA appear as candidates for general interaction with mRNAs, especially those associated with intracellular matrices such as the endoplasmic reticulum. PMID:29563925
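Tallying exact matches of this kind reduces to intersecting k-mer sets. A minimal sketch, with entirely made-up sequences standing in for an expansion segment and an mRNA (not actual rRNA/mRNA sequence), for counting shared 10-nucleotide exact matches:

```python
def shared_kmers(a, b, k=10):
    """Distinct k-mers occurring in both sequences: a crude proxy for
    the >=10-nucleotide rRNA/mRNA match tallies described in the paper."""
    ka = {a[i:i + k] for i in range(len(a) - k + 1)}
    kb = {b[i:i + k] for i in range(len(b) - k + 1)}
    return len(ka & kb)

# a 12-nt motif shared between two otherwise unrelated sequences;
# a shared run of length L contributes L - k + 1 shared k-mers
motif = "ACGGTCAATGCC"
seg = "TTTTTTTTTT" + motif + "GGGGGGGGGG"   # stand-in expansion segment
mrna = "CCCCCCCCCC" + motif + "AAAAAAAAAA"  # stand-in mRNA
```

A genome-scale version would stream k-mers from each transcript against a fixed set built from the expansion segments, which is linear in total sequence length.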
Applications of tuned mass dampers to improve performance of large space mirrors
NASA Astrophysics Data System (ADS)
Yingling, Adam J.; Agrawal, Brij N.
2014-01-01
In order for future imaging spacecraft to meet higher resolution imaging capability, it will be necessary to build large space telescopes with primary mirror diameters that range from 10 m to 20 m and do so with nanometer surface accuracy. Due to launch vehicle mass and volume constraints, these mirrors have to be deployable and lightweight, such as segmented mirrors using active optics to correct mirror surfaces with closed loop control. As a part of this work, system identification tests revealed that dynamic disturbances inherent in a laboratory environment are significant enough to degrade the optical performance of the telescope. Research was performed at the Naval Postgraduate School to identify the vibration modes most affecting the optical performance and to evaluate different techniques to increase damping of those modes. Based on this work, tuned mass dampers (TMDs) were selected because of their simplicity in implementation and effectiveness in targeting specific modes. The selected damping mechanism was an eddy current damper where the damping and frequency of the damper could be easily changed. System identification of segments was performed to derive TMD specifications. Several configurations of the damper were evaluated, including the number and placement of TMDs, damping constant, and targeted structural modes. The final configuration consisted of two dampers located at the edge of each segment and resulted in an 80% reduction in vibrations. The wavefront error (WFE) for the system without dampers was 1.5 waves; with one TMD the WFE was 0.9 waves, and with two TMDs it was 0.25 waves. This paper provides details of some of the work done in this area and includes theoretical predictions for optimum damping which were experimentally verified on a large aperture segmented system.
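For context, the classical Den Hartog rules give closed-form optimum TMD tuning for an undamped single mode of the primary structure; whether the paper's theoretical predictions follow exactly these textbook formulas (rather than a refinement for the eddy-current damper) is an assumption here:

```latex
% Den Hartog tuning for a TMD with mass ratio \mu = m_\mathrm{TMD}/m_\mathrm{mode}:
f_\mathrm{opt} = \frac{\omega_\mathrm{TMD}}{\omega_\mathrm{mode}} = \frac{1}{1+\mu},
\qquad
\zeta_\mathrm{opt} = \sqrt{\frac{3\mu}{8\,(1+\mu)^{3}}}
```

The adjustable frequency and damping of the eddy-current TMD described in the abstract are exactly the two knobs these expressions prescribe.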
High-contrast imaging with an arbitrary aperture: active correction of aperture discontinuities
NASA Astrophysics Data System (ADS)
Pueyo, Laurent; Norman, Colin; Soummer, Rémi; Perrin, Marshall; N'Diaye, Mamadou; Choquet, Elodie
2013-09-01
We present a new method to achieve high-contrast images using segmented and/or on-axis telescopes. Our approach relies on using two sequential Deformable Mirrors to compensate for the large amplitude excursions in the telescope aperture due to secondary support structures and/or segment gaps. In this configuration the parameter landscape of Deformable Mirror Surfaces that yield high contrast Point Spread Functions is not linear, and non-linear methods are needed to find the true minimum in the optimization topology. We solve the highly non-linear Monge-Ampere equation that is the fundamental equation describing the physics of phase induced amplitude modulation. We determine the optimum configuration for our two sequential Deformable Mirror system and show that high-throughput and high contrast solutions can be achieved using realistic surface deformations that are accessible using existing technologies. We name this process Active Compensation of Aperture Discontinuities (ACAD). We show that for geometries similar to JWST, ACAD can attain at least 10^-7 in contrast and an order of magnitude higher for future Extremely Large Telescopes, even when the pupil features a missing segment. We show that the converging non-linear mappings resulting from our Deformable Mirror shapes actually damp near-field diffraction artifacts in the vicinity of the discontinuities. Thus ACAD actually lowers the chromatic ringing due to diffraction by segment gaps and struts while not amplifying the diffraction at the aperture edges beyond the Fresnel regime. We illustrate the broadband properties of ACAD in the case of the pupil configuration corresponding to the Astrophysics Focused Telescope Assets. Since details about these telescopes are not yet available to the broader astronomical community, our test case is based on a geometry mimicking the actual one, to the best of our knowledge.
Twin ruptures grew to build up the giant 2011 Tohoku, Japan, earthquake.
Maercklin, Nils; Festa, Gaetano; Colombelli, Simona; Zollo, Aldo
2012-01-01
The 2011 Tohoku megathrust earthquake had an unexpected size for the region. To image the earthquake rupture in detail, we applied a novel backprojection technique to waveforms from local accelerometer networks. The earthquake began as a small-size twin rupture, slowly propagating mainly updip and triggering the break of a larger-size asperity at shallower depths, resulting in up to 50 m slip and causing high-amplitude tsunami waves. For a long time the rupture remained in a 100-150 km wide slab segment delimited by oceanic fractures, before propagating further to the southwest. The occurrence of large slip at shallow depths likely favored the propagation across contiguous slab segments and contributed to build up a giant earthquake. The lateral variations in the slab geometry may act as geometrical or mechanical barriers finally controlling the earthquake rupture nucleation, evolution and arrest.
Compaction of quasi-one-dimensional elastoplastic materials
Shaebani, M. Reza; Najafi, Javad; Farnudi, Ali; Bonn, Daniel; Habibi, Mehdi
2017-01-01
Insight into crumpling or compaction of one-dimensional objects is important for understanding biopolymer packaging and designing innovative technological devices. By compacting various types of wires in rigid confinements and characterizing the morphology of the resulting crumpled structures, here, we report how friction, plasticity and torsion enhance disorder, leading to a transition from coiled to folded morphologies. In the latter case, where folding dominates the crumpling process, we find that reducing the relative wire thickness counter-intuitively causes the maximum packing density to decrease. The segment size distribution gradually becomes more asymmetric during compaction, reflecting an increase of spatial correlations. We introduce a self-avoiding random walk model and verify that the cumulative injected wire length follows a universal dependence on segment size, allowing for the prediction of the efficiency of compaction as a function of material properties, container size and injection force. PMID:28585550
Particle size and support effects in electrocatalysis.
Hayden, Brian E
2013-08-20
Researchers increasingly recognize that, as with standard supported heterogeneous catalysts, the activity and selectivity of supported metal electrocatalysts are influenced by particle size, particle structure, and catalyst support. Studies using model supported heterogeneous catalysts have provided information about these effects. Similarly, model electrochemical studies on supported metal electrocatalysts can provide insight into the factors determining catalytic activity. High-throughput methods for catalyst synthesis and screening can determine systematic trends in activity as a function of support and particle size with excellent statistical certainty. In this Account, we describe several such studies investigating methods for dispersing precious metals on both carbon and oxide supports, with particular emphasis on the prospects for the development of low-temperature fuel-cell electrocatalysts. One key finding is a decrease in catalytic activity with decreasing particle size independent of the support for both oxygen reduction and CO oxidation on supported gold and platinum. For these reactions, there appears to be an intrinsic particle size effect that results in a loss of activity at particle sizes below 2-3 nm. A titania support, however, also increases activity of gold particles in the electrooxidation of CO and in the reduction of oxygen, with an optimum at 3 nm particle size. This optimum may represent the superposition of competing effects: a titania-induced enhanced activity versus deactivation at small particle sizes. The titania support shows catalytic activity at potentials where carbon-supported and bulk-gold surfaces are normally oxidized and CO electrooxidation is poisoned. On the other hand, platinum on amorphous titania shows a different effect: the oxygen reduction reaction is strongly poisoned in the same particle size range.
We correlated the influence of the titania support with titania-induced changes in the surface redox behavior of the platinum particles. For both supported gold and platinum particles in electrocatalysis, we observe parallels to the effects of particle size and support in the equivalent heterogeneous catalysts. Studies of model supported-metal electrocatalysts, performed efficiently using high-throughput synthetic and screening methodologies, will lead to a better understanding of the mechanisms responsible for support and particle size effects in electrocatalysis, and will drive the development of more effective and robust catalysts in the future.
Component and System Sensitivity Considerations for Design of a Lunar ISRU Oxygen Production Plant
NASA Technical Reports Server (NTRS)
Linne, Diane L.; Gokoglu, Suleyman; Hegde, Uday G.; Balasubramaniam, Ramaswamy; Santiago-Maldonado, Edgardo
2009-01-01
Component and system sensitivities of some design parameters of ISRU system components are analyzed. The differences between terrestrial and lunar excavation are discussed, and a qualitative comparison of large and small excavators is started. The effect of excavator size on the size of the ISRU plant's regolith hoppers is presented. Optimum operating conditions of both hydrogen and carbothermal reduction reactors are explored using recently developed analytical models. Design parameters such as batch size, conversion fraction, and maximum particle size are considered for a hydrogen reduction reactor while batch size, conversion fraction, number of melt zones, and methane flow rate are considered for a carbothermal reduction reactor. For both reactor types the effect of reactor operation on system energy and regolith delivery requirements is presented.
A fast and efficient segmentation scheme for cell microscopic image.
Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H
2007-04-27
Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes have good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it accounts for most of the processing time needed to segment an image. The main contribution of this work is a method to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e., images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set size reduction rate and the recognition rate. A new decision-function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probability estimation is easy to compute with Platt's method. A new segmentation scheme using probabilistic pixel classification has then been developed. This scheme has several free parameters whose selection must be automated, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of the new cell segmentation quality criterion produces efficient cell segmentation.
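Vector quantization's role here is simply to shrink the redundant pixel training database before classifier design. As a hedged illustration (plain Lloyd's k-means on toy colors, with a nearest-prototype stand-in rather than the paper's SVM):

```python
import random

random.seed(0)

def kmeans(pixels, k, iters=10):
    """Lloyd's algorithm with a deterministic spread initialization:
    quantizes a large pixel database down to k prototypes, so that a
    downstream classifier (an SVM in the paper) trains on far fewer,
    less redundant samples."""
    centers = [pixels[(len(pixels) * i) // k] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in pixels:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        centers = [tuple(sum(v) / len(g) for v in zip(*g)) if g else centers[i]
                   for i, g in enumerate(groups)]
    return centers

# two well-separated synthetic "pixel colors" (dark vs bright cells)
dark = [(10 + random.randint(-5, 5),) * 3 for _ in range(50)]
bright = [(200 + random.randint(-5, 5),) * 3 for _ in range(50)]
protos = sorted(kmeans(dark + bright, k=2))
```

Training on the k prototypes instead of every expert-labeled pixel is what shrinks the SVM's support-vector set, and hence the per-pixel decision cost at segmentation time.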
Yokoyama, Takao; Miura, Fumihito; Araki, Hiromitsu; Okamura, Kohji; Ito, Takashi
2015-08-12
Base-resolution methylome data generated by whole-genome bisulfite sequencing (WGBS) is often used to segment the genome into domains with distinct methylation levels. However, most segmentation methods include many parameters to be carefully tuned and/or fail to exploit the unsurpassed resolution of the data. Furthermore, there is no simple method that displays the composition of the domains to grasp global trends in each methylome. We propose to use changepoint detection for domain demarcation based on base-resolution methylome data. While the proposed method segments the methylome in a largely comparable manner to conventional approaches, it has only a single parameter to be tuned. Furthermore, it fully exploits the base-resolution of the data to enable simultaneous detection of methylation changes in even contrasting size ranges, such as focal hypermethylation and global hypomethylation in cancer methylomes. We also propose a simple plot termed methylated domain landscape (MDL) that globally displays the size, the methylation level and the number of the domains thus defined, thereby enabling one to intuitively grasp trends in each methylome. Since the pattern of MDL often reflects cell lineages and is largely unaffected by data size, it can serve as a novel signature of methylome. Changepoint detection in base-resolution methylome data followed by MDL plotting provides a novel method for methylome characterization and will facilitate global comparison among various WGBS data differing in size and even species origin.
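A generic single-parameter changepoint scheme in the same spirit (binary segmentation with a squared-error cost and one penalty; a textbook method, not the authors' implementation) can be sketched as:

```python
def binseg(x, penalty):
    """Recursive binary segmentation: repeatedly place the split that
    most reduces within-segment squared error, stopping when the best
    gain falls below `penalty` (the scheme's single tuning parameter).
    Returns sorted changepoint indices."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    def split(lo, hi, out):
        best_gain, best_t = 0.0, None
        whole = sse(x[lo:hi])
        for t in range(lo + 1, hi):
            gain = whole - sse(x[lo:t]) - sse(x[t:hi])
            if gain > best_gain:
                best_gain, best_t = gain, t
        if best_t is not None and best_gain > penalty:
            split(lo, best_t, out)
            out.append(best_t)
            split(best_t, hi, out)

    cps = []
    split(0, len(x), cps)
    return cps

# toy per-CpG "methylation levels": low domain, high domain, low domain
signal = [0.1] * 40 + [0.8] * 30 + [0.2] * 30
cps = binseg(signal, penalty=1.0)
```

With one penalty controlling segmentation granularity, the same run can surface both focal hypermethylation (short high segments) and global hypomethylation (long low segments), mirroring the property the abstract highlights.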
Frequency Selection for Multi-frequency Acoustic Measurement of Suspended Sediment
NASA Astrophysics Data System (ADS)
Chen, X.; HO, H.; Fu, X.
2017-12-01
Multi-frequency acoustic measurement of suspended sediment has found successful applications in marine and fluvial environments. Difficult challenges remain in improving its effectiveness and efficiency when it is applied to high concentrations and wide size distributions in rivers. We performed a multi-frequency acoustic scattering experiment in a cylindrical tank with a suspension of natural sands. The sands range from 50 to 600 μm in diameter with a lognormal size distribution. The bulk concentration of suspended sediment varied from 1.0 to 12.0 g/L. We found that the commonly used linear relationship between the intensity of acoustic backscatter and suspended sediment concentration holds only at sufficiently low concentrations, for instance below 3.0 g/L. It fails at a critical value of concentration that depends on measurement frequency and the distance between the transducer and the target point. Instead, an exponential relationship was found to work satisfactorily throughout the entire range of concentration. The coefficient and exponent of the exponential function changed, however, with the measuring frequency and distance. Considering the increased complexity of inverting the concentration values when an exponential relationship prevails, we further analyzed the relationship between measurement error and measuring frequency. It was also found that the inversion error may be effectively controlled within 5% if the frequency is properly set. Compared with concentration, grain size was found to heavily affect the selection of optimum frequency. A regression relationship for optimum frequency versus grain size was developed based on the experimental results.
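Inverting concentration through a fitted exponential is straightforward once the functional form is fixed. The abstract does not give its exponential form, so the saturating shape below (B = a(1 − e^(−bC))), the parameter values, and the grid-search fit are all assumptions for illustration:

```python
import math

def fit_saturating(cs, bs, a_grid, b_grid):
    """Least-squares grid fit of B = a*(1 - exp(-b*C)) over candidate
    (a, b) pairs; crude but dependency-free."""
    best = None
    for a in a_grid:
        for b in b_grid:
            err = sum((a * (1 - math.exp(-b * c)) - B) ** 2
                      for c, B in zip(cs, bs))
            if best is None or err < best[0]:
                best = (err, a, b)
    return best[1], best[2]

def invert(B, a, b):
    """Recover concentration from backscatter under the fitted model."""
    return -math.log(1 - B / a) / b

# synthetic calibration data following the assumed form
true_a, true_b = 10.0, 0.3
cs = [1, 2, 4, 6, 8, 10, 12]                       # concentrations, g/L
bs = [true_a * (1 - math.exp(-true_b * c)) for c in cs]
a, b = fit_saturating(cs, bs,
                      [9.0 + 0.1 * i for i in range(21)],
                      [0.2 + 0.01 * i for i in range(21)])
```

Because the fitted coefficient and exponent vary with frequency and transducer distance, a separate (a, b) pair would be calibrated per frequency, and the frequency whose inversion error is smallest for the prevailing grain size would be selected.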
NASA Technical Reports Server (NTRS)
Unnam, J.; Tenney, D. R.
1981-01-01
Exact solutions for diffusion in single phase binary alloy systems with constant diffusion coefficient and zero-flux boundary condition have been evaluated to establish the optimum zone size of applicability. Planar, cylindrical and spherical interface geometry, and finite, singly infinite, and doubly infinite systems are treated. Two solutions are presented for each geometry, one well suited to short diffusion times, and one to long times. The effect of zone-size on the convergence of these solutions is discussed. A generalized form of the diffusion solution for doubly infinite systems is proposed.
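The planar case illustrates why two complementary solutions exist; the standard textbook pair below (whose exact correspondence to the report's evaluated forms is an assumption) consists of a trigonometric series that converges quickly at long times and an error-function solution suited to short times:

```latex
% Long-time series (slab 0 <= x <= L, zero flux at both ends;
% the A_n follow from the initial concentration profile):
c(x,t) = \bar{c} + \sum_{n=1}^{\infty} A_n
         \cos\!\left(\frac{n\pi x}{L}\right)
         \exp\!\left(-\frac{n^{2}\pi^{2} D t}{L^{2}}\right)

% Short-time solution (doubly infinite couple with a step at x = x_0),
% accurate before the diffusion field reaches the boundaries:
c(x,t) = \frac{c_1 + c_2}{2} + \frac{c_2 - c_1}{2}\,
         \operatorname{erf}\!\left(\frac{x - x_0}{2\sqrt{D t}}\right)
```

The series solution needs few terms once $Dt/L^{2}$ is large, while the erf form needs no terms at all but fails once the zone size is comparable to $\sqrt{Dt}$, which is exactly the zone-size trade-off the report quantifies.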
GPU-based relative fuzzy connectedness image segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhuge Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.
2013-01-15
Purpose: Recently, clinical radiological research and practice have become increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run times on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, respectively, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match the IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the FC family has been developed on NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.
[Target volume segmentation of PET images by an iterative method based on threshold value].
Castro, P; Huerga, C; Glaría, L A; Plaza, R; Rodado, S; Marín, M D; Mañas, A; Serrada, A; Núñez, L
2014-01-01
An automatic segmentation method is presented for PET images, based on an iterative threshold approximation that includes the influence of both lesion size and the background present during acquisition. Optimal threshold values representing a correct segmentation of volumes were determined in a PET phantom study containing spheres of different sizes in different known background environments. These optimal values were normalized to background and adjusted by regression techniques to a function of two variables: lesion volume and signal-to-background ratio (SBR). This adjustment function was used to build an iterative segmentation method and, based on this function, a procedure for automatic delineation was proposed. The procedure was validated on phantom images and its viability was confirmed by applying it retrospectively to two oncology patients. The resulting adjustment function had a linear dependence on the SBR and a negative, inversely proportional dependence on the volume. During validation of the proposed method, volume deviations with respect to the real value and the CT volume were below 10% and 9%, respectively, except for lesions with a volume below 0.6 ml. The automatic segmentation method proposed can be applied in clinical practice to tumor radiotherapy treatment planning in a simple and reliable way, with a precision close to the resolution of PET images. Copyright © 2013 Elsevier España, S.L.U. and SEMNIM. All rights reserved.
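The iterative idea can be sketched on a synthetic, noise-free "PET" volume: a hot sphere on a uniform background. The adjustment function below is a hypothetical stand-in reproducing only the trends the abstract reports (linear in SBR, inversely proportional in volume), not the fitted function from the phantom study.

```python
import numpy as np

def adjusted_fraction(volume_vox, sbr):
    # Hypothetical adjustment function: threshold fraction rises for
    # small volumes and varies linearly with SBR (coefficients made up).
    return float(np.clip(0.30 + 0.01 * sbr + 25.0 / volume_vox, 0.2, 0.8))

def iterative_segment(img, f0=0.5, max_iter=20):
    bg = float(np.median(img))            # background estimate
    peak = float(img.max())
    f, prev_vol = f0, -1
    for _ in range(max_iter):
        thr = bg + f * (peak - bg)        # threshold between bg and peak
        mask = img > thr
        vol = int(mask.sum())
        if vol == prev_vol:               # volume unchanged: converged
            break
        prev_vol = vol
        sbr = float(img[mask].mean()) / bg
        f = adjusted_fraction(vol, sbr)
    return mask

# Synthetic volume: background 1.0, sphere of activity 5.0 (SBR = 5).
z, y, x = np.mgrid[:40, :40, :40]
dist2 = (z - 20) ** 2 + (y - 20) ** 2 + (x - 20) ** 2
img = np.where(dist2 <= 8 ** 2, 5.0, 1.0)
mask = iterative_segment(img)
```

On this idealized phantom the iteration settles after one update; on real data convergence depends on noise and the fitted adjustment function.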
LDR segmented mirror technology assessment study
NASA Technical Reports Server (NTRS)
Krim, M.; Russo, J.
1983-01-01
In the mid-1990s, NASA plans to orbit a giant telescope, whose aperture may be as great as 30 meters, for infrared and sub-millimeter astronomy. Its primary mirror will be deployed or assembled in orbit from a mosaic of possibly hundreds of mirror segments. Each segment must be shaped to precise curvature tolerances so that diffraction-limited performance will be achieved at 30 microns (nominal operating wavelength). All panels must lie within 1 micron of the theoretical surface described by the optical prescription of the telescope's primary mirror. To attain diffraction-limited performance, the issues of alignment and/or position sensing, position control to micron tolerances, and structural, thermal, and mechanical considerations for stowing, deploying, and erecting the reflector must be resolved. Radius of curvature precision influences panel size, shape, material, and type of construction. Two superior material choices emerged: fused quartz (sufficiently homogeneous with respect to thermal expansivity to permit a thin shell substrate to be drape molded between graphite dies to an off-axis asphere precise enough for optical finishing in the as-received condition) and Pyrex or Duran (less expensive than quartz and formable at lower temperatures). The optimal reflector panel size is between 1-1/2 and 2 meters. Making one two-meter mirror every two weeks requires new approaches to manufacturing off-axis parabolic or aspheric segments (drape molding on precision dies and subsequent finishing on a non-rotationally symmetric machine). Proof-of-concept development programs were identified to prove the feasibility of the materials and manufacturing ideas.
Do Indo-Asians have smaller coronary arteries?
Lip, G Y; Rathore, V S; Katira, R; Watson, R D; Singh, S P
1999-08-01
There is a widespread belief that coronary arteries are smaller in Indo-Asians. The aim of the present study was to compare the size of atheroma-free proximal and distal epicardial coronary arteries of Indo-Asians and Caucasians. We analysed normal coronary angiograms from 77 Caucasians and 39 Indo-Asians. The two groups were comparable for dominance of the coronary arteries. Indo-Asian patients had generally smaller coronary arteries, with a statistically significant difference in the mean diameters of the left main coronary artery, proximal, mid and left anterior descending, and proximal and distal right coronary artery segments. There was a non-significant trend towards smaller coronary artery segment diameters for the distal left anterior descending, proximal and distal circumflex, and obtuse marginal artery segments. However, after correction for body surface area, none of these differences in size were statistically significant. Thus, the smaller coronary arteries in Indo-Asian patients were explained by body size alone and were not due to ethnic origin per se. This finding nevertheless has important therapeutic implications, since smaller coronary arteries may give rise to technical difficulties during bypass graft and intervention procedures such as percutaneous transluminal coronary angioplasty, stents and atherectomy. On smaller arteries, atheroma may also give an impression of more severe disease than on larger diameter arteries.
NASA Astrophysics Data System (ADS)
Morfa, Carlos Recarey; Farias, Márcio Muniz de; Morales, Irvin Pablo Pérez; Navarra, Eugenio Oñate Ibañez de; Valera, Roberto Roselló
2018-04-01
The influence of the microstructural heterogeneities is an important topic in the study of materials. In the context of computational mechanics, it is therefore necessary to generate virtual materials that are statistically equivalent to the microstructure under study, and to connect that geometrical description to the different numerical methods. Herein, the authors present a procedure to model continuous solid polycrystalline materials, such as rocks and metals, preserving their representative statistical grain size distribution. The first phase of the procedure consists of segmenting an image of the material into adjacent polyhedral grains representing the individual crystals. This segmentation allows estimating the grain size distribution, which is used as the input for an advancing front sphere packing algorithm. Finally, Laguerre diagrams are calculated from the obtained sphere packings. The centers of the spheres give the centers of the Laguerre cells, and their radii determine the cells' weights. The cell sizes in the obtained Laguerre diagrams have a distribution similar to that of the grains obtained from the image segmentation. That is why those diagrams are a convenient model of the original crystalline structure. The above-outlined procedure has been used to model real polycrystalline metallic materials. The main difference with previously existing methods lies in the use of a better particle packing algorithm.
Al-Busaidi, Asiya M; Khriji, Lazhar; Touati, Farid; Rasid, Mohd Fadlee; Mnaouer, Adel Ben
2017-09-12
One of the major issues in time-critical medical applications using wireless technology is the size of the payload packet, which is generally designed to be very small to improve the transmission process. Using small packets to transmit continuous ECG data is still costly, so data compression is commonly used to reduce the huge amount of ECG data transmitted through telecardiology devices. In this paper, a new ECG compression scheme is introduced to ensure that the compressed ECG segments fit into the available limited payload packets, while maintaining a fixed compression ratio (CR) to preserve the diagnostic information. The scheme automatically divides the ECG block into segments, while keeping the other compression parameters fixed. It adopts the discrete wavelet transform (DWT) to decompose the ECG data, a bit-field preserving (BFP) method to preserve the quality of the DWT coefficients, and a modified run-length encoding (RLE) scheme to encode the coefficients. The proposed dynamic compression scheme showed promising results, with a percentage packet reduction (PR) of about 85.39% at low percentage root-mean-square difference (PRD) values of less than 1%. ECG records from the MIT-BIH Arrhythmia Database were used to test the proposed method. The simulation results showed promising performance that satisfies the needs of portable telecardiology systems, such as the limited payload size and low power consumption.
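The stages of such a pipeline can be sketched with a one-level Haar DWT, coefficient thresholding, a run-length encoder for the resulting zero-mask, and the PRD quality metric. This is a simplified stand-in, not the paper's BFP/modified-RLE scheme, and the test signal is a synthetic sine rather than an MIT-BIH record.

```python
import numpy as np

def haar(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return a, d

def ihaar(a, d):
    """Inverse of the one-level Haar DWT."""
    x = np.empty(a.size * 2)
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def rle(mask):
    """Run-length encode a boolean mask as (value, run-length) pairs."""
    runs, cur, count = [], bool(mask[0]), 1
    for b in mask[1:]:
        if bool(b) == cur:
            count += 1
        else:
            runs.append((cur, count))
            cur, count = bool(b), 1
    runs.append((cur, count))
    return runs

def prd(x, xr):
    """Percentage root-mean-square difference between x and xr."""
    return 100.0 * np.linalg.norm(x - xr) / np.linalg.norm(x)

t = np.arange(256) / 256.0
ecg = np.sin(2 * np.pi * t)               # smooth synthetic "ECG" segment
a, d = haar(ecg)
d_c = np.where(np.abs(d) < 0.05, 0.0, d)  # drop small detail coefficients
recon = ihaar(a, d_c)
```

For this smooth segment the PRD stays well under the sub-1% regime the paper targets on real ECG only because the signal is trivially compressible; the point is the shape of the pipeline, not the number.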
Accurate segmentation of lung fields on chest radiographs using deep convolutional networks
NASA Astrophysics Data System (ADS)
Arbabshirani, Mohammad R.; Dallal, Ahmed H.; Agarwal, Chirag; Patel, Aalpan; Moore, Gregory
2017-02-01
Accurate segmentation of lung fields on chest radiographs is the primary step for computer-aided detection of various conditions such as lung cancer and tuberculosis. The size, shape, and texture of the lung fields are key parameters for chest X-ray (CXR) based lung disease diagnosis, and although many methods have been proposed, lung field segmentation remains a challenge. In recent years, deep learning has shown state-of-the-art performance in many visual tasks such as object detection, image classification, and semantic image segmentation. In this study, we propose a deep convolutional neural network (CNN) framework for segmentation of lung fields. The algorithm was developed and tested on 167 clinical posterior-anterior (PA) CXR images collected retrospectively from the picture archiving and communication system (PACS) of Geisinger Health System. The proposed multi-scale network is composed of five convolutional and two fully connected layers. The framework achieved an IOU (intersection over union) of 0.96 on the testing dataset as compared to manual segmentation, outperforming state-of-the-art registration-based segmentation by a significant margin. To our knowledge, this is the first deep learning based study of lung field segmentation on CXR images developed on a heterogeneous clinical dataset. The results suggest that convolutional neural networks could be employed reliably for lung field segmentation.
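The IOU score used above is the ratio of overlap to union between the predicted and manual masks; a minimal sketch on toy binary masks:

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary segmentation masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0

truth = np.zeros((8, 8), bool)
truth[2:6, 2:6] = True            # 16-pixel reference "lung field"
pred = np.zeros((8, 8), bool)
pred[2:6, 3:7] = True             # prediction shifted one column right
score = iou(pred, truth)          # 12 overlapping / 20 union pixels = 0.6
```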
Alp, Murat; Cucinotta, Francis A.
2017-01-01
Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (>100 μm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch. PMID:28554507
Rietschel, Marcella; Mattheisen, Manuel; Breuer, René; Schulze, Thomas G.; Nöthen, Markus M.; Levinson, Douglas; Shi, Jianxin; Gejman, Pablo V.; Cichon, Sven; Ophoff, Roel A.
2012-01-01
Recent studies suggest that variation in complex disorders (e.g., schizophrenia) is explained by a large number of genetic variants with small effect size (Odds Ratio∼1.05–1.1). The statistical power to detect these genetic variants in Genome Wide Association (GWA) studies with large numbers of cases and controls (∼15,000) is still low. As it will be difficult to further increase sample size, we decided to explore an alternative method for analyzing GWA data in a study of schizophrenia, dramatically reducing the number of statistical tests. The underlying hypothesis was that at least some of the genetic variants related to a common outcome are collocated in segments of chromosomes at a wider scale than single genes. Our approach was therefore to study the association between relatively large segments of DNA and disease status. An association test was performed for each SNP and the number of nominally significant tests in a segment was counted. We then performed a permutation-based binomial test to determine whether this region contained significantly more nominally significant SNPs than expected under the null hypothesis of no association, taking linkage into account. Genome Wide Association data from three independent schizophrenia case/control cohorts of European ancestry (Dutch, German, and US) were analyzed using segments of DNA of variable length (2 to 32 Mbp). Using this approach we identified a region at chromosome 5q23.3-q31.3 (128–160 Mbp) that was significantly enriched with nominally associated SNPs in three independent case-control samples. We conclude that considering relatively wide segments of chromosomes may reveal reliable relationships between the genome and schizophrenia, suggesting novel methodological possibilities as well as raising theoretical questions. PMID:22723893
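The counting step can be sketched with a plain binomial tail: count nominally significant SNPs in a segment and ask how surprising that count is under Binomial(n, alpha). Note the paper uses a permutation-based version precisely because linkage disequilibrium makes neighboring SNPs non-independent; the independent-SNP tail below is illustrative only, and the p-values are made up.

```python
from math import comb

def enrichment_p(k, n, alpha=0.05):
    """P(X >= k) for X ~ Binomial(n, alpha): upper-tail enrichment p-value."""
    return sum(comb(n, i) * alpha ** i * (1 - alpha) ** (n - i)
               for i in range(k, n + 1))

# Hypothetical per-SNP association p-values within one DNA segment.
pvals = [0.001, 0.03, 0.2, 0.04, 0.5, 0.01, 0.6, 0.7, 0.02, 0.9]
k = sum(p < 0.05 for p in pvals)      # nominally significant SNPs
p_seg = enrichment_p(k, len(pvals))   # segment-level enrichment
```

Five of ten SNPs significant at alpha = 0.05 is far more than the 0.5 expected, so the segment-level p-value is tiny.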
NASA Astrophysics Data System (ADS)
Alp, Murat; Cucinotta, Francis A.
2017-05-01
Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (> 100 μm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch.
A novel measure and significance testing in data analysis of cell image segmentation.
Wu, Jin Chu; Halter, Michael; Kacker, Raghu N; Elliott, John T; Plant, Anne L
2017-03-14
Cell image segmentation (CIS) is an essential part of quantitative imaging of biological cells. Designing a performance measure and conducting significance testing are critical for evaluating and comparing CIS algorithms for image-based cell assays in cytometry. Many measures and methods have been proposed and implemented to evaluate segmentation methods. However, computing the standard errors (SE) of the measures and their correlation coefficient has not been described, and thus the statistical significance of performance differences between CIS algorithms cannot be assessed. We propose the total error rate (TER), a novel performance measure for segmenting all cells in supervised evaluation. The TER statistically aggregates all misclassification error rates (MER), which measure the segmentation of each single cell in the population, taking cell sizes as weights. The TER is fully supported by pairwise comparisons of MERs using 106 manually segmented ground-truth cells of different sizes and seven CIS algorithms taken from ImageJ. Further, the SE and 95% confidence interval (CI) of the TER are computed from the SE of the MER, which is calculated using the bootstrap method. An algorithm for computing the correlation coefficient of TERs between two CIS algorithms is also provided. Hence, the 95% CI error bars can be used to classify CIS algorithms, and the SEs of TERs and their correlation coefficient can be employed to conduct hypothesis testing, when the CIs overlap, to determine the statistical significance of the performance differences between CIS algorithms. In summary, a novel measure, the TER, of CIS is proposed, its SEs and correlation coefficient are computed, and CIS algorithms can thereafter be evaluated and compared statistically by conducting significance testing.
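The size-weighted aggregation and the bootstrap SE can be sketched directly from the definitions in the abstract. The per-cell MERs and sizes below are made-up values, not the study's 106 ground-truth cells.

```python
import numpy as np

def total_error_rate(mer, size):
    """TER: per-cell misclassification error rates aggregated with
    cell sizes as weights."""
    mer, size = np.asarray(mer, float), np.asarray(size, float)
    return float(np.sum(mer * size) / np.sum(size))

def bootstrap_se(mer, size, n_boot=2000, seed=0):
    """Bootstrap standard error of the TER over the cell population."""
    mer, size = np.asarray(mer, float), np.asarray(size, float)
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, mer.size, size=(n_boot, mer.size))
    stats = [total_error_rate(mer[i], size[i]) for i in idx]
    return float(np.std(stats, ddof=1))

mer = [0.10, 0.30, 0.05, 0.20]    # hypothetical per-cell error rates
size = [300, 100, 500, 100]       # cell sizes (pixels) used as weights
ter = total_error_rate(mer, size)
se = bootstrap_se(mer, size)
```

Large, well-segmented cells pull the TER down here: the weighted value (0.105) sits below the unweighted mean (0.1625).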
Alp, Murat; Cucinotta, Francis A
2017-05-01
Changes to cognition, including memory, following radiation exposure are a concern for cosmic ray exposures to astronauts and in Hadron therapy with proton and heavy ion beams. The purpose of the present work is to develop computational methods to evaluate microscopic energy deposition (ED) in volumes representative of neuron cell structures, including segments of dendrites and spines, using a stochastic track structure model. A challenge for biophysical models of neuronal damage is the large sizes (>100 µm) and variability in volumes of possible dendritic segments and pre-synaptic elements (spines and filopodia). We consider cylindrical and spherical microscopic volumes of varying geometric parameters and aspect ratios from 0.5 to 5 irradiated by protons, and 3He and 12C particles at energies corresponding to a distance of 1 cm to the Bragg peak, which represent particles of interest in Hadron therapy as well as space radiation exposure. We investigate the optimal axis length of dendritic segments to evaluate microscopic ED and hit probabilities along the dendritic branches at a given macroscopic dose. Because of large computation times to analyze ED in volumes of varying sizes, we developed an analytical method to find the mean primary dose in spheres that can guide numerical methods to find the primary dose distribution for cylinders. Considering cylindrical segments of varying aspect ratio at constant volume, we assess the chord length distribution, mean number of hits and ED profiles by primary particles and secondary electrons (δ-rays). For biophysical modeling applications, segments on dendritic branches are proposed to have equal diameters and axes lengths along the varying diameter of a dendritic branch. Copyright © 2017. Published by Elsevier Ltd.
Measurement of foliar deposits of Bt and their relation to efficacy
P. G. Fast; E. G. Kettela; C. Wiesner
1985-01-01
Interest in and discussion of the relationship between droplet spectrum emitted and droplet spectrum deposited, spray cloud behaviour, the relationship between droplets deposited and efficacy, and optimum droplet size, has increased in recent years and has resulted in a number of collaborative studies addressing aspects of these questions. The questions are...
Taufiqurrahmi, Niken; Mohamed, Abdul Rahman; Bhatia, Subhash
2011-11-01
The catalytic cracking of waste cooking palm oil to biofuel was studied over different types of nano-crystalline zeolite catalysts in a fixed bed reactor. The effects of reaction temperature (400-500 °C), catalyst-to-oil ratio (6-14), and catalyst pore size of the different nanocrystalline zeolites (0.54-0.80 nm) on the conversion of waste cooking palm oil and on the yields of organic liquid product (OLP) and of the gasoline fraction in the OLP were studied following a central composite design (CCD). Response surface methodology was used to determine the optimum values of the operating variables for maximum conversion as well as maximum yields of OLP and gasoline fraction. The optimum reaction temperature of 458 °C with an oil/catalyst ratio of 6 over nanocrystalline zeolite Y with a pore size of 0.67 nm gave 86.4 wt% oil conversion, 46.5 wt% OLP yield, and 33.5 wt% gasoline fraction yield. The experimental results agreed with the simulated values within an experimental error of less than 5%. Copyright © 2011 Elsevier Ltd. All rights reserved.
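The response-surface step can be illustrated on a single factor: fit a quadratic to conversion versus temperature and take the stationary point as the optimum. The data below are synthetic, peaked at 458 °C only to mirror the reported optimum; the study fits a multi-factor CCD model, not this one-variable slice.

```python
import numpy as np

T = np.array([400.0, 425.0, 450.0, 475.0, 500.0])   # design temperatures, °C
conv = 86.4 - 0.01 * (T - 458.0) ** 2               # synthetic conversion, wt%

# Quadratic response model conv = b2*T^2 + b1*T + b0; its vertex is the
# stationary (optimum) point of the fitted surface slice.
b2, b1, b0 = np.polyfit(T, conv, 2)
T_opt = -b1 / (2.0 * b2)
conv_opt = np.polyval([b2, b1, b0], T_opt)
```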
Ifoulis, A A; Savopoulou-Soultani, M
2006-10-01
The purpose of this research was to quantify the spatial pattern and develop a sampling program for larvae of Lobesia botrana Denis and Schiffermüller (Lepidoptera: Tortricidae), an important vineyard pest in northern Greece. Taylor's power law and Iwao's patchiness regression were used to model the relationship between the mean and the variance of larval counts. Analysis of covariance was carried out, separately for infestation and injury, on combined second- and third-generation data, for vine and half-vine sample units. Common regression coefficients were estimated to permit use of the sampling plan over a wide range of conditions. Optimum sample sizes for infestation and injury were developed at three levels of precision. An investigation of a multistage sampling plan with a nested analysis of variance showed that if sampling focuses on larval infestation, three grape clusters should be sampled per half-vine; if it focuses on injury, two grape clusters per half-vine are recommended.
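The standard route from Taylor's power law to an optimum sample size is s² = a·m^b, giving n = t²·a·m^(b−2)/D² for a precision D expressed as the standard error divided by the mean. The coefficients a and b below are placeholders, not the values fitted in this study.

```python
from math import ceil

def optimum_sample_size(mean, a, b, precision, t=1.96):
    """Sample size for fixed precision D (SE as a fraction of the mean),
    with variance modeled by Taylor's power law s^2 = a * m^b."""
    return ceil(t ** 2 * a * mean ** (b - 2) / precision ** 2)

# Hypothetical aggregated counts: a = 1.8, b = 1.4, mean density 2.5 larvae.
n = optimum_sample_size(mean=2.5, a=1.8, b=1.4, precision=0.25)
```

With b < 2 (as for many aggregated insect counts) the required sample size shrinks as mean density rises, which is why such plans are density-dependent.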
Arshadi, M; Mousavi, S M
2014-12-01
Computer printed circuit boards (CPCBs) have a rich metal content and are produced in high volume, making them an important component of electronic waste. The present study used a pure culture of Acidithiobacillus ferrooxidans to leach Cu and Ni from CPCBs waste. The adaptation phase began at 1 g/l CPCBs powder with 10% inoculation, and a final pulp density of 20 g/l was reached after about 80 d. Four effective factors, including initial pH, particle size, pulp density, and initial Fe(3+) concentration, were optimized to achieve maximum simultaneous recovery of Cu and Ni. Their interactions were also identified using a central composite design in response surface methodology. The suggested optimal conditions were initial pH 3, initial Fe(3+) 8.4 g/l, pulp density 20 g/l, and particle size 95 μm. Nearly 100% of the Cu and Ni was simultaneously recovered under optimum conditions. Finally, bacterial growth characteristics versus time at optimum conditions were plotted. Copyright © 2014 Elsevier Ltd. All rights reserved.
Experimental evaluation of optimization method for developing ultraviolet barrier coatings
NASA Astrophysics Data System (ADS)
Gonome, Hiroki; Okajima, Junnosuke; Komiya, Atsuki; Maruyama, Shigenao
2014-01-01
Ultraviolet (UV) barrier coatings can be used to protect many industrial products from UV attack. This study introduces a method of optimizing UV barrier coatings using pigment particles. The radiative properties of the pigment particles were evaluated theoretically, and the optimum particle size was decided from the absorption efficiency and the back-scattering efficiency. UV barrier coatings were prepared with zinc oxide (ZnO) and titanium dioxide (TiO2). The transmittance of the UV barrier coating was calculated theoretically. The radiative transfer in the UV barrier coating was modeled using the radiation element method by ray emission model (REM2). In order to validate the calculated results, the transmittances of these coatings were measured by a spectrophotometer. A UV barrier coating with a low UV transmittance and high VIS transmittance could be achieved. The calculated transmittance showed a similar spectral tendency with the measured one. The use of appropriate particles with optimum size, coating thickness and volume fraction will result in effective UV barrier coatings. UV barrier coatings can be achieved by the application of optical engineering.
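A single-scattering view gives a feel for why particle size, volume fraction, and thickness trade off: for spherical pigment particles of diameter d at volume fraction f in a coating of thickness L, the optical depth is tau = 1.5·Q·f·L/d, where Q is the extinction efficiency at the wavelength of interest. This Beer-Lambert sketch ignores multiple scattering, which the paper's REM2 radiative transfer model handles properly, and the ZnO-like efficiencies below are illustrative numbers only.

```python
from math import exp

def transmittance(q, f, thickness, diameter):
    """Beer-Lambert transmittance of a particulate coating:
    tau = 1.5 * Q * f * L / d  (single scattering, no matrix absorption)."""
    return exp(-1.5 * q * f * thickness / diameter)

# Hypothetical pigment: strong extinction in the UV, weak in the visible.
t_uv = transmittance(q=2.0, f=0.05, thickness=20e-6, diameter=0.25e-6)
t_vis = transmittance(q=0.1, f=0.05, thickness=20e-6, diameter=0.25e-6)
```

The same coating then blocks UV (t_uv near zero) while staying largely transparent in the visible, which is the design target stated above.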
Fuzzy hidden Markov chains segmentation for volume determination and quantitation in PET.
Hatt, M; Lamare, F; Boussion, N; Turzo, A; Collet, C; Salzenstein, F; Roux, C; Jarritt, P; Carson, K; Cheze-Le Rest, C; Visvikis, D
2007-06-21
Accurate volume of interest (VOI) estimation in PET is crucial in different oncology applications such as response-to-therapy evaluation and radiotherapy treatment planning. The objective of our study was to compare the performance of the proposed algorithm for automatic lesion volume delineation, namely fuzzy hidden Markov chains (FHMC), with that of the threshold-based techniques that are the current state of the art in clinical practice. Like the classical hidden Markov chain (HMC) algorithm, FHMC takes into account noise, voxel intensity and spatial correlation in order to classify a voxel as background or functional VOI. However, the novelty of the fuzzy model consists of the inclusion of an estimation of imprecision, which should subsequently lead to a better modelling of the 'fuzzy' nature of the object-of-interest boundaries in emission tomography data. The performance of the algorithms was assessed on both simulated and acquired datasets of the IEC phantom, covering a large range of spherical lesion sizes (from 10 to 37 mm), contrast ratios (4:1 and 8:1) and image noise levels. Both lesion activity recovery and VOI determination tasks were assessed in reconstructed images using two different voxel sizes (8 mm³ and 64 mm³). In order to account for both the functional volume location and its size, the concept of % classification errors was introduced in the evaluation of volume segmentation using the simulated datasets. Results reveal that FHMC performs substantially better than the threshold-based methodology for functional volume determination or activity concentration recovery considering a contrast ratio of 4:1 and lesion sizes of <28 mm. Furthermore, differences between the classification and volume estimation errors evaluated were smaller for the segmented volumes provided by the FHMC algorithm. Finally, the performance of the automatic algorithms was less susceptible to image noise levels than the threshold-based techniques.
The analysis of both simulated and acquired datasets led to similar results and conclusions as far as the performance of segmentation algorithms under evaluation is concerned.
Transposon-containing DNA cloning vector and uses thereof
Berg, C.M.; Berg, D.E.; Wang, G.
1997-07-08
The present invention discloses a rapid method of restriction mapping, sequencing or localizing genetic features in a segment of deoxyribonucleic acid (DNA) that is up to 42 kb in size. The method in part comprises cloning of the DNA segment in a specialized cloning vector and then isolating nested deletions in either direction in vivo by intramolecular transposition into the cloned DNA. A plasmid has been prepared and disclosed. 4 figs.
Transposon-containing DNA cloning vector and uses thereof
Berg, Claire M.; Berg, Douglas E.; Wang, Gan
1997-01-01
The present invention discloses a rapid method of restriction mapping, sequencing or localizing genetic features in a segment of deoxyribonucleic acid (DNA) that is up to 42 kb in size. The method in part comprises cloning of the DNA segment in a specialized cloning vector and then isolating nested deletions in either direction in vivo by intramolecular transposition into the cloned DNA. A plasmid has been prepared and disclosed.
Segmented polynomial taper equation incorporating years since thinning for loblolly pine plantations
A. Gordon Holley; Thomas B. Lynch; Charles T. Stiff; William Stansfield
2010-01-01
Data from 108 trees felled from 16 loblolly pine stands owned by Temple-Inland Forest Products Corp. were used to determine effects of years since thinning (YST) on stem taper using the Max-Burkhart type segmented polynomial taper model. Sample tree YST ranged from two to nine years prior to destructive sampling. In an effort to equalize sample sizes, tree data were...
Scale-based fuzzy connectivity: a novel image segmentation methodology and its validation
NASA Astrophysics Data System (ADS)
Saha, Punam K.; Udupa, Jayaram K.
1999-05-01
This paper extends a previously reported theory and algorithms for fuzzy connected object definition. It introduces 'object scale' for determining the neighborhood size used to define affinity, the degree of local hanging-togetherness between image elements. Object scale allows us to use a varying neighborhood size in different parts of the image. This paper argues that scale-based fuzzy connectivity is natural in object definition and demonstrates that it leads to more effective object segmentation than fuzzy connectedness without scale. Affinity is described as consisting of a homogeneity-based and an object-feature-based component. Families of non-scale-based and scale-based affinity relations are constructed. An effective method for obtaining a rough estimate of scale at different locations in the image is presented. The original theoretical and algorithmic framework remains more or less the same, but considerably improved segmentations result. A quantitative statistical comparison between the non-scale-based and scale-based methods was made on phantom images generated from patient MR brain studies by first segmenting the objects and then adding noise, blurring, and a background component. Both the statistical and the subjective tests clearly indicate the superiority of the scale-based method in capturing details and in robustness to noise.
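The two affinity components named above can be sketched for a pair of adjacent pixel intensities: a homogeneity term that is high when the two intensities are close, and an object-feature term that is high when they lie near an expected object intensity. The Gaussian forms and parameter values are illustrative choices, not the paper's fitted functions, and the scale-based extension (adapting neighborhood size to local object scale) is not shown.

```python
from math import exp

def affinity(i1, i2, sigma_h=10.0, mu_obj=100.0, sigma_o=20.0):
    """Affinity between two adjacent pixel intensities: the minimum of a
    homogeneity component and an object-feature component."""
    hom = exp(-((i1 - i2) ** 2) / (2.0 * sigma_h ** 2))   # similar intensities
    mid = 0.5 * (i1 + i2)
    obj = exp(-((mid - mu_obj) ** 2) / (2.0 * sigma_o ** 2))  # near object value
    return min(hom, obj)
```

Taking the minimum makes a pixel pair strongly connected only when it is both internally homogeneous and plausibly part of the object.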
3D prostate TRUS segmentation using globally optimized volume-preserving prior.
Qiu, Wu; Rajchl, Martin; Guo, Fumin; Sun, Yue; Ukwatta, Eranga; Fenster, Aaron; Yuan, Jing
2014-01-01
An efficient and accurate segmentation of 3D transrectal ultrasound (TRUS) images plays an important role in the planning and treatment of practical 3D TRUS-guided prostate biopsy. However, a meaningful segmentation of 3D TRUS images tends to suffer from US speckle, shadowing, missing edges, etc., which makes it challenging to delineate the correct prostate boundaries. In this paper, we propose a novel convex-optimization-based approach to extracting the prostate surface from a given 3D TRUS image while preserving a new global volume-size prior. In particular, we study the proposed combinatorial optimization problem by convex relaxation and introduce its dual continuous max-flow formulation with the new bounded flow conservation constraint, which results in an efficient numerical solver implemented on GPUs. Experimental results using 12 patient 3D TRUS images show that the proposed approach, while preserving the volume-size prior, yielded a mean DSC of 89.5% +/- 2.4%, a MAD of 1.4 +/- 0.6 mm, a MAXD of 5.2 +/- 3.2 mm, and a VD of 7.5% +/- 6.2% in about 1 minute, demonstrating the advantages of both accuracy and efficiency. In addition, the low standard deviation of the segmentation accuracy shows a good reliability of the proposed approach.
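The reported metrics are standard segmentation-evaluation quantities; a minimal sketch of the Dice similarity coefficient (DSC) and one common definition of volume difference (VD) on binary masks (the NumPy formulation and the 2D toy masks, standing in for 3D TRUS volumes, are assumptions, not the authors' code):

```python
import numpy as np

def dice_coefficient(seg, ref):
    """Dice similarity coefficient between two binary masks (0..1, 1 = identical):
    twice the overlap divided by the sum of the two volumes."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    intersection = np.logical_and(seg, ref).sum()
    return 2.0 * intersection / (seg.sum() + ref.sum())

def volume_difference(seg, ref):
    """Relative volume difference |V_seg - V_ref| / V_ref, as a percentage."""
    v_seg, v_ref = seg.astype(bool).sum(), ref.astype(bool).sum()
    return 100.0 * abs(v_seg - v_ref) / v_ref

# Toy 2D example: a 16-pixel reference and a 12-pixel segmentation
ref = np.zeros((8, 8), dtype=bool); ref[2:6, 2:6] = True
seg = np.zeros((8, 8), dtype=bool); seg[2:6, 2:5] = True
print(dice_coefficient(seg, ref))   # 2*12/(12+16) ~= 0.857
print(volume_difference(seg, ref))  # |12-16|/16 = 25.0
```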
NASA Astrophysics Data System (ADS)
Maier, Oskar; Wilms, Matthias; von der Gablentz, Janina; Krämer, Ulrike; Handels, Heinz
2014-03-01
Automatic segmentation of ischemic stroke lesions in magnetic resonance (MR) images is important in clinical practice and for neuroscientific trials. The key problem is to detect largely inhomogeneous regions of varying sizes, shapes and locations. We present a stroke lesion segmentation method based on local features extracted from multi-spectral MR data that are selected to model a human observer's discrimination criteria. A support vector machine classifier is trained on expert-segmented examples and then used to classify formerly unseen images. Leave-one-out cross validation on eight datasets with lesions of varying appearances is performed, showing our method to compare favourably with other published approaches in terms of accuracy and robustness. Furthermore, we compare a number of feature selectors and closely examine each feature's and MR sequence's contribution.
Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue
NASA Astrophysics Data System (ADS)
Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.
2018-02-01
Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five preprocessing techniques and six segmentation algorithms. While all preprocessing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.
CFD mixing analysis of axially opposed rows of jets injected into confined crossflow
NASA Technical Reports Server (NTRS)
Bain, D. B.; Smith, C. E.; Holdeman, J. D.
1993-01-01
A computational fluid dynamics (CFD) parametric study was performed to analyze axially opposed rows of jets mixing with crossflow in a rectangular duct. Isothermal analysis was conducted to determine the influence of lateral geometric arrangement on mixing. Two lateral arrangements were analyzed: (1) inline (jets' centerlines aligned with each other on top and bottom walls), and (2) staggered (jets' centerlines offset with each other on top and bottom walls). For a jet-to-mainstream mass flow ratio (MR) of 2.0, design parameters were systematically varied for jet-to-mainstream momentum-flux ratios (J) between 16 and 64 and orifice spacing-to-duct height ratios (S/H) between 0.125 and 1.5. Comparisons were made between geometries optimized for S/H at a specified J. Inline configurations had a unique spacing for best mixing at a specified J. In contrast, staggered configurations had two 'good mixing' spacings for each J, one corresponding to optimum inline spacing and the other corresponding to optimum non-impinging jet spacing. The inline configurations, due to their smaller orifice size at optimum S/H, produced better initial mixing characteristics. At downstream locations (e.g. x/H of 1.5), the optimum non-impinging staggered configuration produced better mixing than the optimum inline configuration for J of 64; the opposite results were observed for J of 16. Increasing J resulted in better mixing characteristics if each configuration was optimized with respect to orifice spacing. Mixing performance was shown to be similar to results from previous dilution jet mixing investigations (MR less than 0.5).
NASA Astrophysics Data System (ADS)
Zhang, Ziyang; Sun, Di; Han, Tongshuai; Guo, Chao; Liu, Jin
2016-10-01
In non-invasive blood component measurement using near-infrared spectroscopy, the useful signals caused by concentration variations in the components of interest, such as glucose, hemoglobin and albumin, are relatively weak and easily disturbed by noise from various sources. We improved the signals by using the optimum path-length for each wavelength, which maximizes the variation of transmitted light intensity when the concentration of a component varies. After path-length optimization for every wavelength in 1000-2500 nm, we present the detection limits for glucose, hemoglobin and albumin when measuring them in a tissue phantom. The evaluated detection limits represent the best reachable precision level, since the evaluation assumes a measurement with a high signal-to-noise ratio (SNR) at the optimum path-length. From the results, available wavelengths in 1000-2500 nm for measuring the three components can be screened by comparing their detection limits with the measurement requirements. The detection limits of other blood components could be evaluated with the same method. Moreover, we use an equation to estimate the absorbance at the optimum path-length for every wavelength in 1000-2500 nm caused by the three components. This offers an easy way to perform the evaluation, because adjusting the sample cell's size to the precise path-length for every wavelength is not necessary. The equation could also be applied to the measurement of other blood components at the optimum path-length for each used wavelength.
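The idea of an optimum path-length can be sketched under a simple Beer-Lambert model, I = I0*exp(-mu*L): the sensitivity of the transmitted intensity to a concentration change, |dI/dc|, is proportional to L*exp(-mu*L), which peaks at L = 1/mu. A quick numerical check of that peak (the absorption coefficient mu below is an invented value, not one from the paper):

```python
import numpy as np

def sensitivity(L, mu, dmu_dc=1.0, I0=1.0):
    """|dI/dc| for I = I0*exp(-mu*L): change in transmitted intensity per unit
    concentration change, proportional to L*exp(-mu*L)."""
    return I0 * dmu_dc * L * np.exp(-mu * L)

mu = 2.5  # assumed total absorption coefficient (1/mm) at one wavelength
L = np.linspace(0.01, 3.0, 10000)          # candidate path-lengths in mm
L_opt = L[np.argmax(sensitivity(L, mu))]   # numerical optimum
print(L_opt)  # close to the analytic optimum 1/mu = 0.4 mm
```

Repeating this per wavelength, with mu the wavelength-dependent total absorption of the phantom, reproduces the per-wavelength path-length optimization described above.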
Method for reducing nitrogen oxides in combustion effluents
Zauderer, Bert
2000-01-01
Method for reducing nitrogen oxides (NOx) in the gas stream from the combustion of fossil fuels is disclosed. In a narrow gas-temperature zone, NOx is converted to nitrogen by reaction with urea or ammonia, with negligible remaining ammonia and other reaction pollutants. Specially designed injectors are used to introduce air-atomized water droplets containing dissolved urea or ammonia into the gaseous combustion products in a manner that widely disperses the droplets exclusively in the optimum reaction temperature zone. The injector operates in a manner that forms droplets of a size that results in their vaporization exclusively in this optimum NOx-urea/ammonia reaction temperature zone. Also disclosed is the design of a system to accomplish this injection effectively.
Chatzistergos, Panagiotis E; Sapkas, George; Kourkoulis, Stavros K
2010-04-20
The pullout strength of a typical pedicle screw was evaluated experimentally for different screw insertion techniques, with the objective of concluding whether the self-tapping insertion technique is indeed the optimum one for self-tapping screws with respect to pullout strength. It is reported in the literature that the size of the pilot hole significantly influences the pullout strength of a self-tapping screw, and it is accepted that an optimum pilot-hole diameter exists. For non-self-tapping screw insertion, it is reported that undertapping of the pilot hole can increase pullout strength. Finally, it is known that in some cases orthopedic surgeons open the threaded holes using another screw instead of a tap. A typical commercial self-tapping pedicle screw was inserted into blocks of solid rigid polyurethane foam (simulating osteoporotic cancellous bone) following different insertion techniques. The pullout force was measured according to the ASTM F543-02 standard. The screw was inserted into previously prepared holes of different sizes, either threaded or cylindrical, to conclude whether an optimum pilot-hole size exists and whether tapping can increase the pullout strength. The case in which tapping is performed using another screw was also studied. For screw insertion with tapping, decreasing the outer radius of the threaded hole from 1.00 to 0.87 of the screw's outer radius increased the pullout force by 9%. For insertion without tapping, decreasing the pilot-hole diameter from 0.87 to 0.47 of the screw's outer diameter increased the pullout force by 75%. Finally, tapping using another screw instead of a tap gave results similar to those of conventional tapping. Undertapping a pilot hole, either with a tap or with another screw, can increase the pullout strength of self-tapping pedicle screws.
Pikuta, Elena V; Hoover, Richard B; Bej, Asim K; Marsic, Damien; Whitman, William B; Cleland, David; Krader, Paul
2003-09-01
A novel alkaliphilic, sulfate-reducing bacterium, strain MLF1(T), was isolated from sediments of soda Mono Lake, California. Gram-negative vibrio-shaped cells were observed, which were 0.6-0.7x1.2-2.7 micro m in size, motile by a single polar flagellum and occurred singly, in pairs or as short spirilla. Growth was observed at 15-48 degrees C (optimum, 37 degrees C), >1-7 % NaCl, w/v (optimum, 3 %) and pH 8.0-10.0 (optimum, 9.5). The novel isolate is strictly alkaliphilic, requires a high concentration of carbonate in the growth medium and is obligately anaerobic and catalase-negative. As electron donors, strain MLF1(T) uses hydrogen, formate and ethanol. Sulfate, sulfite and thiosulfate (but not sulfur or nitrate) can be used as electron acceptors. The novel isolate is a lithoheterotroph and a facultative lithoautotroph that is able to grow on hydrogen without an organic source of carbon. Strain MLF1(T) is resistant to kanamycin and gentamicin, but sensitive to chloramphenicol and tetracycline. The DNA G+C content is 63.0 mol% (HPLC). DNA-DNA hybridization with the most closely related species, Desulfonatronum lacustre Z-7951(T), exhibited 51 % homology. Also, the genome size (1.6x10(9) Da) and T(m) value of the genomic DNA (71+/-2 degrees C) for strain MLF1(T) were significantly different from the genome size (2.1x10(9) Da) and T(m) value (63+/-2 degrees C) for Desulfonatronum lacustre Z-7951(T). On the basis of physiological and molecular properties, the isolate was considered to be a novel species of the genus Desulfonatronum, for which the name Desulfonatronum thiodismutans sp. nov. is proposed (the type strain is MLF1(T)=ATCC BAA-395(T)=DSM 14708(T)).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, Bo; Edwards, Brian J., E-mail: bje@utk.edu
A combination of self-consistent field theory and density functional theory was used to examine the effect of particle size on the stable, 3-dimensional equilibrium morphologies formed by diblock copolymers with a tethered nanoparticle attached either between the two blocks or at the end of one of the blocks. Particle size was varied between one and four tenths of the radius of gyration of the diblock polymer chain for neutral particles as well as those either favoring or disfavoring segments of the copolymer blocks. Phase diagrams were constructed and analyzed in terms of thermodynamic diagrams to understand the physics associated with the molecular-level self-assembly processes. Typical morphologies were observed, such as lamellar, spheroidal, cylindrical, gyroidal, and perforated lamellar, with the primary concentration region of the tethered particles being influenced heavily by particle size and tethering location, strength of the particle-segment energetic interactions, chain length, and copolymer radius of gyration. The effect of the simulation box size on the observed morphology and system thermodynamics was also investigated, indicating possible effects of confinement upon the system self-assembly processes.
Kulkarni, Amol A; Sebastian Cabeza, Victor
2017-12-19
Continuous segmented flow interfacial synthesis of Au nanostructures is demonstrated in a microchannel reactor. This study brings new insights into the growth of nanostructures at continuous interfaces. The size as well as the shape of the nanostructures showed significant dependence on the reactant concentrations, reaction time, temperature, and surface tension, which actually controlled the interfacial mass transfer. The microchannel reactor assisted in achieving a high interfacial area, as well as uniformity in mass transfer effects. Hexagonal nanostructures were seen to be formed in synthesis times as short as 10 min. The wettability of the channel showed significant effect on the particle size as well as the actual shape. The hydrophobic channel yielded hexagonal structures of relatively smaller size than the hydrophilic microchannel, which yielded sharp hexagonal bipyramidal particles (diagonal distance of 30 nm). The evolution of particle size and shape for the case of hydrophilic microchannel is also shown as a function of the residence time. The interfacial synthesis approach based on a stable segmented flow promoted an excellent control on the reaction extent, reduction in axial dispersion as well as the particle size distribution.
NASA Astrophysics Data System (ADS)
Grippa, Tais; Georganos, Stefanos; Lennert, Moritz; Vanhuysse, Sabine; Wolff, Eléonore
2017-10-01
Mapping large heterogeneous urban areas using object-based image analysis (OBIA) remains challenging, especially with respect to the segmentation process. This can be explained both by the complex arrangement of heterogeneous land-cover classes and by the high diversity of urban patterns encountered throughout the scene. In this context, it can be impossible to obtain satisfying segmentation results for the whole scene with a single segmentation parameter. Nonetheless, it is possible to subdivide the whole city into smaller local zones that are rather homogeneous in their urban pattern. These zones can then be used to optimize the segmentation parameter locally, instead of using the whole image or a single representative spatial subset. This paper assesses the contribution of a local approach to segmentation parameter optimization compared with a global approach. Ouagadougou, located in sub-Saharan Africa, is used as a case study. First, the whole scene is segmented using a single globally optimized segmentation parameter. Second, the city is subdivided into 283 local zones, homogeneous in terms of building size and building density, and each local zone is segmented using a locally optimized segmentation parameter. Unsupervised segmentation parameter optimization (USPO), relying on an optimization function that tends to maximize both intra-object homogeneity and inter-object heterogeneity, is used to select the segmentation parameter automatically for both approaches. Finally, a land-use/land-cover classification is performed using the Random Forest (RF) classifier. The results reveal that the local approach outperforms the global one, especially by limiting confusion between buildings and their bare-soil neighbors.
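A hedged sketch of a USPO-style score that combines intra-object homogeneity with inter-object heterogeneity: the weighted-variance term follows common practice in this literature, while the Moran's I values below are invented stand-ins (computing them properly requires object adjacency, which is omitted here for brevity):

```python
import numpy as np

def weighted_variance(image, labels):
    """Intra-object homogeneity: area-weighted within-object variance
    (lower = more homogeneous objects)."""
    total = sum(image[labels == lab].size * image[labels == lab].var()
                for lab in np.unique(labels))
    return total / image.size

def normalize(x):
    """Rescale so the best (lowest) raw value maps to 1 and the worst to 0."""
    x = np.asarray(x, dtype=float)
    return (x.max() - x) / (x.max() - x.min())

image = np.array([[10., 11., 60., 62.],
                  [12., 10., 61., 63.]])
# Two candidate segmentations: a good two-object split and a bad one that
# merges dissimilar pixels into the same object.
good = np.array([[0, 0, 1, 1], [0, 0, 1, 1]])
bad = np.array([[0, 0, 0, 1], [0, 0, 0, 1]])
wv = [weighted_variance(image, lab) for lab in (good, bad)]
# Moran's I over object means (lower = more inter-object heterogeneity);
# invented values for illustration:
mi = [0.1, 0.7]
score = normalize(wv) + normalize(mi)  # USPO-style combined score
print(int(np.argmax(score)))  # 0: the good segmentation wins
```

In practice the candidates would be segmentations of the same zone produced with different parameter values, and the parameter of the highest-scoring candidate is retained, per zone in the local approach.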
Development and Prototyping of the PROSPECT Antineutrino Detector
NASA Astrophysics Data System (ADS)
Commeford, Kelley; Prospect Collaboration
2017-01-01
The PROSPECT experiment will make the most precise measurement of the 235U reactor antineutrino spectrum as well as search for sterile neutrinos using a segmented Li-loaded liquid scintillator neutrino detector. Several prototype detectors of increasing size, complexity, and fidelity have been constructed and tested as part of the PROSPECT detector development program. The challenges to overcome include the efficient rejection of cosmogenic background and collection of optical photons in a compact volume. Design choices regarding segment structure and layout, calibration source deployment, and optical collection methods are discussed. Results from the most recent multi-segment prototype, PROSPECT-50, will also be shown.
Parallelized Seeded Region Growing Using CUDA
Park, Seongjin; Lee, Hyunna; Seo, Jinwook; Lee, Kyoung Ho; Shin, Yeong-Gil; Kim, Bohyoung
2014-01-01
This paper presents a novel method for parallelizing the seeded region growing (SRG) algorithm using Compute Unified Device Architecture (CUDA) technology, with the intention of overcoming the theoretical weakness of the SRG algorithm, namely that its computation time is directly proportional to the size of the segmented region. The segmentation performance of the proposed CUDA-based SRG is compared with SRG implementations on single-core CPUs, on quad-core CPUs, and in shader language programming, using synthetic datasets and 20 body CT scans. Based on the experimental results, the CUDA-based SRG outperforms the other three implementations, advocating that it can substantially assist segmentation during massive CT screening tests. PMID:25309619
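The serial algorithm being parallelized can be sketched as follows; this is a generic textbook SRG with a running-mean membership test (the tolerance, 4-connectivity, and toy image are assumptions for illustration, not the paper's implementation):

```python
from collections import deque

import numpy as np

def seeded_region_growing(image, seed, tol=10.0):
    """Serial seeded region growing: starting from `seed` (row, col), add
    4-connected neighbours whose intensity is within `tol` of the running
    region mean. Runtime grows with region size, which is the behaviour the
    CUDA version parallelizes."""
    h, w = image.shape
    region = np.zeros((h, w), dtype=bool)
    region[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not region[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    region[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return region

img = np.array([[10, 12, 50],
                [11, 13, 52],
                [49, 51, 53]], dtype=float)
mask = seeded_region_growing(img, (0, 0), tol=5.0)
print(mask.sum())  # 4: grows over the 10-13 block only
```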
Real-time myocardium segmentation for the assessment of cardiac function variation
NASA Astrophysics Data System (ADS)
Zoehrer, Fabian; Huellebrand, Markus; Chitiboi, Teodora; Oechtering, Thekla; Sieren, Malte; Frahm, Jens; Hahn, Horst K.; Hennemuth, Anja
2017-03-01
Recent developments in MRI enable the acquisition of image sequences with high spatio-temporal resolution. Cardiac motion can be captured without gating and triggering. Image size and contrast relations differ from conventional cardiac MRI cine sequences requiring new adapted analysis methods. We suggest a novel segmentation approach utilizing contrast invariant polar scanning techniques. It has been tested with 20 datasets of arrhythmia patients. The results do not differ significantly more between automatic and manual segmentations than between observers. This indicates that the presented solution could enable clinical applications of real-time MRI for the examination of arrhythmic cardiac motion in the future.
2015-01-01
The development of new and improved photothermal contrast agents for the successful treatment of cancer (or other diseases) via plasmonic photothermal therapy (PPTT) is a crucial part of the application of nanotechnology in medicine. Gold nanorods (AuNRs) have been found to be the most effective photothermal contrast agents, both in vitro and in vivo. Therefore, determining the optimum AuNR size needed for applications in PPTT is of great interest. In the present work, we utilized theoretical calculations as well as experimental techniques in vitro to determine this optimum AuNR size by comparing plasmonic properties and the efficacy as photothermal contrast agents of three different sizes of AuNRs. Our theoretical calculations showed that the contribution of absorbance to the total extinction, the electric field, and the distance at which this field extends away from the nanoparticle surface all govern the effectiveness of the amount of heat these particles generate upon NIR laser irradiation. Comparing between three different AuNRs (38 × 11, 28 × 8, and 17 × 5 nm), we determined that the 28 × 8 nm AuNR is the most effective in plasmonic photothermal heat generation. These results encouraged us to carry out in vitro experiments to compare the PPTT efficacy of the different sized AuNRs. The 28 × 8 nm AuNR was found to be the most effective photothermal contrast agent for PPTT of human oral squamous cell carcinoma. This size AuNR has the best compromise between the total amount of light absorbed and the fraction of which is converted to heat. In addition, the distance at which the electric field extends from the particle surface is most ideal for this size AuNR, as it is sufficient to allow for coupling between the fields of adjacent particles in solution (i.e., particle aggregates), resulting in effective heating in solution. PMID:24433049
Wound size measurement of lower extremity ulcers using segmentation algorithms
NASA Astrophysics Data System (ADS)
Dadkhah, Arash; Pang, Xing; Solis, Elizabeth; Fang, Ruogu; Godavarty, Anuradha
2016-03-01
Lower extremity ulcers are one of the most common complications that affect many people around the world and also have a large economic impact, since substantial resources are spent on treatment and prevention of the disease. Clinical studies have shown that a reduction in wound size of 40% within 4 weeks is acceptable progress in the healing process. Quantification of the wound size plays a crucial role in assessing the extent of healing and determining the treatment process. To date, wound healing is visually inspected and the wound size is measured from surface images. The extent of wound healing internally may vary from the surface. A near-infrared (NIR) optical imaging approach has been developed for non-contact imaging of wounds internally and differentiating healing from non-healing wounds. Herein, quantitative wound size measurements from NIR and white-light images are estimated using graph-cut and region-growing image segmentation algorithms. The extent of wound healing from NIR imaging of lower extremity ulcers in diabetic subjects is quantified and compared across NIR and white-light images. NIR imaging and wound size measurements can play a significant role in potentially predicting the extent of internal healing, thus allowing better treatment plans when implemented for periodic imaging in the future.
Microplastics reduced posterior segment regeneration rate of the polychaete Perinereis aibuhitensis.
Leung, Julia; Chan, Kit Yu Karen
2018-04-01
Microplastics are found in abundance in and on coastal sediments, and yet whether exposure to this emerging pollutant negatively impacts whole-organism function is unknown. Focusing on a commercially important polychaete, Perinereis aibuhitensis, we demonstrated that the presence of microplastics increased mortality and reduced the rate of posterior segment regeneration. The impact of the micro-polystyrene beads was size-dependent, with smaller beads (8-12 μm in diameter) being more detrimental than bigger ones (32-38 μm). This observed difference suggests that microplastic impact could be affected by physical properties, e.g., sinking speed, surface area available for sorption of chemicals and bacteria, and the selective feeding behaviors of the target organism. Copyright © 2017 Elsevier Ltd. All rights reserved.
Aspects on HTS applications in confined power grids
NASA Astrophysics Data System (ADS)
Arndt, T.; Grundmann, J.; Kuhnert, A.; Kummeth, P.; Nick, W.; Oomen, M.; Schacherer, C.; Schmidt, W.
2014-12-01
In an increasing number of electric power grids, the share of distributed energy generation is increasing as well. The grids have to cope with a considerable change of power flow, which has an impact on the optimum topology of the grids and sub-grids (high-voltage, medium-voltage and low-voltage sub-grids) and on the size of quasi-autonomous grid sections. Furthermore, the stability of a grid is influenced by its size. Thus, special benefits of HTS applications in the power grid might become most visible in confined power grids.
Shi, Feng; Yap, Pew-Thian; Fan, Yong; Cheng, Jie-Zhi; Wald, Lawrence L.; Gerig, Guido; Lin, Weili; Shen, Dinggang
2010-01-01
The acquisition of high-quality MR images of neonatal brains is largely hampered by their characteristically small head size and low tissue contrast. As a result, subsequent image processing and analysis, especially brain tissue segmentation, are often hindered. To overcome this problem, a dedicated phased-array neonatal head coil is utilized to improve MR image quality by effectively combining images obtained from 8 coil elements without lengthening data acquisition time. In addition, a subject-specific atlas-based tissue segmentation algorithm is specifically developed for the delineation of fine structures in the acquired neonatal brain MR images. The proposed tissue segmentation method first enhances the sheet-like cortical gray matter (GM) structures in neonatal images with a Hessian filter to generate a cortical GM prior. Then, the prior is combined with our neonatal population atlas to form a cortical-enhanced hybrid atlas, which we refer to as the subject-specific atlas. Various experiments are conducted to compare the proposed method with manual segmentation results, as well as with two additional population-atlas-based segmentation methods. Results show that the proposed method is capable of segmenting the neonatal brain with the highest accuracy compared with the other two methods. PMID:20862268
METHOD AND MEANS FOR RECOGNIZING COMPLEX PATTERNS
Hough, P.V.C.
1962-12-18
This patent relates to a method and means for recognizing a complex pattern in a picture. The picture is divided into framelets, each framelet being sized so that any segment of the complex pattern therewithin is essentially a straight line. Each framelet is scanned to produce an electrical pulse for each point scanned on the segment therewithin. Each of the electrical pulses of each segment is then transformed into a separate straight line to form a plane transform in a pictorial display. Each line in the plane transform of a segment is positioned laterally so that a point on the line midway between the top and the bottom of the pictorial display occurs at a distance from the left edge of the pictorial display equal to the distance of the generating point in the segment from the left edge of the framelet. Each line in the plane transform of a segment is inclined in the pictorial display at an angle to the vertical whose tangent is proportional to the vertical displacement of the generating point in the segment from the center of the framelet. The coordinate position of the point of intersection of the lines in the pictorial display for each segment is determined and recorded. The sum total of said recorded coordinate positions is representative of the complex pattern. (AEC)
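The patent describes what is now known as the Hough transform: each detected point votes for all lines passing through it, and collinear points produce a peak in the parameter plane. A minimal sketch of the modern rho-theta accumulator form (grid sizes and the test points are illustrative choices, not from the patent, which used a slope-intercept parameterization):

```python
import numpy as np

def hough_lines(points, n_theta=180, n_rho=100):
    """Minimal rho-theta Hough accumulator: each point (x, y) votes for every
    line rho = x*cos(theta) + y*sin(theta) through it; collinear points pile
    their votes into one accumulator cell."""
    pts = np.asarray(points, dtype=float)
    rho_max = np.hypot(pts[:, 0].max(), pts[:, 1].max()) + 1e-9
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((n_rho, n_theta), dtype=int)
    for x, y in pts:
        rho = x * np.cos(thetas) + y * np.sin(thetas)
        idx = np.round((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        for t, r in enumerate(idx):
            acc[r, t] += 1
    r_peak, t_peak = np.unravel_index(np.argmax(acc), acc.shape)
    return acc, thetas[t_peak]

# Five collinear points on y = x: the dominant vote lands near theta = 3*pi/4
pts = [(i, i) for i in range(1, 6)]
acc, theta = hough_lines(pts)
print(acc.max())  # 5: all five collinear points vote into one peak cell
```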
NASA Technical Reports Server (NTRS)
Gersh-Range, Jessica A.; Arnold, William R.; Peck, Mason A.; Stahl, H. Philip
2011-01-01
Since future astrophysics missions require space telescopes with apertures of at least 10 meters, there is a need for on-orbit assembly methods that decouple the size of the primary mirror from the choice of launch vehicle. One option is to connect the segments edgewise using mechanisms analogous to damped springs. To evaluate the feasibility of this approach, a parametric ANSYS model that calculates the mode shapes, natural frequencies, and disturbance response of such a mirror, as well as of the equivalent monolithic mirror, has been developed. This model constructs a mirror using rings of hexagonal segments that are either connected continuously along the edges (to form a monolith) or at discrete locations corresponding to the mechanism locations (to form a segmented mirror). As an example, this paper presents the case of a mirror whose segments are connected edgewise by mechanisms analogous to a set of four collocated single-degree-of-freedom damped springs. The results of a set of parameter studies suggest that such mechanisms can be used to create a 15-m segmented mirror that behaves similarly to a monolith, although fully predicting the segmented mirror performance would require incorporating measured mechanism properties into the model. Keywords: segmented mirror, edgewise connectivity, space telescope
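The modal behavior of spring-connected segments can be illustrated with a two-segment toy model of the kind the parametric study generalizes (the masses and stiffness below are invented numbers, not the paper's mirror parameters):

```python
import numpy as np

# Two mirror segments (mass m each) joined by one connecting spring of
# stiffness k: the free-free pair has a rigid-body mode at 0 Hz and one
# elastic mode at sqrt(2k/m) rad/s.
m, k = 10.0, 4.0e4
M = np.diag([m, m])                     # mass matrix
K = np.array([[k, -k], [-k, k]])        # stiffness matrix of the spring link
eigvals = np.linalg.eigvals(np.linalg.solve(M, K))
freqs = np.sort(np.sqrt(np.abs(eigvals))) / (2 * np.pi)  # natural freqs, Hz
print(freqs)  # ~[0, sqrt(2k/m)/(2*pi)]
```

An ANSYS model like the one described generalizes this to hundreds of degrees of freedom per segment, with four damped springs per shared edge, but the eigenvalue problem solved is the same in structure.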
Multi-Scale Correlative Tomography of a Li-Ion Battery Composite Cathode
Moroni, Riko; Börner, Markus; Zielke, Lukas; Schroeder, Melanie; Nowak, Sascha; Winter, Martin; Manke, Ingo; Zengerle, Roland; Thiele, Simon
2016-01-01
Focused ion beam/scanning electron microscopy tomography (FIB/SEMt) and synchrotron X-ray tomography (Xt) are used to investigate the same lithium manganese oxide composite cathode at the same specific spot. This correlative approach allows the investigation of three central issues in the tomographic analysis of composite battery electrodes: (i) Validation of state-of-the-art binary active material (AM) segmentation: Although threshold segmentation by standard algorithms leads to very good segmentation results, limited Xt resolution results in an AM underestimation of 6 vol% and severe overestimation of AM connectivity. (ii) Carbon binder domain (CBD) segmentation in Xt data: While threshold segmentation cannot be applied for this purpose, a suitable classification method is introduced. Based on correlative tomography, it allows for reliable ternary segmentation of Xt data into the pore space, CBD, and AM. (iii) Pore space analysis in the micrometer regime: This segmentation technique is applied to an Xt reconstruction with several hundred microns edge length, thus validating the segmentation of pores within the micrometer regime for the first time. The analyzed cathode volume exhibits a bimodal pore size distribution in the ranges between 0–1 μm and 1–12 μm. These ranges can be attributed to different pore formation mechanisms. PMID:27456201
2013-01-01
Background The so-called ventral organs are amongst the most enigmatic structures in Onychophora (velvet worms). They were described as segmental, ectodermal thickenings in the onychophoran embryo, but the same term has also been applied to mid-ventral, cuticular structures in adults, although the relationship between the embryonic and adult ventral organs is controversial. In the embryo, these structures have been regarded as anlagen of segmental ganglia, but recent studies suggest that they are not associated with neural development. Hence, their function remains obscure. Moreover, their relationship to the anteriorly located preventral organs, described from several onychophoran species, is also unclear. To clarify these issues, we studied the anatomy and development of the ventral and preventral organs in several species of Onychophora. Results Our anatomical data, based on histology, and light, confocal and scanning electron microscopy in five species of Peripatidae and three species of Peripatopsidae, revealed that the ventral and preventral organs are present in all species studied. These structures are covered externally with cuticle that forms an internal, longitudinal, apodeme-like ridge. Moreover, phalloidin-rhodamine labelling for f-actin revealed that the anterior and posterior limb depressor muscles in each trunk and the slime papilla segment attach to the preventral and ventral organs, respectively. During embryonic development, the ventral and preventral organs arise as large segmental, paired ectodermal thickenings that decrease in size and are subdivided into the smaller, anterior anlagen of the preventral organs and the larger, posterior anlagen of the ventral organs, both of which persist as paired, medially-fused structures in adults. 
Our expression data of the genes Delta and Notch from embryos of Euperipatoides rowelli revealed that these genes are expressed in two, paired domains in each body segment, corresponding in number, position and size with the anlagen of the ventral and preventral organs. Conclusions Our findings suggest that the ventral and preventral organs are a common feature of onychophorans that serve as attachment sites for segmental limb depressor muscles. The origin of these structures can be traced back in the embryo as latero-ventral segmental, ectodermal thickenings, previously suggested to be associated with the development of the nervous system. PMID:24308783
Two-stage atlas subset selection in multi-atlas based image segmentation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu
2015-06-15
Purpose: Fast growing access to large databases and cloud stored data presents a unique opportunity for multi-atlas based image segmentation and also presents challenges in heterogeneous atlas quality and computation burden. This work aims to develop a novel two-stage method tailored to the special needs in the face of large atlas collection with varied quality, so that high-accuracy segmentation can be achieved with low computational cost. Methods: An atlas subset selection scheme is proposed to substitute a significant portion of the computationally expensive full-fledged registration in the conventional scheme with a low-cost alternative. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability. Results: The performance of the proposed scheme has been assessed with cross validation based on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates comparable end-to-end segmentation performance as the conventional single-stage selection method, but with significant computation reduction. Compared with the alternative computation reduction method, their scheme improves the mean and median Dice similarity coefficient value from (0.74, 0.78) to (0.83, 0.85) and from (0.82, 0.84) to (0.95, 0.95) for prostate and corpus callosum segmentation, respectively, with statistical significance.
Conclusions: The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
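The two-stage selection described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scoring functions stand in for the low-cost and full-fledged registration-based relevance metrics, and the subset sizes are arbitrary assumptions.

```python
import random

def two_stage_select(atlases, coarse_score, fine_score, m_aug, n_fuse):
    """Two-stage atlas subset selection sketch (names are illustrative).

    Stage 1: rank all atlases with a cheap relevance metric (standing in
    for low-cost registration) and keep an augmented subset of size m_aug.
    Stage 2: re-rank only that subset with an expensive metric (standing
    in for full-fledged registration) and keep the final fusion set.
    """
    stage1 = sorted(atlases, key=coarse_score, reverse=True)[:m_aug]
    stage2 = sorted(stage1, key=fine_score, reverse=True)[:n_fuse]
    return stage2

# Toy demo: each "atlas" is a number; the coarse score is the fine score
# plus noise, mimicking an imperfect but correlated preliminary metric.
random.seed(0)
atlases = list(range(100))
coarse = {a: a + random.gauss(0, 5) for a in atlases}
fusion_set = two_stage_select(atlases, coarse.get, lambda a: a,
                              m_aug=20, n_fuse=5)
print(fusion_set)  # top-5 by fine score among the 20 coarse finalists
```

The computational saving comes from running the expensive metric on only `m_aug` of the atlases; the paper's inference model addresses how large `m_aug` must be for the truly relevant atlases to survive stage 1 with high probability.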
Large Constituent Families Help Children Parse Compounds
ERIC Educational Resources Information Center
Krott, Andrea; Nicoladis, Elena
2005-01-01
The family size of the constituents of compound words, or the number of compounds sharing the constituents, has been shown to affect adults' access to compound words in the mental lexicon. The present study was designed to see if family size would affect children's segmentation of compounds. Twenty-five English-speaking children between 3;7 and…
Holokinetic drive: centromere drive in chromosomes without centromeres.
Bureš, Petr; Zedek, František
2014-08-01
Similar to how the model of centromere drive explains the size and complexity of centromeres in monocentrics (organisms with localized centromeres), our model of holokinetic drive is consistent with the divergent evolution of chromosomal size and number in holocentrics (organisms with nonlocalized centromeres) exhibiting holokinetic meiosis (holokinetics). Holokinetic drive is proposed to facilitate chromosomal fission and/or repetitive DNA removal (or any segmental deletion) when smaller homologous chromosomes are preferentially inherited or chromosomal fusion and/or repetitive DNA proliferation (or any segmental duplication) when larger homologs are preferred. The hypothesis of holokinetic drive is supported primarily by the negative correlation between chromosome number and genome size that is documented in holokinetic lineages. The supporting value of two older cross-experiments on holokinetic structural heterozygotes (the rush Luzula elegans and butterflies of the genus Antheraea) that indicate the presence of size-preferential homolog transmission via female meiosis for holokinetic drive is discussed, along with the further potential consequences of holokinetic drive in comparison with centromere drive. © 2014 The Author(s). Evolution © 2014 The Society for the Study of Evolution.
NASA Technical Reports Server (NTRS)
Skillen, Michael D.; Crossley, William A.
2008-01-01
This report presents an approach for sizing of a morphing aircraft based upon a multi-level design optimization approach. For this effort, a morphing wing is one whose planform can make significant shape changes in flight - increasing wing area by 50% or more from the lowest possible area, changing sweep by 30 degrees or more, and/or increasing aspect ratio by as much as 200% from the lowest possible value. The top-level optimization problem seeks to minimize the gross weight of the aircraft by determining a set of "baseline" variables - these are common aircraft sizing variables - along with a set of "morphing limit" variables - these describe the maximum shape change for a particular morphing strategy. The sub-level optimization problems represent each segment in the morphing aircraft's design mission; here, each sub-level optimizer minimizes the fuel consumed during each mission segment by changing the wing planform within the bounds set by the baseline and morphing limit variables from the top-level problem.
[Optimum design of imaging spectrometer based on toroidal uniform-line-spaced (TULS) spectrometer].
Xue, Qing-Sheng; Wang, Shu-Rong
2013-05-01
Based on the geometrical aberration theory, an optimum-design method for an imaging spectrometer based on a toroidal uniform grating spectrometer is proposed. To obtain the best optical parameters, a two-step optimization is carried out using a genetic algorithm (GA) and the optical design software ZEMAX. A far-ultraviolet (FUV) imaging spectrometer is designed using this method. The working waveband is 110-180 nm, the slit size is 50 microm x 5 mm, and the numerical aperture is 0.1. Using ZEMAX, the design result is analyzed and evaluated. The results indicate that the MTF for different wavelengths is higher than 0.7 at the Nyquist frequency of 10 lp x mm(-1), and the RMS spot radius is less than 14 microm. Good imaging quality is achieved over the whole working waveband, and the design requirements of 0.5 mrad spatial resolution and 0.6 nm spectral resolution are satisfied. This demonstrates that the proposed optimum-design method is feasible. The method can be applied in other wavebands and provides guidance for designing grating-dispersion imaging spectrometers.
Vehicle systems design optimization study
NASA Technical Reports Server (NTRS)
Gilmour, J. L.
1980-01-01
The optimum vehicle configuration and component locations are determined for an electric drive vehicle based on the basic structure of a current production subcompact vehicle. The optimization of an electric vehicle layout requires a weight distribution in the range of 53/47 to 62/38 in order to assure dynamic handling characteristics comparable to current internal combustion engine vehicles. The necessary modification of the base vehicle can be accomplished without major modification of the structure or running gear. As long as batteries are as heavy and require as much space as they currently do, they must be divided into two packages, one at the front under the hood and a second at the rear under the cargo area, in order to achieve the desired weight distribution. The weight distribution criterion requires the placement of batteries at the front of the vehicle even when the central tunnel is used for the location of some batteries. The optimum layout has a front motor and front wheel drive. This configuration provides the optimum vehicle dynamic handling characteristics and the maximum passenger and cargo space for a given size vehicle.
Gamma ray irradiated AgFeO{sub 2} nanoparticles with enhanced gas sensor properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Xiuhua, E-mail: xhwang@mail.ahnu.edu.cn; Shi, Zhijie; Yao, Shangwu
2014-11-15
AgFeO{sub 2} nanoparticles were synthesized via a facile hydrothermal method and irradiated with various doses of gamma ray. The products were characterized by X-ray powder diffraction, UV-vis absorption spectroscopy and transmission electron microscopy. The results revealed that the crystal structure, morphology and size of the samples remained unchanged after irradiation, while the intensity of the UV-vis spectra increased with increasing irradiation dose. In addition, gamma ray irradiation improved the performance of the gas sensor based on the AgFeO{sub 2} nanoparticles, including the optimum operating temperature and sensitivity, which might be ascribed to the generation of defects. - Graphical abstract: Gamma ray irradiation improved the performance of the gas sensor based on the AgFeO{sub 2} nanoparticles, including sensitivity and optimum operating temperature, which might be ascribed to the generation of defects. - Highlights: • AgFeO{sub 2} nanoparticles were synthesized and irradiated with gamma ray. • AgFeO{sub 2} nanoparticles were employed to fabricate gas sensors to detect ethanol. • Gamma ray irradiation improved the sensitivity and optimum operating temperature.
Scheduling multirobot operations in manufacturing by truncated Petri nets
NASA Astrophysics Data System (ADS)
Chen, Qin; Luh, J. Y.
1995-08-01
Scheduling of operational sequences in manufacturing processes is one of the important problems in automation. Methods of applying Petri nets to model and analyze the problem with constraints on precedence relations, multiple resource allocation, etc., are available in the literature. Searching for an optimum schedule can be implemented by combining the branch-and-bound technique with the execution of the timed Petri net. The process usually produces a large Petri net which is practically not manageable. This disadvantage, however, can be handled by a truncation technique which divides the original large Petri net into several smaller subnets. The complexity involved in the analysis of each subnet individually is greatly reduced. However, when the locally optimum schedules of the resulting subnets are combined, they may not yield an overall optimum schedule for the original Petri net. To circumvent this problem, algorithms are developed based on the concepts of Petri net execution and a modified branch-and-bound process. The developed technique is applied to a multi-robot task scheduling problem in a manufacturing work cell.
NASA Astrophysics Data System (ADS)
Abd Kadir, N.; Aminanda, Y.; Ibrahim, M. S.; Mokhtar, H.
2016-10-01
A statistical analysis was performed to evaluate the effect of each factor and to obtain the optimum configuration of Kraft paper honeycomb. The factors considered in this study include the density of the paper, the thickness of the paper and the cell size of the honeycomb. Based on a three-level factorial design, a two-factor interaction (2FI) model was developed to correlate the factors with specific energy absorption and specific compression strength. From the analysis of variance (ANOVA), the most influential factor for each response and the optimum configuration were identified. Kraft paper honeycomb with the optimum configuration was then used to fabricate foam-filled paper honeycomb with five different densities of polyurethane foam as filler (31.8, 32.7, 44.5, 45.7 and 52 kg/m3). The foam-filled paper honeycomb was subjected to quasi-static compression loading. The failure mechanisms of the foam-filled honeycomb were identified, analyzed and compared with those of the unfilled paper honeycomb. The peak force and energy absorption capability of the foam-filled paper honeycomb increased by up to 32% and 30%, respectively, compared to the summation of the individual components.
Leaching behavior of copper from waste printed circuit boards with Brønsted acidic ionic liquid
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huang, Jinxiu; Chen, Mengjun, E-mail: kyling@swust.edu.cn; Chen, Haiyan
2014-02-15
Highlights: • A Brønsted acidic IL was used to leach Cu from WPCBs for the first time. • The particle size of WPCBs has a significant influence on the Cu leaching rate. • The Cu leaching rate was higher than 99% under the optimum leaching conditions. • The leaching process can be modeled with the shrinking core model, and the E{sub a} was 25.36 kJ/mol. - Abstract: In this work, a Brønsted acidic ionic liquid, 1-butyl-3-methyl-imidazolium hydrogen sulfate ([bmim]HSO{sub 4}), was used to leach copper from waste printed circuit boards (WPCBs, mounted with electronic components) for the first time, and the leaching behavior of copper was discussed in detail. The results showed that after the pre-treatment, the metal distributions differed with particle size: Cu, Zn and Al increased with increasing particle size, while Ni, Sn and Pb showed the opposite trend. The particle size also had a significant influence on the copper leaching rate. The copper leaching rate was higher than 99%, almost 100%, when 1 g of WPCBs powder was leached under the optimum conditions: particle size of 0.1–0.25 mm, 25 mL 80% (v/v) ionic liquid, 10 mL 30% hydrogen peroxide, solid/liquid ratio of 1/25, 70 °C and 2 h. Copper leaching by [bmim]HSO{sub 4} can be modeled with the shrinking core model, controlled by diffusion through a solid product layer, and the kinetic apparent activation energy was calculated to be 25.36 kJ/mol.
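The kinetic analysis behind an apparent activation energy like the one reported above can be sketched as follows. The rate constants and temperatures below are illustrative assumptions, not the paper's data; the diffusion-controlled shrinking core expression is the standard form 1 - 2x/3 - (1-x)^(2/3) = kt.

```python
import math

R = 8.314  # universal gas constant, J/(mol·K)

def diffusion_model(x):
    """Shrinking core model, diffusion through a solid product layer:
    g(x) = 1 - 2x/3 - (1-x)^(2/3), linear in time when diffusion controls."""
    return 1.0 - 2.0 * x / 3.0 - (1.0 - x) ** (2.0 / 3.0)

def activation_energy(T1, k1, T2, k2):
    """Apparent Ea (J/mol) from Arrhenius: k = A*exp(-Ea/(R*T)),
    so ln(k2/k1) = (Ea/R) * (1/T1 - 1/T2)."""
    return R * math.log(k2 / k1) / (1.0 / T1 - 1.0 / T2)

# Hypothetical rate constants (h^-1) at 50 °C and 70 °C, for illustration only
Ea = activation_energy(323.15, 0.12, 343.15, 0.21)
print(f"apparent Ea ≈ {Ea / 1000:.1f} kJ/mol")
```

In practice k would be obtained at each temperature from the slope of g(x) versus time, and Ea from a linear fit of ln k against 1/T over several temperatures rather than two points.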
Brain tumor segmentation in multi-spectral MRI using convolutional neural networks (CNN).
Iqbal, Sajid; Ghani, M Usman; Saba, Tanzila; Rehman, Amjad
2018-04-01
A tumor can be found in any area of the brain and can be of any size, shape, and contrast. Multiple tumors of different types may exist in a human brain at the same time. Accurate segmentation of the tumor area is considered a primary step in the treatment of brain tumors. Deep learning is a set of promising techniques that can provide better results than non-deep-learning techniques for segmenting the tumorous part inside a brain. This article presents a deep convolutional neural network (CNN) to segment brain tumors in MRIs. The proposed network uses the BRATS segmentation challenge dataset, which is composed of images obtained through four different modalities. Accordingly, we present an extended version of an existing network to solve the segmentation problem. The network architecture consists of multiple neural network layers connected in sequential order, with the feeding of convolutional feature maps at the peer level. Experimental results on the BRATS 2015 benchmark data show the usability of the proposed approach and its superiority over the other approaches in this area of research. © 2018 Wiley Periodicals, Inc.
Jurrus, Elizabeth; Watanabe, Shigeki; Giuly, Richard J.; Paiva, Antonio R. C.; Ellisman, Mark H.; Jorgensen, Erik M.; Tasdizen, Tolga
2013-01-01
Neuroscientists are developing new imaging techniques and generating large volumes of data in an effort to understand the complex structure of the nervous system. The complexity and size of this data makes human interpretation a labor-intensive task. To aid in the analysis, new segmentation techniques for identifying neurons in these feature rich datasets are required. This paper presents a method for neuron boundary detection and nonbranching process segmentation in electron microscopy images and visualizing them in three dimensions. It combines automated segmentation techniques with a graphical user interface for correction of mistakes in the automated process. The automated process first uses machine learning and image processing techniques to identify neuron membranes that delineate the cells in each two-dimensional section. To segment nonbranching processes, the cell regions in each two-dimensional section are connected in 3D using correlation of regions between sections. The combination of this method with a graphical user interface specially designed for this purpose enables users to quickly segment cellular processes in large volumes. PMID:22644867
[The private vaccines market in Brazil: privatization of public health].
Temporão, José Gomes
2003-01-01
The main objective of this article is to analyze the vaccines market in Brazil, which consists of two segments with distinct practices and logics: the public segment, focused on supply within the Unified National Health System (SUS), and the private segment, organized around private clinics, physicians' offices, and similar private health facilities. The private vaccines market segment, studied here for the first time, is characterized in relation to its supply and demand structure. Historical aspects of its structure are analyzed, based on the creation of one of the first immunization clinics in the country. This segment was analyzed in relation to its economic dimensions (imports and sales), principal manufacturers, and products marketed. Its economic size proved much greater than initially hypothesized. The figures allow one to view it as one of the main segments of the pharmaceutical industry in Brazil as measured by sales volume. One detects the penetration of a privatizing logic into a sphere that has always been essentially public, thereby introducing into the SUS a new space for disregarding the principles of equity and universality.
Mishra, Ajay; Aloimonos, Yiannis
2009-01-01
The human visual system observes and understands a scene/image by making a series of fixations. Every fixation point lies inside a particular region of arbitrary shape and size in the scene, which can either be an object or just a part of it. We define as a basic segmentation problem the task of segmenting the region containing the fixation point. Segmenting the region containing the fixation is equivalent to finding the enclosing contour - a connected set of boundary edge fragments in the edge map of the scene - around the fixation. This enclosing contour should be a depth boundary. We present here a novel algorithm that finds this bounding contour and achieves the segmentation of one object, given the fixation. The proposed segmentation framework combines monocular cues (color/intensity/texture) with stereo and/or motion, in a cue-independent manner. The semantic robots of the immediate future will be able to use this algorithm to automatically find objects in any environment. The capability of automatically segmenting objects in their visual field can bring visual processing to the next level. Our approach is different from current approaches: while existing work attempts to segment the whole scene at once into many areas, we segment only one image region, specifically the one containing the fixation point. Experiments with real imagery collected by our active robot and from known databases demonstrate the promise of the approach.
Multi-atlas pancreas segmentation: Atlas selection based on vessel structure.
Karasawa, Ken'ichi; Oda, Masahiro; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Chu, Chengwen; Zheng, Guoyan; Rueckert, Daniel; Mori, Kensaku
2017-07-01
Automated organ segmentation from medical images is an indispensable component for clinical applications such as computer-aided diagnosis (CAD) and computer-assisted surgery (CAS). We utilize a multi-atlas segmentation scheme, which has recently been used in different approaches in the literature to achieve more accurate and robust segmentation of anatomical structures in computed tomography (CT) volume data. Among abdominal organs, the pancreas has large inter-patient variability in its position, size and shape. Moreover, the CT intensity of the pancreas closely resembles adjacent tissues, rendering its segmentation a challenging task. Due to this, conventional intensity-based atlas selection for pancreas segmentation often fails to select atlases that are similar in pancreas position and shape to those of the unlabeled target volume. In this paper, we propose a new atlas selection strategy based on vessel structure around the pancreatic tissue and demonstrate its application to a multi-atlas pancreas segmentation. Our method utilizes vessel structure around the pancreas to select atlases with high pancreatic resemblance to the unlabeled volume. Also, we investigate two types of applications of the vessel structure information to the atlas selection. Our segmentations were evaluated on 150 abdominal contrast-enhanced CT volumes. The experimental results showed that our approach can segment the pancreas with an average Jaccard index of 66.3% and an average Dice overlap coefficient of 78.5%. Copyright © 2017 Elsevier B.V. All rights reserved.
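The Jaccard index and Dice coefficient used to evaluate the pancreas segmentations above are standard overlap measures between a predicted and a reference binary mask; a minimal sketch (the toy masks are illustrative, not data from the paper):

```python
import numpy as np

def jaccard_index(a, b):
    """Jaccard index |A ∩ B| / |A ∪ B| for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

def dice_coefficient(a, b):
    """Dice overlap 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Toy 1D example: two overlapping "segmentations"
pred = np.array([0, 1, 1, 1, 0, 0])
truth = np.array([0, 0, 1, 1, 1, 0])
print(jaccard_index(pred, truth))    # 0.5      (2 overlap / 4 union)
print(dice_coefficient(pred, truth))  # ~0.667  (2*2 / (3 + 3))
```

The two measures are monotonically related (D = 2J / (1 + J)), which is why the paper's Jaccard of 66.3% corresponds to a Dice near 78.5%.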
NASA Technical Reports Server (NTRS)
Hoover, Richard B.; Pikuta, Elena V.; Bej, Asim K.; Marsic, Damien; Whitman, William B.; Tang, Jane; Krader, Paul; Six, N. Frank (Technical Monitor)
2002-01-01
A novel obligately anaerobic, mesophilic, haloalkaliphilic spirochete, strain ASpG1(sup T), was isolated from sediments of the alkaline, hypersaline Mono Lake in California, U.S.A. The Gram-negative cells are motile and spirochete-shaped with sizes of 0.2 - 0.22 X 8-15 microns. Growth was observed over the following ranges: temperature 10 C to 44 C, optimum 37 C; NaCl concentration 2 - 12 % (w/v), optimum 3 %; and pH 8 - 10.5, optimum pH 9.5. The novel isolate is strictly alkaliphilic, requires high concentrations of carbonate in the medium, and is capable of utilizing D-glucose, fructose, maltose, sucrose, starch, and D-mannitol. The main end products of glucose fermentation are H2, acetate, ethanol, and formate. Strain ASpG1(sup T) is resistant to kanamycin and rifampin, but sensitive to chloramphenicol, gentamycin and tetracycline. The G+C content of its DNA is 58.5 mol%, the genome size is 2.98 x 10(exp 9) Daltons, the Tm of the genomic DNA is 68 +/- 2 C, and DNA-DNA hybridization with the most closely related species, Spirochaeta alkalica strain Z-7491(sup T), exhibited 48.7% homology. On the basis of its physiological and molecular properties, the isolate appears to be a novel species of the genus Spirochaeta, and the name Spirochaeta americana sp. nov. is proposed for the taxon (type strain ASpG1(sup T) = ATCC BAA-392(sup T) = DSMZ 14872(sup T)).
Meza, Rodrigo C; López-Jury, Luciana; Canavier, Carmen C; Henny, Pablo
2018-01-17
The spontaneous tonic discharge activity of nigral dopamine neurons plays a fundamental role in dopaminergic signaling. To investigate the role of neuronal morphology and architecture with respect to spontaneous activity in this population, we visualized the 3D structure of the axon initial segment (AIS) along with the entire somatodendritic domain of adult male mouse dopaminergic neurons, previously recorded in vivo. We observed a positive correlation of the firing rate with both proximity and size of the AIS. Computational modeling showed that the size of the AIS, but not its position within the somatodendritic domain, is the major causal determinant of the tonic firing rate in the intact model, by virtue of the higher intrinsic frequency of the isolated AIS. Further mechanistic analysis of the relationship between neuronal morphology and firing rate showed that dopaminergic neurons function as a coupled oscillator whose frequency of discharge results from a compromise between AIS and somatodendritic oscillators. Thus, morphology plays a critical role in setting the basal tonic firing rate, which in turn could control striatal dopaminergic signaling that mediates motivation and movement. SIGNIFICANCE STATEMENT The frequency at which nigral dopamine neurons discharge action potentials sets baseline dopamine levels in the brain, which enables activity in motor, cognitive, and motivational systems. Here, we demonstrate that the size of the axon initial segment, a subcellular compartment responsible for initiating action potentials, is a key determinant of the firing rate in these neurons. The axon initial segment and all the molecular components that underlie its critical function may provide a novel target for the regulation of dopamine levels in the brain. Copyright © 2018 the authors 0270-6474/18/380733-12$15.00/0.
A global/local affinity graph for image segmentation.
Xiaofang Wang; Yuxing Tang; Masnou, Simon; Liming Chen
2015-04-01
Construction of a reliable graph capturing perceptual grouping cues of an image is fundamental for graph-cut based image segmentation methods. In this paper, we propose a novel sparse global/local affinity graph over superpixels of an input image to capture both short- and long-range grouping cues, thereby enabling perceptual grouping laws, including proximity, similarity, and continuity, to enter into action through a suitable graph-cut algorithm. Moreover, we also evaluate three major visual features, namely, color, texture, and shape, for their effectiveness in perceptual segmentation and propose a simple graph fusion scheme to implement some recent findings from psychophysics, which suggest combining these visual features with different emphases for perceptual grouping. In particular, an input image is first oversegmented into superpixels at different scales. We postulate a gravitation law based on empirical observations and divide superpixels adaptively into small-, medium-, and large-sized sets. Global grouping is achieved using medium-sized superpixels through a sparse representation of superpixels' features by solving an ℓ0-minimization problem, thereby enabling continuity or propagation of local smoothness over long-range connections. Small- and large-sized superpixels are then used to achieve local smoothness through an adjacent graph in a given feature space, thus implementing perceptual laws such as similarity and proximity. Finally, a bipartite graph is also introduced to enable propagation of grouping cues between superpixels of different scales. Extensive experiments are carried out on the Berkeley segmentation database in comparison with several state-of-the-art graph constructions.
The results show the effectiveness of the proposed approach, which outperforms state-of-the-art graphs using four different objective criteria, namely, the probabilistic rand index, the variation of information, the global consistency error, and the boundary displacement error.
Bishop, Chris; Arnold, John B; Fraysse, Francois; Thewlis, Dominic
2015-01-01
To investigate in-shoe foot kinematics, holes are often cut in the shoe upper to allow markers to be placed on the skin surface. However, there is currently a lack of understanding as to what hole size is appropriate. This study aimed to demonstrate a method to assess whether holes of different diameters were large enough to allow free motion of marker wands mounted on the skin surface during walking using a multi-segment foot model. Eighteen participants underwent an analysis of foot kinematics whilst walking barefoot and wearing shoes with holes of different sizes (15 mm, 20 mm and 25 mm). The analysis was conducted in two parts: firstly, the trajectories of the individual skin-mounted markers were analysed in a 2D ellipse to investigate the total displacement of each marker during stance; secondly, a geometrical analysis was conducted to assess cluster deformation of the hindfoot and midfoot-forefoot segments. Whereas movement of the markers in the 15 and 20 mm conditions was restricted, marker movement in the 25 mm condition did not exceed the radius at any anatomical location. Despite significant differences in the isotropy index of the medial and lateral calcaneus markers between the 25 mm and barefoot conditions, the differences were due to the effect of footwear on the foot and not a result of the marker wands hitting the shoe upper. In conclusion, the proposed method and results can be used to increase confidence in the representativeness of joint kinematics with respect to in-shoe multi-segment foot motion during walking. Crown Copyright © 2014. Published by Elsevier B.V. All rights reserved.
Hwang, Ji-Won; Yang, Jeong Hoon; Song, Young Bin; Park, Taek Kyu; Lee, Joo Myung; Kim, Ji-Hwan; Jang, Woo Jin; Choi, Seung-Hyuk; Hahn, Joo-Yong; Choi, Jin-Ho; Ahn, Joonghyun; Carriere, Keumhee; Lee, Sang Hoon; Gwon, Hyeon-Cheol
2018-02-22
We sought to determine the association of reciprocal change in the ST-segment with myocardial injury assessed by cardiac magnetic resonance (CMR) in patients with ST-segment elevation myocardial infarction (STEMI) undergoing primary percutaneous coronary intervention (PCI). We performed CMR imaging in 244 patients who underwent primary PCI for their first STEMI; CMR was performed a median 3 days after primary PCI. The first electrocardiogram was analyzed, and patients were stratified according to the presence of reciprocal change. The primary outcome was infarct size measured by CMR. Secondary outcomes were area at risk and myocardial salvage index. Patients with reciprocal change (n=133, 54.5%) had a lower incidence of anterior infarction (27.8% vs 71.2%, P < .001) and shorter symptom onset-to-balloon time (221.5±169.8 vs 289.7±337.3 min, P=.042). Using a multiple linear regression model, we found that patients with reciprocal change had a larger area at risk (P=.002) and a greater myocardial salvage index (P=.04) than patients without reciprocal change. Consequently, myocardial infarct size was not significantly different between the 2 groups (P=.14). The rate of major adverse cardiovascular events, including all-cause death, myocardial infarction, and repeat coronary revascularization, was similar between the 2 groups after 2 years of follow-up (P=.92). Reciprocal ST-segment change was associated with a larger extent of ischemic myocardium at risk and more myocardial salvage but not with final infarct size or adverse clinical outcomes in STEMI patients undergoing primary PCI. Copyright © 2018 Sociedad Española de Cardiología. Published by Elsevier España, S.L.U. All rights reserved.
Tellez, Armando; Rousselle, Serge; Palmieri, Taylor; Rate, William R; Wicks, Joan; Degrange, Ashley; Hyon, Chelsea M; Gongora, Carlos A; Hart, Randy; Grundy, Will; Kaluza, Greg L; Granada, Juan F
2013-12-01
Catheter-based renal artery denervation has been demonstrated to be effective in decreasing blood pressure among patients with refractory hypertension. The anatomic distribution of renal artery nerves may influence the safety and efficacy profile of this procedure. We aimed to describe the anatomic distribution and density of periarterial renal nerves in the porcine model. Thirty renal arterial sections were included in the analysis by harvesting a tissue block containing the renal arteries and perirenal tissue from each animal. Each artery was divided into 3 segments (proximal, mid, and distal) and assessed for total number, size, and depth of the nerves according to location. Nerve counts were greatest proximally (45.62% of the total nerves) and decreased distally (mid, 24.58%; distal, 29.79%). The distribution of nerve sizes was similar across all 3 sections (∼40% of the nerves, 50-100 μm; ∼30%, 0-50 μm; ∼20%, 100-200 μm; and ∼10%, 200-500 μm). In the arterial segments, ∼45% of the nerves were located within 2 mm of the arterial wall, whereas ∼52% of all nerves were located within 2.5 mm of the arterial wall. Sympathetic efferent fibers overwhelmingly outnumbered sensory afferent fibers, intermixed within the nerve bundles. In the porcine model, renal artery nerves are seen more frequently in the proximal segment of the artery. Nerve size distribution appears to be homogeneous throughout the artery length. Nerve bundles progress closer to the arterial wall in the distal segments of the artery. This anatomic distribution may have implications for the future development of renal denervation therapies. Crown Copyright © 2013. Published by Mosby, Inc. All rights reserved.
Gebler, J.B.
2004-01-01
The related topics of spatial variability of aquatic invertebrate community metrics, implications of spatial patterns of metric values for distributions of aquatic invertebrate communities, and ramifications of natural variability for the detection of human perturbations were investigated. Four metrics commonly used for stream assessment were computed for 9 stream reaches within a fairly homogeneous, minimally impaired stream segment of the San Pedro River, Arizona. Metric variability was assessed for differing sampling scenarios using simple permutation procedures. Spatial patterns of metric values suggest that aquatic invertebrate communities are patchily distributed on subsegment and segment scales, which causes metric variability. Wide ranges of metric values resulted in wide ranges of metric coefficients of variation (CVs) and minimum detectable differences (MDDs), and both CVs and MDDs often increased as sample size (number of reaches) increased, suggesting that any particular set of sampling reaches could yield misleading estimates of population parameters and of the effects that can be detected. Mean metric variabilities were substantial, with the result that only fairly large differences in metrics would be declared significant at α = 0.05 and β = 0.20. The number of reaches required to obtain MDDs of 10% and 20% varied with significance level and power, and differed for different metrics, but was generally large, ranging into tens and hundreds of reaches. Study results suggest that metric values from one or a small number of stream reaches may not be adequate to represent a stream segment, depending on effect sizes of interest, and that larger sample sizes are necessary to obtain reasonable estimates of metrics and sample statistics. For bioassessment to progress, spatial variability may need to be investigated in many systems and should be considered when designing studies and interpreting data.
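The simple permutation procedure for assessing metric variability under differing sampling scenarios might look like the following sketch: reaches are repeatedly resampled without replacement and the coefficient of variation recomputed for each sample size. The metric values and function name are illustrative, not data from the study.

```python
import random
import statistics

def metric_cv_by_sample_size(metric_values, sample_sizes, n_perm=1000, seed=1):
    """For each sample size (number of reaches), repeatedly draw reaches
    without replacement and compute the coefficient of variation (CV) of
    the sampled metric values; return the mean CV for each sample size."""
    rng = random.Random(seed)
    results = {}
    for n in sample_sizes:
        cvs = []
        for _ in range(n_perm):
            sample = rng.sample(metric_values, n)  # one sampling scenario
            m = statistics.mean(sample)
            if m != 0:
                cvs.append(statistics.stdev(sample) / abs(m))
        results[n] = statistics.mean(cvs)
    return results
```

The spread of CVs across permutations, rather than the mean alone, is what reveals how misleading any single choice of reaches could be.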
A Tracker for Broken and Closely-Spaced Lines
1997-10-01
to combine the current level flow estimate and the previous level flow estimate. However, the result is still not good enough for some reasons. First ... geometric attributes are not good enough to discriminate line segments when they are crowded, parallel and closely spaced to each other. On the other ... level information [10]. Still, it is not good at dealing with closely-spaced line segments, because it requires a proper size of square neighborhood to
Reproducibility of myelin content-based human habenula segmentation at 3 Tesla.
Kim, Joo-Won; Naidich, Thomas P; Joseph, Joshmi; Nair, Divya; Glasser, Matthew F; O'halloran, Rafael; Doucet, Gaelle E; Lee, Won Hee; Krinsky, Hannah; Paulino, Alejandro; Glahn, David C; Anticevic, Alan; Frangou, Sophia; Xu, Junqian
2018-03-26
In vivo morphological study of the human habenula, a pair of small epithalamic nuclei adjacent to the dorsomedial thalamus, has recently gained significant interest for its role in reward and aversion processing. However, segmenting the habenula from in vivo magnetic resonance imaging (MRI) is challenging due to the habenula's small size and low anatomical contrast. Although manual and semi-automated habenula segmentation methods have been reported, the test-retest reproducibility of the segmented habenula volume and the consistency of the boundaries of habenula segmentation have not been investigated. In this study, we evaluated the intra- and inter-site reproducibility of in vivo human habenula segmentation from 3T MRI (0.7-0.8 mm isotropic resolution) using our previously proposed semi-automated myelin contrast-based method and its fully-automated version, as well as a previously published manual geometry-based method. The habenula segmentation using our semi-automated method showed consistent boundary definition (high Dice coefficient, low mean distance, and moderate Hausdorff distance) and reproducible volume measurement (low coefficient of variation). Furthermore, the habenula boundary in our semi-automated segmentation from 3T MRI agreed well with that in the manual segmentation from 7T MRI (0.5 mm isotropic resolution) of the same subjects. Overall, our proposed semi-automated habenula segmentation showed reliable and reproducible habenula localization, while its fully-automated version offers an efficient way for large sample analysis. © 2018 Wiley Periodicals, Inc.
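The reproducibility measures named above (Dice coefficient for boundary agreement, coefficient of variation for repeated volume measurements) are standard and can be computed as in this short sketch; the function names are ours, not from the paper.

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice overlap between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def volume_cv(volumes):
    """Coefficient of variation of repeated volume measurements
    (sample standard deviation divided by the mean)."""
    v = np.asarray(volumes, dtype=float)
    return v.std(ddof=1) / v.mean()
```

Mean surface distance and Hausdorff distance, also reported in the study, would additionally require extracting the mask boundaries.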
NASA Astrophysics Data System (ADS)
Barthelat, Francois
2014-12-01
Nacre, bone and spider silk are staggered composites in which inclusions of high aspect ratio reinforce a softer matrix. Such staggered composites have emerged through natural selection as the best configuration to produce stiffness, strength and toughness simultaneously. As a result, these remarkable materials are increasingly serving as models for synthetic composites with unusual and attractive performance. While several models have been developed to predict basic properties of biological and bio-inspired staggered composites, the designer is still left to struggle with finding optimum parameters. Unresolved issues include choosing optimum properties for inclusions and matrix, and resolving the contradictory effects of certain design variables. Here we overcome these difficulties with a multi-objective optimization for simultaneous high stiffness, strength and energy absorption in staggered composites. Our optimization scheme includes the material properties of inclusions and matrix as design variables. This process reveals new guidelines, for example that the staggered microstructure is only advantageous if the tablets are at least five times stronger than the interfaces, and only if high volume concentrations of tablets are used. We finally compile the results into a step-by-step optimization procedure which can be applied to the design of any type of high-performance staggered composite at any length scale. The procedure produces optimum designs which are consistent with the materials and microstructure of natural nacre, confirming that this natural material is indeed optimized for mechanical performance.
NASA Astrophysics Data System (ADS)
Purwanto, Agung; Yusmaniar, Ferdiani, Fatmawati; Damayanti, Rachma
2017-03-01
APTS-modified silica gel was synthesized via a sol-gel process from silica gel obtained from corn cobs. The silica gel was synthesized from corn cobs and then chemically modified with a silane coupling agent bearing an amine group (NH2), yielding silica gel modified with 3-aminopropyltriethoxysilane (APTS). Characterization of the APTS-modified silica gel by SEM-EDX showed a particle size of 20 µm, with elemental mass percentages of nitrogen (N) 15.56%, silicon (Si) 50.69% and oxygen (O) 33.75%. In addition, FTIR spectra of the APTS-modified silica gel showed absorption bands of the functional groups silanol (Si-OH), siloxane (Si-O-Si), an aliphatic chain (-CH2-), and amine (NH2). XRD characterization showed broad reflections for silica gel and APTS-modified silica gel at 2θ of 21.094° and 21.32°, respectively, indicating that both materials were amorphous. The optimum pH and contact time for adsorption of Cu(II) onto the APTS-modified silica gel were determined using AAS. The optimum pH was 6 and the optimum contact time was 30 minutes, and the adsorption of Cu(II) followed the Freundlich isotherm model.
Optimization of space manufacturing systems
NASA Technical Reports Server (NTRS)
Akin, D. L.
1979-01-01
Four separate analyses are detailed: transportation to low earth orbit, orbit-to-orbit optimization, parametric analysis of SPS logistics based on earth and lunar source locations, and an overall program option optimization implemented with linear programming. It is found that smaller vehicles are favored for earth launch, with the current Space Shuttle being right at the optimum payload size. Fully reusable launch vehicles represent a savings of 50% over the Space Shuttle; increased reliability with less maintenance could further double the savings. An optimization of orbit-to-orbit propulsion systems using lunar oxygen for propellants shows that ion propulsion is preferable by a 3:1 cost margin over a mass driver reaction engine (MDRE) at optimum values; however, ion engines cannot yet operate in the lower exhaust velocity range where the optimum lies, and total program costs between the two systems are ambiguous. Heavier payloads favor the use of an MDRE. A parametric model of a space manufacturing facility is proposed and used to analyze recurring costs, total costs, and net present value discounted cash flows. Parameters studied include productivity, effects of discounting, materials source tradeoffs, economic viability of closed-cycle habitats, and effects of varying degrees of nonterrestrial SPS materials needed from earth. Finally, candidate optimal scenarios are chosen and implemented in a linear program with external constraints to arrive at an optimum blend of SPS production strategies that maximizes returns.
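As a toy illustration of the final step, a blend of production strategies can be posed as a linear program. The objective coefficients and constraints below are hypothetical, and a two-variable vertex-enumeration solver stands in for a general LP code:

```python
import itertools
import numpy as np

def solve_lp_2d(c, A, b):
    """Maximize c.x subject to A.x <= b and x >= 0 for two variables,
    by enumerating the vertices of the feasible polygon (an LP optimum,
    when it exists, always lies at a vertex)."""
    # append the non-negativity constraints x1 >= 0, x2 >= 0
    A_all = np.vstack([A, [[-1.0, 0.0], [0.0, -1.0]]])
    b_all = np.concatenate([np.asarray(b, float), [0.0, 0.0]])
    best_x, best_val = None, -np.inf
    for i, j in itertools.combinations(range(len(b_all)), 2):
        M = A_all[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                       # parallel constraints: no vertex
        x = np.linalg.solve(M, b_all[[i, j]])
        if np.all(A_all @ x <= b_all + 1e-9):   # feasible vertex
            val = float(np.asarray(c) @ x)
            if val > best_val:
                best_x, best_val = x, val
    return best_x, best_val

# Hypothetical blend: returns of 3 and 2 per unit of two SPS strategies,
# a shared launch-capacity limit x1 + x2 <= 4, and a cap x1 <= 3.
x_opt, ret = solve_lp_2d([3.0, 2.0], [[1.0, 1.0], [1.0, 0.0]], [4.0, 3.0])
```

The actual study optimizes many more decision variables and external constraints, for which a general simplex or interior-point solver would be used.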
NASA Astrophysics Data System (ADS)
Sun, M.; Yu, P. F.; Fu, J. X.; Ji, X. Q.; Jiang, T.
2017-08-01
The optimal process parameters and conditions for treating slaughterhouse wastewater by a coagulation sedimentation-anaerobic filter (AF)-biological contact oxidation process were studied, to address the treatment of high-concentration organic wastewater from small and medium-sized slaughter plants. Sedimentation experiments determined the suitable water temperature and optimum reaction time; the effects of filtration rate and reflux ratio on COD and SS in the anaerobic biological filter, and of biofilm thickness and gas-water ratio on NH3-N and COD in the biological contact oxidation tank, were then studied. The results show that the optimum temperature is 16-24°C and the optimum reaction time is 20 min in coagulating sedimentation, while the optimum filtration rate is 0.6 m/h and the optimum reflux ratio is 300% in the anaerobic biological filter reactor. The most suitable biofilm thickness range is 1.8-2.2 mm and the most suitable gas-water ratio is 12:1-14:1 in the biological contact oxidation pool. During 80 days of continuous operation of the coupled process, the average effluent mass concentrations of COD, TP and TN were 15.57 mg/L, 40 mg/L and 0.63 mg/L, and the average removal rates were 98.93%, 86.10% and 88.95%, respectively. The coupled process operates stably with good effluent quality and is suitable for industrial application.
NASA Technical Reports Server (NTRS)
Graham, A. B.
1977-01-01
Small- and large-scale models of supersonic cruise fighter vehicles were used to determine the effectiveness of airframe/propulsion integration concepts for improved low-speed performance and stability and control characteristics. Computer programs were used for engine/airframe sizing studies to yield optimum vehicle performance.
High Maneuverability Airframe: Investigation of Fin and Canard Sizing for Optimum Maneuverability
2014-09-01
overset grids (unified-grid); 5) total variation diminishing discretization based on a new multidimensional interpolation framework; 6) Riemann solvers to ... Aerodynamics ... 3.1.1 Solver ... describes the methodology used for the simulations. 3.1.1 Solver: The double-precision solver of a commercially available code, CFD++ v12.1.1, ...
Altenburger, Andreas
2016-01-01
Kinorhynchs are ecdysozoan animals with a phylogenetic position close to priapulids and loriciferans. To understand the nature of segmentation within Kinorhyncha and to infer a probable ancestry of segmentation within the last common ancestor of Ecdysozoa, the musculature and the nervous system of the allomalorhagid kinorhynch Pycnophyes kielensis were investigated by use of immunohistochemistry, confocal laser scanning microscopy, and 3D reconstruction software. The kinorhynch body plan comprises 11 trunk segments. Trunk musculature consists of paired ventral and dorsal longitudinal muscles in segments 1-10 as well as dorsoventral muscles in segments 1-11. Dorsal and ventral longitudinal muscles insert on apodemes of the cuticle inside the animal within each segment. Strands of longitudinal musculature extend over segment borders in segments 1-6. In segments 7-10, the trunk musculature is confined to the segments. Musculature of the digestive system comprises a strong pharyngeal bulb with attached mouth cone muscles as well as pharyngeal bulb protractors and retractors. The musculature of the digestive system shows no sign of segmentation. Judged by the size of the pharyngeal bulb protractors and retractors, the pharyngeal bulb, as well as the introvert, is moved passively by internal pressure caused by concerted action of the dorsoventral muscles. The nervous system comprises a neuropil ring anterior to the pharyngeal bulb. Associated with the neuropil ring are flask-shaped serotonergic somata extending anteriorly and posteriorly. A ventral nerve cord is connected to the neuropil ring and runs toward the anterior until an attachment point in segment 1, and from there toward the posterior with one ganglion in segment 6. Segmentation within Kinorhyncha likely evolved from an unsegmented ancestor. 
This conclusion is supported by continuous trunk musculature in the anterior segments 1-6, continuous pharyngeal bulb protractors and retractors throughout the anterior segments, no sign of segmentation within the digestive system, and the absence of ganglia in most segments. Only in segments 7-10 does the musculature show evidence of segmentation that fits the definition of an anteroposteriorly repeated body unit.
NASA Astrophysics Data System (ADS)
Contreras-Reyes, Eduardo; Flueh, Ernst R.; Grevemeyer, Ingo
2010-12-01
Based on a compilation of published and new seismic refraction and multichannel seismic reflection data along the south central Chile margin (33°-46°S), we study the processes of sediment accretion and subduction and their implications for megathrust seismicity. In terms of frontal accretionary prism (FAP) size, the marine south central Chile fore arc can be divided into two main segments: (1) the Maule segment (south of the Juan Fernández Ridge and north of the Mocha block), characterized by a relatively large FAP (20-40 km wide), and (2) the Chiloé segment (south of the Mocha block and north of the Nazca-Antarctic-South America plate junction), characterized by a small FAP (≤10 km wide). In addition, the Maule and Chiloé segments correlate with a thin (<1 km thick) and a thick (˜1.5 km thick) subduction channel, respectively. The Mocha block lies between ˜37.5° and 40°S and is bounded by the Chile trench and the Mocha and Valdivia fracture zones. This region separates young (0-25 Ma) oceanic lithosphere in the south from old (30-35 Ma) oceanic lithosphere in the north, and it represents a fundamental tectonic boundary separating two different styles of sediment accretion and subduction. A process responsible for this segmentation could be related to differences in initial angles of subduction, which in turn depend on the amplitude of the down-deflected oceanic lithosphere under trench sediment loading. On the other hand, a small FAP along the Chiloé segment is coincident with the rupture area of the trans-Pacific tsunamigenic 1960 earthquake (Mw = 9.5), while a relatively large FAP along the Maule segment is coincident with the rupture area of the 2010 earthquake (Mw = 8.8). Differences in earthquake and tsunami magnitudes between these events can be explained in terms of the FAP size along the Chiloé and Maule segments, which controls the location of the updip limit of the seismogenic zone.
The rupture area of the 1960 event also correlates with a thick subduction channel (Chiloé segment) that may provide enough smoothness at the subduction interface allowing long lateral earthquake rupture propagation.
Segmentation propagation for the automated quantification of ventricle volume from serial MRI
NASA Astrophysics Data System (ADS)
Linguraru, Marius George; Butman, John A.
2009-02-01
Accurate ventricle volume estimates could potentially improve the understanding and diagnosis of communicating hydrocephalus. Postoperative communicating hydrocephalus has been recognized in patients with brain tumors, where the changes in ventricle volume can be difficult to identify, particularly over short time intervals. Because of the complex alterations of brain morphology in these patients, the segmentation of brain ventricles is challenging. Our method evaluates ventricle size from serial brain MRI examinations: we (i) combined serial images to increase SNR, (ii) automatically segmented the combined image to generate a ventricle template using fast marching methods and geodesic active contours, and (iii) propagated the segmentation using deformable registration of the original MRI datasets. By applying this deformation to the ventricle template, serial volume estimates were obtained in a robust manner from routine clinical images (0.93 overlap) and their variation analyzed.
NASA Astrophysics Data System (ADS)
Tavana, Jalal; Edrisi, Mohammad
2016-03-01
In this study, cobalt ferrite (CoFe2O4) nanoparticles were synthesized by two novel methods. The first method is based on the thermolysis of metal-NN complexes. In the second method, a template-free sonochemical treatment of mixed cobalt and iron chelates of α-nitroso-β-naphthol (NN) was applied. Products prepared through method 1 were spherical, with a high specific surface area (54.39 m2 g-1) and a small average crystallite size of 13 nm. However, CoFe2O4 nanoparticles prepared by method 2 had random shapes, a broad range of crystallite sizes and a low specific surface area of 25.46 m2 g-1, though they were highly pure. A Taguchi experimental design was implemented in method 1 to determine the optimum conditions and obtain the optimum catalyst. The structural and morphological properties of the products were investigated by x-ray diffraction, field emission scanning electron microscopy, transmission electron microscopy, Fourier transform infrared spectroscopy, Brunauer-Emmett-Teller analysis and dynamic laser light scattering. Crystallite size calculations were performed using the Williamson-Hall method on the XRD spectra. The photocatalytic activity of the optimum nanocrystalline cobalt ferrite was investigated for degradation of a representative pollutant, methylene blue (MB), with visible light as the energy source. The results showed that some 92% degradation of MB could be achieved after 7 h of visible light irradiation.
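The Williamson-Hall analysis mentioned above fits the relation β·cosθ = Kλ/D + 4ε·sinθ, so a straight-line fit of β·cosθ against 4·sinθ yields the crystallite size D from the intercept and the microstrain ε from the slope. The following sketch assumes Cu Kα radiation (λ ≈ 1.5406 Å) and shape factor K = 0.9, which are conventional but not stated in the abstract:

```python
import numpy as np

def williamson_hall(two_theta_deg, beta_rad, wavelength=1.5406e-10, K=0.9):
    """Williamson-Hall analysis of XRD peak broadening.
    Fits beta*cos(theta) = K*lambda/D + 4*eps*sin(theta).
    two_theta_deg : 2-theta peak positions in degrees
    beta_rad      : peak breadths in radians
    Returns (crystallite size D in metres, microstrain eps)."""
    theta = np.deg2rad(np.asarray(two_theta_deg)) / 2.0
    y = np.asarray(beta_rad) * np.cos(theta)   # beta * cos(theta)
    x = 4.0 * np.sin(theta)                    # 4 * sin(theta)
    slope, intercept = np.polyfit(x, y, 1)     # slope = eps, intercept = K*lambda/D
    D = K * wavelength / intercept
    return D, slope
```

For a 13 nm crystallite size, as reported for method 1, the function would return D ≈ 13e-9 m given peak breadths consistent with that size.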
Self-nanoemulsifying drug delivery systems of tamoxifen citrate: design and optimization.
Elnaggar, Yosra S R; El-Massik, Magda A; Abdallah, Ossama Y
2009-10-01
Tamoxifen citrate is an antiestrogen for peroral breast cancer treatment. Its delivery encounters the problems of poor water solubility and vulnerability to enzymatic degradation in both the intestine and the liver. In the current study, tamoxifen citrate self-nanoemulsifying drug delivery systems (SNEDDS) were prepared in an attempt to circumvent such obstacles. Preliminary screening was carried out to select proper ingredient combinations. All surfactants screened were recognized for their bioactive aspects. Ternary phase diagrams were then constructed and an optimum system was designated. Three tamoxifen SNEDDS were then compared for optimization. The systems were assessed for robustness to dilution, globule size, cloud point, surface morphology and drug release. An optimum system composed of tamoxifen citrate (1.6%), Maisine 35-1 (16.4%), Capryol 90 (32.8%), Cremophor RH40 (32.8%) and propylene glycol (16.4%) was selected. The system was robust to different dilution volumes and types. It possessed a mean globule size of 150 nm and a cloud point of 80 degrees C. Transmission electron microscopy demonstrated spherical particle morphology. The drug release from the selected formulation was significantly higher than from the other SNEDDS and from a drug suspension as well. Realizing drug incorporation into an optimized nano-sized SNEDD system that encompasses a bioactive surfactant, our results suggest that the prepared system could be promising for improving the oral efficacy of tamoxifen citrate.
Shu, Guowei; Bao, Chunju; Chen, He; Wang, Changfeng; Yang, Hui
2016-01-01
Goat milk processing is currently limited largely to goat milk powder and liquid milk; the products are mainly milk powder, with only a few made as milk tablets. Therefore, the study of probiotic goat milk has great significance for the full use of goat milk and the development of the goat milk industry in China. The effect of fermentation temperature (35°C, 37°C, 39°C), strain ratio (1:1:1, 2:1:1, 3:1:1) and inoculum size (4%, 5%, 6%) on the viable counts of L. acidophilus and B. bifidum, total bacteria and sensory value during the fermentation of L. acidophilus and B. bifidum goat yogurt (AB-goat yogurt) was investigated. The optimum fermentation conditions for AB-goat yogurt were: fermentation temperature 38°C, strain ratio 2:1:1, and inoculum size 6%. Under the optimum conditions, the viable counts of B. bifidum, L. acidophilus and total bacteria and the sensory value reached (4.30 ±0.11)×107 cfu/mL, (1.39 ±0.09)×108 cfu/mL, (1.82±0.06)×109 cfu/mL and 7.90 ±0.14, respectively. The fermentation temperature, strain ratio and inoculum size had a significant effect on the fermentation of AB-goat yogurt, and these results are beneficial for developing AB-goat yogurt.
Aling, Joanna; Podczeck, Fridrun
2012-11-20
The aim of this work was to investigate the plug formation and filling properties of powdered herbal leaves using hydrogenated cotton seed oil as an alternative lubricant. In a first step, unlubricated and lubricated herbal powders were studied on a small scale using a plug simulator, and low-force compression physics and parameterization techniques were used to narrow down the range in which the optimum amount of lubricant required would be found. In a second step these results were complemented with investigations into the flow properties of the powders based on packing (tapping) experiments to establish the final optimum lubricant concentration. Finally, capsule filling of the optimum formulations was undertaken using an instrumented tamp filling machine. This work has shown that hydrogenated cotton seed oil can be used advantageously for the lubrication of herbal leaf powders. Stickiness as observed with magnesium stearate did not occur, and the optimum lubricant concentration was found to be less than that required for magnesium stearate. In this work, lubricant concentrations of 1% or less hydrogenated cotton seed oil were required to fill herbal powders into capsules on the instrumented tamp-filling machine. It was found that in principle all powders could be filled successfully, but that for some powders the use of higher compression settings was disadvantageous. Relationships between the particle size distributions of the powders, their flow and consolidation as well as their filling properties could be identified by multivariate statistical analysis. The work has demonstrated that a combination of the identification of plug formation and powder flow properties is helpful in establishing the optimum lubricant concentration required using a small quantity of powder and a powder plug simulator. On an automated tamp-filling machine, these optimum formulations produced satisfactory capsules in terms of coefficient of fill weight variability and capsule weight. 
Copyright © 2012 Elsevier B.V. All rights reserved.
Automated segmentation of geographic atrophy using deep convolutional neural networks
NASA Astrophysics Data System (ADS)
Hu, Zhihong; Wang, Ziyuan; Sadda, SriniVas R.
2018-02-01
Geographic atrophy (GA) is an end-stage manifestation of advanced age-related macular degeneration (AMD), the leading cause of blindness and visual impairment in developed nations. Techniques to rapidly and precisely detect and quantify GA would appear to be of critical importance in advancing the understanding of its pathogenesis. In this study, we develop an automated supervised classification system using deep convolutional neural networks (CNNs) for segmenting GA in fundus autofluorescence (FAF) images. More specifically, to enhance the contrast of GA relative to the background, we apply contrast limited adaptive histogram equalization. Blood vessels may cause GA segmentation errors due to their similar intensity level to GA. A tensor-voting technique is performed to identify the blood vessels, and a vessel inpainting technique is applied to suppress the GA segmentation errors due to the blood vessels. To handle the large variation of GA lesion sizes, three deep CNNs with three differently sized input image patches are applied. Fifty randomly chosen FAF images were obtained from fifty subjects with GA. The algorithm-defined GA regions are compared with manual delineation by a certified grader. A two-fold cross-validation is applied to evaluate the algorithm performance. The mean segmentation accuracy, true positive rate (i.e. sensitivity), true negative rate (i.e. specificity), positive predictive value, false discovery rate, and overlap ratio between the algorithm- and manually-defined GA regions are 0.97 +/- 0.02, 0.89 +/- 0.08, 0.98 +/- 0.02, 0.87 +/- 0.12, 0.13 +/- 0.12, and 0.79 +/- 0.12, respectively, demonstrating a high level of agreement.
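The three-scale patch input can be sketched as follows. The patch sizes, zero-padding scheme, and block-averaging resize below are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def multiscale_patches(image, center, sizes=(16, 32, 64)):
    """Extract square patches of several sizes centred on one pixel and
    rescale each to the smallest size by block averaging, mimicking a
    multi-stream multiscale CNN input. The image is zero-padded so that
    patches near the border are well defined. Sizes must be multiples of
    the smallest size."""
    out = []
    target = min(sizes)
    pad = max(sizes) // 2
    padded = np.pad(image, pad, mode="constant")
    cy, cx = center[0] + pad, center[1] + pad
    for s in sizes:
        h = s // 2
        patch = padded[cy - h:cy - h + s, cx - h:cx - h + s]
        f = s // target
        # block-average an (s x s) patch down to (target x target)
        patch = patch.reshape(target, f, target, f).mean(axis=(1, 3))
        out.append(patch)
    return out
```

Each of the three CNN streams would then receive one of the equally sized patches, so that the same network input size carries context from a different spatial extent.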
NASA Astrophysics Data System (ADS)
Yin, Yin; Fotin, Sergei V.; Periaswamy, Senthil; Kunz, Justin; Haldankar, Hrishikesh; Muradyan, Naira; Cornud, François; Turkbey, Baris; Choyke, Peter
2012-02-01
Manual delineation of the prostate is a challenging task for a clinician due to its complex and irregular shape. Furthermore, the need for precisely targeting the prostate boundary continues to grow. Planning for radiation therapy, MR-ultrasound fusion for image-guided biopsy, multi-parametric MRI tissue characterization, and context-based organ retrieval are examples where accurate prostate delineation can play a critical role in a successful patient outcome. Therefore, a robust automated full prostate segmentation system is desired. In this paper, we present an automated prostate segmentation system for 3D MR images. In this system, the prostate is segmented in two steps: the prostate displacement and size are first detected, and then the boundary is refined by a shape model. The detection approach is based on normalized gradient fields cross-correlation. This approach is fast, robust to intensity variation and provides good accuracy for initializing a prostate mean shape model. The refinement model is based on a graph-search framework, which incorporates both shape and topology information during deformation. We generated the graph cost using trained classifiers and used coarse-to-fine search and region-specific classifier training. The proposed algorithm was developed using 261 training images and tested on another 290 cases. The segmentation performance, with mean DSC ranging from 0.89 to 0.91 depending on the evaluation subset, demonstrates state-of-the-art performance. Running time for the system is about 20 to 40 seconds depending on image size and resolution.
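The normalized gradient fields (NGF) representation underlying the detection step divides each image gradient by its regularized magnitude, so that only edge orientation, not intensity scale, is compared. A minimal 2D sketch is given below; the scoring form (mean squared inner product of the two fields) and the parameter eps are our assumptions, not the paper's exact formulation:

```python
import numpy as np

def normalized_gradient_field(img, eps=1e-3):
    """Normalized gradient field: gradients scaled to near-unit length,
    making the representation robust to intensity variation."""
    gy, gx = np.gradient(img.astype(float))      # derivatives along rows, cols
    mag = np.sqrt(gx**2 + gy**2 + eps**2)        # eps regularizes flat regions
    return gx / mag, gy / mag

def ngf_score(fixed, moving):
    """Similarity of two images as the mean squared inner product of
    their normalized gradient fields (higher = better-aligned edges)."""
    fx, fy = normalized_gradient_field(fixed)
    mx, my = normalized_gradient_field(moving)
    return float(np.mean((fx * mx + fy * my) ** 2))
```

In a detection setting, such a score would be evaluated over candidate displacements of a mean-shape template, taking the maximum as the initial prostate position.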
Ng, Vivian G; Mori, Ken; Costa, Ricardo A; Kish, Mitra; Mehran, Roxana; Urata, Hidenori; Saku, Keijiro; Stone, Gregg W; Lansky, Alexandra J
2016-03-15
Women with AMI may have worse outcomes than men. However, it is unclear if this is related to differences in treatment, treatment effect or gender-specific factors. We sought to determine whether primary percutaneous coronary intervention (PCI) has a differential impact on infarct size, myocardial perfusion and ST-segment resolution in men and women with acute myocardial infarction (AMI). A total of 501 AMI patients were prospectively enrolled in the EMERALD study and underwent PCI with or without distal protection. Post hoc gender subset analysis was performed. The 501 patients (108 women, 393 men) with ST-segment elevation AMI presenting within 6 h underwent primary (or rescue) PCI with stenting and a distal protection device. Women were older, had more hypertension, less prior AMI, smaller BSA, and smaller vessel size, but had similar rates of diabetes (30% versus 20.2%, p=0.87), LAD infarct, and time-to-reperfusion compared to men. Women more frequently had complete ST-resolution (>70%) at 30 days (72.8% versus 59.8%, p=0.02) and smaller infarct size compared to men (12.2±19.6% versus 18.4±18.5%, p=0.006). At 6 months, TLR (6.9% versus 5.2%) and MACE (11.4% versus 10.3%) were similar for women and men. Despite worse comorbidities, women with AMI treated with primary PCI with stenting showed similar early and midterm outcomes compared to men. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Segmentation of thalamus from MR images via task-driven dictionary learning
NASA Astrophysics Data System (ADS)
Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D.; Prince, Jerry L.
2016-03-01
Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.
Moya, Nikolas; Falcão, Alexandre X; Ciesielski, Krzysztof C; Udupa, Jayaram K
2014-01-01
Graph-cut algorithms have been extensively investigated for interactive binary segmentation, where the simultaneous delineation of multiple objects can save considerable user time. We present an algorithm (named DRIFT) for 3D multiple object segmentation based on seed voxels and Differential Image Foresting Transforms (DIFTs) with relaxation. DRIFT underpins efficient implementations of some state-of-the-art methods. The user can add/remove markers (seed voxels) along a sequence of executions of the DRIFT algorithm to improve segmentation. Its first execution takes time linear in the image size, while subsequent executions for corrections take sublinear time in practice. At each execution, DRIFT first runs the DIFT algorithm, then applies diffusion filtering to smooth boundaries between objects (and background), and finally corrects possible disconnections of objects from their seeds. We evaluate DRIFT on 3D CT images of the thorax, segmenting the arterial system, esophagus, left pleural cavity, right pleural cavity, trachea and bronchi, and the venous system.
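The seeded delineation step can be illustrated with a plain (non-differential) image foresting transform on a 2D image, using the common max-arc path cost. This is a sketch of the general IFT idea with names of our choosing, not the authors' DRIFT implementation:

```python
import heapq
import numpy as np

def ift_segment(image, seeds):
    """Seeded multi-label segmentation by the Image Foresting Transform
    with the f_max path cost: the cost of a path is the maximum absolute
    intensity difference along its arcs. `seeds` maps (row, col) -> label.
    Returns a label map (0 = unreached)."""
    h, w = image.shape
    cost = np.full((h, w), np.inf)
    label = np.zeros((h, w), dtype=int)
    heap = []
    for (r, c), lab in seeds.items():
        cost[r, c] = 0.0
        label[r, c] = lab
        heapq.heappush(heap, (0.0, r, c, lab))
    while heap:
        c0, r, c, lab = heapq.heappop(heap)
        if c0 > cost[r, c]:
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w:
                arc = abs(float(image[rr, cc]) - float(image[r, c]))
                new = max(c0, arc)
                if new < cost[rr, cc]:
                    cost[rr, cc] = new
                    label[rr, cc] = lab
                    heapq.heappush(heap, (new, rr, cc, lab))
    return label

# two flat regions, one seed per region
img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9]])
lab = ift_segment(img, {(0, 0): 1, (0, 3): 2})
```

Each voxel ends up with the label of the seed reachable by the cheapest path, which is what makes correcting a result by adding or removing seeds natural; the differential variant recomputes only the affected forest regions.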
Segmentation of Thalamus from MR images via Task-Driven Dictionary Learning.
Liu, Luoluo; Glaister, Jeffrey; Sun, Xiaoxia; Carass, Aaron; Tran, Trac D; Prince, Jerry L
2016-02-27
Automatic thalamus segmentation is useful to track changes in thalamic volume over time. In this work, we introduce a task-driven dictionary learning framework to find the optimal dictionary given a set of eleven features obtained from T1-weighted MRI and diffusion tensor imaging. In this dictionary learning framework, a linear classifier is designed concurrently to classify voxels as belonging to the thalamus or non-thalamus class. Morphological post-processing is applied to produce the final thalamus segmentation. Due to the uneven size of the training data samples for the non-thalamus and thalamus classes, a non-uniform sampling scheme is proposed to train the classifier to better discriminate between the two classes around the boundary of the thalamus. Experiments are conducted on data collected from 22 subjects with manually delineated ground truth. The experimental results are promising in terms of improvements in the Dice coefficient of the thalamus segmentation over state-of-the-art atlas-based thalamus segmentation algorithms.
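The non-uniform sampling idea above can be sketched as oversampling training voxels in a band around the thalamus boundary. The band definition, the 70% boundary fraction, and all names below are hypothetical choices for illustration; the paper's exact scheme is not given here:

```python
import numpy as np

def boundary_band(mask):
    """Cells whose 4-neighborhood crosses the mask boundary."""
    band = np.zeros_like(mask, dtype=bool)
    for ax in (0, 1):
        for sh in (1, -1):
            band |= mask != np.roll(mask, sh, axis=ax)
    return band

def boundary_weighted_sample(mask, n, boundary_frac=0.7, seed=0):
    """Draw n training pixel indices, a fraction boundary_frac of them
    from the boundary band (hypothetical ratio for illustration)."""
    rng = np.random.default_rng(seed)
    band = boundary_band(mask)
    band_idx = np.flatnonzero(band)
    rest_idx = np.flatnonzero(~band)
    n_band = min(int(round(n * boundary_frac)), band_idx.size)
    return np.concatenate([
        rng.choice(band_idx, size=n_band, replace=False),
        rng.choice(rest_idx, size=n - n_band, replace=False),
    ])

mask = np.zeros((20, 20), dtype=bool)
mask[5:15, 5:15] = True          # toy "thalamus" region
picks = boundary_weighted_sample(mask, 50)
```

Concentrating samples near the boundary counteracts the class imbalance the abstract mentions: a uniform draw would be dominated by easy interior and far-background voxels, while the classifier's errors concentrate at the edge.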
Sgaier, Sema K; Eletskaya, Maria; Engl, Elisabeth; Mugurungi, Owen; Tambatamba, Bushimbwa; Ncube, Gertrude; Xaba, Sinokuthemba; Nanga, Alice; Gogolina, Svetlana; Odawo, Patrick; Gumede-Moyo, Sehlulekile; Kretschmer, Steve
2017-09-13
Public health programs are starting to recognize the need to move beyond a one-size-fits-all approach in demand generation, and instead tailor interventions to the heterogeneity underlying human decision making. Currently, however, there is a lack of methods to enable such targeting. We describe a novel hybrid behavioral-psychographic segmentation approach to segment stakeholders on potential barriers to a target behavior. We then apply the method in a case study of demand generation for voluntary medical male circumcision (VMMC) among 15-29 year-old males in Zambia and Zimbabwe. Canonical correlations and hierarchical clustering techniques were applied on representative samples of men in each country who were differentiated by their underlying reasons for their propensity to get circumcised. We characterized six distinct segments of men in Zimbabwe, and seven segments in Zambia, according to their needs, perceptions, attitudes and behaviors towards VMMC, thus highlighting distinct reasons for a failure to engage in the desired behavior.
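The clustering step can be sketched with Ward-linkage hierarchical clustering on a toy attitude matrix. The features, respondent count, and three-cluster cut below are illustrative, not the study's actual survey items or segment counts:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# toy psychographic matrix: 6 respondents x 3 attitude scores in [0, 1]
X = np.array([
    [0.90, 0.10, 0.20],
    [0.80, 0.20, 0.10],
    [0.10, 0.90, 0.80],
    [0.20, 0.80, 0.90],
    [0.50, 0.50, 0.50],
    [0.45, 0.55, 0.50],
])

Z = linkage(X, method="ward")                 # agglomerative dendrogram
labels = fcluster(Z, t=3, criterion="maxclust")  # cut into 3 segments
```

`fcluster` cuts the dendrogram into a chosen number of segments; in the study the cuts yielded six segments in Zimbabwe and seven in Zambia, each then profiled by its characteristic barriers to VMMC uptake.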
Wedge edge ceramic combustor tile
Shaffer, J.E.; Holsapple, A.C.
1997-06-10
A multipiece combustor has a portion thereof being made of a plurality of ceramic segments. Each of the plurality of ceramic segments have an outer surface and an inner surface. Each of the plurality of ceramic segments have a generally cylindrical configuration and including a plurality of joints. The joints define joint portions, a first portion defining a surface being skewed to the outer surface and the inner surface. The joint portions have a second portion defining a surface being skewed to the outer surface and the inner surface. The joint portions further include a shoulder formed intermediate the first portion and the second portion. The joints provide a sealing interlocking joint between corresponding ones of the plurality of ceramic segments. Thus, the multipiece combustor having the plurality of ceramic segment with the plurality of joints reduces the physical size of the individual components and the degradation of the surface of the ceramic components in a tensile stress zone is generally eliminated reducing the possibility of catastrophic failures. 7 figs.
Wedge edge ceramic combustor tile
Shaffer, James E.; Holsapple, Allan C.
1997-01-01
A multipiece combustor has a portion thereof being made of a plurality of ceramic segments. Each of the plurality of ceramic segments have an outer surface and an inner surface. Each of the plurality of ceramic segments have a generally cylindrical configuration and including a plurality of joints. The joints define joint portions, a first portion defining a surface being skewed to the outer surface and the inner surface. The joint portions have a second portion defining a surface being skewed to the outer surface and the inner surface. The joint portions further include a shoulder formed intermediate the first portion and the second portion. The joints provide a sealing interlocking joint between corresponding ones of the plurality of ceramic segments. Thus, the multipiece combustor having the plurality of ceramic segment with the plurality of joints reduces the physical size of the individual components and the degradation of the surface of the ceramic components in a tensile stress zone is generally eliminated reducing the possibility of catastrophic failures.
Eletskaya, Maria; Engl, Elisabeth; Mugurungi, Owen; Tambatamba, Bushimbwa; Ncube, Gertrude; Xaba, Sinokuthemba; Nanga, Alice; Gogolina, Svetlana; Odawo, Patrick; Gumede-Moyo, Sehlulekile; Kretschmer, Steve
2017-01-01
Public health programs are starting to recognize the need to move beyond a one-size-fits-all approach in demand generation, and instead tailor interventions to the heterogeneity underlying human decision making. Currently, however, there is a lack of methods to enable such targeting. We describe a novel hybrid behavioral-psychographic segmentation approach to segment stakeholders on potential barriers to a target behavior. We then apply the method in a case study of demand generation for voluntary medical male circumcision (VMMC) among 15–29 year-old males in Zambia and Zimbabwe. Canonical correlations and hierarchical clustering techniques were applied on representative samples of men in each country who were differentiated by their underlying reasons for their propensity to get circumcised. We characterized six distinct segments of men in Zimbabwe, and seven segments in Zambia, according to their needs, perceptions, attitudes and behaviors towards VMMC, thus highlighting distinct reasons for a failure to engage in the desired behavior. PMID:28901285
Fu, Henry L.; Mueller, Jenna L.; Javid, Melodi P.; Mito, Jeffrey K.; Kirsch, David G.; Ramanujam, Nimmi; Brown, J. Quincy
2013-01-01
Cancer is associated with specific cellular morphological changes, such as increased nuclear size and crowding from rapidly proliferating cells. In situ tissue imaging using fluorescent stains may be useful for intraoperative detection of residual cancer in surgical tumor margins. We developed a widefield fluorescence structured illumination microscope (SIM) system with a single-shot FOV of 2.1×1.6 mm (3.4 mm2) and sub-cellular resolution (4.4 µm). The objectives of this work were to measure the relationship between illumination pattern frequency and optical sectioning strength and signal-to-noise ratio in turbid (i.e. thick) samples for selection of the optimum frequency, and to determine feasibility for detecting residual cancer on tumor resection margins, using a genetically engineered primary mouse model of sarcoma. The SIM system was tested in tissue mimicking solid phantoms with various scattering levels to determine the impact of both turbidity and illumination frequency on two SIM metrics, optical section thickness and modulation depth. To demonstrate preclinical feasibility, ex vivo 50 µm frozen sections and fresh intact thick tissue samples excised from a primary mouse model of sarcoma were stained with acridine orange, which stains cell nuclei, skeletal muscle, and collagenous stroma. The cell nuclei were segmented using a high-pass filter algorithm, which allowed quantification of nuclear density. The results showed that the optimal illumination frequency was 31.7 µm⁻¹ used in conjunction with a 4×0.1 NA objective ( = 0.165). This yielded an optical section thickness of 128 µm and an 8.9× contrast enhancement over uniform illumination. We successfully demonstrated the ability to resolve cell nuclei in situ via SIM, which allowed segmentation of nuclei from heterogeneous tissues in the presence of considerable background fluorescence.
Specifically, we demonstrate that optical sectioning of fresh intact thick tissues performed equivalently to physical frozen sectioning and standard microscopy with regard to nuclear density quantification. PMID:23894357
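The high-pass nuclear segmentation step above can be sketched as background subtraction by Gaussian blur followed by thresholding and connected-component labeling. The sigma, threshold, and synthetic image below are illustrative assumptions, not the paper's calibrated settings:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def segment_nuclei(img, blur_sigma=8.0, thresh=0.2):
    """High-pass segmentation sketch: subtract a Gaussian-blurred
    background estimate, then threshold and label connected components."""
    img = img.astype(float)
    highpass = img - gaussian_filter(img, blur_sigma)
    mask = highpass > thresh * highpass.max()
    labels, n = label(mask)
    return labels, n

# synthetic field: two small bright "nuclei" on a smooth, sloping
# background mimicking diffuse stromal fluorescence
y, x = np.mgrid[0:64, 0:64]
img = 0.3 + 0.002 * x
for cy, cx in [(20, 20), (45, 40)]:
    img += np.exp(-((y - cy) ** 2 + (x - cx) ** 2) / 8.0)

labels, n = segment_nuclei(img)
```

Because the blur scale is much larger than a nucleus, the slowly varying background cancels in the subtraction while compact bright objects survive, and nuclear density follows directly as component count per unit area.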
Coulomb explosion of hydrogen clusters irradiated by an ultrashort intense laser pulse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Hongyu; Liu Jiansheng; Wang Cheng
The explosion dynamics of hydrogen clusters driven by an ultrashort intense laser pulse has been analyzed analytically and numerically by employing a simplified Coulomb explosion model. The dependence of average and maximum proton kinetic energy on cluster size, pulse duration, and laser intensity has been investigated respectively. The existence of an optimum cluster size allows the proton energy to reach the maximum when the cluster size matches with the intensity and the duration of the laser pulse. In order to explain our experimental results such as the measured proton energy spectrum and the saturation effect of proton energy, the effects of cluster size distribution as well as the laser intensity distribution on the focus spot should be considered. A good agreement between them is obtained.
Coulomb explosion of hydrogen clusters irradiated by an ultrashort intense laser pulse
NASA Astrophysics Data System (ADS)
Li, Hongyu; Liu, Jiansheng; Wang, Cheng; Ni, Guoquan; Li, Ruxin; Xu, Zhizhan
2006-08-01
The explosion dynamics of hydrogen clusters driven by an ultrashort intense laser pulse has been analyzed analytically and numerically by employing a simplified Coulomb explosion model. The dependence of average and maximum proton kinetic energy on cluster size, pulse duration, and laser intensity has been investigated respectively. The existence of an optimum cluster size allows the proton energy to reach the maximum when the cluster size matches with the intensity and the duration of the laser pulse. In order to explain our experimental results such as the measured proton energy spectrum and the saturation effect of proton energy, the effects of cluster size distribution as well as the laser intensity distribution on the focus spot should be considered. A good agreement between them is obtained.
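The scaling in the simplified model can be illustrated for the limiting case of a uniformly charged, instantaneously stripped spherical cluster, where the most energetic proton starts at the surface and E_max = e·Q/(4πε₀R) = n·e²·R²/(3ε₀). The atomic density used below is an assumed round value for a hydrogen cluster, not a figure from the paper:

```python
E_CHARGE = 1.602176634e-19   # elementary charge, C
EPS0 = 8.8541878128e-12      # vacuum permittivity, F/m

def max_proton_energy_eV(radius_m, n_atoms_per_m3):
    """Max proton kinetic energy (eV) from the Coulomb explosion of a
    uniformly charged, fully stripped sphere of radius R:
        E_max = e * Q / (4*pi*eps0*R),  Q = (4/3)*pi*R^3 * n * e
    which simplifies to E_max = n * e^2 * R^2 / (3*eps0)."""
    return n_atoms_per_m3 * E_CHARGE * radius_m**2 / (3.0 * EPS0)

N_H = 4.2e28  # assumed atomic number density of a hydrogen cluster, m^-3
e_small = max_proton_energy_eV(5e-9, N_H)    # 5 nm cluster
e_large = max_proton_energy_eV(10e-9, N_H)   # doubling R quadruples E_max
```

The quadratic growth with radius holds only while the laser fully strips the cluster within the pulse; for clusters too large for the given intensity and duration, ionization is incomplete, which is why an optimum cluster size exists in the abstract's analysis.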
Local X-ray Computed Tomography Imaging for Mineralogical and Pore Characterization
NASA Astrophysics Data System (ADS)
Mills, G.; Willson, C. S.
2015-12-01
Sample size, material properties and image resolution are all tradeoffs that must be considered when imaging porous media samples with X-ray computed tomography. In many natural and engineered samples, pore and throat sizes span several orders of magnitude and are often correlated with the material composition. Local tomography is a nondestructive technique that images a subvolume, within a larger specimen, at high resolution and uses low-resolution tomography data from the larger specimen to reduce reconstruction error. The high-resolution subvolume data can be used to extract important fine-scale properties but, due to the additional noise associated with the truncated dataset, segmentation of different materials and mineral phases is a challenge. The low-resolution data of a larger specimen is typically of much higher quality, making material characterization much easier. In addition, the imaging of a larger domain allows mm-scale bulk properties and heterogeneities to be determined. In this research, a sandstone core, 7 mm in diameter and ~15 mm in length, was scanned twice. The first scan was performed to cover the entire diameter and length of the specimen at an image voxel resolution of 4.1 μm. The second scan was performed on a subvolume, ~1.3 mm in length and ~2.1 mm in diameter, at an image voxel resolution of 1.08 μm. After image processing and segmentation, the pore network structure and mineralogical features were extracted from the low-resolution dataset. Due to the noise in the truncated high-resolution dataset, several image processing approaches were applied prior to image segmentation and extraction of the pore network structure and mineralogy. Results from the different truncated tomography segmented data sets are compared to each other to evaluate the potential of each approach in identifying the different solid phases from the original 16-bit data set.
The truncated tomography segmented data sets were also compared to the whole-core tomography segmented data set in two ways: (1) assessment of the porosity and pore size distribution at different scales; and (2) comparison of the mineralogical composition and distribution. Finally, registration of the two datasets will be used to show how the pore structure and mineralogy details at the two scales can be used to supplement each other.
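The porosity comparison in (1) can be sketched directly from a segmented voxel volume. The labeling convention below (0 = pore, nonzero = solid phase) and the synthetic volume are illustrative assumptions:

```python
import numpy as np

def porosity(segmented, pore_label=0):
    """Bulk porosity: fraction of voxels carrying the pore label."""
    seg = np.asarray(segmented)
    return np.count_nonzero(seg == pore_label) / seg.size

# synthetic segmented volume with ~25% pore space (1 = solid, 0 = pore)
rng = np.random.default_rng(1)
vol = (rng.random((40, 40, 40)) < 0.75).astype(np.uint8)
phi = porosity(vol)  # close to 0.25
```

Computing the same quantity on the whole-core dataset and on the registered high-resolution subvolume shows how much sub-resolution porosity the coarse scan misses, which is the kind of cross-scale check the abstract describes.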
NASA Astrophysics Data System (ADS)
Baumann, Sebastian; Robl, Jörg; Wendt, Lorenz; Willingshofer, Ernst; Hilberg, Sylke
2016-04-01
Automated lineament analysis on remotely sensed data requires two general processing steps: the identification of neighboring pixels showing high contrast, and the conversion of these domains into lines. The target output is the lineaments' position, extent and orientation. We developed a lineament extraction tool, programmed in R, that uses digital elevation models as input data to generate morphological lineaments, defined as follows: a morphological lineament represents a zone of high relief roughness whose length significantly exceeds its width. Any deviation from a flat plane, delimited by a roughness threshold, is considered relief roughness. In our novel approach, a multi-directional and multi-scale roughness filter uses moving windows of different neighborhood sizes to identify threshold-limited rough domains on digital elevation models. Surface roughness is calculated as the vertical elevation difference between the center cell and the differently orientated straight lines connecting two edge cells of a neighborhood, divided by the horizontal distance of the edge cells. Thus multiple roughness values, depending on the neighborhood sizes and the orientations of the edge-connecting lines, are generated for each cell, and their maximum and minimum values are extracted. Negative signs of the roughness parameter represent concave relief structures such as valleys; positive signs represent convex relief structures such as ridges. A threshold defines domains of high relief roughness. These domains are thinned to a representative point pattern by a 3×3 neighborhood filter highlighting maximum and minimum roughness peaks, which represent the center points of lineament segments. The orientation and extent of the lineament segments are calculated within the roughness domains, generating a straight line segment in the direction of least roughness differences.
We tested our algorithm on digital elevation models of multiple sources and scales and compared the results visually with shaded relief maps of these digital elevation models. The lineament segments trace the relief structure to a great extent, and the calculated roughness parameter represents the physical geometry of the digital elevation model. Modifying the threshold for the surface roughness value highlights different distinct relief structures. The neighborhood size at which lineament segments are detected also corresponds to the width of the surface structure and may be a useful additional parameter for further analysis. The discrimination of concave and convex relief structures matches the valleys and ridges of the surface very well.
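The roughness measure described above can be sketched for a single (2k+1)×(2k+1) neighborhood: the center cell's vertical deviation from the straight line joining two opposite edge cells, divided by their horizontal distance. Function and variable names are ours, the original tool is in R, and only four orientations are evaluated for brevity:

```python
import math
import numpy as np

def directional_roughness(dem, k, cellsize=1.0):
    """For each interior cell of a DEM, the signed deviation of the center
    from the line joining two opposite edge cells of its (2k+1)x(2k+1)
    neighborhood, divided by the horizontal distance between those edge
    cells. Returns per-cell (max, min) over four orientations; positive
    values indicate convex (ridge) and negative concave (valley) forms."""
    h, w = dem.shape
    center = dem[k:h - k, k:w - k]
    rmax = np.full(center.shape, -np.inf)
    rmin = np.full(center.shape, np.inf)
    for dr, dc in [(0, k), (k, 0), (k, k), (k, -k)]:  # E-W, N-S, diagonals
        a = dem[k - dr:h - k - dr, k - dc:w - k - dc]  # one edge cell
        b = dem[k + dr:h - k + dr, k + dc:w - k + dc]  # opposite edge cell
        dist = 2.0 * cellsize * math.hypot(dr, dc)
        r = (center - 0.5 * (a + b)) / dist  # center lies midway on the line
        rmax = np.maximum(rmax, r)
        rmin = np.minimum(rmin, r)
    return rmax, rmin

# toy DEM: a one-cell-high ridge along the middle row
dem = np.zeros((5, 7))
dem[2, :] = 1.0
rmax, rmin = directional_roughness(dem, 1)
```

On the ridge cells the north-south orientation gives the largest positive value (1/2 for unit cell size), while the along-ridge orientation gives zero, reproducing the sign convention and the orientation sensitivity described in the abstract.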
Formation Flying: The Future of Remote Sensing from Space
NASA Technical Reports Server (NTRS)
Leitner, Jesse
2004-01-01
Over the next two decades a revolution is likely to occur in how remote sensing of Earth, other planets or bodies, and a range of phenomena in the universe is performed from space. In particular, current launch vehicle fairing volume and mass constraints will continue to restrict monolithic telescope apertures to little or no greater size than that of the Hubble Space Telescope, the largest aperture currently flying in space. Systems under formulation today, such as the James Webb Space Telescope, will be able to increase aperture size and, hence, imaging resolution by deploying segmented optics. However, this approach is limited as well by our ability to control such segments to optical tolerances over long distances with highly uncertain structural dynamics connecting them. Consequently, for the orders-of-magnitude improvement in resolution required for imaging black holes, imaging planets, or performing asteroseismology, the only viable approach will be to fly a collection of spacecraft in formation to synthesize a virtual segmented telescope or interferometer with very large baselines. This paper provides some basic definitions in the area of formation flying, describes some of the strategic science missions planned in the National Aeronautics and Space Administration, and identifies some of the critical technologies needed to enable some of the most challenging space missions ever conceived which have realistic hopes of flying.