Sample records for proposed method yields

  1. A spectral-spatial-dynamic hierarchical Bayesian (SSD-HB) model for estimating soybean yield

    NASA Astrophysics Data System (ADS)

    Kazama, Yoriko; Kujirai, Toshihiro

    2014-10-01

    A method called a "spectral-spatial-dynamic hierarchical-Bayesian (SSD-HB) model," which can deal with many parameters (such as spectral and weather information all together) by reducing the occurrence of multicollinearity, is proposed. Experiments conducted on soybean fields in Brazil using a RapidEye satellite image indicate that the proposed SSD-HB model can predict soybean yield with a higher degree of accuracy than other estimation methods commonly used in remote-sensing applications. In the case of the SSD-HB model, the mean absolute error between the estimated yield of the target area and the actual yield is 0.28 t/ha, compared to 0.34 t/ha when conventional PLS regression was applied, showing the potential effectiveness of the proposed model.

  2. Reference point detection for camera-based fingerprint image based on wavelet transformation.

    PubMed

    Khalil, Mohammed S

    2015-04-30

    Fingerprint recognition systems essentially require core-point detection prior to fingerprint matching. The core-point is used as a reference point to align the fingerprint with a template database. When processing a larger fingerprint database, it is necessary to consider the core-point during feature extraction. Numerous core-point detection methods are available and have been reported in the literature. However, these methods are generally applied to scanner-based images. Hence, this paper explores the feasibility of applying a core-point detection method to a fingerprint image obtained using a camera phone. The proposed method utilizes a discrete wavelet transform to extract the ridge information from a color image. The performance of the proposed method is evaluated in terms of accuracy and consistency. These two indicators are calculated automatically by comparing the method's output with the defined core points. The proposed method is tested on two data sets, collected in controlled and uncontrolled environments from 13 different subjects. In the controlled environment, the proposed method achieved a detection rate of 82.98%. In the uncontrolled environment, it yielded a detection rate of 78.21%. The proposed method yields promising results on the collected-image database and outperforms an existing method.

  3. Molecular Based Temperature and Strain Rate Dependent Yield Criterion for Anisotropic Elastomeric Thin Films

    NASA Technical Reports Server (NTRS)

    Bosi, F.; Pellegrino, S.

    2017-01-01

    A molecular formulation of the onset of plasticity is proposed to assess temperature and strain rate effects in anisotropic semi-crystalline rubbery films. The presented plane stress criterion is based on the strain rate-temperature superposition principle and the cooperative theory of yielding, where some parameters are assumed to be material constants, while others are considered to depend on specific modes of deformation. An orthotropic yield function is developed for a linear low density polyethylene thin film. Uniaxial and biaxial inflation experiments were carried out to determine the yield stress of the membrane via a strain recovery method. It is shown that the 3% offset method predicts the uniaxial elastoplastic transition with good accuracy. Both the tensile yield points along the two principal directions of the film and the biaxial yield stresses are found to obey the superposition principle. The proposed yield criterion is compared against experimental measurements, showing excellent agreement over a wide range of deformation rates and temperatures.

  4. Efficient SRAM yield optimization with mixture surrogate modeling

    NASA Astrophysics Data System (ADS)

    Zhongjian, Jiang; Zuochang, Ye; Yan, Wang

    2016-12-01

    Largely repeated cells such as SRAM cells usually require an extremely low failure rate to ensure a moderate chip yield. Though fast Monte Carlo methods such as importance sampling and its variants can be used for yield estimation, they are still very expensive if one needs to perform optimization based on such estimations, because yield calculation typically requires a large number of SPICE simulations, which account for the largest proportion of the computation time. In this paper, a new method is proposed to address this issue. The key idea is to build an efficient mixture surrogate model over both design variables and process variables: a set of sample points is first obtained from SPICE simulation, and the mixture surrogate model is then trained on these points with the lasso algorithm. Experimental results show that the proposed model calculates yield accurately and brings significant speed-ups to the calculation of the failure rate. Based on the model, we also develop an accelerated algorithm that further speeds up yield calculation. The method is suitable for high-dimensional process variables and multi-performance applications.
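
    The abstract does not give the surrogate's exact form; the sketch below only illustrates the general idea under stated assumptions: a lasso-regularized degree-2 polynomial surrogate is trained on synthetic stand-ins for SPICE samples over design and process variables, and cheap Monte Carlo on the surrogate then replaces SPICE in the yield estimate. The data, feature map, and spec threshold are all illustrative.

```python
# Hypothetical sketch: a lasso-trained polynomial surrogate standing in for
# SPICE, then cheap Monte Carlo on the surrogate to estimate yield.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))      # design + process variables (SPICE samples)
y = X[:, 0] - 0.5 * X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=500)  # circuit metric

# Sparse degree-2 surrogate: lasso keeps only the informative terms.
surrogate = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                          Lasso(alpha=0.01))
surrogate.fit(X, y)

# Yield estimate by Monte Carlo on the cheap surrogate instead of SPICE.
X_mc = rng.normal(size=(200_000, 6))
print("estimated yield:", np.mean(surrogate.predict(X_mc) > -1.0))  # spec assumed
```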

  5. Prediction of Enzyme Mutant Activity Using Computational Mutagenesis and Incremental Transduction

    PubMed Central

    Basit, Nada; Wechsler, Harry

    2011-01-01

    Wet laboratory mutagenesis to determine enzyme activity changes is expensive and time consuming. This paper expands on standard one-shot learning by proposing an incremental transductive method (T2bRF) for the prediction of enzyme mutant activity during mutagenesis using Delaunay tessellation and 4-body statistical potentials for representation. Incremental learning is in tune with both eScience and actual experimentation, as it accounts for cumulative annotation effects of enzyme mutant activity over time. The experimental results reported, using cross-validation, show that overall the incremental transductive method proposed, using random forest as base classifier, yields better results compared to one-shot learning methods. T2bRF is shown to yield 90% on T4 and LAC (and 86% on HIV-1). This is significantly better than state-of-the-art competing methods, whose performance yield is at 80% or less using the same datasets. PMID:22007208

  6. Colorimetric determination of alkaline phosphatase as indicator of mammalian feces in corn meal: collaborative study.

    PubMed

    Gerber, H

    1986-01-01

    In the official method for rodent filth in corn meal, filth and corn meal are separated in organic solvents, and particles are identified by the presence of hair and a mucous coating. The solvents are toxic, poor separation yields low recoveries, and fecal characteristics are rarely present on all fragments, especially on small particles. The official AOAC alkaline phosphatase test for mammalian feces, 44.181-44.184, has therefore been adapted to determine the presence of mammalian feces in corn meal. The enzyme cleaves phosphate radicals from a test indicator/substrate, phenolphthalein diphosphate. As free phenolphthalein accumulates, a pink-to-red color develops in the gelled test agar medium. In a collaborative study conducted to compare the proposed method with the official method for corn meal, 44.049, the proposed method yielded 45.5% higher recoveries than the official method. Repeatability and reproducibility for the official method were roughly 1.8 times more variable than for the proposed method. The method has been adopted official first action.

  7. Quality evaluation of no-reference MR images using multidirectional filters and image statistics.

    PubMed

    Jang, Jinseong; Bang, Kihun; Jang, Hanbyol; Hwang, Dosik

    2018-09-01

    This study aimed to develop a fully automatic, no-reference image-quality assessment (IQA) method for MR images. New quality-aware features were obtained by applying multidirectional filters to MR images and examining the feature statistics. A histogram of these features was then fitted to a generalized Gaussian distribution function for which the shape parameters yielded different values depending on the type of distortion in the MR image. Standard feature statistics were established through a training process based on high-quality MR images without distortion. Subsequently, the feature statistics of a test MR image were calculated and compared with the standards. The quality score was calculated as the difference between the shape parameters of the test image and the undistorted standard images. The proposed IQA method showed a >0.99 correlation with the conventional full-reference assessment methods; accordingly, this proposed method yielded the best performance among no-reference IQA methods for images containing six types of synthetic, MR-specific distortions. In addition, for authentically distorted images, the proposed method yielded the highest correlation with subjective assessments by human observers, thus demonstrating its superior performance over other no-reference IQAs. Our proposed IQA was designed to consider MR-specific features and outperformed other no-reference IQAs designed mainly for photographic images. Magn Reson Med 80:914-924, 2018. © 2018 International Society for Magnetic Resonance in Medicine.
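
    A minimal sketch of the kind of distribution fit described, assuming scipy's generalized-Gaussian implementation (`gennorm`) and synthetic stand-ins for the filtered-feature samples; the scoring rule (distance between fitted shape parameters) is a simplification of the paper's approach.

```python
# Hypothetical sketch: fit a generalized Gaussian to feature samples and score
# quality by how far the fitted shape parameter drifts from a reference.
import numpy as np
from scipy.stats import gennorm

rng = np.random.default_rng(1)
ref_features = rng.laplace(size=5000)    # stand-in for undistorted-image features
test_features = rng.normal(size=5000)    # stand-in for a distorted test image

beta_ref, _, _ = gennorm.fit(ref_features)    # shape parameter of the GGD
beta_test, _, _ = gennorm.fit(test_features)

score = abs(beta_test - beta_ref)    # larger drift -> stronger distortion (assumed)
print(f"shape ref={beta_ref:.2f}, test={beta_test:.2f}, score={score:.2f}")
```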

  8. Anomalous effects in the aluminum oxide sputtering yield

    NASA Astrophysics Data System (ADS)

    Schelfhout, R.; Strijckmans, K.; Depla, D.

    2018-04-01

    The sputtering yield of aluminum oxide during reactive magnetron sputtering has been quantified by a new and fast method. The method is based on the meticulous determination of the reactive gas consumption during reactive DC magnetron sputtering and has been deployed to determine the sputtering yield of aluminum oxide. The accuracy of the proposed method is demonstrated by comparing its results to the common weight-loss method, excluding secondary effects such as redeposition. Both methods exhibit a decrease in sputtering yield with increasing discharge current. This feature of the aluminum oxide sputtering yield is described for the first time. It mirrors the discrepancy between published high sputtering-yield values determined by low-current ion beams and the low deposition rate in the poisoned mode during reactive magnetron sputtering. Moreover, the usefulness of the new method arises from its time-resolved capabilities. The evolution of the alumina sputtering yield can now be measured with a resolution of seconds. This reveals the complex dynamical behavior of the sputtering yield. A plausible explanation of the observed anomalies is the balance between retention and out-diffusion of implanted gas atoms; other possible causes are also discussed.

  9. Text Summarization Model based on Facility Location Problem

    NASA Astrophysics Data System (ADS)

    Takamura, Hiroya; Okumura, Manabu

    We propose a novel multi-document generic summarization model based on the budgeted median problem, which is a facility location problem. The summarization method based on our model is an extractive method, which selects sentences from the given document cluster and generates a summary. Each sentence in the document cluster is assigned to one of the selected sentences, where the former sentence is represented by the latter. Our method selects sentences to generate a summary that yields a good sentence assignment and hence covers the whole content of the document cluster. An advantage of this method is that it can incorporate asymmetric relations between sentences such as textual entailment. Through experiments, we show that the proposed method yields good summaries on the DUC'04 dataset.
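
    The abstract does not give the exact formulation or solver; the toy sketch below illustrates only the budgeted-median idea under stated assumptions (random pairwise similarities standing in for sentence similarities, sentence lengths as costs), using a greedy gain-per-cost heuristic purely for illustration.

```python
# Hypothetical greedy sketch of the budgeted-median idea: choose sentences so
# every sentence is well represented by its closest selected one, under a budget.
import numpy as np

rng = np.random.default_rng(2)
n = 30
sim = rng.random((n, n)); sim = (sim + sim.T) / 2; np.fill_diagonal(sim, 1.0)
cost = rng.integers(5, 25, size=n)   # e.g. sentence lengths in words
budget = 60

def coverage(S):
    # total similarity of every sentence to its best representative in S
    return sim[:, S].max(axis=1).sum() if S else 0.0

selected = []
while True:
    spent = sum(cost[k] for k in selected)
    gains = [((coverage(selected + [j]) - coverage(selected)) / cost[j], j)
             for j in range(n)
             if j not in selected and spent + cost[j] <= budget]
    if not gains or max(gains)[0] <= 0:
        break
    selected.append(max(gains)[1])

print("summary sentence indices:", selected)
```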

  10. Point cloud registration from local feature correspondences-Evaluation on challenging datasets.

    PubMed

    Petricek, Tomas; Svoboda, Tomas

    2017-01-01

    Registration of laser scans, or point clouds in general, is a crucial step of localization and mapping with mobile robots or in object modeling pipelines. A coarse alignment of the point clouds is generally needed before applying local methods such as the Iterative Closest Point (ICP) algorithm. We propose a feature-based approach to point cloud registration and evaluate the proposed method and its individual components on challenging real-world datasets. For a moderate overlap between the laser scans, the method provides a superior registration accuracy compared to state-of-the-art methods including Generalized ICP, 3D Normal-Distribution Transform, Fast Point-Feature Histograms, and 4-Points Congruent Sets. Compared to the surface normals, the points as the underlying features yield higher performance in both keypoint detection and establishing local reference frames. Moreover, sign disambiguation of the basis vectors proves to be an important aspect in creating repeatable local reference frames. A novel method for sign disambiguation is proposed which yields highly repeatable reference frames.

  11. Numerical solution of 2D-vector tomography problem using the method of approximate inverse

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna

    2016-08-10

    We propose a numerical solution of the problem of reconstructing a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good reconstructions of vector fields.

  12. A method of forest management for the planned introduction of intensive husbandry in virgin forest stands

    Treesearch

    B. Dolezal

    1978-01-01

    The method proposed is derived from long experience of intensive management in forest stands of Central Europe and from our proposal for management in virgin Iranian forests of the Caspian Region. The method establishes the need for systematic planning of stand conversion to ensure both sustained yield and the harvesting of sufficient timber to sustain economic...

  13. Experimental Investigations on Subsequent Yield Surface of Pure Copper by Single-Sample and Multi-Sample Methods under Various Pre-Deformation.

    PubMed

    Liu, Gui-Long; Huang, Shi-Hong; Shi, Che-Si; Zeng, Bin; Zhang, Ke-Shi; Zhong, Xian-Ci

    2018-02-10

    Using copper thin-walled tubular specimens, the subsequent yield surfaces under pre-tension, pre-torsion and pre-combined tension-torsion are measured, where the single-sample and multi-sample methods are applied respectively to determine the yield stresses at a specified offset strain. The rule and characteristics of the evolution of the subsequent yield surface are investigated. Under the conditions of different pre-strains, the influence of the number of test points, the test sequence and the specified offset strain on the measurement of the subsequent yield surface, as well as the concave phenomenon of the measured yield surface, are studied. Moreover, the feasibility and validity of the two methods are compared. The main conclusions are drawn as follows: (1) for either the single- or multi-sample method, the measured subsequent yield surfaces are remarkably different from the cylindrical yield surfaces proposed by the classical plasticity theory; (2) there are apparent differences between the test results from the two kinds of methods: the multi-sample method is not influenced by the number of test points, the test order or the cumulative effect of residual plastic strain from the other test points, whereas these factors strongly influence the single-sample method; and (3) the measured subsequent yield surface may appear concave, which can be made convex for the single-sample method by changing the test sequence. For the multi-sample method, however, the concave phenomenon disappears when a larger offset strain is specified.

  14. A comprehensively quantitative method of evaluating the impact of drought on crop yield using daily multi-scale SPEI and crop growth process model.

    PubMed

    Wang, Qianfeng; Wu, Jianjun; Li, Xiaohan; Zhou, Hongkui; Yang, Jianhua; Geng, Guangpo; An, Xueli; Liu, Leizhen; Tang, Zhenghong

    2017-04-01

    The quantitative evaluation of the impact of drought on crop yield is one of the most important aspects of agricultural water resource management. To assess the impact of drought on wheat yield, the Environmental Policy Integrated Climate (EPIC) crop growth model and the daily Standardized Precipitation Evapotranspiration Index (SPEI), which is based on daily meteorological data, are adopted in the Huang Huai Hai Plain. The winter wheat yields are estimated at 28 stations, after calibrating the cultivar coefficients based on the experimental site data, and SPEI data were taken 11 times across the growth season from 1981 to 2010. The relationship between estimated yield and multi-scale SPEI was analyzed, and the optimum time-scale SPEI for monitoring drought during the crop growth period was determined. The reference yield was determined by averaging the yields from numerous non-drought years. From this, we propose a comprehensive quantitative method for predicting the impact of drought on wheat yields that combines the daily multi-scale SPEI and a crop growth process model. This method was tested in the Huang Huai Hai Plain. The results suggested that the calibrated EPIC model was a good predictor of crop yield in the Huang Huai Hai Plain, with a low RMSE (15.4%) between estimated and observed yield at six agrometeorological stations. The soil moisture at planting time was affected by the precipitation and evapotranspiration during the previous 90 days (about 3 months) in the Huang Huai Hai Plain. SPEI G90 was adopted as the optimum time-scale SPEI to identify drought and non-drought years, and identified 2000 as a drought year. The water deficit in 2000 was significant, and the rate of crop yield reduction did not completely correspond with the volume of the water deficit. Our proposed comprehensive method, which quantitatively evaluates the impact of drought on crop yield, is reliable. The results of this study further our understanding of why countermeasures against drought are important and can direct farmers toward drought-resistant crops.

  15. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques.

    PubMed

    Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie

    2015-01-01

    It is important to predict the incipient fault in transformer oil accurately so that the maintenance of transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict the incipient fault in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of transformer fault type than the existing diagnosis method and previously reported works.

  16. Transformer Incipient Fault Prediction Using Combined Artificial Neural Network and Various Particle Swarm Optimisation Techniques

    PubMed Central

    2015-01-01

    It is important to predict the incipient fault in transformer oil accurately so that the maintenance of transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict the incipient fault in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions of the incipient fault in transformer oil because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults. However, it is believed that the accuracy of the previously proposed methods can still be improved. Since artificial neural network (ANN) and particle swarm optimisation (PSO) techniques have never been used in the previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method and ANN alone. Comparison of the results from the proposed methods with the previously reported work was also performed to show the improvement of the proposed methods. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct identification of transformer fault type than the existing diagnosis method and previously reported works. PMID:26103634
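
    The abstracts do not detail the PSO variants used; below is a minimal sketch of plain particle swarm optimisation on a stand-in objective (in the paper, the objective would be the ANN's diagnosis error). The coefficients and the quadratic test function are illustrative assumptions.

```python
# Hypothetical sketch of plain particle swarm optimisation on a stand-in
# objective; in the paper the objective would be the ANN's diagnosis error.
import numpy as np

rng = np.random.default_rng(3)

def objective(x):                     # quadratic stand-in, minimum at x = 1.5
    return np.sum((x - 1.5) ** 2, axis=1)

n_particles, dim, iters = 30, 5, 200
w, c1, c2 = 0.7, 1.5, 1.5             # inertia and acceleration coefficients (assumed)

x = rng.uniform(-5, 5, (n_particles, dim))
v = np.zeros_like(x)
pbest, pbest_val = x.copy(), objective(x)
gbest = pbest[np.argmin(pbest_val)]

for _ in range(iters):
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    val = objective(x)
    better = val < pbest_val
    pbest[better], pbest_val[better] = x[better], val[better]
    gbest = pbest[np.argmin(pbest_val)]

print("best position:", gbest.round(3), "objective:", pbest_val.min().round(6))
```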

  17. New method to enhance the extraction yield of rutin from Sophora japonica using a novel ultrasonic extraction system by determining optimum ultrasonic frequency.

    PubMed

    Liao, Jianqing; Qu, Baida; Liu, Da; Zheng, Naiqin

    2015-11-01

    A new method has been proposed for enhancing the extraction yield of rutin from Sophora japonica, in which a novel ultrasonic extraction system has been developed and the optimum ultrasonic frequency is determined by a two-step procedure. This study systematically investigated the influence of a continuous frequency range of 20-92 kHz on rutin yields. The effects of different operating conditions on rutin yields, such as solvent concentration, solvent-to-solid ratio, ultrasound power, temperature and particle size, were also studied in detail. A higher extraction yield was obtained at an ultrasonic frequency of 60-62 kHz, which was little affected by the other extraction conditions. Comparative studies between existing methods and the present method were conducted to verify the effectiveness of this method. Results indicated that the new extraction method gave a higher extraction yield than existing ultrasound-assisted extraction (UAE) and Soxhlet extraction (SE). Thus, the potential use of this method may be promising for the extraction of natural materials on an industrial scale in the future. Copyright © 2015 Elsevier B.V. All rights reserved.

  18. Identification of QRS complex in non-stationary electrocardiogram of sick infants.

    PubMed

    Kota, S; Swisher, C B; Al-Shargabi, T; Andescavage, N; du Plessis, A; Govindan, R B

    2017-08-01

    Due to the high frequency of routine interventions in an intensive care setting, electrocardiogram (ECG) recordings from sick infants are highly non-stationary, with recurrent changes in the baseline, alterations in the morphology of the waveform, and attenuations of the signal strength. Current methods lack reliability in identifying QRS complexes (a marker of individual cardiac cycles) in the non-stationary ECG. In the current study we address this problem by proposing a novel approach to QRS complex identification. Our approach employs lowpass filtering, half-wave rectification, and the use of instantaneous Hilbert phase to identify QRS complexes in the ECG. We demonstrate the application of this method using ECG recordings from eight preterm infants undergoing intensive care, as well as from 18 normal adult volunteers available via a public database. We compared our approach to the commonly used approaches including Pan and Tompkins (PT), gqrs, wavedet, and wqrs for identifying QRS complexes and then compared each with manually identified QRS complexes. For preterm infants, a comparison between the QRS complexes identified by our approach and those identified through manual annotations yielded sensitivity and positive predictive values of 99% and 99.91%, respectively. The comparison metrics for each method are as follows: PT (sensitivity: 84.49%, positive predictive value: 99.88%), gqrs (85.25%, 99.49%), wavedet (95.24%, 99.86%), and wqrs (96.99%, 96.55%). Thus, the sensitivity values of the four methods previously described are lower than the sensitivity of the method we propose; however, the positive predictive values of these other approaches are comparable to those of our method, with the exception of the wqrs approach, which yielded a slightly lower value. For adult ECG, our approach yielded a sensitivity of 99.78%, whereas PT yielded 99.79%. The positive predictive value was 99.42% for both our approach and PT. We propose a novel method for identifying QRS complexes that outperforms commonly available tools for non-stationary ECG data in infants. For stationary ECG, our proposed approach and the PT approach perform equally well. ECG acquired in a clinical environment may be prone to issues related to non-stationarity, especially in critically ill patients. The approach proposed in this report offers superior reliability in these scenarios. Copyright © 2017 Elsevier Ltd. All rights reserved.
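
    A minimal sketch of the pipeline as described (lowpass filtering, half-wave rectification, instantaneous Hilbert phase), using a synthetic stand-in for the ECG; the sampling rate, filter cutoff, and phase-crossing rule are illustrative assumptions, not the authors' exact settings.

```python
# Hypothetical sketch of the described pipeline: lowpass filter, half-wave
# rectify, then mark one beat per 2*pi advance of the instantaneous Hilbert phase.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                                     # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) ** 21        # crude ECG stand-in, ~72 bpm

b, a = butter(4, 20 / (fs / 2), btype="low")   # lowpass to suppress noise
x = np.clip(filtfilt(b, a, ecg), 0, None)      # half-wave rectification
x = x - x.mean()                               # remove DC before the Hilbert transform

phase = np.unwrap(np.angle(hilbert(x)))        # instantaneous Hilbert phase
cycles = np.floor((phase - phase[0]) / (2 * np.pi))
beats = np.nonzero(np.diff(cycles) > 0)[0]     # one QRS marker per phase cycle
print("beats detected:", beats.size, "first indices:", beats[:3])
```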

  19. A trust region approach with multivariate Padé model for optimal circuit design

    NASA Astrophysics Data System (ADS)

    Abdel-Malek, Hany L.; Ebid, Shaimaa E. K.; Mohamed, Ahmed S. A.

    2017-11-01

    Since the optimization process requires a significant number of consecutive function evaluations, it is recommended to replace the function by an easily evaluated approximation model during the optimization process. The model suggested in this article is based on a multivariate Padé approximation. This model is constructed using data points of ?, where ? is the number of parameters, and is updated over a sequence of trust regions. It avoids the slow convergence of linear models of ? and has features of quadratic models that need interpolation data points of ?. The proposed approach is tested by applying it to several benchmark problems. Yield optimization using such a direct method is applied to some practical circuit examples. The minimax solution leads to a suitable initial point from which to carry out the yield optimization process. The yield is optimized by the proposed derivative-free method for active and passive filter examples.

  20. A comparison of two adaptive multivariate analysis methods (PLSR and ANN) for winter wheat yield forecasting using Landsat-8 OLI images

    NASA Astrophysics Data System (ADS)

    Chen, Pengfei; Jing, Qi

    2017-02-01

    This study proposed and tested the assumption that a non-linear method is more reasonable than a linear method when canopy reflectance is used to establish a yield prediction model. For this purpose, partial least squares regression (PLSR) and artificial neural networks (ANN), representing linear and non-linear analysis methods respectively, were applied and compared for wheat yield prediction. Multi-period Landsat-8 OLI images were collected at two different wheat growth stages, and a field campaign was conducted to obtain grain yields at selected sampling sites in 2014. The field data were divided into a calibration database and a testing database. Using the calibration data, a cross-validation concept was introduced for the PLSR and ANN model construction to prevent over-fitting. All models were tested using the test data. The ANN yield-prediction model produced R2, RMSE and RMSE% values of 0.61, 979 kg ha-1, and 10.38%, respectively, in the testing phase, performing better than the PLSR yield-prediction model, which produced R2, RMSE, and RMSE% values of 0.39, 1211 kg ha-1, and 12.84%, respectively. The non-linear method was therefore suggested as the better method for yield prediction.
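
    A minimal sketch of the PLSR branch of the comparison, with cross-validation used to choose the number of latent components (mirroring the abstract's over-fitting safeguard); the reflectance bands and yields are synthetic stand-ins, and the ANN branch is omitted.

```python
# Hypothetical sketch of the PLSR branch with cross-validated component choice
# (the abstract's over-fitting safeguard); bands and yields are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.random((80, 8))                              # 8 OLI-like reflectance bands
y = 3000 + 4000 * X[:, 3] - 2500 * X[:, 5] + 300 * rng.normal(size=80)  # kg/ha

scores = [cross_val_score(PLSRegression(n_components=k), X, y, cv=5,
                          scoring="neg_root_mean_squared_error").mean()
          for k in range(1, 6)]
best_k = int(np.argmax(scores)) + 1
model = PLSRegression(n_components=best_k).fit(X, y)
print(f"components: {best_k}, CV RMSE: {-max(scores):.0f} kg/ha")
```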

  1. Background estimation and player detection in badminton video clips using histogram of pixel values along temporal dimension

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu

    2015-12-01

    Computer vision is an important tool for sports video processing. However, its application in badminton match analysis is very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
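
    A minimal sketch of the described idea, assuming the per-pixel histogram peak (temporal mode) is taken as the background; the frame data, gray-level threshold, and mask rule are illustrative assumptions.

```python
# Hypothetical sketch: per-pixel temporal mode (histogram peak along time) as
# background; a player shows up as a large deviation from that background.
import numpy as np

rng = np.random.default_rng(5)
frames = np.full((100, 48, 64), 120, dtype=np.uint8)         # static court
frames += rng.integers(0, 5, frames.shape, dtype=np.uint8)   # sensor noise
frames[40:60, 20:30, 30:40] = 200                            # "player" in frames 40-59

def temporal_mode(stack):
    # histogram of each pixel's values across frames; the peak bin is background
    hist = np.apply_along_axis(lambda v: np.bincount(v, minlength=256), 0, stack)
    return hist.argmax(axis=0).astype(np.uint8)

background = temporal_mode(frames)
player_mask = np.abs(frames[50].astype(int) - background.astype(int)) > 30
print("player pixels detected in frame 50:", player_mask.sum())
```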

  2. Constitutive Modeling of Piezoelectric Polymer Composites

    NASA Technical Reports Server (NTRS)

    Odegard, Gregory M.; Gates, Tom (Technical Monitor)

    2003-01-01

    A new modeling approach is proposed for predicting the bulk electromechanical properties of piezoelectric composites. The proposed model offers the same level of convenience as the well-known Mori-Tanaka method. In addition, it is shown to yield predicted properties that are, in most cases, more accurate than or equally as accurate as those of the Mori-Tanaka scheme. In particular, the proposed method is used to determine the electromechanical properties of four piezoelectric polymer composite materials as a function of inclusion volume fraction. The predicted properties are compared to those calculated using the Mori-Tanaka and finite element methods.

  3. Enhanced low-temperature lithium storage performance of multilayer graphene made through an improved ionic liquid-assisted synthesis

    NASA Astrophysics Data System (ADS)

    Raccichini, Rinaldo; Varzi, Alberto; Chakravadhanula, Venkata Sai Kiran; Kübel, Christian; Balducci, Andrea; Passerini, Stefano

    2015-05-01

    The electrochemical properties of graphene depend strongly on its synthesis. Among the different methods proposed so far, liquid-phase exfoliation is a promising route for the production of graphene. Unfortunately, the low yield of this technique, in terms of solid material obtained, still limits its use to small-scale applications. In this article we propose a low-cost and environmentally friendly method for producing multilayer crystalline graphene in high yield. This innovative approach, involving an improved ionic liquid-assisted microwave exfoliation of expanded graphite, allows the production of graphene with advanced lithium-ion storage performance, demonstrated for the first time at low temperatures (<0 °C, down to -30 °C), relative to commercially available graphite.

  4. $n$ -Dimensional Discrete Cat Map Generation Using Laplace Expansions.

    PubMed

    Wu, Yue; Hua, Zhongyun; Zhou, Yicong

    2016-11-01

    Different from existing methods that use matrix multiplications and have high computation complexity, this paper proposes an efficient generation method of n-dimensional (nD) Cat maps using Laplace expansions. New parameters are also introduced to control the spatial configurations of the nD Cat matrix. Thus, the proposed method provides an efficient way to mix dynamics of all dimensions at one time. To investigate its implementations and applications, we further introduce a fast implementation algorithm of the proposed method with time complexity O(n^4) and a pseudorandom number generator using the Cat map generated by the proposed method. The experimental results show that, compared with existing generation methods, the proposed method has a larger parameter space and simpler algorithm complexity, generates nD Cat matrices with a lower inner correlation, and thus yields more random and unpredictable outputs of nD Cat maps.
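
    The Laplace-expansion construction itself is not reproducible from the abstract alone; as background, here is the classical 2D discrete Cat map that the paper generalises, iterated modulo the grid size (N and the starting point are illustrative choices).

```python
# The classical 2D discrete (Arnold) cat map on an N x N grid, iterated
# modulo N; N and the starting point are illustrative choices.
import numpy as np

N = 101
A = np.array([[1, 1],
              [1, 2]])                 # classical 2D cat matrix, det(A) = 1

def cat_map(points, steps=1):
    """Apply the discrete cat map to integer points of shape (2, n)."""
    for _ in range(steps):
        points = (A @ points) % N
    return points

p = np.array([[3], [7]])
print([tuple(cat_map(p, k).ravel()) for k in range(5)])  # rapidly mixing orbit
```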

  5. Simple method for quick estimation of aquifer hydrogeological parameters

    NASA Astrophysics Data System (ADS)

    Ma, C.; Li, Y. Y.

    2017-08-01

    The development of simple and accurate methods to determine aquifer hydrogeological parameters is important for groundwater resources assessment and management. To address the problem of estimating aquifer parameters from unsteady pumping-test data, a fitting function for the Theis well function was proposed using a fitting optimization method, and a unitary linear regression equation was then established. The aquifer parameters can be obtained by solving the coefficients of the regression equation. The application of the proposed method was illustrated using two published data sets. Error statistics and analysis of the pumping drawdown showed that the method proposed in this paper yields quick and accurate estimates of the aquifer parameters, and can reliably identify them from long-distance observed drawdowns and from early drawdowns. It is hoped that the proposed method will be helpful for practicing hydrogeologists and hydrologists.
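
    The abstract does not reproduce the fitted approximation or the regression equation; as a baseline illustration, the sketch below recovers transmissivity T and storativity S by directly fitting the standard Theis solution to synthetic drawdowns. The pumping rate, observation distance, and noise level are assumptions.

```python
# Baseline illustration: direct nonlinear fit of the Theis solution
#   s = Q * W(u) / (4*pi*T),  u = r^2 * S / (4*T*t),  W = exp1,
# to synthetic drawdowns; Q, r and the noise level are assumptions.
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import exp1

rng = np.random.default_rng(6)
Q, r = 0.02, 50.0                        # pumping rate (m^3/s), distance (m)
t = np.logspace(2, 5, 25)                # observation times (s)

def theis(t, T, S):
    u = r**2 * S / (4 * T * t)
    return Q / (4 * np.pi * T) * exp1(u)  # drawdown (m)

s_obs = theis(t, 1e-3, 2e-4) * (1 + 0.02 * rng.normal(size=t.size))
(T_fit, S_fit), _ = curve_fit(theis, t, s_obs, p0=(1e-2, 1e-3))
print(f"T = {T_fit:.2e} m^2/s, S = {S_fit:.2e}")
```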

  6. Hearing the Signal in the Noise: A Software-Based Content Analysis of Patterns in Responses by Experts and Students to a New Venture Investment Proposal

    ERIC Educational Resources Information Center

    Hostager, Todd J.; Voiovich, Jason; Hughes, Raymond K.

    2013-01-01

    The authors apply a software-based content analysis method to uncover differences in responses by expert entrepreneurs and undergraduate entrepreneur majors to a new venture investment proposal. Data analyzed via the Leximancer software package yielded conceptual maps highlighting key differences in the nature of these responses. Study methods and…

  7. A versatile method for the determination of photochemical quantum yields via online UV-Vis spectroscopy.

    PubMed

    Stadler, Eduard; Eibel, Anna; Fast, David; Freißmuth, Hilde; Holly, Christian; Wiech, Mathias; Moszner, Norbert; Gescheidt, Georg

    2018-05-16

    We have developed a simple method for determining the quantum yields of photo-induced reactions. Our setup features a fibre coupled UV-Vis spectrometer, LED irradiation sources, and a calibrated spectrophotometer for precise measurements of the LED photon flux. The initial slope in time-resolved absorbance profiles provides the quantum yield. We show the feasibility of our methodology for the kinetic analysis of photochemical reactions and quantum yield determination. The typical chemical actinometers, ferrioxalate and ortho-nitrobenzaldehyde, as well as riboflavin, a spiro-compound, phosphorus- and germanium-based photoinitiators for radical polymerizations and the frequently utilized photo-switch azobenzene serve as paradigms. The excellent agreement of our results with published data demonstrates the high potential of the proposed method as a convenient alternative to the time-consuming chemical actinometry.
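
    A minimal sketch of the initial-slope evaluation, assuming total absorption of monochromatic LED light so that Φ = |dA/dt| · V / (ε · l · q_p); the molar absorptivity, path length, volume, and photon flux are all illustrative assumptions.

```python
# Hypothetical sketch of the initial-slope evaluation, assuming total
# absorption: Phi = |dA/dt| * V / (eps * l * q_p); all numbers are assumed.
import numpy as np

t = np.linspace(0, 30, 16)                # time (s)
A = 0.80 * np.exp(-0.01 * t)              # measured absorbance of the reactant

eps, l, V = 12000.0, 1.0, 3.0e-3          # L mol^-1 cm^-1, cm, L
q_p = 2.0e-9                              # incident photon flux (mol photons/s)

slope = np.polyfit(t[:5], A[:5], 1)[0]    # initial slope dA/dt (s^-1)
phi = abs(slope) * V / (eps * l * q_p)
print(f"quantum yield ~ {phi:.2f}")
```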

  8. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors

    PubMed Central

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-01-01

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community and has proven suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by a visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases. PMID:29495417

  9. Combining Deep and Handcrafted Image Features for Presentation Attack Detection in Face Recognition Systems Using Visible-Light Camera Sensors.

    PubMed

    Nguyen, Dat Tien; Pham, Tuyen Danh; Baek, Na Rae; Park, Kang Ryoung

    2018-02-26

    Although face recognition systems have wide application, they are vulnerable to presentation attack samples (fake samples). Therefore, a presentation attack detection (PAD) method is required to enhance the security level of face recognition systems. Most of the previously proposed PAD methods for face recognition systems have focused on using handcrafted image features, which are designed by expert knowledge of designers, such as Gabor filter, local binary pattern (LBP), local ternary pattern (LTP), and histogram of oriented gradients (HOG). As a result, the extracted features reflect limited aspects of the problem, yielding a detection accuracy that is low and varies with the characteristics of presentation attack face images. The deep learning method has been developed in the computer vision research community and has proven suitable for automatically training a feature extractor that can be used to enhance the ability of handcrafted features. To overcome the limitations of previously proposed PAD methods, we propose a new PAD method that uses a combination of deep and handcrafted features extracted from the images by a visible-light camera sensor. Our proposed method uses the convolutional neural network (CNN) method to extract deep image features and the multi-level local binary pattern (MLBP) method to extract skin detail features from face images to discriminate between real and presentation attack face images. By combining the two types of image features, we form a new type of image features, called hybrid features, which has stronger discrimination ability than single image features. Finally, we use the support vector machine (SVM) method to classify the image features into real or presentation attack class. Our experimental results indicate that our proposed method outperforms previous PAD methods by yielding the smallest error rates on the same image databases.
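
    A minimal sketch of the hybrid-feature idea under stated assumptions: a uniform-LBP histogram (standing in for the paper's multi-level LBP) is concatenated with placeholder "deep" features and fed to an SVM; the synthetic "real" and "attack" images and all parameters are illustrative.

```python
# Hypothetical sketch of the hybrid-feature idea: a uniform-LBP histogram
# (standing in for multi-level LBP) concatenated with placeholder "deep"
# features, classified by an SVM; the image data are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

rng = np.random.default_rng(7)

def lbp_hist(img, P=8, R=1.0):
    codes = local_binary_pattern(img, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

def hybrid(img, deep_features):
    return np.concatenate([lbp_hist(img), deep_features])

# "Real" faces keep fine texture; "attack" images are smoother (re-imaged).
real = [(rng.random((64, 64)) * 255).astype(np.uint8) for _ in range(40)]
fake = [(gaussian_filter(rng.random((64, 64)), 2) * 255).astype(np.uint8)
        for _ in range(40)]
X = np.array([hybrid(img, rng.random(16)) for img in real + fake])
y = np.array([0] * 40 + [1] * 40)

clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```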

  10. Genomic prediction using an iterative conditional expectation algorithm for a fast BayesC-like model.

    PubMed

    Dong, Linsong; Wang, Zhiyong

    2018-06-11

    Genomic prediction is feasible for estimating genomic breeding values because of dense genome-wide markers and credible statistical methods, such as Genomic Best Linear Unbiased Prediction (GBLUP) and various Bayesian methods. Compared with GBLUP, Bayesian methods adopt more flexible assumptions for the distributions of SNP effects. However, most Bayesian methods are based on Markov chain Monte Carlo (MCMC) algorithms, leading to computational efficiency challenges. Hence, some fast Bayesian approaches, such as fast BayesB (fBayesB), were proposed to speed up the calculation. This study proposes another fast Bayesian method, termed fast BayesC (fBayesC). The prior distribution of fBayesC assumes that each SNP has a non-zero effect with probability γ, drawn from a normal density with a common variance. The simulated data from the QTLMAS XII workshop and actual data on large yellow croaker were used to compare the predictive results of fBayesB, fBayesC and (MCMC-based) BayesC. The results showed that when γ was set to a small value, such as 0.01 in the simulated data or 0.001 in the actual data, fBayesB and fBayesC yielded lower prediction accuracies (abilities) than BayesC. In the actual data, fBayesC yielded very similar predictive abilities to BayesC when γ ≥ 0.01. When γ = 0.01, fBayesB also yielded similar results to fBayesC and BayesC. However, fBayesB could not yield an explicit result when γ ≥ 0.1, whereas this was not observed for fBayesC. Moreover, the computational speed of fBayesC was significantly faster than that of BayesC, making fBayesC a promising method for genomic prediction.

  11. Wavelet filtered shifted phase-encoded joint transform correlation for face recognition

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2017-05-01

    A new wavelet-filtered shifted-phase-encoded joint transform correlation (WPJTC) technique is proposed for efficient face recognition. The proposed technique uses discrete wavelet decomposition for preprocessing and can effectively accommodate various 3D facial distortions, effects of noise, and illumination variations. After analyzing different forms of wavelet basis functions, an optimal method is proposed by considering discrimination capability and processing speed as performance trade-offs. The proposed technique yields better correlation discrimination than alternate pattern recognition techniques such as the phase-shifted phase-encoded fringe-adjusted joint transform correlator. The performance of the proposed WPJTC has been tested using the Yale facial database and the extended Yale facial database under different environments such as illumination variation, noise, and 3D changes in facial expressions. Test results show that the proposed WPJTC yields better performance than alternate JTC-based face recognition techniques.

  12. The role of interest and inflation rates in life-cycle cost analysis

    NASA Technical Reports Server (NTRS)

    Eisenberger, I.; Remer, D. S.; Lorden, G.

    1978-01-01

    The effect of projected interest and inflation rates on life cycle cost calculations is discussed and a method is proposed for making such calculations which replaces these rates by a single parameter. Besides simplifying the analysis, the method clarifies the roles of these rates. An analysis of historical interest and inflation rates from 1950 to 1976 shows that the proposed method can be expected to yield very good projections of life cycle cost even if the rates themselves fluctuate considerably.
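
    The abstract does not state the single parameter's exact definition; a standard candidate (an assumption here, not necessarily the paper's formulation) is the inflation-adjusted real discount rate d = (1 + i)/(1 + f) − 1, which makes the two-rate and one-parameter present-value calculations agree exactly:

```python
# Collapsing interest rate i and inflation rate f into one real discount rate
# d = (1 + i)/(1 + f) - 1 (a standard identity; assumed to match the paper's
# single parameter). Both present-value computations below agree exactly.
i, f = 0.08, 0.05                 # assumed annual interest and inflation rates
d = (1 + i) / (1 + f) - 1

annual_cost_today = 10_000.0      # recurring cost in today's dollars
years = 15

pv_two_rates = sum(annual_cost_today * (1 + f) ** n / (1 + i) ** n
                   for n in range(1, years + 1))
pv_one_rate = sum(annual_cost_today / (1 + d) ** n for n in range(1, years + 1))
print(f"d = {d:.4%}; PV two-rate = {pv_two_rates:,.0f}; PV one-rate = {pv_one_rate:,.0f}")
```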

  13. A range-free method to determine antoine vapor-pressure heat transfer-related equation coefficients using the Boubaker polynomial expansion scheme

    NASA Astrophysics Data System (ADS)

    Koçak, H.; Dahong, Z.; Yildirim, A.

    2011-05-01

    In this study, a range-free method is proposed in order to determine the Antoine constants for a given material (salicylic acid). The advantage of this method is mainly yielding analytical expressions which fit different temperature ranges.
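
    For reference, the Antoine equation is log10 P = A − B/(C + T). The sketch below fits the three constants to synthetic vapour-pressure data by ordinary nonlinear least squares, which is the conventional range-bound baseline the paper's range-free method is meant to improve on; the data and starting values are assumptions.

```python
# Conventional baseline the paper improves on: fit the Antoine constants
#   log10 P = A - B / (C + T)
# to vapour-pressure data over one temperature range (synthetic data here).
import numpy as np
from scipy.optimize import curve_fit

def antoine(T, A, B, C):
    return A - B / (C + T)           # log10 of vapour pressure

T = np.linspace(300.0, 420.0, 15)    # K
logP = antoine(T, 9.5, 3200.0, -50.0) \
       + 0.01 * np.random.default_rng(8).normal(size=T.size)

(A, B, C), _ = curve_fit(antoine, T, logP, p0=(9.0, 3000.0, -40.0))
print(f"A = {A:.2f}, B = {B:.0f}, C = {C:.0f}")
```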

  14. A one-shot-projection method for measurement of specular surfaces.

    PubMed

    Wang, Zhenzhou

    2015-02-09

    In this paper, a method is proposed to measure the shapes of specular surfaces with a one-shot projection of structured laser patterns. By intercepting the reflected laser pattern twice with two diffusive planes, a closed-form solution is achieved for each reflected ray. The points on the specular surface are reconstructed by computing the intersections of the incident rays and the reflected rays. The proposed method can measure both static and dynamic specular shapes owing to its one-shot projection, which is beyond the capability of most state-of-the-art methods that need multiple projections. To our knowledge, the proposed method is the only method so far that yields closed-form solutions for dynamic specular surfaces.

  15. Integration of membrane distillation into traditional salt farming method: Process development and modelling

    NASA Astrophysics Data System (ADS)

    Hizam, S.; Bilad, M. R.; Putra, Z. A.

    2017-10-01

    Farmers still practice traditional salt farming in many regions, particularly in Indonesia. This archaic method not only produces low yields and poor salt quality, it is also laborious. Furthermore, the farming locations typically have poor access to fresh water and are far from the electricity grid, which restricts upgrades to more advanced production technology. This paper proposes a new salt harvesting concept that improves the salt yield and at the same time facilitates recovery of fresh water from seawater. The new concept integrates solar-powered membrane distillation (MD) and photovoltaic cells to drive the pumping. We performed basic solar still experiments to quantify the heat flux received by a pond. The data were used as insight for designing the proposed concept, particularly the operational strategy and the most effective way to integrate MD. After the conceptual design had been developed, we formulated mass and energy balances to estimate the performance of the proposed concept. Based on our data and design, the system is expected to improve the yield and quality of salt production, maximize fresh water harvesting, and eventually provide economic gains for salt farmers, hence improving their quality of life. The key performance can only be measured experimentally using the gain output ratio as the performance indicator, which will be done in a future study.

  16. Clinic expert information extraction based on domain model and block importance model.

    PubMed

    Zhang, Yuanpeng; Wang, Li; Qian, Danmin; Geng, Xingyun; Yao, Dengfu; Dong, Jiancheng

    2015-11-01

    To extract expert clinic information from the Deep Web, there are two challenges to face. The first is to make a judgment on forms. A novel method based on a domain model, which is a tree structure constructed from the attributes of query interfaces, is proposed. With this model, query interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the response Web pages indexed by query interfaces. To filter the noisy information on a Web page, a block importance model is proposed, in which both content and spatial features are taken into account. The experimental results indicate that the domain model yields a precision 4.89% higher than that of the rule-based method, whereas the block importance model yields an F1 measure 10.5% higher than that of the XPath method. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Development of a “Fission-proxy” Method for the Measurement of 14-MeV Neutron Fission Yields at CAMS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gharibyan, Narek

    2016-10-25

    Relative fission yield measurements were made for 50 fission products from 25.6±0.5 MeV alpha-induced fission of Th-232. Quantitative comparison of these experimentally measured fission yields with the evaluated fission yields from 14-MeV neutron-induced fission of U-235 demonstrates the feasibility of the proposed fission-proxy method. This new technique, based on the Bohr-independence hypothesis, permits the measurement of fission yields from an alternate reaction pathway (Th-232 + 25.6 MeV α → U-236* vs. U-235 + 14-MeV n → U-236*) given that the fission process associated with the same compound nucleus is independent of its formation. Other suitable systems that can potentially be investigated in this manner include (but are not limited to) Pu-239 and U-237.

  18. Measurement of rolling friction by a damped oscillator

    NASA Technical Reports Server (NTRS)

    Dayan, M.; Buckley, D. H.

    1983-01-01

    An experimental method for measuring rolling friction is proposed. The method is mechanically simple. It is based on an oscillator in a uniform magnetic field and does not involve any mechanical forces except for the measured friction. The measured pickup voltage is Fourier analyzed and yields the friction spectral response. The proposed experiment is not tailored for a particular case. Instead, various modes of operation, suitable to different experimental conditions, are discussed.
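
    A minimal sketch of the Fourier-analysis step under stated assumptions: the pickup voltage is modelled as an exponentially damped cosine, and the half-power width of its spectral peak (about γ/π in Hz) reflects the friction-induced damping γ; the frequencies and damping value are illustrative.

```python
# Hypothetical sketch: the pickup voltage modelled as a damped cosine; the
# half-power width of its spectral peak (about gamma/pi in Hz) tracks the
# friction-induced damping gamma.
import numpy as np

fs = 1000.0
t = np.arange(0, 20, 1 / fs)
f0, gamma = 5.0, 0.8                                  # Hz, 1/s (assumed values)
v = np.exp(-gamma * t) * np.cos(2 * np.pi * f0 * t)   # pickup voltage

spec = np.abs(np.fft.rfft(v))
freqs = np.fft.rfftfreq(v.size, 1 / fs)

half = spec >= spec.max() / np.sqrt(2)                # half-power band
width = freqs[half].max() - freqs[half].min()
print(f"peak {freqs[spec.argmax()]:.2f} Hz, width {width:.3f} Hz "
      f"(expected ~ {gamma / np.pi:.3f} Hz)")
```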

  19. Max-AUC Feature Selection in Computer-Aided Detection of Polyps in CT Colonography

    PubMed Central

    Xu, Jian-Wu; Suzuki, Kenji

    2014-01-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level. PMID:24608058

  20. Max-AUC feature selection in computer-aided detection of polyps in CT colonography.

    PubMed

    Xu, Jian-Wu; Suzuki, Kenji

    2014-03-01

    We propose a feature selection method based on a sequential forward floating selection (SFFS) procedure to improve the performance of a classifier in computerized detection of polyps in CT colonography (CTC). The feature selection method is coupled with a nonlinear support vector machine (SVM) classifier. Unlike the conventional linear method based on Wilks' lambda, the proposed method selected the most relevant features that would maximize the area under the receiver operating characteristic curve (AUC), which directly maximizes classification performance, evaluated based on AUC value, in the computer-aided detection (CADe) scheme. We presented two variants of the proposed method with different stopping criteria used in the SFFS procedure. The first variant searched all feature combinations allowed in the SFFS procedure and selected the subsets that maximize the AUC values. The second variant performed a statistical test at each step during the SFFS procedure, and it was terminated if the increase in the AUC value was not statistically significant. The advantage of the second variant is its lower computational cost. To test the performance of the proposed method, we compared it against the popular stepwise feature selection method based on Wilks' lambda for a colonic-polyp database (25 polyps and 2624 nonpolyps). We extracted 75 morphologic, gray-level-based, and texture features from the segmented lesion candidate regions. The two variants of the proposed feature selection method chose 29 and 7 features, respectively. Two SVM classifiers trained with these selected features yielded a 96% by-polyp sensitivity at false-positive (FP) rates of 4.1 and 6.5 per patient, respectively. Experiments showed a significant improvement in the performance of the classifier with the proposed feature selection method over that with the popular stepwise feature selection based on Wilks' lambda that yielded 18.0 FPs per patient at the same sensitivity level.
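
    A simplified sketch of AUC-driven forward selection with an SVM on synthetic data; the paper's SFFS additionally performs a conditional backward (floating) step after each inclusion, omitted here for brevity, and the stopping rule only loosely mirrors the variants described.

```python
# Simplified sketch: AUC-driven sequential forward selection with an SVM.
# The paper's SFFS adds a conditional backward step after each inclusion,
# omitted here; data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, n_informative=4,
                           random_state=0)

def auc_of(subset):
    scores = cross_val_predict(SVC(kernel="rbf"), X[:, subset], y,
                               cv=5, method="decision_function")
    return roc_auc_score(y, scores)

selected, best_auc = [], 0.0
for _ in range(6):                          # grow the subset up to 6 features
    cand = [(auc_of(selected + [j]), j) for j in range(20) if j not in selected]
    auc, j = max(cand)
    if auc <= best_auc:                     # stop when AUC no longer improves
        break
    selected, best_auc = selected + [j], auc
print("selected features:", selected, "AUC:", round(best_auc, 3))
```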

  1. Computing the Baker-Campbell-Hausdorff series and the Zassenhaus product

    NASA Astrophysics Data System (ADS)

    Weyrauch, Michael; Scholz, Daniel

    2009-09-01

    The Baker-Campbell-Hausdorff (BCH) series and the Zassenhaus product are of fundamental importance for the theory of Lie groups and their applications in physics and physical chemistry. Standard methods for the explicit construction of the BCH and Zassenhaus terms yield polynomial representations, which must be translated into the usually required commutator representation. We prove that a new translation proposed recently yields a correct representation of the BCH and Zassenhaus terms. This representation entails fewer terms than the well-known Dynkin-Specht-Wever representation, which is of relevance for practical applications. Furthermore, various methods for the computation of the BCH and Zassenhaus terms are compared, and a new efficient approach for the calculation of the Zassenhaus terms is proposed. Mathematica implementations for the most efficient algorithms are provided together with comparisons of efficiency.
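
    For reference, the first terms of the BCH series in the commutator representation discussed here are the standard ones:

```latex
\log\!\left(e^{X}e^{Y}\right)
  = X + Y + \tfrac{1}{2}[X,Y]
  + \tfrac{1}{12}\bigl([X,[X,Y]] + [Y,[Y,X]]\bigr)
  - \tfrac{1}{24}[Y,[X,[X,Y]]] + \cdots
```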

  2. An efficient scan diagnosis methodology according to scan failure mode for yield enhancement

    NASA Astrophysics Data System (ADS)

    Kim, Jung-Tae; Seo, Nam-Sik; Oh, Ghil-Geun; Kim, Dae-Gue; Lee, Kyu-Taek; Choi, Chi-Young; Kim, InSoo; Min, Hyoung Bok

    2008-12-01

    Yield has always been a driving consideration in modern semiconductor fabrication. Statistically, the largest portion of wafer yield loss comes from defective scan failures. This paper presents efficient failure analysis methods, based on scan diagnosis, for initial yield ramp-up and for ongoing products. Our analysis shows that more than 60% of scan-failure dies fall into the category of shift mode in very deep submicron (VDSM) devices. However, localization of scan shift-mode failure is very difficult in comparison to capture-mode failure because it is caused by malfunction of the scan chain itself. Addressing this biggest challenge, we propose the most suitable analysis method according to scan failure mode (capture/shift) for yield enhancement. For capture failure mode, this paper describes a method that integrates the scan diagnosis flow and backside probing technology to obtain more accurate candidates. We also describe several unique techniques, such as a bulk back-grinding solution, efficient backside probing and a signal analysis method. Lastly, we introduce a blocked-chain analysis algorithm for efficient analysis of shift failure mode. The combination of these two methods contributes to yield enhancement. We confirm the failure candidates with the physical failure analysis (PFA) method. The direct feedback from defect visualization is useful for bringing devices to mass production in a shorter time. Experimental data on mass products show that our method reduces defective SCAN & SRAM-BIST failure rates by an average of 13.7% and improves wafer yield rates by 18.2%.

  3. Noise Estimation and Quality Assessment of Gaussian Noise Corrupted Images

    NASA Astrophysics Data System (ADS)

    Kamble, V. M.; Bhurchandi, K.

    2018-03-01

    Evaluating the exact quantity of noise present in an image, and the quality of an image in the absence of a reference image, is a challenging task. We propose a near-perfect noise estimation method and a no-reference image quality assessment method for images corrupted by Gaussian noise. The proposed methods obtain an initial estimate of the noise standard deviation from the median of the wavelet transform coefficients and then refine it to a near-exact estimate using curve fitting. The proposed noise estimation method provides the estimate of noise within an average error of ±4%. For quality assessment, this noise estimate is mapped to the Differential Mean Opinion Score (DMOS) using a nonlinear function. The proposed methods require minimal training and yield both the noise estimate and an image quality score. Images from the Laboratory for Image and Video Processing (LIVE) database and the Computational Perception and Image Quality (CSIQ) database are used for validation of the proposed quality assessment method. Experimental results show that the performance of the proposed quality assessment method is on par with existing no-reference image quality assessment metrics for Gaussian noise corrupted images.
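
    The initial wavelet-domain estimate resembles the classical Donoho-Johnstone robust estimator. A minimal sketch, assuming PyWavelets and omitting the paper's curve-fitting refinement:

    ```python
    import numpy as np
    import pywt  # PyWavelets

    def initial_sigma_estimate(img):
        """Robust initial estimate of the Gaussian noise standard deviation:
        median absolute value of the finest diagonal (HH) wavelet detail
        band, divided by 0.6745 (the MAD-to-sigma conversion factor)."""
        _, (_, _, hh) = pywt.dwt2(np.asarray(img, dtype=float), "db8")
        return np.median(np.abs(hh)) / 0.6745
    ```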

  4. Graphical user interface for yield and dose estimations for cyclotron-produced technetium

    NASA Astrophysics Data System (ADS)

    Hou, X.; Vuckovic, M.; Buckley, K.; Bénard, F.; Schaffer, P.; Ruth, T.; Celler, A.

    2014-07-01

    The cyclotron-based 100Mo(p,2n)99mTc reaction has been proposed as an alternative method for solving the shortage of 99mTc. With this production method, however, even if highly enriched molybdenum is used, various radioactive and stable isotopes will be produced simultaneously with 99mTc. In order to optimize reaction parameters and estimate potential patient doses from radiotracers labeled with cyclotron produced 99mTc, the yields for all reaction products must be estimated. Such calculations, however, are extremely complex and time consuming. Therefore, the objective of this study was to design a graphical user interface (GUI) that would automate these calculations, facilitate analysis of the experimental data, and predict dosimetry. The resulting GUI, named Cyclotron production Yields and Dosimetry (CYD), is based on Matlab®. It has three parts providing (a) reaction yield calculations, (b) predictions of gamma emissions and (c) dosimetry estimations. The paper presents the outline of the GUI, lists the parameters that must be provided by the user, discusses the details of calculations and provides examples of the results. Our initial experience shows that the proposed GUI allows the user to very efficiently calculate the yields of reaction products and analyze gamma spectroscopy data. However, it is expected that the main advantage of this GUI will be at the later clinical stage when entering reaction parameters will allow the user to predict production yields and estimate radiation doses to patients for each particular cyclotron run.
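
    As a rough illustration of the kind of yield arithmetic such a GUI automates, a thin-target activation estimate can be sketched as below. This is a deliberate simplification (the function name and parameters are illustrative only); CYD integrates energy-dependent cross sections over the full target thickness for every reaction product.

    ```python
    import numpy as np

    E_CHARGE = 1.602176634e-19  # elementary charge, C

    def eob_activity_bq(beam_uA, sigma_mb, target_atoms_per_cm2,
                        half_life_s, t_irr_s):
        """End-of-bombardment activity (Bq) of one reaction product for a
        thin target: production rate R = (I/e) * sigma * N_t, saturated
        by the decay factor (1 - exp(-lambda * t_irr))."""
        rate = (beam_uA * 1e-6 / E_CHARGE) * (sigma_mb * 1e-27) \
               * target_atoms_per_cm2
        lam = np.log(2.0) / half_life_s
        return rate * (1.0 - np.exp(-lam * t_irr_s))
    ```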

  5. Graphical user interface for yield and dose estimations for cyclotron-produced technetium.

    PubMed

    Hou, X; Vuckovic, M; Buckley, K; Bénard, F; Schaffer, P; Ruth, T; Celler, A

    2014-07-07

    The cyclotron-based (100)Mo(p,2n)(99m)Tc reaction has been proposed as an alternative method for solving the shortage of (99m)Tc. With this production method, however, even if highly enriched molybdenum is used, various radioactive and stable isotopes will be produced simultaneously with (99m)Tc. In order to optimize reaction parameters and estimate potential patient doses from radiotracers labeled with cyclotron produced (99m)Tc, the yields for all reaction products must be estimated. Such calculations, however, are extremely complex and time consuming. Therefore, the objective of this study was to design a graphical user interface (GUI) that would automate these calculations, facilitate analysis of the experimental data, and predict dosimetry. The resulting GUI, named Cyclotron production Yields and Dosimetry (CYD), is based on Matlab®. It has three parts providing (a) reaction yield calculations, (b) predictions of gamma emissions and (c) dosimetry estimations. The paper presents the outline of the GUI, lists the parameters that must be provided by the user, discusses the details of calculations and provides examples of the results. Our initial experience shows that the proposed GUI allows the user to very efficiently calculate the yields of reaction products and analyze gamma spectroscopy data. However, it is expected that the main advantage of this GUI will be at the later clinical stage when entering reaction parameters will allow the user to predict production yields and estimate radiation doses to patients for each particular cyclotron run.

  6. Modulated Hebb-Oja learning rule--a method for principal subspace analysis.

    PubMed

    Jankovic, Marko V; Ogawa, Hidemitsu

    2006-03-01

    This paper presents an analysis of the recently proposed modulated Hebb-Oja (MHO) method, which performs linear mapping to a lower-dimensional subspace; the analysis focuses on the principal component subspace. Compared to other well-known methods for obtaining the principal component subspace (e.g., Oja's Subspace Learning Algorithm), the proposed method has one feature that could be seen as desirable from a biological point of view: the synaptic efficacy learning rule does not need explicit information about the values of the other efficacies to modify an individual efficacy. Also, the simplicity of the "neural circuits" that perform global computations, and the fact that their number does not depend on the number of input and output neurons, can be seen as good features of the proposed method.
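
    For context, Oja's Subspace Learning Algorithm, the baseline named above, can be sketched as follows. This is a toy implementation of the comparison method only; the MHO rule itself modulates the Hebbian term differently and is not reproduced here.

    ```python
    import numpy as np

    def oja_subspace(X, k, eta=1e-3, epochs=20, seed=0):
        """Oja's subspace rule: W converges to an orthonormal basis of the
        principal k-dimensional subspace of the input covariance."""
        rng = np.random.default_rng(seed)
        W = 0.1 * rng.standard_normal((X.shape[1], k))
        for _ in range(epochs):
            for x in X:
                y = W.T @ x                        # neural outputs
                W += eta * np.outer(x - W @ y, y)  # dW = eta (x - W y) y^T
        return W
    ```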

  7. High-efficient Extraction of Drainage Networks from Digital Elevation Model Data Constrained by Enhanced Flow Enforcement from Known River Map

    NASA Astrophysics Data System (ADS)

    Wu, T.; Li, T.; Li, J.; Wang, G.

    2017-12-01

    Improved drainage network extraction can be achieved by flow enforcement, whereby information from known river maps is imposed on the flow-path modeling process. However, the common elevation-based stream burning method can sometimes cause unintended topological errors and misinterpret the overall drainage pattern. We present an enhanced flow enforcement method to facilitate accurate and efficient drainage network extraction. Both the topology of the mapped hydrography and the initial landscape of the DEM are preserved and fully utilized in the proposed method. An improved stream rasterization is achieved, yielding a continuous, unambiguous, and collision-free raster equivalent of the stream vectors for flow enforcement. By imposing priority-based enforcement with a complementary flow direction enhancement procedure, the drainage patterns of the mapped hydrography are fully represented in the derived results. The proposed method was tested over the Rogue River Basin using DEMs of various resolutions. As indicated by visual and statistical analyses, the proposed method has three major advantages: (1) it significantly reduces the occurrence of topological errors, yielding very accurate watershed partitions and channel delineations, (2) it ensures scale-consistent performance across DEMs of various resolutions, and (3) the entire extraction process is designed for high computational efficiency.

  8. Optshrink LR + S: accelerated fMRI reconstruction using non-convex optimal singular value shrinkage.

    PubMed

    Aggarwal, Priya; Shrivastava, Parth; Kabra, Tanay; Gupta, Anubha

    2017-03-01

    This paper presents a new accelerated fMRI reconstruction method, namely, the OptShrink LR + S method, which reconstructs undersampled fMRI data using a linear combination of low-rank and sparse components. The low-rank component is estimated using a non-convex optimal singular value shrinkage algorithm, while the sparse component is estimated using convex l1 minimization. The performance of the proposed method is compared with existing state-of-the-art algorithms on a real fMRI dataset. The proposed OptShrink LR + S method yields good qualitative and quantitative results.
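
    A generic low-rank + sparse alternation conveys the structure of such methods. This is a sketch only: the paper replaces the convex singular-value soft-threshold below with its non-convex OptShrink shrinkage, and works on undersampled k-space data rather than a fully observed matrix.

    ```python
    import numpy as np

    def lr_plus_s(Y, tau=1.0, lam=0.1, iters=50):
        """Split a (voxels x time) fMRI matrix as Y ~ L + S by alternating
        singular-value thresholding (low-rank L) and entrywise
        soft-thresholding (sparse S)."""
        S = np.zeros_like(Y)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(Y - S, full_matrices=False)
            L = (U * np.maximum(s - tau, 0.0)) @ Vt            # shrink spectrum
            R = Y - L
            S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)  # l1 prox
        return L, S
    ```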

  9. Ammonia synthesis using magnetic induction method (MIM)

    NASA Astrophysics Data System (ADS)

    Puspitasari, P.; Razak, J. Abd; Yahya, N.

    2012-09-01

    The most challenging issue in ammonia synthesis is achieving a high yield. A new approach to ammonia synthesis using the Magnetic Induction Method (MIM) and Helmholtz coils is proposed. Ammonia detection was carried out using the Kjeldahl method and FTIR. The system was designed using AutoCAD software. The magnetic field of the MIM was varied from 100 mT to 200 mT, and the magnetic field of the Helmholtz coils was 14 mT. The FTIR results show that ammonia was successfully formed, with stretching peaks at 1097, 1119, 1162, 1236, 1377, and 1464 cm-1. The UV-VIS result shows the ammonia absorption band at a wavelength of 195 nm. The ammonia yield increased to 244.72 μmol/(g·h) using the MIM with six pairs of Helmholtz coils. This new method is therefore promising for achieving high ammonia yields at ambient conditions (25 °C and 1 atm) under the Magnetic Induction Method (MIM).

  10. Beam Design and User Scheduling for Nonorthogonal Multiple Access With Multiple Antennas Based on Pareto Optimality

    NASA Astrophysics Data System (ADS)

    Seo, Junyeong; Sung, Youngchul

    2018-06-01

    In this paper, an efficient transmit beam design and user scheduling method is proposed for the multi-user (MU) multiple-input single-output (MISO) non-orthogonal multiple access (NOMA) downlink, based on Pareto optimality. The proposed method groups simultaneously served users into multiple clusters, with a practical two users per cluster, and then applies spatial zero-forcing (ZF) across clusters to control inter-cluster interference (ICI), and Pareto-optimal beam design with successive interference cancellation (SIC) within each cluster to remove interference at the strong users and improve the signal-to-interference-plus-noise ratios (SINRs) of the interference-experiencing weak users. The proposed method has the flexibility to control the rates of strong and weak users, and numerical results show that it yields good performance.

  11. A comparison of radiometric correction techniques in the evaluation of the relationship between LST and NDVI in Landsat imagery.

    PubMed

    Tan, Kok Chooi; Lim, Hwee San; Matjafri, Mohd Zubir; Abdullah, Khiruddin

    2012-06-01

    Atmospheric corrections of multi-temporal optical satellite images are necessary, especially in change detection analyses such as normalized difference vegetation index (NDVI) rationing. Abrupt change detection analysis using remote-sensing techniques requires radiometric congruity and atmospheric correction to monitor terrestrial surfaces over time. Two atmospheric correction methods were used for this study: relative radiometric normalization and the simplified method for atmospheric correction (SMAC) in the solar spectrum. A multi-temporal data set consisting of two sets of Landsat images of Penang Island, Malaysia, from the period between 1991 and 2002, was used to compare NDVI maps generated using the proposed atmospheric correction methods. Land surface temperature (LST) was retrieved using ATCOR3_T in PCI Geomatica 10.1 image processing software. Linear regression analysis was used to analyze the relationship between NDVI and LST. This study reveals that both of the proposed atmospheric correction methods yielded high accuracy, as shown by the linear correlation coefficients. To check the accuracy of the equation obtained through linear regression analysis for each satellite image, 20 points were randomly chosen. The results showed that the SMAC method yielded a consistent error when predicting the NDVI value from the regression-derived equation. The average errors from both proposed atmospheric correction methods were less than 10%.
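
    For reference, the vegetation index compared across correction methods is computed from the red and near-infrared surface reflectances:

    $$ \mathrm{NDVI} = \frac{\rho_{\mathrm{NIR}} - \rho_{\mathrm{red}}}{\rho_{\mathrm{NIR}} + \rho_{\mathrm{red}}} $$

    so any residual atmospheric bias in either band propagates directly into the NDVI maps being compared.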

  12. Noise suppressed partial volume correction for cardiac SPECT/CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Chung; Liu, Chi, E-mail: chi.liu@yale.edu

    Purpose: Partial volume correction (PVC) methods typically improve quantification at the expense of increased image noise and reduced reproducibility. In this study, the authors developed a novel voxel-based PVC method that incorporates anatomical knowledge to improve quantification while suppressing noise for cardiac SPECT/CT imaging. Methods: In the proposed method, the SPECT images were first reconstructed using anatomical-based maximum a posteriori (AMAP) with Bowsher's prior to penalize noise while preserving boundaries. A sequential voxel-by-voxel PVC approach (Yang's method) was then applied to the AMAP reconstruction using a template response. This template response was obtained by forward projecting a template derived from a contrast-enhanced CT image, and then reconstructing it using AMAP to model the partial volume effects (PVEs) introduced by both the system resolution and the smoothing applied during reconstruction. To evaluate the proposed noise suppressed PVC (NS-PVC), the authors first simulated two types of cardiac SPECT studies: a {sup 99m}Tc-tetrofosmin myocardial perfusion scan and a {sup 99m}Tc-labeled red blood cell (RBC) scan on a dedicated cardiac multiple-pinhole SPECT/CT at both high and low count levels. The authors then applied the proposed method to a canine equilibrium blood pool study following injection with {sup 99m}Tc-RBCs at different count levels by rebinning the list-mode data into shorter acquisitions. The proposed method was compared to MLEM reconstruction without PVC, two conventional PVC methods (Yang's method and multitarget correction (MTC)) applied to the MLEM reconstruction, and AMAP reconstruction without PVC. Results: Yang's method improved quantification but yielded increased noise and reduced reproducibility in regions with higher activity. MTC corrected for PVE on high-count data with amplified noise, and yielded the worst performance among all the methods tested on low-count data. AMAP effectively suppressed noise and reduced the spill-in effect in low-activity regions; however, it was unable to reduce the spill-out effect in high-activity regions. NS-PVC yielded superior performance in terms of both quantitative assessment and visual image quality while improving reproducibility. Conclusions: The results suggest that NS-PVC may be a promising PVC algorithm for application in low-dose protocols and in gated and dynamic cardiac studies with low counts.

  13. Nonexpansiveness of a linearized augmented Lagrangian operator for hierarchical convex optimization

    NASA Astrophysics Data System (ADS)

    Yamagishi, Masao; Yamada, Isao

    2017-04-01

    Hierarchical convex optimization concerns two-stage optimization problems: the first stage problem is a convex optimization; the second stage problem is the minimization of a convex function over the solution set of the first stage problem. For the hierarchical convex optimization, the hybrid steepest descent method (HSDM) can be applied, where the solution set of the first stage problem must be expressed as the fixed point set of a certain nonexpansive operator. In this paper, we propose a nonexpansive operator that yields a computationally efficient update when it is plugged into the HSDM. The proposed operator is inspired by the update of the linearized augmented Lagrangian method. It is applicable to characterize the solution set of recent sophisticated convex optimization problems found in the context of inverse problems, where the sum of multiple proximable convex functions involving linear operators must be minimized to incorporate preferable properties into the minimizers. For such a problem formulation, there has not yet been reported any nonexpansive operator that yields an update free from the inversions of linear operators in cases where it is utilized in the HSDM. Unlike previously known nonexpansive operators, the proposed operator yields an inversion-free update in such cases. As an application of the proposed operator plugged into the HSDM, we also present, in the context of the so-called superiorization, an algorithmic solution to a convex optimization problem over the generalized convex feasible set where the intersection of the hard constraints is not necessarily simple.

  14. Noise parameter estimation for poisson corrupted images using variance stabilization transforms.

    PubMed

    Jin, Xiaodan; Xu, Zhenyu; Hirakawa, Keigo

    2014-03-01

    Noise is present in all images captured by real-world image sensors. The Poisson distribution models the stochastic nature of the photon arrival process and agrees with the distribution of measured pixel values. We propose a method for estimating unknown noise parameters from Poisson corrupted images using properties of variance stabilization. With a significantly lower computational complexity and improved stability, the proposed estimation technique yields noise parameters that are comparable in accuracy to state-of-the-art methods.
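
    The classical variance-stabilizing step for Poisson data is the Anscombe transform; a minimal sketch is shown below. The paper's estimator builds on such stabilization properties, though its details differ.

    ```python
    import numpy as np

    def anscombe(x):
        """Anscombe transform: maps Poisson(lmbda) counts to data with
        approximately unit variance, independent of lmbda."""
        return 2.0 * np.sqrt(np.asarray(x, dtype=float) + 3.0 / 8.0)

    # After stabilization, the spread of a flat image region should be
    # close to 1; systematic deviations expose the sensor's true
    # gain/noise parameters.
    ```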

  15. Measures and models for angular correlation and angular-linear correlation. [correlation of random variables

    NASA Technical Reports Server (NTRS)

    Johnson, R. A.; Wehrly, T.

    1976-01-01

    Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.

  16. Improved accuracy in Wigner-Ville distribution-based sizing of rod-shaped particle using flip and replication technique

    NASA Astrophysics Data System (ADS)

    Chuamchaitrakool, Porntip; Widjaja, Joewono; Yoshimura, Hiroyuki

    2018-01-01

    A method for improving accuracy in Wigner-Ville distribution (WVD)-based particle size measurements from inline holograms using flip and replication technique (FRT) is proposed. The FRT extends the length of hologram signals being analyzed, yielding better spatial-frequency resolution of the WVD output. Experimental results verify reduction in measurement error as the length of the hologram signals increases. The proposed method is suitable for particle sizing from holograms recorded using small-sized image sensors.

  17. Feasibility of groundwater recharge dam projects in arid environments

    NASA Astrophysics Data System (ADS)

    Jaafar, H. H.

    2014-05-01

    A new method for determining the feasibility of, and prioritizing investments in, agricultural and domestic recharge dams in arid regions is developed and presented. The method is based on identifying the factors affecting the decision-making process and evaluating these factors, followed by determining the indices in a GIS-aided environment. Evaluated parameters include results from field surveys and site visits, land cover and soils data, precipitation data, runoff data and modeling, number of beneficiaries, domestic irrigation demand, reservoir objectives, demography, reservoir yield and reliability, dam structures, construction costs, and operation and maintenance costs. Results of a case study on more than eighty proposed dams indicate that assessment of reliability, annualized cost per demand satisfied, and yield is crucial prior to investment decision making in arid areas. Irrigation demand is the major parameter influencing the yield and reliability of recharge dams, even when only three months of the demand were included. The reliability of the proposed reservoirs, as related to their standardized size and net inflow, was found to increase with increasing yield. High-priority dams were less than 4% of the total, and lower-priority dams amounted to 23%, with the remainder found to be infeasible. This methodology and its application have proved effective in guiding stakeholders toward the most favorable sites for preliminary and detailed design studies and commissioning.

  18. Segmentation of cortical bone using fast level sets

    NASA Astrophysics Data System (ADS)

    Chowdhury, Manish; Jörgens, Daniel; Wang, Chunliang; Smedby, Örjan; Moreno, Rodrigo

    2017-02-01

    Cortical bone plays a major role in the mechanical competence of bone, and its analysis requires accurate segmentation methods. Level set methods are usually among the state of the art for segmenting medical images, but traditional implementations are computationally expensive. This drawback was recently tackled through the so-called coherent propagation extension of the classical algorithm, which has decreased computation times dramatically. In this study, we assess the potential of this technique for segmenting cortical bone in interactive time in 3D images acquired through High Resolution peripheral Quantitative Computed Tomography (HR-pQCT). The obtained segmentations are used to estimate the cortical thickness and cortical porosity of the investigated images. Cortical thickness and cortical porosity are computed using sphere fitting and mathematical morphology operations, respectively. Qualitative comparison between the segmentations of the proposed algorithm and a previously published approach on six image volumes reveals superior smoothness of the level set approach. While the proposed method yields results similar to previous approaches in regions where the boundary between trabecular and cortical bone is well defined, it yields more stable segmentations in challenging regions, which results in more stable estimates of the cortical bone parameters. The proposed technique takes a few seconds to compute, which makes it suitable for clinical settings.

  19. Investigation of Inconsistent ENDF/B-VII.1 Independent and Cumulative Fission Product Yields with Proposed Revisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pigni, M.T., E-mail: pignimt@ornl.gov; Francis, M.W.; Gauld, I.C.

    A recent implementation of ENDF/B-VII.1 independent fission product yields and nuclear decay data identified inconsistencies in the data caused by the use of updated nuclear schemes in the decay sub-library that are not reflected in legacy fission product yield data. Recent changes in the decay data sub-library, particularly the delayed neutron branching fractions, result in calculated fission product concentrations that do not agree with the cumulative fission yields in the library or with experimental measurements. To address these issues, a comprehensive set of independent fission product yields was generated for thermal and fission spectrum neutron-induced fission of {sup 235,238}U and {sup 239,241}Pu in order to provide a preliminary assessment of the updated fission product yield data consistency. These updated independent fission product yields were utilized in the ORIGEN code to compare the calculated fission product inventories with experimentally measured inventories, with particular attention given to the noble gases. Another important outcome of this work is the development of the fission product yield covariance data necessary for fission product uncertainty quantification. The evaluation methodology combines a sequential Bayesian method, to guarantee consistency between independent and cumulative yields, with the physical constraints on the independent yields. This work was motivated by the need to improve the performance of the ENDF/B-VII.1 library for stable and long-lived fission products. The revised fission product yields and the new covariance data are proposed as a revision to the fission yield data currently in ENDF/B-VII.1.

  20. [Study on Information Extraction of Clinic Expert Information from Hospital Portals].

    PubMed

    Zhang, Yuanpeng; Dong, Jiancheng; Qian, Danmin; Geng, Xingyun; Wu, Huiqun; Wang, Li

    2015-12-01

    Clinic expert information provides important references for residents in need of hospital care. Usually, such information is hidden in the deep web and cannot be directly indexed by search engines. To extract clinic expert information from the deep web, the first challenge is to classify search forms. This paper proposes a novel method based on a domain model, which is a tree structure constructed from the attributes of search interfaces. With this model, search interfaces can be classified to a domain and filled in with domain keywords. Another challenge is to extract information from the web pages returned by the search interfaces. To filter out the noise on a web page, a block importance model is proposed. The experimental results indicated that the domain model yielded a precision 10.83% higher than that of the rule-based method, whereas the block importance model yielded an F₁ measure 10.5% higher than that of the XPath method.

  1. Two levels ARIMAX and regression models for forecasting time series data with calendar variation effects

    NASA Astrophysics Data System (ADS)

    Suhartono; Lee, Muhammad Hisyam; Prastyo, Dedy Dwi

    2015-12-01

    The aim of this research is to develop a calendar variation model for forecasting retail sales data with the Eid ul-Fitr effect. The proposed model is based on two methods, namely two-level ARIMAX and regression methods, built by using ARIMAX for the first level and regression for the second level. Monthly sales of men's jeans and women's trousers in a retail company for the period January 2002 to September 2009 are used as a case study. In general, the two-level calendar variation model yields two models: the first reconstructs the sales pattern that has already occurred, and the second forecasts the increase in sales due to Eid ul-Fitr, which affects sales in the same and the previous month. The results show that the proposed two-level calendar variation model based on ARIMAX and regression methods yields better forecasts than the seasonal ARIMA model and neural networks.

  2. The long-term strength of Europe and its implications for plate-forming processes.

    PubMed

    Pérez-Gussinyé, M; Watts, A B

    2005-07-21

    Field-based geological studies show that continental deformation preferentially occurs in young tectonic provinces rather than in old cratons. This partitioning of deformation suggests that the cratons are stronger than surrounding younger Phanerozoic provinces. However, although Archaean and Phanerozoic lithosphere differ in their thickness and composition, their relative strength is a matter of much debate. One proxy of strength is the effective elastic thickness of the lithosphere, Te. Unfortunately, spatial variations in Te are not well understood, as different methods yield different results. The differences are most apparent in cratons, where the 'Bouguer coherence' method yields large Te values (> 60 km) whereas the 'free-air admittance' method yields low values (< 25 km). Here we present estimates of the variability of Te in Europe using both methods. We show that when they are consistently formulated, both methods yield comparable Te values that correlate with geology, and that the strength of old lithosphere (> or = 1.5 Gyr old) is much larger (mean Te > 60 km) than that of younger lithosphere (mean Te < 30 km). We propose that this strength difference reflects changes in lithospheric plate structure (thickness, geothermal gradient and composition) that result from mantle temperature and volatile content decrease through Earth's history.

  3. Investigation of inconsistent ENDF/B-VII.1 independent and cumulative fission product yields with proposed revisions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pigni, Marco T; Francis, Matthew W; Gauld, Ian C

    A recent implementation of ENDF/B-VII.1 independent fission product yields and nuclear decay data identified inconsistencies in the data caused by the use of updated nuclear schemes in the decay sub-library that are not reflected in legacy fission product yield data. Recent changes in the decay data sub-library, particularly the delayed neutron branching fractions, result in calculated fission product concentrations that are incompatible with the cumulative fission yields in the library, and also with experimental measurements. A comprehensive set of independent fission product yields was generated for thermal and fission spectrum neutron-induced fission of 235,238U and 239,241Pu in order to provide a preliminary assessment of the updated fission product yield data consistency. These updated independent fission product yields were utilized in the ORIGEN code to evaluate the calculated fission product inventories against experimentally measured inventories, with particular attention given to the noble gases. An important outcome of this work is the development of the fission product yield covariance data necessary for fission product uncertainty quantification. The evaluation methodology combines a sequential Bayesian method, to guarantee consistency between independent and cumulative yields, with the physical constraints on the independent yields. This work was motivated by the inconsistency between the ENDF/B-VII.1 fission product yield and decay data sub-libraries for stable and long-lived cumulative yields. The revised fission product yields and the new covariance data are proposed as a revision to the fission yield data currently in ENDF/B-VII.1.

  4. Sequence Based Prediction of DNA-Binding Proteins Based on Hybrid Feature Selection Using Random Forest and Gaussian Naïve Bayes

    PubMed Central

    Lou, Wangchao; Wang, Xiaoqing; Chen, Fan; Chen, Yixiao; Jiang, Bo; Zhang, Hua

    2014-01-01

    Developing an efficient method for the determination of DNA-binding proteins, given their vital roles in gene regulation, is highly desirable since it would be invaluable for advancing our understanding of protein functions. In this study, we proposed a new method for the prediction of DNA-binding proteins, performing feature ranking using random forest and wrapper-based feature selection using a forward best-first search strategy. The features comprise information from the primary sequence, predicted secondary structure, predicted relative solvent accessibility, and position-specific scoring matrix. The proposed method, called DBPPred, uses Gaussian naïve Bayes as the underlying classifier since it outperformed five other classifiers, including decision tree, logistic regression, k-nearest neighbor, support vector machine with polynomial kernel, and support vector machine with radial basis function. As a result, the proposed DBPPred yields the highest average accuracy of 0.791 and average MCC of 0.583 according to five-fold cross validation with ten runs on the training benchmark dataset PDB594. Subsequently, blind tests on the independent dataset PDB186 were performed using the proposed model trained on the entire PDB594 dataset and five other existing methods (iDNA-Prot, DNA-Prot, DNAbinder, DNABIND and DBD-Threader); the proposed DBPPred yielded the highest accuracy of 0.769, MCC of 0.538, and AUC of 0.790. Independent tests performed by the proposed DBPPred on a large non-DNA-binding protein dataset and two RNA-binding protein datasets also showed improved or comparable quality when compared with the relevant prediction methods. Moreover, we observed that the majority of the selected features differ statistically significantly in their mean values between the DNA-binding and the non-DNA-binding proteins. All of the experimental results indicate that the proposed DBPPred can be an alternative predictor for large-scale determination of DNA-binding proteins. PMID:24475169
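
    A condensed sketch of the two-stage scheme: random-forest importance ranking followed by a greedy wrapper around Gaussian naive Bayes. The data are synthetic, and plain greedy forward search stands in for the paper's best-first search.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.naive_bayes import GaussianNB

    X, y = make_classification(n_samples=400, n_features=30,
                               n_informative=8, random_state=1)

    # Stage 1: rank features by random-forest importance.
    importances = RandomForestClassifier(random_state=1).fit(X, y) \
                                                        .feature_importances_
    ranked = np.argsort(importances)[::-1]

    # Stage 2: greedy forward wrapper scored by cross-validated accuracy.
    chosen, best = [], 0.0
    for f in ranked:
        acc = cross_val_score(GaussianNB(), X[:, chosen + [int(f)]], y,
                              cv=5).mean()
        if acc > best:
            best, chosen = acc, chosen + [int(f)]

    print(f"kept {len(chosen)} features, CV accuracy {best:.3f}")
    ```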

  5. High-Yield Method for Dispersing Simian Kidneys for Cell Cultures

    PubMed Central

    de Oca, H. Montes; Probst, P.; Grubbs, R.

    1971-01-01

    A technique for the dispersion of animal tissue cells is described. The proposed technique is based on the concomitant use of trypsin and disodium ethylenediaminetetraacetate (EDTA). The use of the two dispersing agents (trypsin and disodium EDTA) markedly enhances cell yield as compared with standard cell dispersion methods. Moreover, a significant reduction in the time required for complete tissue dispersal, a very low number of nonviable cells, less cell clumping, and more uniform monolayer formation upon cultivation compare favorably with the results usually obtained with the standard trypsinization technique. PMID:4993235

  6. Determination of the Effective Detector Area of an Energy-Dispersive X-Ray Spectrometer at the Scanning Electron Microscope Using Experimental and Theoretical X-Ray Emission Yields.

    PubMed

    Procop, Mathias; Hodoroaba, Vasile-Dan; Terborg, Ralf; Berger, Dirk

    2016-12-01

    A method is proposed to determine the effective detector area of energy-dispersive X-ray spectrometers (EDS). Nowadays, detectors are available with nominal areas ranging from 10 up to 150 mm2. However, in most cases it remains unknown whether this nominal area coincides with the "net active sensor area" that should be given according to the related standard ISO 15632, or with any other area of the detector device. Moreover, the specific geometry of the EDS installation may further reduce a given detector area. The proposed method can be applied to most scanning electron microscope/EDS configurations. The basic idea consists of comparing the measured count rate with the count rate expected from the known X-ray yields of copper, titanium, or silicon. The method was successfully tested on three detectors with known effective area and applied further to seven spectrometers from different manufacturers. In most cases the method gave an effective area smaller than the area given in the detector description.

  7. A new strategy for strain improvement of Aurantiochytrium sp. based on heavy-ions mutagenesis and synergistic effects of cold stress and inhibitors of enoyl-ACP reductase.

    PubMed

    Cheng, Yu-Rong; Sun, Zhi-Jie; Cui, Gu-Zhen; Song, Xiaojin; Cui, Qiu

    2016-11-01

    Developing a strain with a high docosahexaenoic acid (DHA) yield and stable fermentation performance is an imperative way to improve DHA production using Aurantiochytrium sp., a microorganism with two fatty acid synthesis pathways: the polyketide synthase (PKS) pathway and the Type I fatty acid synthase (FAS) pathway. This study investigated the growth and metabolic response of Aurantiochytrium sp. CGMCC 6208 to two inhibitors of the enoyl-ACP reductase of the Type II FAS pathway (isoniazid and triclosan), and proposed a method for screening high-DHA-yield Aurantiochytrium sp. strains using heavy-ion mutagenesis and pre-selection by the synergistic use of cold stress (4 °C) and FAS inhibitors (triclosan and isoniazid). The results showed that (1) isoniazid and triclosan have positive effects on improving the DHA level of cells; (2) mutants from an irradiation dosage of 120 Gy yielded more DHA than cells from the 40 Gy and 80 Gy treatments and the wild type; (3) the DHA contents of mutants pre-selected by the enoyl-ACP reductase inhibitors at 4 °C were significantly higher than that of the wild type; and (4) compared to the wild type, the DHA productivity and yield of a mutant (T-99) obtained from Aurantiochytrium sp. CGMCC 6208 by the proposed method increased by 50%, from 0.18 to 0.27 g/(L·h), and by 30%, from 21 to 27 g/L, respectively. In conclusion, this study developed a feasible method to screen Aurantiochytrium sp. with high DHA yield by a combination of heavy-ion mutagenesis and mutant pre-selection using FAS inhibitors and cold stress. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. A sediment graph model based on SCS-CN method

    NASA Astrophysics Data System (ADS)

    Singh, P. K.; Bhunya, P. K.; Mishra, S. K.; Chaube, U. C.

    2008-01-01

    This paper proposes new conceptual sediment graph models based on the coupling of popular and extensively used methods, viz., the Nash-model-based instantaneous unit sediment graph (IUSG), the soil conservation service curve number (SCS-CN) method, and the power law. These models vary in their complexity, and this paper tests their performance using data from the Nagwan watershed (area = 92.46 km2) (India). The sensitivity of total sediment yield and peak sediment flow rate computations to model parameterisation is analysed. The exponent of the power law, β, is more sensitive than the other model parameters. The models are found to have substantial potential for computing sediment graphs (temporal sediment flow rate distributions) as well as total sediment yield.
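
    For reference, the SCS-CN runoff component underlying these models computes direct runoff Q (mm) from rainfall P (mm) as:

    $$ Q = \frac{(P - I_a)^2}{P - I_a + S} \quad (P > I_a), \qquad I_a = 0.2\,S, \qquad S = \frac{25400}{\mathrm{CN}} - 254 $$

    where I_a is the initial abstraction, S the potential maximum retention (mm), and CN the curve number.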

  9. Electron-stimulated desorption study of hydrogen-exposed aluminum films

    NASA Technical Reports Server (NTRS)

    Park, CH.; Bujor, M.; Poppa, H.

    1984-01-01

    H2 adsorption on evaporated clean and H2-exposed aluminum films is investigated using the electron-stimulated desorption (ESD) method. A strong H(+) ESD signal is observed on a freshly evaporated aluminum surface that is clean according to previously proposed cleanliness criteria. An increased H(+) yield on H2 exposure is also observed. However, the increase in the rate of H(+) emission could be directly correlated with small increases in the H2O partial pressure during H2 exposure. It is proposed that the oxidation of aluminum by water vapor and the subsequent adsorption of H2 or water is the primary process behind the enhanced H(+) yield during H2 exposure.

  10. A new constitutive model for simulation of softening, plateau, and densification phenomena for trabecular bone under compression.

    PubMed

    Lee, Chi-Seung; Lee, Jae-Myung; Youn, BuHyun; Kim, Hyung-Sik; Shin, Jong Ki; Goh, Tae Sik; Lee, Jung Sub

    2017-01-01

    A new type of constitutive model and its computational implementation procedure for the simulation of trabecular bone are proposed in the present study. A yield-surface-independent Frank-Brockman elasto-viscoplastic model is introduced to express nonlinear material behavior such as softening beyond the yield point, plateau, and densification under compressive loads. In particular, hardening- and softening-dominant material functions are introduced and adopted in the plastic multiplier to describe each nonlinear material behavior separately. In addition, the elasto-viscoplastic model is transformed into an implicit discrete model and programmed as a user-defined material subroutine in a commercial finite element analysis code. A consistent tangent modulus method is proposed to improve computational convergence and to save computational time during finite element analysis. Using the developed material library, the nonlinear stress-strain relationship is analyzed qualitatively and quantitatively, and the simulation results are compared with the results of compression tests on trabecular bone to validate the proposed constitutive model, computational method, and material library. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Hybrid charge division multiplexing method for silicon photomultiplier based PET detectors

    NASA Astrophysics Data System (ADS)

    Park, Haewook; Ko, Guen Bae; Lee, Jae Sung

    2017-06-01

    Silicon photomultipliers (SiPMs) are widely utilized in various positron emission tomography (PET) detectors and systems. However, individually recording SiPM output signals is still challenging owing to the high granularity of the SiPM; thus, charge division multiplexing is commonly used in PET detectors. The resistive charge division method is well established for reducing the number of output channels in conventional multi-channel photosensors, but it degrades the timing performance of SiPM-based PET detectors by yielding a large resistor-capacitor (RC) constant. The capacitive charge division method, on the other hand, yields a small RC constant and provides a faster timing response than the resistive method, but it suffers from an output signal undershoot. Therefore, in this study, we propose a hybrid charge division method that can be implemented by cascading the parallel combination of a resistor and a capacitor throughout the multiplexing network. To compare the performance of the proposed method with the conventional methods, a 16-channel Hamamatsu SiPM (S11064-050P) was coupled with a 4 × 4 LGSO crystal block (3 × 3 × 20 mm3) and a 9 × 9 LYSO crystal block (1.2 × 1.2 × 10 mm3). In addition, we tested a time-over-threshold (TOT) readout using the digitized position signals to further demonstrate the feasibility of time-based readout of the multiplexed signals based on the proposed method. The results indicated that the proposed method exhibited good energy and timing performance, inheriting only the advantages of the conventional resistive and capacitive methods. Moreover, the proposed method showed excellent pulse shape uniformity that does not depend on the position of the interacted crystal. Accordingly, we conclude that the hybrid charge division method is useful for effectively reducing the number of output channels of a SiPM array.

  12. Mass and Energy Balances of Dry Thermophilic Anaerobic Digestion Treating Swine Manure Mixed with Rice Straw.

    PubMed

    Zhou, Sheng; Zhang, Jining; Zou, Guoyan; Riya, Shohei; Hosomi, Masaaki

    2015-01-01

    To assess the feasibility of swine manure treatment by a proposed Dry Thermophilic Anaerobic Digestion (DT-AD) system, we evaluated the methane yield of swine manure treated by the DT-AD method with rice straw under different C/N ratios and solid retention times (SRTs), and calculated the mass and energy balances when the DT-AD system is used for swine manure treatment on a model farm with 1000 pigs and the digested residue is used for forage rice production. A traditional swine manure treatment, the Oxidation Ditch system, was used as the control. The results suggest that the methane yield of the proposed DT-AD system increased with a higher C/N ratio and shorter SRT. Correspondingly, for the DT-AD system running with an SRT of 80 days, the net energy yields for all treatments were negative, due to low biogas production and high heat loss from the digestion tank. However, the biogas yield increased when the SRT was shortened to 40 days, and the generated energy was greater than the consumed energy when the C/N ratio was 20 : 1 and 30 : 1. The results suggest that, with proper optimization of the C/N ratio and SRT, the proposed DT-AD system, followed by the use of digestate for forage rice production, can attain energy self-sufficiency.

  13. Mass and Energy Balances of Dry Thermophilic Anaerobic Digestion Treating Swine Manure Mixed with Rice Straw

    PubMed Central

    Zhou, Sheng; Zhang, Jining; Zou, Guoyan; Riya, Shohei; Hosomi, Masaaki

    2015-01-01

    To assess the feasibility of swine manure treatment by a proposed Dry Thermophilic Anaerobic Digestion (DT-AD) system, we evaluated the methane yield of swine manure treated by the DT-AD method with rice straw under different C/N ratios and solid retention times (SRTs), and calculated the mass and energy balances when the DT-AD system is used for swine manure treatment on a model farm with 1000 pigs and the digested residue is used for forage rice production. A traditional swine manure treatment, the Oxidation Ditch system, was used as the control. The results suggest that the methane yield of the proposed DT-AD system increased with a higher C/N ratio and shorter SRT. Correspondingly, for the DT-AD system running with an SRT of 80 days, the net energy yields for all treatments were negative, due to low biogas production and high heat loss from the digestion tank. However, the biogas yield increased when the SRT was shortened to 40 days, and the generated energy was greater than the consumed energy when the C/N ratio was 20 : 1 and 30 : 1. The results suggest that, with proper optimization of the C/N ratio and SRT, the proposed DT-AD system, followed by the use of digestate for forage rice production, can attain energy self-sufficiency. PMID:26609436

  14. A method to determine agro-climatic zones based on correlation and cluster analyses

    NASA Astrophysics Data System (ADS)

    Borges Valeriano, Taynara Tuany; de Souza Rolim, Glauco; de Oliveira Aparecido, Lucas Eduardo

    2017-12-01

    Determining agro-climatic zones (ACZs) is traditionally done by cross-comparing meteorological elements such as air temperature, rainfall, and water deficit (DEF). This study proposes a new method based on correlations between monthly DEFs during the crop cycle and annual yield, and performs a multivariate cluster analysis on these correlations. This 'correlation method' was applied to all municipalities in the state of São Paulo to determine ACZs for coffee plantations. A traditional ACZ method for coffee, based on temperature and DEF ranges (Evangelista et al.; RBEAA, 6:445-452, 2002), was applied to the study area for comparison against the correlation method. The traditional ACZ method classified the "Alta Mogiana," "Média Mogiana," and "Garça and Marília" regions, all traditional coffee regions, as merely suitable or even restricted for coffee plantations. These traditional regions have produced coffee since 1800 and should not be classified as restricted. The correlation method classified those areas as high-producing regions and expanded them into other areas. The proposed method is innovative because it is more detailed than common ACZ methods: each developmental crop phase is analyzed based on correlations between the monthly DEF and yield, giving crop physiology greater weight in relation to climate.

  15. Augmented Reality for Real-Time Detection and Interpretation of Colorimetric Signals Generated by Paper-Based Biosensors.

    PubMed

    Russell, Steven M; Doménech-Sánchez, Antonio; de la Rica, Roberto

    2017-06-23

    Colorimetric tests are becoming increasingly popular in point-of-need analyses due to the possibility of detecting the signal with the naked eye, which eliminates the bulky and costly instruments only available in laboratories. However, colorimetric tests may be interpreted incorrectly by nonspecialists due to disparities in color perception or a lack of training. Here we solve this issue with a method that not only detects colorimetric signals but also interprets them so that the test outcome is understandable to anyone. It consists of an augmented reality (AR) app that uses a camera to detect the colored signals generated by a nanoparticle-based immunoassay and yields a warning symbol or message when the concentration of analyte is higher than a certain threshold. The proposed method detected the model analyte mouse IgG with a limit of detection of 0.3 μg/mL, which was comparable to the limit of detection afforded by classical densitometry performed with a nonportable device. When adapted to the detection of E. coli, the app always yielded a "hazard" warning symbol when the concentration of E. coli in the sample was above the infective dose (10^6 cfu/mL or higher). The proposed method could help nonspecialists decide whether to drink from a potentially contaminated water source by yielding an unambiguous message that is easily understood by anyone. The widespread availability of smartphones, along with the inexpensive paper test that requires no enzymes to generate the signal, makes the proposed assay promising for analyses in remote locations and developing countries.

  16. Improving yield of PZT piezoelectric devices on glass substrates

    NASA Astrophysics Data System (ADS)

    Johnson-Wilke, Raegan L.; Wilke, Rudeger H. T.; Cotroneo, Vincenzo; Davis, William N.; Reid, Paul B.; Schwartz, Daniel A.; Trolier-McKinstry, Susan

    2012-10-01

    The proposed SMART-X telescope includes adaptive optics systems that use piezoelectric lead zirconate titanate (PZT) films deposited on flexible glass substrates. Several processing constraints are imposed by current designs: the crystallization temperature must be kept below 550 °C, the total stress in the film must be minimized, and the yield on 1 cm2 actuator elements should be > 90%. For this work, RF magnetron sputtering was used to deposit the films, since chemical solution deposition (CSD) led to warping of the large-area flexible glass substrates. A PZT 52/48 film that was deposited at 4 mTorr and annealed at 550 °C for 24 hours showed no detectable levels of either PbO or pyrochlore second phases. Large-area electrodes (1 cm × 1 cm) were deposited on 4" glass substrates. Initially, the yield of the devices was low; however, two methods were employed to increase the yield to nearly 100%. The first method was a more rigorous cleaning to improve the continuity of the Pt bottom electrode. The second method was to apply 3 V DC across the capacitor structure to burn out regions of defective PZT; this essentially removed conducting filaments in the PZT while leaving the bulk of the material undamaged. By combining these two methods, the yield on the large-area electrodes improved from < 10% to nearly 100%.

  17. Integrated model for predicting rice yield with climate change

    NASA Astrophysics Data System (ADS)

    Park, Jin-Ki; Das, Amrita; Park, Jong-Hwa

    2018-04-01

    Rice is the chief agricultural product and one of the primary food sources; for this reason, it is of pivotal importance for the worldwide economy and development. Forecasting yield is therefore vital in decision-support systems both for farmers and for planning and managing a country's economy. However, crop yield, which depends on the soil-bio-atmospheric system, is difficult to represent in statistical language. This paper describes a novel approach for predicting rice yield using an artificial neural network, spatial interpolation, remote sensing, and GIS methods. Herein, the variation in yield is attributed to climatic parameters and crop health, and the normalized difference vegetation index from MODIS is used as an indicator of plant health and growth. Due importance was given to scaling up the input parameters using spatial interpolation and GIS, and to minimising the sources of error in every step of the modelling. The low percentage error (2.91) and high correlation (0.76) signify the robust performance of the proposed model. This simple but effective approach is then used to estimate the influence of climate change on South Korean rice production: under the RCP8.5 scenario, the projected rise in temperature may increase rice yield throughout South Korea.

  18. Iterative deep convolutional encoder-decoder network for medical image segmentation.

    PubMed

    Jung Uk Kim; Hak Gu Kim; Yong Man Ro

    2017-07-01

    In this paper, we propose a novel medical image segmentation method using an iterative deep learning framework. We combine an iterative learning approach with an encoder-decoder network to improve segmentation results, which enables precise localization of the regions of interest (ROIs), including complex shapes or detailed textures of medical images, in an iterative manner. The proposed iterative deep convolutional encoder-decoder network consists of two main paths: a convolutional encoder path and a convolutional decoder path with iterative learning. Experimental results show that the proposed iterative deep learning framework yields excellent segmentation performance on various medical images. The effectiveness of the proposed method is demonstrated by comparison with other state-of-the-art medical image segmentation methods.
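
    An illustrative toy encoder-decoder with iterative refinement is sketched below in PyTorch. The architecture choices (channel counts, three refinement passes, sigmoid mask output) are assumptions for illustration; the paper's network is deeper and task-specific.

    ```python
    import torch
    import torch.nn as nn

    class TinyEncDec(nn.Module):
        """Minimal convolutional encoder-decoder for 1-channel images;
        forward() re-feeds its own output to mimic iterative refinement."""
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(
                nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU())
            self.dec = nn.Sequential(
                nn.ConvTranspose2d(16, 8, 2, stride=2), nn.ReLU(),
                nn.ConvTranspose2d(8, 1, 2, stride=2))

        def forward(self, x, n_iter=3):
            for _ in range(n_iter):
                x = torch.sigmoid(self.dec(self.enc(x)))  # refined mask
            return x

    net = TinyEncDec()
    mask = net(torch.randn(1, 1, 64, 64))  # (batch, channel, H, W)
    ```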

  19. Modeling Infrared Signal Reflections to Characterize Indoor Multipath Propagation

    PubMed Central

    De-La-Llana-Calvo, Álvaro; Lázaro-Galilea, José Luis; Gardel-Vicente, Alfredo; Rodríguez-Navarro, David; Bravo-Muñoz, Ignacio; Tsirigotis, Georgios; Iglesias-Miguel, Juan

    2017-01-01

    In this paper, we propose a model to characterize Infrared (IR) signal reflections on any kind of surface material, together with a simplified procedure to compute the model parameters. The model works within the framework of Local Positioning Systems (LPS) based on IR signals (IR-LPS) to evaluate the behavior of transmitted signal Multipaths (MP), which are the main cause of error in IR-LPS, and makes several contributions to mitigation methods. Current methods are based on physics, optics, geometry and empirical methods, but these do not meet our requirements because of the need to apply several different restrictions and employ complex tools. We propose a simplified model based on only two reflection components, together with a method for determining the model parameters based on 12 empirical measurements that are easily performed in the real environment where the IR-LPS is being applied. Our experimental results show that the model provides a comprehensive solution to the real behavior of IR MP, yielding small errors when comparing real and modeled data (the mean error ranges from 1% to 4% depending on the environment surface materials). Other state-of-the-art methods yielded mean errors ranging from 15% to 40% in test measurements. PMID:28406436

  20. Contrast improvement of continuous wave diffuse optical tomography reconstruction by hybrid approach using least square and genetic algorithm

    NASA Astrophysics Data System (ADS)

    Patra, Rusha; Dutta, Pranab K.

    2015-07-01

    Reconstruction of the absorption coefficient of tissue with good contrast is of key importance in functional diffuse optical imaging. A hybrid approach using model-based iterative image reconstruction and a genetic algorithm is proposed to enhance the contrast of the reconstructed image. The proposed method yields an observed contrast of 98.4%, a mean square error of 0.638×10^-3, and an object centroid error of 0.001 to 0.22 mm. Experimental validation of the proposed method is also provided with tissue-like phantoms, which shows a significant improvement in image quality and thus establishes the potential of the method for functional diffuse optical tomography reconstruction with a continuous wave setup. A case study of finger joint imaging illustrates the prospect of the proposed method in clinical diagnosis. The method can also be applied to concentration measurement in a region of interest in a turbid medium.

  1. Semiblind channel estimation for MIMO-OFDM systems

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Sheng; Song, Jyu-Han

    2012-12-01

    This article proposes a semiblind channel estimation method for multiple-input multiple-output orthogonal frequency-division multiplexing systems based on circular precoding. Relying on the precoding scheme at the transmitters, the autocorrelation matrix of the received data induces a structure relating the outer product of the channel frequency response matrix and precoding coefficients. This structure makes it possible to extract information about channel product matrices, which can be used to form a Hermitian matrix whose positive eigenvalues and corresponding eigenvectors yield the channel impulse response matrix. This article also tests the resistance of the precoding design to finite-sample estimation errors, and explores the effects of the precoding scheme on channel equalization by performing pairwise error probability analysis. The proposed method is immune to channel zero locations, and is reasonably robust to channel order overestimation. The proposed method is applicable to the scenarios in which the number of transmitters exceeds that of the receivers. Simulation results demonstrate the performance of the proposed method and compare it with some existing methods.

  2. Sparse reconstruction localization of multiple acoustic emissions in large diameter pipelines

    NASA Astrophysics Data System (ADS)

    Dubuc, Brennan; Ebrahimkhanlou, Arvin; Salamone, Salvatore

    2017-04-01

    A sparse reconstruction localization method is proposed, which is capable of localizing multiple acoustic emission events occurring closely in time. The events may be due to a number of sources, such as the growth of corrosion patches or cracks. Such acoustic emissions may yield localization failure if a triangulation method is used. The proposed method is implemented both theoretically and experimentally on large diameter thin-walled pipes. Experimental examples are presented, which demonstrate the failure of a triangulation method when multiple sources are present in this structure, while highlighting the capabilities of the proposed method. The examples are generated from experimental data of simulated acoustic emission events. The data correspond to helical guided ultrasonic waves generated in a 3 m long large diameter pipe by pencil lead breaks on its outer surface. Acoustic emission waveforms are recorded by six sparsely distributed low-profile piezoelectric transducers instrumented on the outer surface of the pipe. The same array of transducers is used for both the proposed and the triangulation method. It is demonstrated that the proposed method is able to localize multiple events occurring closely in time. Furthermore, the matching pursuit algorithm and the basis pursuit denoising approach are each evaluated as potential numerical tools in the proposed sparse reconstruction method.
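
    Of the two numerical tools named above, matching pursuit has a readily available implementation. The sketch below frames localization as sparse recovery over a grid of candidate source locations; the dictionary, grid size, and noise level are illustrative assumptions, not the authors' pipe model.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n_obs, n_locations = 120, 400                  # hypothetical measurement/grid sizes
D = rng.standard_normal((n_obs, n_locations))  # stand-in for a propagation dictionary
D /= np.linalg.norm(D, axis=0)                 # unit-norm columns

# two simultaneous acoustic emission events at grid cells 37 and 222
x_true = np.zeros(n_locations)
x_true[[37, 222]] = [1.0, 0.8]
y = D @ x_true + 0.01 * rng.standard_normal(n_obs)

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=2).fit(D, y)
print(np.nonzero(omp.coef_)[0])                # recovered source locations: [37 222]
```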

  3. Montelukast photodegradation: elucidation of Ф-order kinetics, determination of quantum yields and application to actinometry.

    PubMed

    Maafi, Mounir; Maafi, Wassila

    2014-08-25

    A recently developed Ф-order semi-empirical integrated rate-law for photoreversible AB(2Ф) reactions has been successfully applied to investigate Montelukast sodium (Monte) photodegradation kinetics in ethanol. The model equations also served to propose a new stepwise kinetic elucidation method valid for any AB(2Ф) system, which was applied to the determination of Monte's forward (Ф(λ(irr))(A-->B)) and reverse (Ф(λ(irr))(B-->A)) quantum yields at various irradiation wavelengths. It has been found that Ф(λ(irr))(A-->B) undergoes a 15-fold increase with wavelength between 220 and 360 nm, with the spectral section 250-360 nm representing the causative range for effective Monte photodegradation. The reverse quantum yield values were generally between 12 and 54% lower than those recorded for Ф(λ(irr))(A-->B), with the trans-isomer (Monte) converting almost completely to its cis-counterpart at high irradiation wavelengths. Furthermore, the potential use of Monte as an actinometer has been investigated, and an actinometric method was proposed. This study demonstrated the usefulness of Monte for monochromatic light actinometry over the dynamic range 258-380 nm. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. Security Analysis and Improvements to the PsychoPass Method

    PubMed Central

    2013-01-01

    Background In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. Objective To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method. Methods We used brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. Results The first issue with the PsychoPass method is that it requires password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. Conclusions The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength. PMID:23942458
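
    The "hundreds of years" claim is a keyspace argument: enlarging the per-key alphabet (SHIFT and ALT-GR combinations) raises the brute-force cost exponentially in password length. A back-of-envelope sketch; the alphabet sizes and guess rate are illustrative assumptions, not figures from the paper.

```python
def years_to_brute_force(alphabet_size, length, guesses_per_second=1e10):
    """Worst-case brute-force search time for a random password (illustrative)."""
    keyspace = alphabet_size ** length
    return keyspace / guesses_per_second / (3600 * 24 * 365)

# 10 keys over an enlarged alphabet vs 20 keys over plain lowercase (both assumed)
print(f"{years_to_brute_force(94, 10):.1f} years")    # ~170 years
print(f"{years_to_brute_force(26, 20):.2e} years")    # astronomically long
```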

  5. Incorporating conditional random fields and active learning to improve sentiment identification.

    PubMed

    Zhang, Kunpeng; Xie, Yusheng; Yang, Yi; Sun, Aaron; Liu, Hengchang; Choudhary, Alok

    2014-10-01

    Many machine learning, statistical, and computational linguistic methods have been developed to identify the sentiment of sentences in documents, yielding promising results. However, most state-of-the-art methods focus on individual sentences and ignore the impact of context on the meaning of a sentence. In this paper, we propose a method based on conditional random fields to incorporate sentence structure and context information, in addition to syntactic information, for improving sentiment identification. We also investigate how human interaction affects the accuracy of sentiment labeling using limited training data. We propose and evaluate two different active learning strategies for labeling sentiment data. Our experiments with the proposed approach demonstrate a 5%-15% improvement in accuracy on Amazon customer reviews compared to existing supervised learning and rule-based methods. Copyright © 2014 Elsevier Ltd. All rights reserved.

  6. Cancelable ECG biometrics using GLRT and performance improvement using guided filter with irreversible guide signal.

    PubMed

    Kim, Hanvit; Minh Phuong Nguyen; Se Young Chun

    2017-07-01

    Biometrics such as ECG provide a convenient and powerful security tool to verify or identify an individual. However, one important drawback of biometrics is that it is irrevocable; in other words, a biometric cannot practically be re-used once it is compromised. Cancelable biometrics has been investigated to overcome this drawback. In this paper, we propose a cancelable ECG biometric method by deriving a generalized likelihood ratio test (GLRT) detector from a composite hypothesis test in a randomly projected domain. Since it is common to observe performance degradation for cancelable biometrics, we also propose guided filtering (GF) with an irreversible guide signal, a non-invertibly transformed version of the ECG authentication template. We evaluated our proposed method using the ECG-ID database with 89 subjects. A conventional Euclidean detector with the original ECG template yielded 93.9% PD1 (detection probability at 1% FAR), while the Euclidean detector with 10% compressed ECG (1/10 of the original data size) yielded 90.8% PD1. Our proposed GLRT detector with 10% compressed ECG yielded 91.4%, which is better than the Euclidean detector with the same compressed ECG. GF with our proposed irreversible ECG template further improved the performance of our GLRT with 10% compressed ECG up to 94.3%, which is higher than the Euclidean detector with the original ECG. Lastly, we showed that our proposed cancelable ECG biometrics practically met cancelable biometrics criteria such as efficiency, re-usability, diversity and non-invertibility.

  7. Magnesium oxide prepared via metal-chitosan complexation method: Application as catalyst for transesterification of soybean oil and catalyst deactivation studies

    NASA Astrophysics Data System (ADS)

    Almerindo, Gizelle I.; Probst, Luiz F. D.; Campos, Carlos E. M.; de Almeida, Rusiene M.; Meneghetti, Simoni M. P.; Meneghetti, Mario R.; Clacens, Jean-Marc; Fajardo, Humberto V.

    2011-10-01

    A simple method to prepare magnesium oxide catalysts for biodiesel production by transesterification reaction of soybean oil with ethanol is proposed. The method was developed using a metal-chitosan complex. Compared to the commercial oxide, the proposed catalysts displayed higher surface area and basicity values, leading to higher yield in terms of fatty acid ethyl esters (biodiesel). The deactivation of the catalyst due to contact with CO2 and H2O present in the ambient air was verified. It was confirmed that the active catalytic site is a hydrogenocarbonate adsorption site.

  8. Security analysis and improvements to the PsychoPass method.

    PubMed

    Brumen, Bostjan; Heričko, Marjan; Rozman, Ivan; Hölbl, Marko

    2013-08-13

    In a recent paper, Pietro Cipresso et al proposed the PsychoPass method, a simple way to create strong passwords that are easy to remember. However, the method has some security issues that need to be addressed. To perform a security analysis on the PsychoPass method and outline the limitations of and possible improvements to the method, we used brute force analysis and dictionary attack analysis of the PsychoPass method to outline its weaknesses. The first issue with the PsychoPass method is that it requires password reproduction on the same keyboard layout as was used to generate the password. The second issue is a security weakness: although the produced password is 24 characters long, the password is still weak. We elaborate on the weakness and propose a solution that produces strong passwords. The proposed version first requires the use of the SHIFT and ALT-GR keys in combination with other keys, and second, the keys need to be 1-2 distances apart. The proposed improved PsychoPass method yields passwords that can be broken only in hundreds of years based on current computing power. The proposed PsychoPass method requires 10 keys, as opposed to 20 keys in the original method, for comparable password strength.

  9. Different protocols for cryobiopsy versus forceps biopsy in diagnosis of patients with endobronchial tumors.

    PubMed

    Jabari, Hamidreza; Sami, Ramin; Fakhri, Mohammad; Kiani, Arda

    2012-01-01

    Forceps biopsy is the standard procedure to obtain specimens from endobronchial lesions. New studies have proposed the flexible cryoprobe as an accepted alternative to this technique. Although the diagnostic use of cryobiopsy has been confirmed in a few studies, there is a paucity of data regarding an optimum protocol for this method, since one of the main considerations in cryobiopsy is the freezing time. To evaluate the diagnostic yield and safety of endobronchial biopsies using the flexible cryoprobe, different freezing times were assessed to propose an optimized protocol for this diagnostic modality. For each patient with a confirmed intrabronchial lesion, the diagnostic value of forceps biopsy, cryobiopsy at three seconds, cryobiopsy at five seconds, and the combined results of cryobiopsy at both timings were recorded. A total of 60 patients (39 males and 21 females; mean age 56.7 +/- 13.3) were included. Specimens obtained by cryobiopsy at five seconds were significantly larger than those of forceps biopsy and cryobiopsy at three seconds (p < 0.001). We showed that the diagnostic yields achieved by all three methods were not statistically different (p > 0.05). Simultaneous use of samples produced by both cryobiopsies significantly improved the diagnostic yield (p = 0.02). Statistical analysis showed no significant differences in bleeding frequency among the three sampling methods. This study confirmed the safety and feasibility of cryobiopsy. Additionally, combining sampling at two different cold-induction timings would significantly increase the sensitivity of this emerging technique.

  10. A supervoxel-based segmentation method for prostate MR images.

    PubMed

    Tian, Zhiqiang; Liu, Lizhi; Zhang, Zhenfeng; Xue, Jianru; Fei, Baowei

    2017-02-01

    Segmentation of the prostate on MR images has many applications in prostate cancer management. In this work, we propose a supervoxel-based segmentation method for prostate MR images. A supervoxel is a set of pixels that have similar intensities, locations, and textures in a 3D image volume. The prostate segmentation problem is considered as assigning a binary label to each supervoxel, which is either the prostate or background. A supervoxel-based energy function with data and smoothness terms is used to model the label. The data term estimates the likelihood of a supervoxel belonging to the prostate by using a supervoxel-based shape feature. The geometric relationship between two neighboring supervoxels is used to build the smoothness term. The 3D graph cut is used to minimize the energy function to get the labels of the supervoxels, which yields the prostate segmentation. A 3D active contour model is then used to get a smooth surface by using the output of the graph cut as an initialization. The performance of the proposed algorithm was evaluated on 30 in-house MR image data and PROMISE12 dataset. The mean Dice similarity coefficients are 87.2 ± 2.3% and 88.2 ± 2.8% for our 30 in-house MR volumes and the PROMISE12 dataset, respectively. The proposed segmentation method yields a satisfactory result for prostate MR images. The proposed supervoxel-based method can accurately segment prostate MR images and can have a variety of application in prostate cancer diagnosis and therapy. © 2016 American Association of Physicists in Medicine.
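
    The reported evaluation metric, the Dice similarity coefficient, compares a segmentation mask with ground truth. A minimal sketch (the synthetic masks are our own):

```python
import numpy as np

def dice_coefficient(seg, gt):
    """Dice similarity coefficient of two binary masks: 2|A∩B| / (|A|+|B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    denom = seg.sum() + gt.sum()
    return 2.0 * np.logical_and(seg, gt).sum() / denom if denom else 1.0

a = np.zeros((8, 8), bool); a[2:6, 2:6] = True   # predicted mask
b = np.zeros((8, 8), bool); b[3:7, 3:7] = True   # ground-truth mask
print(round(dice_coefficient(a, b), 4))          # 0.5625
```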

  11. Unified Least Squares Methods for the Evaluation of Diagnostic Tests With the Gold Standard

    PubMed Central

    Tang, Liansheng Larry; Yuan, Ao; Collins, John; Che, Xuan; Chan, Leighton

    2017-01-01

    The article proposes a unified least squares method to estimate the receiver operating characteristic (ROC) parameters for continuous and ordinal diagnostic tests, such as cancer biomarkers. The method is based on a linear model framework using the empirically estimated sensitivities and specificities as input “data.” It gives consistent estimates for regression and accuracy parameters when the underlying continuous test results are normally distributed after some monotonic transformation. The key difference between the proposed method and the method of Tang and Zhou lies in the response variable. The response variable in the latter is transformed empirical ROC curves at different thresholds. It takes on many values for continuous test results, but few values for ordinal test results. The limited number of values for the response variable makes it impractical for ordinal data. However, the response variable in the proposed method takes on many more distinct values, so the method yields valid estimates for ordinal data. Extensive simulation studies are conducted to investigate and compare the finite sample performance of the proposed method with an existing method, and the method is then used to analyze two real cancer diagnostic examples as an illustration. PMID:28469385
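
    The linear-model idea can be sketched under the binormal assumption stated above: after a probit transform, the empirically estimated operating points lie on a line, so the ROC parameters follow from ordinary least squares. This is an illustration of the framework with made-up operating points, not the authors' exact estimator.

```python
import numpy as np
from scipy.stats import norm

# empirically estimated (1 - specificity, sensitivity) pairs at several thresholds
fpr  = np.array([0.05, 0.10, 0.20, 0.40, 0.60])
sens = np.array([0.35, 0.55, 0.72, 0.88, 0.95])

# binormal model: Phi^-1(sens) = a + b * Phi^-1(fpr); fit a, b by least squares
X = np.column_stack([np.ones_like(fpr), norm.ppf(fpr)])
a, b = np.linalg.lstsq(X, norm.ppf(sens), rcond=None)[0]

auc = norm.cdf(a / np.sqrt(1 + b**2))   # area under the smooth binormal ROC
print(f"a={a:.3f}, b={b:.3f}, AUC={auc:.3f}")
```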

  12. Differential Item Functioning Detection Using the Multiple Indicators, Multiple Causes Method with a Pure Short Anchor

    ERIC Educational Resources Information Center

    Shih, Ching-Lin; Wang, Wen-Chung

    2009-01-01

    The multiple indicators, multiple causes (MIMIC) method with a pure short anchor was proposed to detect differential item functioning (DIF). A simulation study showed that the MIMIC method with an anchor of 1, 2, 4, or 10 DIF-free items yielded a well-controlled Type I error rate even when such tests contained as many as 40% DIF items. In general,…

  13. A proposed method for wind velocity measurement from space

    NASA Technical Reports Server (NTRS)

    Censor, D.; Levine, D. M.

    1980-01-01

    An investigation was made of the feasibility of making wind velocity measurements from space by monitoring the apparent change in the refractive index of the atmosphere induced by motion of the air. The physical principle is the same as that resulting in the phase changes measured in the Fizeau experiment. It is proposed that this phase change could be measured using a three cornered arrangement of satellite borne source and reflectors, around which two laser beams propagate in opposite directions. It is shown that even though the velocity of the satellites is much larger than the wind velocity, factors such as change in satellite position and Doppler shifts can be taken into account in a reasonable manner and the Fizeau phase measured. This phase measurement yields an average wind velocity along the ray path through the atmosphere. The method requires neither high accuracy for satellite position or velocity, nor precise knowledge of the refractive index or its gradient in the atmosphere. However, the method intrinsically yields wind velocity integrated along the ray path; hence to obtain higher spatial resolution, inversion techniques are required.
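
    The measured quantity is the Fizeau phase difference between the counter-propagating beams. Using the Fresnel drag coefficient (1 - 1/n^2), the standard result is dphi = 4*pi*L*v*(n^2 - 1) / (lambda*c); the path length, wavelength, and wind speed below are illustrative values, not mission parameters from the report.

```python
import math

def fizeau_phase_shift(L, v, n, wavelength):
    """Phase difference (rad) between counter-propagating beams through a
    medium of refractive index n moving at speed v along a path of length L."""
    c = 299_792_458.0
    return 4 * math.pi * L * v * (n**2 - 1) / (wavelength * c)

# illustrative: 100 km air path, 10 m/s wind, n_air ~ 1.0003, 633 nm laser
print(f"{fizeau_phase_shift(1e5, 10.0, 1.0003, 633e-9):.1f} rad")   # ~40 rad
```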

  14. Biodiesel production from wet municipal sludge: evaluation of in situ transesterification using xylene as a cosolvent.

    PubMed

    Choi, O K; Song, J S; Cha, D K; Lee, J W

    2014-08-01

    This study proposes a method to produce biodiesel from wet wastewater sludge. Xylene was used as an alternative cosolvent to hexane for transesterification in order to enhance the biodiesel yield from wet wastewater sludge. The water present in the sludge could be separated during transesterification by employing xylene, which has a higher boiling point than water. Xylene enhanced the biodiesel yield up to 8.12%, which was 2.5 times higher than with hexane, and comparable to the maximum biodiesel yield of 9.68% obtained from dried sludge. Xylene could reduce either the reaction time or the methanol consumption compared to hexane for a similar yield. The fatty acid methyl ester (FAME) content of the biodiesel increased approximately twofold by changing the cosolvent from hexane to xylene. The transesterification method using xylene as a cosolvent can be applied effectively and economically for biodiesel recovery from wet wastewater sludge without a drying process. Copyright © 2014 Elsevier Ltd. All rights reserved.

  15. Measurement of hydroxyl radical production in ultrasonic aqueous solutions by a novel chemiluminescence method.

    PubMed

    Hu, Yufei; Zhang, Zhujun; Yang, Chunyan

    2008-07-01

    Measurement methods for ultrasonic fields are important for reasons of safety. The investigation of an ultrasonic field can be performed by detecting the yield of hydroxyl radicals resulting from ultrasonic cavitation. In this paper, a novel method is introduced for detecting hydroxyl radicals by a chemiluminescence (CL) reaction of luminol-hydrogen peroxide (H2O2)-K5[Cu(HIO6)2] (DPC). The yield of hydroxyl radicals is calculated directly from the relative CL intensity according to the corresponding concentration of H2O2. This proposed CL method makes it possible to perform an in-line and real-time assay of hydroxyl radicals in an ultrasonic aqueous solution. With flow injection (FI) technology, this novel CL reaction is sensitive enough to detect ultra-trace amounts of H2O2, with a limit of detection (3sigma) of 4.1 x 10(-11) mol L(-1). The influences of ultrasonic output power and ultrasonic treatment time on the yield of hydroxyl radicals produced by an ultrasound generator were also studied. The results indicate that the amount of hydroxyl radicals increases with increasing ultrasonic output power (up to 15 W mL(-1)). There is a linear relationship between the time of ultrasonic treatment and the yield of H2O2. The ultrasonic field of an ultrasonic cleaning bath has been measured by calculating the yield of hydroxyl radicals.

  16. Validated spectrofluorimetric method for the determination of tamsulosin in spiked human urine, pure and pharmaceutical preparations.

    PubMed

    Karasakal, A; Ulu, S T

    2014-05-01

    A novel, sensitive and selective spectrofluorimetric method was developed for the determination of tamsulosin in spiked human urine and pharmaceutical preparations. The proposed method is based on the reaction of tamsulosin with 1-dimethylaminonaphthalene-5-sulfonyl chloride in carbonate buffer pH 10.5 to yield a highly fluorescent derivative. The described method was validated and the analytical parameters of linearity, limit of detection (LOD), limit of quantification (LOQ), accuracy, precision, recovery and robustness were evaluated. The proposed method showed a linear dependence of the fluorescence intensity on drug concentration over the range 1.22 × 10(-7) to 7.35 × 10(-6)  M. LOD and LOQ were calculated as 1.07 × 10(-7) and 3.23 × 10(-7)  M, respectively. The proposed method was successfully applied for the determination of tamsulosin in pharmaceutical preparations and the obtained results were in good agreement with those obtained using the reference method. Copyright © 2013 John Wiley & Sons, Ltd.
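
    The LOD and LOQ figures above follow from the calibration regression: LOD = 3.3*sigma/S and LOQ = 10*sigma/S in the common ICH convention (the abstract cites a 3-sigma criterion), where sigma is the residual standard deviation and S the slope. A sketch with hypothetical calibration data, not the paper's measurements:

```python
import numpy as np

# hypothetical calibration: fluorescence intensity vs concentration (M)
conc = np.array([1.2e-7, 5.0e-7, 1.0e-6, 3.0e-6, 7.0e-6])
intensity = np.array([10.5, 41.8, 83.1, 250.7, 585.9])

slope, intercept = np.polyfit(conc, intensity, 1)
sigma = (intensity - (slope * conc + intercept)).std(ddof=2)  # residual SD

lod = 3.3 * sigma / slope   # ICH factor; the abstract uses a 3-sigma criterion
loq = 10 * sigma / slope
print(f"LOD = {lod:.2e} M, LOQ = {loq:.2e} M")
```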

  17. Removal of caffeine from green tea by microwave-enhanced vacuum ice water extraction.

    PubMed

    Lou, Zaixiang; Er, Chaojuan; Li, Jing; Wang, Hongxin; Zhu, Song; Sun, Juntao

    2012-02-24

    In order to selectively remove caffeine from green tea, a microwave-enhanced vacuum ice water extraction (MVIE) method was proposed. The effects of the MVIE variables, including extraction time, microwave power, and solvent-to-solid ratio, on the removal yield of caffeine and the loss of total phenolics (TP) from green tea were investigated. The optimized conditions were as follows: the solvent (mL) to solid (g) ratio was 10:1, the microwave extraction time was 6 min, the microwave power was 350 W, and vacuum ice water extraction lasted 2.5 h. The removal yield of caffeine by MVIE was 87.6%, which was significantly higher than that by hot water extraction, indicating a significant improvement in removal efficiency. Moreover, the loss of TP from green tea in the proposed method was much lower than that in hot water extraction. After decaffeination by MVIE, the removal yield of TP was 36.2%, and the content of TP in green tea was still higher than 170 mg g(-1). Therefore, the proposed microwave-enhanced vacuum ice water extraction was selective and more efficient for the removal of caffeine. The main phenolic compounds of green tea were also determined, and the results indicated that the contents of several catechins were almost unchanged by MVIE. This study suggests that MVIE is a good new alternative for the removal of caffeine from green tea, with great potential for industrial application. Copyright © 2011 Elsevier B.V. All rights reserved.

  18. Residential roof condition assessment system using deep learning

    NASA Astrophysics Data System (ADS)

    Wang, Fan; Kerekes, John P.; Xu, Zhuoyi; Wang, Yandong

    2018-01-01

    The emergence of high resolution (HR) and ultra high resolution (UHR) airborne remote sensing imagery is enabling humans to move beyond traditional land cover analysis applications to the detailed characterization of surface objects. A residential roof condition assessment method using techniques from deep learning is presented. The proposed method operates on individual roofs and divides the task into two stages: (1) roof segmentation, followed by (2) condition classification of the segmented roof regions. As the first step in this process, a self-tuning method is proposed to segment the images into small homogeneous areas. The segmentation is initialized with simple linear iterative clustering followed by deep learned feature extraction and region merging, with the optimal result selected by an unsupervised index, Q. After the segmentation, a pretrained residual network is fine-tuned on the augmented roof segments using a proposed k-pixel extension technique for classification. The effectiveness of the proposed algorithm was demonstrated on both HR and UHR imagery collected by EagleView over different study sites. The proposed algorithm has yielded promising results and has outperformed traditional machine learning methods using hand-crafted features.
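
    The first stage above initializes segmentation with simple linear iterative clustering (SLIC). A minimal sketch using scikit-image (our library choice; the sample image and parameters are placeholders for an HR/UHR roof tile):

```python
from skimage.data import astronaut
from skimage.segmentation import slic, mark_boundaries

image = astronaut()          # stand-in for a roof image tile
# oversegment into ~400 homogeneous superpixels; compactness trades color vs space
segments = slic(image, n_segments=400, compactness=10, start_label=1)
print(segments.max(), "superpixels")
overlay = mark_boundaries(image, segments)   # visual check of the oversegmentation
```

    In the paper's pipeline these initial regions are then merged using deep-learned features, with the merge level selected by the unsupervised index Q.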

  19. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  20. Community Detection in Complex Networks via Clique Conductance.

    PubMed

    Lu, Zhenqi; Wahlström, Johan; Nehorai, Arye

    2018-04-13

    Network science plays a central role in understanding and modeling complex systems in many areas including physics, sociology, biology, computer science, economics, politics, and neuroscience. One of the most important features of networks is community structure, i.e., clustering of nodes that are locally densely interconnected. Communities reveal the hierarchical organization of nodes, and detecting communities is of great importance in the study of complex systems. Most existing community-detection methods consider low-order connection patterns at the level of individual links. But high-order connection patterns, at the level of small subnetworks, are generally not considered. In this paper, we develop a novel community-detection method based on cliques, i.e., local complete subnetworks. The proposed method overcomes the deficiencies of previous similar community-detection methods by considering the mathematical properties of cliques. We apply the proposed method to computer-generated graphs and real-world network datasets. When applied to networks with known community structure, the proposed method detects the structure with high fidelity and sensitivity. When applied to networks with no a priori information regarding community structure, the proposed method yields insightful results revealing the organization of these complex networks. We also show that the proposed method is guaranteed to detect near-optimal clusters in the bipartition case.

  1. A Unified Fisher's Ratio Learning Method for Spatial Filter Optimization.

    PubMed

    Li, Xinyang; Guan, Cuntai; Zhang, Haihong; Ang, Kai Keng

    To detect the mental task of interest, spatial filtering has been widely used to enhance the spatial resolution of electroencephalography (EEG). However, the effectiveness of spatial filtering is undermined due to the significant nonstationarity of EEG. Based on regularization, most of the conventional stationary spatial filter design methods address the nonstationarity at the cost of the interclass discrimination. Moreover, spatial filter optimization is inconsistent with feature extraction when EEG covariance matrices could not be jointly diagonalized due to the regularization. In this paper, we propose a novel framework for a spatial filter design. With Fisher's ratio in feature space directly used as the objective function, the spatial filter optimization is unified with feature extraction. Given its ratio form, the selection of the regularization parameter could be avoided. We evaluate the proposed method on a binary motor imagery data set of 16 subjects, who performed the calibration and test sessions on different days. The experimental results show that the proposed method yields improvement in classification performance for both single broadband and filter bank settings compared with conventional nonunified methods. We also provide a systematic attempt to compare different objective functions in modeling data nonstationarity with simulation studies.
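
    The unifying idea is that maximizing a Fisher-type ratio J(w) = (w'Sb w)/(w'Sw w) is a generalized eigenvalue problem, which also shows why a ratio objective needs no explicit regularization weight. A minimal sketch on synthetic two-class features; the scatter definitions and toy data are our assumptions, not the paper's full EEG framework.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
X1 = rng.standard_normal((100, 8)) + 0.8    # class 1: trials x features
X2 = rng.standard_normal((100, 8))          # class 2

m1, m2 = X1.mean(0), X2.mean(0)
Sw = np.cov(X1.T) + np.cov(X2.T)            # within-class scatter
Sb = np.outer(m1 - m2, m1 - m2)             # between-class scatter

# generalized eigenproblem Sb w = lambda Sw w; the top eigenvector maximizes J(w)
vals, vecs = eigh(Sb, Sw)
w = vecs[:, -1]
print(f"max Fisher ratio: {vals[-1]:.3f}")
```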

  2. Selective production of chemicals from biomass pyrolysis over metal chlorides supported on zeolite.

    PubMed

    Leng, Shuai; Wang, Xinde; Cai, Qiuxia; Ma, Fengyun; Liu, Yue'e; Wang, Jianguo

    2013-12-01

    Direct biomass conversion into chemicals remains a great challenge because of the complexity of the compounds; hence, this process has attracted less attention than conversion into fuel. In this study, we propose a simple one-step method for converting bagasse into furfural (FF) and acetic acid (AC). In this method, bagasse pyrolysis over ZnCl2/HZSM-5 achieved a high combined FF and AC yield (58.10%) and a 1.01 FF/AC ratio, but a very low yield of medium-boiling point components. However, bagasse pyrolysis using HZSM-5 alone or ZnCl2 alone still left large amounts of medium-boiling point or high-boiling point components. The synergistic effect of HZSM-5 and ZnCl2, which combines pyrolysis, zeolite cracking, and Lewis acid-selective catalysis, results in highly efficient bagasse conversion into FF and AC. Therefore, our study provides a novel, simple method for directly converting biomass into useful chemicals in high yield. Copyright © 2013 Elsevier Ltd. All rights reserved.

  3. Attitude-error compensation for airborne down-looking synthetic-aperture imaging lidar

    NASA Astrophysics Data System (ADS)

    Li, Guang-yuan; Sun, Jian-feng; Zhou, Yu; Lu, Zhi-yong; Zhang, Guo; Cai, Guang-yu; Liu, Li-ren

    2017-11-01

    Target-coordinate transformation in the lidar spot of the down-looking synthetic-aperture imaging lidar (SAIL) was performed, and the attitude errors were deduced in the process of imaging, according to the principle of the airborne down-looking SAIL. The influence of the attitude errors on the imaging quality was analyzed theoretically. A compensation method for the attitude errors was proposed and theoretically verified. An airborne down-looking SAIL experiment was performed and yielded the same results. A point-by-point error-compensation method for solving the azimuthal-direction space-dependent attitude errors was also proposed.

  4. An Information Transmission Measure for the Analysis of Effective Connectivity among Cortical Neurons

    PubMed Central

    Law, Andrew J.; Sharma, Gaurav; Schieber, Marc H.

    2014-01-01

    We present a methodology for detecting effective connections between simultaneously recorded neurons using an information transmission measure to identify the presence and direction of information flow from one neuron to another. Using simulated and experimentally-measured data, we evaluate the performance of our proposed method and compare it to the traditional transfer entropy approach. In simulations, our measure of information transmission outperforms transfer entropy in identifying the effective connectivity structure of a neuron ensemble. For experimentally recorded data, where ground truth is unavailable, the proposed method also yields a more plausible connectivity structure than transfer entropy. PMID:21096617
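
    The baseline in this comparison, transfer entropy, has a simple plug-in estimator for binary spike trains. The sketch below (lag 1, histogram probabilities, synthetic sequences) illustrates that baseline only, not the authors' proposed information transmission measure.

```python
import numpy as np

def transfer_entropy_binary(x, y):
    """Plug-in transfer entropy TE(X -> Y) at lag 1 for binary sequences:
    sum over states of p(y1,y0,x0) * log2[ p(y1|y0,x0) / p(y1|y0) ]."""
    y1, y0, x0 = y[1:], y[:-1], x[:-1]
    te = 0.0
    for a in (0, 1):                # y_{t+1}
        for b in (0, 1):            # y_t
            for c in (0, 1):        # x_t
                p_abc = np.mean((y1 == a) & (y0 == b) & (x0 == c))
                p_bc = np.mean((y0 == b) & (x0 == c))
                p_ab = np.mean((y1 == a) & (y0 == b))
                p_b = np.mean(y0 == b)
                if min(p_abc, p_bc, p_ab, p_b) > 0:
                    te += p_abc * np.log2((p_abc / p_bc) / (p_ab / p_b))
    return te

rng = np.random.default_rng(0)
x = rng.integers(0, 2, 5000)
y = np.roll(x, 1); y[0] = 0           # y copies x with a one-step delay
print(transfer_entropy_binary(x, y))  # ~1 bit: strong x -> y information flow
```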

  5. Connected word recognition using a cascaded neuro-computational model

    NASA Astrophysics Data System (ADS)

    Hoya, Tetsuya; van Leeuwen, Cees

    2016-10-01

    We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and speaker-independent digit string database show that the proposed method yields a recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.

  6. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    PubMed

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations of each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually, for the rectification of epipolar plane images, and quantitatively, through its effect on the resulting depth estimation. Our proposed approach yields a valid alternative to sparse techniques, while still executing in a reasonable time on a graphics card due to its highly parallelizable nature.

  7. Developmental psycholinguistics teaches us that we need multi-method, not single-method, approaches to the study of linguistic representation.

    PubMed

    Rowland, Caroline F; Monaghan, Padraic

    2017-01-01

    In developmental psycholinguistics, we have, for many years, been generating and testing theories that propose both descriptions of adult representations and explanations of how those representations develop. We have learnt that restricting ourselves to any one methodology yields only incomplete data about the nature of linguistic representations. We argue that we need a multi-method approach to the study of representation.

  8. Biologically plausible particulate air pollution mortality concentration-response functions.

    PubMed Central

    Roberts, Steven

    2004-01-01

    In this article I introduce an alternative method for estimating particulate air pollution mortality concentration-response functions. This method constrains the particulate air pollution mortality concentration-response function to be biologically plausible--that is, a non-decreasing function of the particulate air pollution concentration. Using time-series data from Cook County, Illinois, the proposed method yields more meaningful particulate air pollution mortality concentration-response function estimates with an increase in statistical accuracy. PMID:14998745
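
    In spirit, constraining the concentration-response function to be non-decreasing is a monotone (isotonic) regression problem. The sketch below uses scikit-learn's IsotonicRegression on synthetic data as an analogue; it is not the time-series estimator used in the article.

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(0)
pm10 = np.sort(rng.uniform(5, 80, 200))              # synthetic PM10 concentrations
log_rr = 0.004 * pm10 + rng.normal(0, 0.05, 200)     # noisy log relative risk

# constrain the fitted concentration-response curve to be non-decreasing
iso = IsotonicRegression(increasing=True).fit(pm10, log_rr)
curve = iso.predict(pm10)
print(bool(np.all(np.diff(curve) >= 0)))             # True: monotone by construction
```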

  9. A Technique of Fuzzy C-Mean in Multiple Linear Regression Model toward Paddy Yield

    NASA Astrophysics Data System (ADS)

    Syazwan Wahab, Nur; Saifullah Rusiman, Mohd; Mohamad, Mahathir; Amira Azmi, Nur; Che Him, Norziha; Ghazali Kamardan, M.; Ali, Maselan

    2018-04-01

    In this paper, we propose a hybrid model that combines a multiple linear regression model with the fuzzy c-means method. This research involved the relationship between 20 topsoil variates, analyzed prior to planting, and paddy yields at standard fertilizer rates. Data used were from the multi-location trials for rice carried out by MARDI at major paddy granaries in Peninsular Malaysia during the period from 2009 to 2012. Missing observations were estimated using mean estimation techniques. The data were analyzed using a multiple linear regression model alone and in combination with the fuzzy c-means method. Analysis of normality and multicollinearity indicates that the data are normally scattered without multicollinearity among independent variables. Fuzzy c-means analysis clusters the paddy yields into two clusters before the multiple linear regression model is applied. The comparison between the two methods indicates that the hybrid of the multiple linear regression model and the fuzzy c-means method outperforms the multiple linear regression model alone, with a lower mean square error.
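
    The hybrid pipeline above is fuzzy c-means clustering followed by a regression per cluster. A compact sketch with a minimal NumPy fuzzy c-means and scikit-learn regressions; the synthetic data follow the two-cluster setup described, but all numbers are illustrative.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fuzzy_c_means(X, c=2, m=2.0, n_iter=100, seed=0):
    """Minimal fuzzy c-means: returns memberships U (n x c) and cluster centers."""
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(X))       # random fuzzy memberships
    for _ in range(n_iter):
        w = U ** m
        centers = (w.T @ X) / w.sum(0)[:, None]      # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))                  # standard membership update
        U = inv / inv.sum(1, keepdims=True)
    return U, centers

rng = np.random.default_rng(1)
soil = rng.standard_normal((120, 3))                 # synthetic soil variates
yield_t = soil @ np.array([0.5, -0.2, 0.3]) + rng.normal(0, 0.1, 120)

U, _ = fuzzy_c_means(np.column_stack([soil, yield_t]), c=2)
labels = U.argmax(1)                                 # hard assignment per cluster
models = [LinearRegression().fit(soil[labels == k], yield_t[labels == k])
          for k in range(2)]
print([round(mod.score(soil[labels == k], yield_t[labels == k]), 3)
       for k, mod in enumerate(models)])
```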

  10. Multitask assessment of roads and vehicles network (MARVN)

    NASA Astrophysics Data System (ADS)

    Yang, Fang; Yi, Meng; Cai, Yiran; Blasch, Erik; Sullivan, Nichole; Sheaff, Carolyn; Chen, Genshe; Ling, Haibin

    2018-05-01

    Vehicle detection in wide area motion imagery (WAMI) has drawn increasing attention from the computer vision research community in recent decades. In this paper, we present a new architecture for vehicle detection on roads using a multi-task network, which is able to detect and segment vehicles, estimate their pose, and meanwhile yield road isolation for a given region. The multi-task network consists of three components: 1) vehicle detection, 2) vehicle and road segmentation, and 3) detection screening. The segmentation and detection components share the same backbone network and are trained jointly in an end-to-end way. Unlike background subtraction or frame differencing based methods, the proposed Multitask Assessment of Roads and Vehicles Network (MARVN) method can detect vehicles which are slowing down, stopped, and/or partially occluded in a single image. In addition, the method can eliminate detections located outside the road using the yielded road segmentation, so as to decrease the false positive rate. As few WAMI datasets have road mask and vehicle bounding box annotations, we extract 512 frames from the WPAFB 2009 dataset and carefully refine the original annotations; the resulting dataset is named WAMI512. We extensively compare the proposed method with state-of-the-art methods on the WAMI512 dataset, and demonstrate superior performance in terms of efficiency and accuracy.

  11. Relevant Feature Set Estimation with a Knock-out Strategy and Random Forests

    PubMed Central

    Ganz, Melanie; Greve, Douglas N.; Fischl, Bruce; Konukoglu, Ender

    2015-01-01

    Group analysis of neuroimaging data is a vital tool for identifying anatomical and functional variations related to diseases as well as normal biological processes. The analyses are often performed on a large number of highly correlated measurements using a relatively smaller number of samples. Despite the correlation structure, the most widely used approach is to analyze the data using univariate methods followed by post-hoc corrections that try to account for the data’s multivariate nature. Although widely used, this approach may fail to recover from the adverse effects of the initial analysis when local effects are not strong. Multivariate pattern analysis (MVPA) is a powerful alternative to the univariate approach for identifying relevant variations. Jointly analyzing all the measures, MVPA techniques can detect global effects even when individual local effects are too weak to detect with univariate analysis. Current approaches are successful in identifying variations that yield highly predictive and compact models. However, they suffer from lessened sensitivity and instabilities in identification of relevant variations. Furthermore, current methods’ user-defined parameters are often unintuitive and difficult to determine. In this article, we propose a novel MVPA method for group analysis of high-dimensional data that overcomes the drawbacks of the current techniques. Our approach explicitly aims to identify all relevant variations using a “knock-out” strategy and the Random Forest algorithm. In evaluations with synthetic datasets the proposed method achieved substantially higher sensitivity and accuracy than the state-of-the-art MVPA methods, and outperformed the univariate approach when the effect size is low. In experiments with real datasets the proposed method identified regions beyond the univariate approach, while other MVPA methods failed to replicate the univariate results. More importantly, in a reproducibility study with the well-known ADNI dataset the proposed method yielded higher stability and power than the univariate approach. PMID:26272728
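
    The knock-out idea can be sketched simply: train a random forest, record its strongest features, remove ("knock out") them, and repeat until predictive performance drops to chance; the union of recorded features approximates the full relevant set. The thresholds and batch size below are our own simplifications of the strategy, not the paper's procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def knock_out_relevant_features(X, y, chance=0.5, margin=0.05, batch=2, seed=0):
    """Iteratively knock out the top random-forest features until the
    remaining features carry no signal (simplified illustration)."""
    remaining = list(range(X.shape[1]))
    relevant = []
    while remaining:
        rf = RandomForestClassifier(n_estimators=200, random_state=seed)
        acc = cross_val_score(rf, X[:, remaining], y, cv=5).mean()
        if acc <= chance + margin:                   # no signal left
            break
        rf.fit(X[:, remaining], y)
        top = np.argsort(rf.feature_importances_)[-batch:]
        for i in sorted(top, reverse=True):          # knock out the strongest
            relevant.append(remaining.pop(i))
    return sorted(relevant)

rng = np.random.default_rng(0)
X = rng.standard_normal((300, 20))
y = (X[:, 3] + X[:, 7] + 0.5 * rng.standard_normal(300) > 0).astype(int)
print(knock_out_relevant_features(X, y))             # should include 3 and 7
```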

  12. Improving Multidimensional Wireless Sensor Network Lifetime Using Pearson Correlation and Fractal Clustering

    PubMed Central

    Almeida, Fernando R.; Brayner, Angelo; Rodrigues, Joel J. P. C.; Maia, Jose E. Bessa

    2017-01-01

    An efficient strategy for reducing message transmission in a wireless sensor network (WSN) is to group sensors by means of an abstraction denoted cluster. The key idea behind the cluster formation process is to identify a set of sensors whose sensed values present some data correlation. Nowadays, sensors are able to simultaneously sense multiple different physical phenomena, yielding in this way multidimensional data. This paper presents three methods for clustering sensors in WSNs whose sensors collect multidimensional data. The proposed approaches implement the concept of multidimensional behavioral clustering. To show the benefits introduced by the proposed methods, a prototype has been implemented and experiments have been carried out on real data. The results prove that the proposed methods decrease the amount of data flowing in the network and present low root-mean-square error (RMSE). PMID:28590450
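
    The cluster formation step, grouping sensors whose multidimensional streams are correlated, can be sketched with pairwise Pearson correlations and connected components. This simple thresholded grouping is a stand-in for the paper's fractal clustering, and the threshold is illustrative.

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import connected_components

rng = np.random.default_rng(0)
base = rng.standard_normal(500)
readings = np.vstack([
    base + 0.1 * rng.standard_normal(500),   # sensors 0-2 observe the same phenomenon
    base + 0.1 * rng.standard_normal(500),
    base + 0.1 * rng.standard_normal(500),
    rng.standard_normal((2, 500)),           # sensors 3-4 are independent
])

corr = np.corrcoef(readings)                 # pairwise Pearson correlation
adj = csr_matrix(np.abs(corr) > 0.9)         # illustrative similarity threshold
n_clusters, labels = connected_components(adj, directed=False)
print(n_clusters, labels)                    # 3 clusters: {0,1,2}, {3}, {4}
```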

  13. Improving Multidimensional Wireless Sensor Network Lifetime Using Pearson Correlation and Fractal Clustering.

    PubMed

    Almeida, Fernando R; Brayner, Angelo; Rodrigues, Joel J P C; Maia, Jose E Bessa

    2017-06-07

    An efficient strategy for reducing message transmission in a wireless sensor network (WSN) is to group sensors by means of an abstraction denoted cluster. The key idea behind the cluster formation process is to identify a set of sensors whose sensed values present some data correlation. Nowadays, sensors are able to simultaneously sense multiple different physical phenomena, yielding in this way multidimensional data. This paper presents three methods for clustering sensors in WSNs whose sensors collect multidimensional data. The proposed approaches implement the concept of multidimensional behavioral clustering. To show the benefits introduced by the proposed methods, a prototype has been implemented and experiments have been carried out on real data. The results prove that the proposed methods decrease the amount of data flowing in the network and present low root-mean-square error (RMSE).

  14. Blood vessels segmentation of hatching eggs based on fully convolutional networks

    NASA Astrophysics Data System (ADS)

    Geng, Lei; Qiu, Ling; Wu, Jun; Xiao, Zhitao

    2018-04-01

    FCNs, trained end-to-end, pixels-to-pixels, predict a result for each pixel and have been widely used for semantic segmentation. In order to realize blood vessel segmentation of hatching eggs, a method based on FCN is proposed in this paper. The training datasets are composed of patches extracted from very few images to augment the data. The network combines lower-layer features with deconvolution to enable precise segmentation. The proposed method avoids the problem that training deep networks requires large-scale samples. Experimental results on hatching eggs demonstrate that this method yields more accurate segmentation outputs than previous research. It provides a convenient reference for subsequent fertility detection.

  15. Metal-assisted SIMS and cluster ion bombardment for ion yield enhancement

    NASA Astrophysics Data System (ADS)

    Heile, A.; Lipinsky, D.; Wehbe, N.; Delcorte, A.; Bertrand, P.; Felten, A.; Houssiau, L.; Pireaux, J.-J.; De Mondt, R.; Van Vaeck, L.; Arlinghaus, H. F.

    2008-12-01

    In addition to structural information, a detailed knowledge of the local chemical environment proves to be of ever greater importance, for example for the development of new types of materials as well as for specific modifications of surfaces and interfaces in multiple fields of materials science or various biomedical and chemical applications. But the ongoing miniaturization and therefore reduction of the amount of material available for analysis constitute a challenge to the detection limits of analytical methods. In the case of time-of-flight secondary ion mass spectrometry (TOF-SIMS), several methods of secondary ion yield enhancement have been proposed. This paper focuses on the investigation of the effects of two of these methods, metal-assisted SIMS and polyatomic primary ion bombardment. For this purpose, thicker layers of polystyrene (PS), both pristine and metallized with different amounts of gold, were analyzed using monoatomic (Ar +, Ga +, Xe +, Bi +) and polyatomic (SF 5+, Bi 3+, C 60+) primary ions. It was found that polyatomic ions generally induce a significant increase of the secondary ion yield. On the other hand, with gold deposition, a yield enhancement can only be detected for monoatomic ion bombardment.

  16. Yield design in the presence of flow: a kinematic formulation with an approximate pressure field

    NASA Astrophysics Data System (ADS)

    Corfdir, Alain

    2006-03-01

    We attempt here to use the kinematic method of yield design in the case of a porous medium subjected to flow (with or without free surface), without looking for the exact solution of the pressure field. The method proposed here is based on the use of approximate pressure fields. In this paper, we show how, under different conditions concerning the yield criterion and the velocity field, the use of such approximate fields allows one to obtain a necessary condition for stability without having to find the real pressure field. To cite this article: A. Corfdir, C. R. Mecanique 334 (2006).

  17. A robust two-way semi-linear model for normalization of cDNA microarray data

    PubMed Central

    Wang, Deli; Huang, Jian; Xie, Hehuang; Manzella, Liliana; Soares, Marcelo Bento

    2005-01-01

    Background Normalization is a basic step in microarray data analysis. A proper normalization procedure ensures that the intensity ratios provide meaningful measures of relative expression values. Methods We propose a robust semiparametric method in a two-way semi-linear model (TW-SLM) for normalization of cDNA microarray data. This method does not make the usual assumptions underlying some of the existing methods. For example, it does not assume that: (i) the percentage of differentially expressed genes is small; or (ii) the numbers of up- and down-regulated genes are about the same, as required in the LOWESS normalization method. We conduct simulation studies to evaluate the proposed method and use a real data set from a specially designed microarray experiment to compare the performance of the proposed method with that of the LOWESS normalization approach. Results The simulation results show that the proposed method performs better than the LOWESS normalization method in terms of mean square errors for estimated gene effects. The results of analysis of the real data set also show that the proposed method yields more consistent results between the direct and the indirect comparisons and also can detect more differentially expressed genes than the LOWESS method. Conclusions Our simulation studies and the real data example indicate that the proposed robust TW-SLM method works at least as well as the LOWESS method and works better when the underlying assumptions for the LOWESS method are not satisfied. Therefore, it is a powerful alternative to the existing normalization methods. PMID:15663789

  18. Getty: producing oil from diatomite

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zublin, L.

    1981-10-01

    Getty Oil Company has developed unconventional oil production techniques which will yield oil from diatomaceous earth. They propose to mine oil-saturated diatomite using open-pit mining methods. Getty's diatomite deposit in the McKittrick field of California is unique because it is cocoa brown and saturated with crude oil. It is classified also as a tightly packed deposit, and oil cannot be extracted by conventional oil field methods.

  19. Determining the 40K radioactivity in rocks using x-ray spectrometry

    NASA Astrophysics Data System (ADS)

    Pilakouta, M.; Kallithrakas-Kontos, N.; Nikolaou, G.

    2017-09-01

    In this paper we propose an experimental method for the determination of potassium-40 (40K) radioactivity in commercial granite samples using x-ray fluorescence (XRF). The method correlates the total potassium concentration (yield) in samples deduced by XRF analysis with the radioactivity of the sample due to the 40K radionuclide. This method can be used in an undergraduate student laboratory. A brief theoretical background and description of the method, as well as some results and their interpretation, are presented.
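
    The correlation rests on a fixed physical conversion: natural potassium contains a known fraction of 40K, so a potassium concentration from XRF maps directly to activity. A sketch using standard constants (isotopic abundance 0.0117%, half-life 1.248 Gyr); the sample mass and K fraction are hypothetical.

```python
import math

AVOGADRO = 6.022e23
ABUNDANCE_K40 = 1.17e-4            # fraction of 40K in natural potassium
HALF_LIFE_S = 1.248e9 * 3.156e7    # 1.248 Gyr in seconds
MOLAR_MASS_K = 39.1                # g/mol, natural potassium

def k40_activity_bq(mass_k_grams):
    """Activity (Bq) of the 40K contained in a mass of natural potassium."""
    n_k40 = mass_k_grams / MOLAR_MASS_K * AVOGADRO * ABUNDANCE_K40
    return math.log(2) / HALF_LIFE_S * n_k40

# hypothetical XRF result: a 1 kg granite sample with 4% K by weight
print(f"{k40_activity_bq(1000 * 0.04):.0f} Bq")   # ~1270 Bq (about 31.7 Bq per g K)
```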

  20. An automated and robust image processing algorithm for glaucoma diagnosis from fundus images using novel blood vessel tracking and bend point detection.

    PubMed

    M, Soorya; Issac, Ashish; Dutta, Malay Kishore

    2018-02-01

    Glaucoma is an ocular disease which can cause irreversible blindness. The disease is currently identified using specialized equipment operated manually by optometrists. The proposed work aims to provide an efficient imaging solution which can help automate the process of glaucoma diagnosis using computer vision techniques on digital fundus images. The proposed method segments the optic disc using a geometrical-feature-based strategic framework which improves detection accuracy and makes the algorithm invariant to illumination and noise. Novel methods based on corner thresholding and point-contour joining are proposed to construct smooth contours of the optic disc. Following the clinical approach used by ophthalmologists, the proposed algorithm tracks blood vessels inside the disc region, identifies the points at which vessels first bend from the optic disc boundary, and connects them to obtain the contours of the optic cup. The proposed method has been compared with ground truth marked by medical experts, and the similarity parameters used to determine its performance have yielded a high similarity of segmentation. The proposed method has achieved a macro-averaged F-score of 0.9485 and accuracy of 97.01% in correctly classifying fundus images. The proposed method is clinically significant, can be used for glaucoma screening over a large population, and works in real time. Copyright © 2017 Elsevier B.V. All rights reserved.
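
    Once the optic disc and cup contours are available, the standard downstream screening measure in clinical practice is the cup-to-disc ratio (CDR). The abstract does not name it explicitly, so the following area-based sketch with synthetic masks is an illustration only.

```python
import numpy as np

def cup_to_disc_ratio(disc_mask, cup_mask):
    """Area-based cup-to-disc ratio from binary segmentation masks;
    larger values are associated with glaucomatous damage."""
    return cup_mask.sum() / max(disc_mask.sum(), 1)

yy, xx = np.ogrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 25 ** 2   # synthetic disc mask
cup = (yy - 32) ** 2 + (xx - 32) ** 2 <= 14 ** 2    # synthetic cup mask
print(round(cup_to_disc_ratio(disc, cup), 2))       # ~0.31
```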

  1. Static-dynamic hybrid communication scheduling and control co-design for networked control systems.

    PubMed

    Wen, Shixi; Guo, Ge

    2017-11-01

    In this paper, a static-dynamic hybrid communication scheduling and control co-design is proposed for networked control systems (NCSs) to address the capacity limitation of the wireless communication network. Analytical most regular binary sequences (MRBSs) are used as the communication scheduling function for the NCSs. When communication conflicts arise in the binary sequences (MRBSs), a dynamic scheduling strategy is proposed to reallocate the medium access status for each plant online. Under such a static-dynamic hybrid scheduling policy, plants in NCSs are described as non-uniformly sampled control systems, whose controllers have a group of gains and switch according to the sampling interval yielded by the binary sequence. A useful communication scheduling and control co-design framework is proposed for the NCSs to simultaneously decide the controller gains and the parameters used to generate the communication sequences (MRBSs). A numerical example and a realistic example are given to demonstrate the effectiveness of the proposed co-design method. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  2. Imputation of missing data in time series for air pollutants

    NASA Astrophysics Data System (ADS)

    Junger, W. L.; Ponce de Leon, A.

    2015-02-01

    Missing data are a major concern in epidemiological studies of the health effects of environmental air pollutants. This article presents an imputation-based method that is suitable for multivariate time series data, which uses the EM algorithm under the assumption of a normal distribution. Different approaches are considered for filtering the temporal component. A simulation study was performed to assess the validity and performance of the proposed method in comparison with some frequently used methods. Simulations showed that when the amount of missing data was as low as 5%, the complete data analysis yielded satisfactory results regardless of the generating mechanism of the missing data, whereas the validity began to degenerate when the proportion of missing values exceeded 10%. The proposed imputation method exhibited good accuracy and precision in different settings with respect to the patterns of missing observations. Most of the imputations yielded valid results, even under missing not at random. The methods proposed in this study are implemented as a package called mtsdi for the statistical software system R.
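
    The E-step/M-step cycle under the normality assumption is short to write down: fill each missing entry with its conditional mean given the observed entries, then re-estimate the mean and covariance. A minimal sketch (no temporal filtering, and the conditional-covariance correction in the M-step is omitted for brevity), so this is a simplified analogue of the mtsdi approach, not its implementation.

```python
import numpy as np

def em_impute(X, n_iter=50):
    """Simplified EM imputation for multivariate normal data with NaNs."""
    X = X.copy()
    miss = np.isnan(X)
    X[miss] = np.take(np.nanmean(X, axis=0), np.where(miss)[1])  # mean init
    for _ in range(n_iter):
        mu = X.mean(0)
        cov = np.cov(X.T) + 1e-6 * np.eye(X.shape[1])            # ridge for stability
        for i in np.where(miss.any(1))[0]:
            m, o = miss[i], ~miss[i]
            # conditional mean of the missing block given the observed block
            X[i, m] = mu[m] + cov[np.ix_(m, o)] @ np.linalg.solve(
                cov[np.ix_(o, o)], X[i, o] - mu[o])
    return X

rng = np.random.default_rng(0)
data = rng.multivariate_normal([0, 0, 0],
                               [[1, .8, .3], [.8, 1, .4], [.3, .4, 1]], 500)
holes = data.copy()
holes[rng.random(holes.shape) < 0.05] = np.nan                   # 5% missing
print(np.abs(em_impute(holes) - data).mean())                    # small residual error
```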

  3. Designing scalable product families by the radial basis function-high-dimensional model representation metamodelling technique

    NASA Astrophysics Data System (ADS)

    Pirmoradi, Zhila; Haji Hajikolaei, Kambiz; Wang, G. Gary

    2015-10-01

    Product family design is cost-efficient for achieving the best trade-off between commonalization and diversification. However, for computationally intensive design functions which are viewed as black boxes, the family design would be challenging. A two-stage platform configuration method with generalized commonality is proposed for a scale-based family with unknown platform configuration. Unconventional sensitivity analysis and information on variation in the individual variants' optimal design are used for platform configuration design. Metamodelling is employed to provide the sensitivity and variable correlation information, leading to significant savings in function calls. A family of universal electric motors is designed for product performance and the efficiency of this method is studied. The impact of the employed parameters is also analysed. Then, the proposed method is modified for obtaining higher commonality. The proposed method is shown to yield design solutions with better objective function values, allowable performance loss and higher commonality than the previously developed methods in the literature.

  4. Bootstrap-based methods for estimating standard errors in Cox's regression analyses of clustered event times.

    PubMed

    Xiao, Yongling; Abrahamowicz, Michal

    2010-03-30

    We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, valid type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid the serious variance under-estimation of conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of clustered event times.
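
    The first of the two resampling schemes is easy to sketch: resample whole clusters with replacement, refit the Cox model, and take the standard deviation of the coefficients across replicates. The sketch below assumes the lifelines package and a data frame with "time" and "event" columns; both are our choices, as the abstract is software-agnostic.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

def cluster_bootstrap_se(df, cluster_col, n_boot=200, seed=0):
    """Cluster-bootstrap standard errors for Cox regression coefficients:
    resample whole clusters with replacement, refit, take the SD."""
    rng = np.random.default_rng(seed)
    clusters = df[cluster_col].unique()
    coefs = []
    for _ in range(n_boot):
        picked = rng.choice(clusters, size=len(clusters), replace=True)
        boot = pd.concat([df[df[cluster_col] == c] for c in picked],
                         ignore_index=True)
        fit = CoxPHFitter().fit(boot.drop(columns=cluster_col),
                                duration_col="time", event_col="event")
        coefs.append(fit.params_.to_numpy())
    return np.std(coefs, axis=0, ddof=1)
```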

  5. Using Photo-Interviewing as Tool for Research and Evaluation.

    ERIC Educational Resources Information Center

    Dempsey, John V.; Tucker, Susan A.

    Arguing that photo-interviewing yields richer data than that usually obtained from verbal interviewing procedures alone, it is proposed that this method of data collection be added to "standard" methodologies in instructional development research and evaluation. The process, as described in this paper, consists of using photographs of…

  6. Demonstrating Cost-Effective Marker Assisted Selection for Biomass Yield in Red Clover (Trifolium pratense L.) – Part 1: Paternity Testing

    USDA-ARS?s Scientific Manuscript database

    Many methods have been proposed to incorporate molecular markers into breeding programs. Presented is a cost effective marker assisted selection (MAS) methodology that utilizes individual plant phenotypes, seed production-based knowledge of maternity, and molecular marker-determined paternity. Proge...

  7. Foliar application of plant growth-promoting bacteria and humic acid increase maize yields

    USDA-ARS?s Scientific Manuscript database

    Plant growth promoter bacteria (PGPB) can be used to reduce fertilizer inputs to crops. Seed inoculation is the main method of PGPB application, but competition with rhizosphere microorganisms reduces their effectiveness. Here we propose a new biotechnological tool for plant stimulation using endoph...

  8. Modeling and Calibration of a Novel One-Mirror Galvanometric Laser Scanner

    PubMed Central

    Yu, Chengyi; Chen, Xiaobo; Xi, Juntong

    2017-01-01

    A laser stripe sensor has limited application when a point cloud of geometric samples on the surface of the object needs to be collected, so a galvanometric laser scanner is designed by using a one-mirror galvanometer element as its mechanical device to drive the laser stripe to sweep along the object. A novel mathematical model is derived for the proposed galvanometer laser scanner without any position assumptions and then a model-driven calibration procedure is proposed. Compared with available model-driven approaches, the influence of machining and assembly errors is considered in the proposed model. Meanwhile, a plane-constraint-based approach is proposed to extract a large number of calibration points effectively and accurately to calibrate the galvanometric laser scanner. Repeatability and accuracy of the galvanometric laser scanner are evaluated on the automobile production line to verify the efficiency and accuracy of the proposed calibration method. Experimental results show that the proposed calibration approach yields similar measurement performance compared with a look-up table calibration method. PMID:28098844

  9. Soils Activity Mobility Study: Methodology and Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    2014-09-29

    This report presents a three-level approach for estimation of sediment transport to provide an assessment of potential erosion risk for sites at the Nevada National Security Site (NNSS) that are posted for radiological purposes and where migration is suspected or known to occur due to storm runoff. Based on the assessed risk, the appropriate level of effort can be determined for analysis of radiological surveys, field experiments to quantify erosion and transport rates, and long-term monitoring. The method is demonstrated at contaminated sites, including Plutonium Valley, Shasta, Smoky, and T-1. The Pacific Southwest Interagency Committee (PSIAC) procedure is selected as the Level 1 analysis tool. The PSIAC method provides an estimation of the total annual sediment yield based on factors derived from the climatic and physical characteristics of a watershed. If the results indicate low risk, then further analysis is not warranted. If the Level 1 analysis indicates high risk or is deemed uncertain, a Level 2 analysis using the Modified Universal Soil Loss Equation (MUSLE) is proposed. In addition, if a sediment yield for a storm event rather than an annual sediment yield is needed, then the proposed Level 2 analysis should be performed. MUSLE only provides sheet and rill erosion estimates. The U.S. Army Corps of Engineers Hydrologic Engineering Center-Hydrologic Modeling System (HEC-HMS) provides storm peak runoff rate and storm volumes, the inputs necessary for MUSLE. Channel Sediment Transport (CHAN-SED) I and II models are proposed for estimating sediment deposition or erosion in a channel reach from a storm event. These models require storm hydrograph associated sediment concentration and bed load particle size distribution data. When the Level 2 analysis indicates high risk for sediment yield and associated contaminant migration or when there is high uncertainty in the Level 2 results, the sites can be further evaluated with a Level 3 analysis using more complex and labor- and data-intensive methods. For the watersheds analyzed in this report using the Level 1 PSIAC method, the risk of erosion is low. The field reconnaissance surveys of these watersheds confirm the conclusion that the sediment yield of undisturbed areas at the NNSS would be low. The climate, geology, soils, ground cover, land use, and runoff potential are similar among these watersheds. There are no well-defined ephemeral channels except at the Smoky and Plutonium Valley sites. Topography seems to have the strongest influence on sediment yields, as sediment yields are higher on the steeper hill slopes. Lack of measured sediment yield data at the NNSS does not allow for a direct evaluation of the yield estimates by the PSIAC method. Level 2 MUSLE estimates in all the analyzed watersheds except Shasta are a small percentage of the estimates from PSIAC because MUSLE is not inclusive of channel erosion. This indicates that channel erosion dominates the total sediment yield in these watersheds. Annual sediment yields for these watersheds are estimated using the CHAN-SEDI and CHAN-SEDII channel sediment transport models. Both transport models give similar results and exceed the estimates obtained from PSIAC and MUSLE. It is recommended that the total watershed sediment yield of watersheds at the NNSS with flow channels be obtained by adding the washload estimate (rill and inter-rill erosion) from MUSLE to that obtained from channel transport models (bed load and suspended sediment).
PSIAC will give comparable results if factor scores for channel erosion are revised towards the high erosion level. Application of the Level 3 process-based models to estimate sediment yields at the NNSS cannot be recommended at this time. Increased model complexity alone will not improve the certainty of the sediment yield estimates. Models must be calibrated against measured data before model results are accepted as certain. Because no measurements of sediment yields at the NNSS are available, model validation cannot be performed. This is also true for the models used in the Level 2 analyses presented in this study. The need to calibrate MUSLE to local conditions has been discussed. Likewise, the transport equations of CHAN-SEDI and CHAN-SEDII need to be calibrated against local data to assess their applicability under semi-arid conditions and for the ephemeral channels at the NNSS. Before these validation and calibration exercises can be undertaken, a long-term measured sediment yield data set must be developed. The importance of developing long-term measured sediment yield data cannot be overemphasized. Long-term monitoring is essential for accurate characterization of watershed processes. It is recommended that a long-term monitoring program be set up to measure watershed erosion rates and channel sediment transport rates.
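
    Since the report leans on MUSLE for storm-event estimates, a minimal sketch of one commonly cited metric form of the equation (Williams, 1975) may help; the coefficient and units should be checked against the specific implementation used:

      def musle_sediment_yield(runoff_volume_m3, peak_flow_m3s, K, LS, C, P):
          """Storm sediment yield (metric tons) from the MUSLE relationship.

          Y = 11.8 * (Q * qp)**0.56 * K * LS * C * P, with runoff volume Q
          in m3, peak flow qp in m3/s, and the usual USLE erodibility (K),
          slope length-steepness (LS), cover (C) and practice (P) factors.
          """
          return 11.8 * (runoff_volume_m3 * peak_flow_m3s) ** 0.56 * K * LS * C * P

      # Hypothetical storm: 5000 m3 of runoff peaking at 2 m3/s
      print(musle_sediment_yield(5000.0, 2.0, K=0.3, LS=1.2, C=0.2, P=1.0))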

  10. Nakagami-based total variation method for speckle reduction in thyroid ultrasound images.

    PubMed

    Koundal, Deepika; Gupta, Savita; Singh, Sukhwinder

    2016-02-01

    A good statistical model is necessary for the reduction of speckle noise. The Nakagami model is more general than the Rayleigh distribution for statistical modeling of speckle in ultrasound images. In this article, a Nakagami-based noise removal method is presented to enhance thyroid ultrasound images and to improve clinical diagnosis. The statistics of the log-compressed image are derived from the Nakagami distribution following a maximum a posteriori estimation framework. The minimization problem is solved by optimizing an augmented Lagrangian formulation with Chambolle's projection method. The proposed method is evaluated on both artificial speckle-simulated and real ultrasound images. The experimental findings reveal the superiority of the proposed method both quantitatively and qualitatively in comparison with other speckle reduction methods reported in the literature. The proposed method yields an average signal-to-noise ratio gain of more than 2.16 dB over the non-convex regularizer-based speckle noise removal method, 3.83 dB over the Aubert-Aujol model, 1.71 dB over the Shi-Osher model and 3.21 dB over the Rudin-Lions-Osher model on speckle-simulated synthetic images. Furthermore, visual evaluation of the despeckled images shows that the proposed method suppresses speckle noise well while preserving textures and fine details. © IMechE 2015.
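
    For orientation, a generic total-variation despeckling sketch (not the authors' Nakagami-based model): log-transform the image so the multiplicative speckle becomes approximately additive, denoise with Chambolle's TV projection (here via scikit-image), and map back:

      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      def tv_despeckle(img, weight=0.1, eps=1e-6):
          log_img = np.log(img.astype(float) + eps)   # multiplicative -> additive
          log_den = denoise_tv_chambolle(log_img, weight=weight)
          return np.exp(log_den) - eps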

  11. Yield surface evolution for columnar ice

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiwei; Ma, Wei; Zhang, Shujuan; Mu, Yanhu; Zhao, Shunpin; Li, Guoyu

    A series of triaxial compression tests, capable of measuring the volumetric strain of the sample, was conducted on columnar ice. A new testing approach of probing the experimental yield surface from a single sample was adopted in order to investigate the yield and hardening behaviors of columnar ice under complex stress states. Based on the characteristics of the volumetric strain, a new method of defining the multiaxial yield strengths of columnar ice is proposed. The experimental yield surface retains an elliptical shape in the stress space of effective stress versus mean stress. The effects of temperature, loading rate and loading path on the initial yield surface and deformation properties of columnar ice were also studied. Subsequent yield surfaces of columnar ice were explored by using uniaxial and hydrostatic paths. The evolution of the subsequent yield surface exhibits significant path-dependent characteristics. The multiaxial hardening law of columnar ice was established experimentally, and a phenomenological yield criterion is presented for the multiaxial yield and hardening behaviors of columnar ice. Comparisons between theoretical and measured results indicate that the current model is capable of giving a reasonable prediction of the multiaxial yield and post-yield properties of columnar ice subjected to different temperature, loading rate and path conditions.

  12. A GIHS-based spectral preservation fusion method for remote sensing images using edge restored spectral modulation

    NASA Astrophysics Data System (ADS)

    Zhou, Xiran; Liu, Jun; Liu, Shuguang; Cao, Lei; Zhou, Qiming; Huang, Huawen

    2014-02-01

    High spatial resolution and spectral fidelity are basic standards for evaluating an image fusion algorithm. Numerous fusion methods for remote sensing images have been developed. Some of these methods are based on the intensity-hue-saturation (IHS) transform and the generalized IHS (GIHS), which may cause serious spectral distortion. Spectral distortion in the GIHS is proven to result from changes in saturation during fusion. Therefore, reducing such changes can achieve high spectral fidelity. A GIHS-based spectral preservation fusion method that can theoretically reduce spectral distortion is proposed in this study. The proposed algorithm consists of two steps. The first step is spectral modulation (SM), which uses the Gaussian function to extract spatial details and conduct SM of multispectral (MS) images. This method yields a desirable visual effect without requiring histogram matching between the panchromatic image and the intensity of the MS image. The second step uses the Gaussian convolution function to restore lost edge details during SM. The proposed method is proven effective and shown to provide better results compared with other GIHS-based methods.
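
    The baseline GIHS injection that the method builds on can be sketched in a few lines (assuming the multispectral cube ms has been upsampled to the panchromatic grid and radiometrically matched; the paper's Gaussian spectral modulation and edge restoration refine this step):

      import numpy as np

      def gihs_fuse(ms, pan):
          """Generalized IHS fusion: ms is (H, W, B), pan is (H, W)."""
          intensity = ms.mean(axis=2)          # GIHS intensity component
          detail = pan - intensity             # spatial detail to inject
          return ms + detail[:, :, None]       # same detail added to every band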

  13. Estimating relative risks for common outcome using PROC NLP.

    PubMed

    Yu, Binbing; Wang, Zhuoqiao

    2008-05-01

    In cross-sectional or cohort studies with binary outcomes, it is biologically interpretable and of interest to estimate the relative risk or prevalence ratio, especially when the response rates are not rare. Several methods have been used to estimate the relative risk, among which the log-binomial models yield the maximum likelihood estimate (MLE) of the parameters. Because of restrictions on the parameter space, the log-binomial models often run into convergence problems. Some remedies, e.g., the Poisson and Cox regressions, have been proposed. However, these methods may give out-of-bound predicted response probabilities. In this paper, a new computation method using the SAS Nonlinear Programming (NLP) procedure is proposed to find the MLEs. The proposed NLP method was compared to the COPY method, a modified method to fit the log-binomial model. Issues in the implementation are discussed. For illustration, both methods were applied to data on the prevalence of microalbuminuria (micro-protein leakage into urine) for kidney disease patients from the Diabetes Control and Complications Trial. The sample SAS macro for calculating relative risk is provided in the appendix.
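
    A hedged sketch of the same constrained-MLE idea in Python (scipy) rather than SAS PROC NLP: maximize the log-binomial likelihood subject to X @ beta <= 0 so that the fitted probabilities exp(X @ beta) stay within (0, 1]:

      import numpy as np
      from scipy.optimize import minimize

      def fit_log_binomial(X, y, eps=1e-9):
          """MLE for P(y=1|x) = exp(x @ beta), with beta kept in the feasible region."""
          def nll(beta):
              eta = np.minimum(X @ beta, -eps)          # log-probabilities, kept < 0
              return -np.sum(y * eta + (1 - y) * np.log1p(-np.exp(eta)))
          cons = {"type": "ineq", "fun": lambda b: -(X @ b) - eps}  # X @ b <= -eps
          res = minimize(nll, np.full(X.shape[1], -0.1),
                         constraints=[cons], method="SLSQP")
          return res.x

    Relative risks then follow as exp(beta_j) for a unit change in covariate j, thanks to the log link.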

  14. Towards a rational theory for CFD global stability

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Iannelli, G. S.

    1989-01-01

    The fundamental notion of the consistent stability of semidiscrete analogues of evolution PDEs is explored. Lyapunov's direct method is used to develop CFD semidiscrete algorithms which yield the TVD constraint as a special case. A general formula for supplying dissipation parameters for arbitrary multidimensional conservation law systems is proposed. The reliability of the method is demonstrated by the results of two numerical tests for representative Euler shocked flows.

  15. Shape-from-focus by tensor voting.

    PubMed

    Hariharan, R; Rajagopalan, A N

    2012-07-01

    In this correspondence, we address the task of recovering shape-from-focus (SFF) as a perceptual organization problem in 3-D. Using tensor voting, depth hypotheses from different focus operators are validated based on their likelihood to be part of a coherent 3-D surface, thereby exploiting scene geometry and focus information to generate reliable depth estimates. The proposed method is fast and yields significantly better results compared with existing SFF methods.

  16. Conceptual design of fast-ignition laser fusion reactor FALCON-D

    NASA Astrophysics Data System (ADS)

    Goto, T.; Someya, Y.; Ogawa, Y.; Hiwatari, R.; Asaoka, Y.; Okano, K.; Sunahara, A.; Johzaki, T.

    2009-07-01

    A new conceptual design of the laser fusion power plant FALCON-D (Fast-ignition Advanced Laser fusion reactor CONcept with a Dry wall chamber) has been proposed. The fast-ignition method can achieve sufficient fusion gain for a commercial operation (~100) with about 10 times smaller fusion yield than the conventional central ignition method. FALCON-D makes full use of this property and aims at designing with a compact dry wall chamber (5-6 m radius). 1D/2D simulations by hydrodynamic codes showed a possibility of achieving sufficient gain with a laser energy of 400 kJ, i.e. a 40 MJ target yield. The design feasibility of the compact dry wall chamber and the solid breeder blanket system was shown through thermomechanical analysis of the dry wall and neutronics analysis of the blanket system. Moderate electric output (~400 MWe) can be achieved with a high repetition (30 Hz) laser. This dry wall reactor concept not only reduces several difficulties associated with a liquid wall system but also enables a simple cask maintenance method for the replacement of the blanket system, which can shorten the maintenance period. The basic idea of the maintenance method for the final optics system has also been proposed. Some critical R&D issues required for this design are also discussed.

  17. Breast mass segmentation in mammograms combining fuzzy c-means and active contours

    NASA Astrophysics Data System (ADS)

    Hmida, Marwa; Hamrouni, Kamel; Solaiman, Basel; Boussetta, Sana

    2018-04-01

    Segmentation of breast masses in mammograms is a challenging issue due to the nature of mammography and the characteristics of masses. Mammographic images are poor in contrast, and breast masses have various shapes and densities with fuzzy and ill-defined borders. In this paper, we propose a method based on a modified Chan-Vese active contour model for mass segmentation in mammograms. We conduct the experiments on mass Regions of Interest (ROIs) extracted from the MIAS database. The proposed method consists mainly of three stages: first, the ROI is preprocessed to enhance the contrast; next, two fuzzy membership maps are generated from the preprocessed ROI based on the fuzzy C-Means algorithm; these fuzzy membership maps are finally used to modify the energy of the Chan-Vese model and to perform the final segmentation. Experimental results indicate that the proposed method yields good mass segmentation results.

  18. Modeling panel detection frequencies by queuing system theory: an application in gas chromatography olfactometry.

    PubMed

    Bult, Johannes H F; van Putten, Bram; Schifferstein, Hendrik N J; Roozen, Jacques P; Voragen, Alphons G J; Kroeze, Jan H A

    2004-10-01

    In continuous vigilance tasks, the number of coincident panel responses to stimuli provides an index of stimulus detectability. To determine whether this number is due to chance, panel noise levels have been approximated by the maximum coincidence level obtained in stimulus-free conditions. This study proposes an alternative method by which to assess noise levels, derived from queuing system theory (QST). Instead of critical coincidence levels, QST modeling estimates the duration of coinciding responses in the absence of stimuli. The proposed method has the advantage over previous approaches that it yields more reliable noise estimates and allows for statistical testing. The method was applied in an olfactory detection experiment using 16 panelists in stimulus-present and stimulus-free conditions. We propose that QST may be used as an alternative to signal detection theory for analyzing data from continuous vigilance tasks.

  19. Efficient ethanol production from dried oil palm trunk treated by hydrothermolysis and subsequent enzymatic hydrolysis.

    PubMed

    Eom, In-Yong; Yu, Ju-Hyun; Jung, Chan-Duck; Hong, Kyung-Sik

    2015-01-01

    Oil palm trunk (OPT) is a valuable bioresource for the biorefinery industry producing biofuels and biochemicals. It has the distinct feature of containing a large amount of starch, which, unlike cellulose, can be easily solubilized by water when heated and hydrolyzed to glucose by amylolytic enzymes without pretreatment for breaking down the biomass recalcitrance. It has therefore been suggested that it is beneficial to extract most of the starch from OPT through autoclaving and subsequent amylolytic hydrolysis prior to pretreatment. However, this treatment requires high capital and operational costs, and there is a high probability of microbial contamination during starch processing. In terms of biochemical conversion of OPT, this study aimed to develop a simple and efficient ethanol conversion process without any use of chemicals such as acids and bases and without detoxification. For comparison with the proposed process, OPT was subjected to hydrothermal treatment at 180 °C for 30 min. After enzymatic hydrolysis of PWS, 43.5 g of glucose per 100 g dry biomass was obtained, which corresponds to 81.3 % of the theoretical glucose yield. Through subsequent alcohol fermentation, an ethanol yield of 81.4 % of theoretical was achieved. In the proposed new process, starch in OPT was converted to ethanol through enzymatic hydrolysis and subsequent fermentation prior to hydrothermal treatment, and the resulting slurry was subjected to processes identical to those applied to the control. Consequently, a high glucose yield of 96.3 % was achieved, and the resulting ethanol yield was 93.5 %. The proposed new process is a simple method for minimizing the loss of starch during biochemical conversion and maximizing the production of ethanol as well as fermentable sugars from OPT. In addition, this methodology offers the advantage of reducing operational and capital costs by excluding expensive processes related to detoxification prior to enzymatic hydrolysis and fermentation, such as washing/conditioning and solid-liquid separation of the pretreated slurry. The potential future use of xylose-digestible microorganisms could further increase the ethanol yield from the proposed process, thereby increasing its effectiveness for the conversion of OPT into biofuels and biochemicals.

  20. Asymmetric Yield Function Based on the Stress Invariants for Pressure Sensitive Metals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jeong Wahn Yoon; Yanshan Lou; Jong Hun Yoon

    A general asymmetric yield function is proposed with dependence on the stress invariants for pressure sensitive metals. The pressure sensitivity of the proposed yield function is consistent with the experimental result of Spitzig and Richmond (1984) for steel and aluminum alloys, while the asymmetry of the third invariant is preserved to model the strength differential (SD) effect of pressure insensitive materials. The proposed yield function is transformed in the space of the stress triaxiality, the von Mises stress and the normalized invariant to theoretically investigate the possible reason for the SD effect. The proposed plasticity model is further extended to characterize the anisotropic behavior of metals both in tension and compression. The extension of the yield function is realized by introducing two distinct fourth-order linear transformation tensors of the stress tensor for the second and third invariants, respectively. The extended yield function reasonably models the evolution of yield surfaces for a zirconium clock-rolled plate during in-plane and through-thickness compression reported by Plunkett et al. (2007). The extended yield function is also applied to describe the orthotropic behavior of a face-centered cubic metal of AA 2008-T4 and two hexagonal close-packed metals of high-purity titanium and AZ31 magnesium alloy. The orthotropic behavior predicted by the generalized model is compared with experimental results of these metals. The comparison validates that the proposed yield function provides sufficient predictability of the SD effect and anisotropic behavior both in tension and compression. When it is necessary to consider r-value anisotropy, the proposed function can be used efficiently with non-associated flow plasticity by introducing a separate plastic potential for the consideration of r-values, as shown in Stoughton & Yoon (2004, 2009).
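
    A minimal sketch of the ingredients involved: computing the invariants I1, J2, J3 from a Cauchy stress tensor and evaluating a generic pressure-sensitive, asymmetric yield function of the shape described above (the constants and the exact calibrated form in the paper may differ):

      import numpy as np

      def invariants(sigma):
          """I1, J2, J3 for a 3x3 Cauchy stress tensor."""
          I1 = np.trace(sigma)
          s = sigma - (I1 / 3.0) * np.eye(3)      # deviatoric part
          J2 = 0.5 * np.trace(s @ s)
          J3 = np.linalg.det(s)
          return I1, J2, J3

      def yield_function(sigma, a=1.0, b=0.05, c=1.0):
          """Generic form f = a*(b*I1 + cbrt(J2**1.5 - c*J3)): pressure
          sensitivity through I1, tension/compression asymmetry through J3."""
          I1, J2, J3 = invariants(sigma)
          return a * (b * I1 + np.cbrt(J2 ** 1.5 - c * J3))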

  1. Quantifying electrical impacts on redundant wire insertion in 7nm unidirectional designs

    NASA Astrophysics Data System (ADS)

    Mohyeldin, Ahmed; Schroeder, Uwe Paul; Srinivasan, Ramya; Narisetty, Haritez; Malik, Shobhit; Madhavan, Sriram

    2017-04-01

    In nanometer-scale Integrated Circuits, via failures due to random defects are a well-known yield detractor, and via redundancy insertion is a common method to help enhance semiconductor yield. For the case of Self Aligned Double Patterning (SADP), which might require unidirectional design layers as in some advanced technology nodes, the conventional methods of inserting redundant vias no longer work. This is because adding redundant vias conventionally requires adding metal shapes in the non-preferred direction, which would violate the SADP design constraints. Therefore, metal layers fabricated using unidirectional SADP require an alternative method for providing the needed redundancy. This paper proposes a post-layout Design for Manufacturability (DFM) redundancy insertion method tailored for the design requirements introduced by unidirectional metal layers. The proposed method adds redundant wires in the preferred direction - after searching for nearby vacant routing tracks - in order to provide redundant paths for electrical signals. This method opportunistically adds robustness against failures due to silicon defects without impacting area or incurring new design rule violations. Implementation details of this redundancy insertion method are explained in this paper. One known challenge with similar DFM layout fixing methods is the possible introduction of undesired electrical impact, causing other unintentional failures in design functionality. In this paper, a study is presented to quantify the electrical impact of such a redundancy insertion scheme and to examine whether that electrical impact can be tolerated. The paper shows results that evaluate DFM insertion rates and the corresponding electrical impact for a given design utilization and maximum inserted wire length. Parasitic extraction and static timing analysis results are presented. A typical digital design implemented using GLOBALFOUNDRIES 7nm technology is used for demonstration. The provided results can help evaluate such an extensive DFM insertion method from an electrical standpoint. Furthermore, the results provide guidance on how to implement the proposed method of adding electrical redundancy such that intolerable electrical impacts can be avoided.

  2. A simple method for determining stress intensity factors for a crack in bi-material interface

    NASA Astrophysics Data System (ADS)

    Morioka, Yuta

    Because of the violently oscillating nature of the stress and displacement fields near the crack tip, it is difficult to obtain stress intensity factors for a crack between two dissimilar media. For a crack in a homogeneous medium, it is common practice to find stress intensity factors through strain energy release rates. However, individual strain energy release rates do not exist for a bi-material interface crack. Hence it is necessary to find alternative methods to evaluate stress intensity factors. Several methods have been proposed in the past, but they involve mathematical complexity and sometimes require additional finite element analysis. The purpose of this research is to develop a simple method to find stress intensity factors for bi-material interface cracks. A finite element based projection method is proposed in this research. It is shown that the projection method yields very accurate stress intensity factors for a crack in isotropic and anisotropic bi-material interfaces. The projection method is also compared to the displacement ratio method and the energy method proposed by other authors. Through this comparison it is found that the projection method is much simpler to apply, with accuracy comparable to that of the displacement ratio method.

  3. On correct evaluation techniques of brightness enhancement effect measurement data

    NASA Astrophysics Data System (ADS)

    Kukačka, Leoš; Dupuis, Pascal; Motomura, Hideki; Rozkovec, Jiří; Kolář, Milan; Zissis, Georges; Jinno, Masafumi

    2017-11-01

    This paper aims to establish confidence intervals for the quantification of brightness enhancement effects resulting from the use of pulsing bright light. It is found that the methods used so far may introduce significant bias into the published results, overestimating or underestimating the enhancement effect. The authors propose to use a linear algebra method called total least squares. On an example dataset, it is shown that this method does not yield biased results. The statistical significance of the results is also computed. It is concluded over an observation set that the linear algebra methods currently in use present many patterns of noise sensitivity: changing algorithm details leads to inconsistent results. It is thus recommended to use the method with the lowest noise sensitivity. Moreover, it is shown that this method also permits one to obtain an estimate of the confidence interval. This paper neither aims to publish results about a particular experiment nor to draw any particular conclusion about the existence or nonexistence of the brightness enhancement effect.
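
    A minimal total least squares sketch via the SVD (the standard construction, not necessarily the authors' exact implementation): for A x ≈ b with errors in both A and b, the solution comes from the right singular vector of the augmented matrix [A | b] associated with the smallest singular value:

      import numpy as np

      def tls(A, b):
          """Total least squares solution of A x ≈ b."""
          n = A.shape[1]
          Z = np.column_stack([A, b])
          _, _, Vt = np.linalg.svd(Z)
          v = Vt[-1]                  # right singular vector, smallest singular value
          return -v[:n] / v[n]        # assumes v[n] != 0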

  4. Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.

    2012-01-01

    A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. The important contribution of this work consists in estimating a DC between regions as a function of statistical, classification and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.

  5. Determination of Material Strengths by Hydraulic Bulge Test.

    PubMed

    Wang, Hankui; Xu, Tong; Shou, Binan

    2016-12-30

    The hydraulic bulge test (HBT) method is proposed to determine material tensile strengths. The basic idea of HBT is similar to the small punch test (SPT) but, inspired by the manufacturing process of rupture discs, high-pressure hydraulic oil is used instead of a punch to cause specimen deformation. Compared with the SPT method, the HBT method avoids several influence factors, such as punch dimension, punch material, and the friction between punch and specimen. A calculation procedure based entirely on theoretical derivation is proposed for estimating yield strength and ultimate tensile strength. Both conventional tensile tests and hydraulic bulge tests were carried out for several ferrous alloys, and the results showed that hydraulic bulge test results are reliable and accurate.

  6. Assay of potency of the proposed Fifth International Standard for Gas-Gangrene Antitoxin (Perfringens)

    PubMed Central

    Prigge, R.; Micke, H.; Krüger, J.

    1963-01-01

    As part of a collaborative assay of the proposed Fifth International Standard for Gas-Gangrene Antitoxin (Perfringens), five ampoules of the proposed replacement material were assayed in the authors' laboratory against the then current Fourth International Standard. Both in vitro and in vivo methods were used. This paper presents the results and their statistical analysis. The two methods yielded different results which were not likely to have been due to chance, but exact statistical comparison is not possible. It is thought, however, that the differences may be due, at least in part, to differences in the relative proportions of zeta-antitoxin and alpha-antitoxin in the Fourth and Fifth International Standards and the consequent different reactions with the test toxin that was used for titration. PMID:14107746

  7. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting

    PubMed Central

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen; Wald, Lawrence L.

    2017-01-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization. PMID:26915119

  8. Maximum Likelihood Reconstruction for Magnetic Resonance Fingerprinting.

    PubMed

    Zhao, Bo; Setsompop, Kawin; Ye, Huihui; Cauley, Stephen F; Wald, Lawrence L

    2016-08-01

    This paper introduces a statistical estimation framework for magnetic resonance (MR) fingerprinting, a recently proposed quantitative imaging paradigm. Within this framework, we present a maximum likelihood (ML) formalism to estimate multiple MR tissue parameter maps directly from highly undersampled, noisy k-space data. A novel algorithm, based on variable splitting, the alternating direction method of multipliers, and the variable projection method, is developed to solve the resulting optimization problem. Representative results from both simulations and in vivo experiments demonstrate that the proposed approach yields significantly improved accuracy in parameter estimation, compared to the conventional MR fingerprinting reconstruction. Moreover, the proposed framework provides new theoretical insights into the conventional approach. We show analytically that the conventional approach is an approximation to the ML reconstruction; more precisely, it is exactly equivalent to the first iteration of the proposed algorithm for the ML reconstruction, provided that a gridding reconstruction is used as an initialization.

  9. The Galactic Distribution of Planets via Spitzer Microlensing Parallax

    NASA Astrophysics Data System (ADS)

    Gould, Andrew; Yee, Jennifer; Carey, Sean; Shvartzvald, Yossi

    2018-05-01

    We will measure the Galactic distribution of planets by obtaining 'microlens parallaxes' of about 200 events, including 3 planetary events, from the comparison of microlens lightcurves observed from Spitzer and Earth, which are separated by >1.5 AU in projection. The proposed observations are part of a campaign that we have conducted with Spitzer since 2014. The planets expected to be identified in this campaign, when combined with previous work, will yield a first statistically significant measurement of the frequency of planets in the Galactic bulge versus the Galactic disk. As we have demonstrated in three previous programs, the difference in these lightcurves yields both the 'microlens parallax' (the ratio of the lens-source relative parallax to the Einstein radius) and the direction of lens-source relative motion. For planetary events, this measurement directly yields the mass and distance of the planet. This proposal is significantly more sensitive to planets than previous work because it takes advantage of the KMTNet observing strategy that covers >85 sq. deg at >0.4/hr cadence, 24/7 from 3 southern observatories, and of an alert system that KMTNet is implementing for 2019. This same observing program also provides a unique probe of dark objects. It will yield an improved measurement of the isolated-brown-dwarf mass function. Thirteen percent of the observations will specifically target binaries, which will probe systems with dark components (brown dwarfs, neutron stars, black holes) that are difficult or impossible to investigate by other methods. The observations and methods from this work are a test bed for WFIRST microlensing.

  10. Alternative oil extraction methods from Echium plantagineum L. seeds using advanced techniques and green solvents.

    PubMed

    Castejón, Natalia; Luna, Pilar; Señoráns, Francisco J

    2018-04-01

    The edible oil processing industry involves large losses of organic solvent into the atmosphere and long extraction times. In this work, fast and environmentally friendly alternatives for the production of echium oil using green solvents are proposed. Advanced extraction techniques such as Pressurized Liquid Extraction (PLE), Microwave Assisted Extraction (MAE) and Ultrasound Assisted Extraction (UAE) were evaluated to efficiently extract omega-3 rich oil from Echium plantagineum seeds. Extractions were performed with ethyl acetate, ethanol, water and ethanol:water to develop a hexane-free processing method. Optimal PLE conditions with ethanol at 150 °C for 10 min produced an oil yield (31.2%) very similar to that of Soxhlet extraction using hexane for 8 h (31.3%). The optimized UAE method with ethanol at mild conditions (55 °C) produced a high oil yield (29.1%). Consequently, the advanced extraction techniques showed good lipid yields, and the echium oil produced had the same omega-3 fatty acid composition as traditionally extracted oil. Copyright © 2017 Elsevier Ltd. All rights reserved.

  11. Calibrationless parallel magnetic resonance imaging: a joint sparsity model.

    PubMed

    Majumdar, Angshul; Chaudhury, Kunal Narayan; Ward, Rabab

    2013-12-05

    State-of-the-art parallel MRI techniques either explicitly or implicitly require certain parameters to be estimated, e.g., the sensitivity map for SENSE and SMASH, and interpolation weights for GRAPPA and SPIRiT. Thus all these techniques are sensitive to the calibration (parameter estimation) stage. In this work, we propose a parallel MRI technique that does not require any calibration but yields reconstruction results that are on par with (or even better than) state-of-the-art methods in parallel MRI. Our proposed method requires solving non-convex analysis- and synthesis-prior joint-sparsity problems, and this work also derives the algorithms for solving them. Experimental validation was carried out on two datasets: an eight-channel brain and an eight-channel Shepp-Logan phantom. Two sampling methods were used: Variable Density Random sampling and non-Cartesian Radial sampling. For the brain data an acceleration factor of 4 was used, and for the other an acceleration factor of 6. The reconstruction results were quantitatively evaluated based on the Normalised Mean Squared Error between the reconstructed image and the original. The qualitative evaluation was based on the actual reconstructed images. We compared our work with four state-of-the-art parallel imaging techniques: two calibrated methods (CS SENSE and l1SPIRiT) and two calibration-free techniques (Distributed CS and SAKE). Our method yields better reconstruction results than all of them.

  12. Multivariate Approach for Alzheimer's Disease Detection Using Stationary Wavelet Entropy and Predator-Prey Particle Swarm Optimization.

    PubMed

    Zhang, Yudong; Wang, Shuihua; Sui, Yuxiu; Yang, Ming; Liu, Bin; Cheng, Hong; Sun, Junding; Jia, Wenjuan; Phillips, Preetha; Gorriz, Juan Manuel

    2017-07-17

    The number of patients with Alzheimer's disease is increasing rapidly every year. Scholars often use computer vision and machine learning methods to develop automatic diagnosis systems. In this study, we developed a novel machine learning system that can make diagnoses automatically from brain magnetic resonance images. First, the brain imaging was processed, including skull stripping and spatial normalization. Second, one axial slice was selected from the volumetric image, and stationary wavelet entropy (SWE) was used to extract the texture features. Third, a single-hidden-layer neural network was used as the classifier. Finally, a predator-prey particle swarm optimization was proposed to train the weights and biases of the classifier. Our method used a 4-level decomposition and yielded 13 SWE features. The classification yielded an overall accuracy of 92.73±1.03%, a sensitivity of 92.69±1.29%, and a specificity of 92.78±1.51%. The area under the curve is 0.95±0.02. Additionally, this method costs only 0.88 s to identify a subject in the online stage, once its volumetric image has been preprocessed. In terms of classification performance, our method performs better than 10 state-of-the-art approaches and the performance of human observers. Therefore, the proposed method is effective in the detection of Alzheimer's disease.
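
    A hedged sketch of the SWE feature step (using PyWavelets; the wavelet choice and the subband ordering are assumptions, and the slice dimensions must be divisible by 2**level):

      import numpy as np
      import pywt

      def shannon_entropy(c, eps=1e-12):
          p = c.ravel() ** 2
          p = p / (p.sum() + eps)                 # normalized coefficient energies
          return -np.sum(p * np.log2(p + eps))

      def swe_features(slice2d, wavelet="db4", level=4):
          """3 detail entropies per level + 1 approximation = 13 features at level 4."""
          coeffs = pywt.swt2(slice2d, wavelet, level=level)
          feats = [shannon_entropy(d) for _, details in coeffs for d in details]
          feats.append(shannon_entropy(coeffs[0][0]))   # coarsest approximation
          return np.asarray(feats)                      # verify ordering for your pywt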

  13. Fractional domain varying-order differential denoising method

    NASA Astrophysics Data System (ADS)

    Zhang, Yan-Shan; Zhang, Feng; Li, Bing-Zhao; Tao, Ran

    2014-10-01

    Removal of noise is an important step in the image restoration process, and it remains a challenging problem in image processing. Denoising is a process used to remove the noise from the corrupted image, while retaining the edges and other detailed features as much as possible. Recently, denoising in the fractional domain is a hot research topic. The fractional-order anisotropic diffusion method can bring a less blocky effect and preserve edges in image denoising, a method that has received much interest in the literature. Based on this method, we propose a new method for image denoising, in which fractional-varying-order differential, rather than constant-order differential, is used. The theoretical analysis and experimental results show that compared with the state-of-the-art fractional-order anisotropic diffusion method, the proposed fractional-varying-order differential denoising model can preserve structure and texture well, while quickly removing noise, and yields good visual effects and better peak signal-to-noise ratio.

  14. Multi-Target State Extraction for the SMC-PHD Filter

    PubMed Central

    Si, Weijian; Wang, Liwei; Qu, Zhiyu

    2016-01-01

    The sequential Monte Carlo probability hypothesis density (SMC-PHD) filter has been demonstrated to be a favorable method for multi-target tracking. However, the time-varying target states need to be extracted from the particle approximation of the posterior PHD, which is difficult to implement due to the unknown relations between the large amount of particles and the PHD peaks representing potential target locations. To address this problem, a novel multi-target state extraction algorithm is proposed in this paper. By exploiting the information of measurements and particle likelihoods in the filtering stage, we propose a validation mechanism which aims at selecting effective measurements and particles corresponding to detected targets. Subsequently, the state estimates of the detected and undetected targets are performed separately: the former are obtained from the particle clusters directed by effective measurements, while the latter are obtained from the particles corresponding to undetected targets via clustering method. Simulation results demonstrate that the proposed method yields better estimation accuracy and reliability compared to existing methods. PMID:27322274

  15. dynGENIE3: dynamical GENIE3 for the inference of gene networks from time series expression data.

    PubMed

    Huynh-Thu, Vân Anh; Geurts, Pierre

    2018-02-21

    The elucidation of gene regulatory networks is one of the major challenges of systems biology. Measurements about genes that are exploited by network inference methods are typically available either in the form of steady-state expression vectors or time series expression data. In our previous work, we proposed the GENIE3 method that exploits variable importance scores derived from Random forests to identify the regulators of each target gene. This method provided state-of-the-art performance on several benchmark datasets, but it could however not specifically be applied to time series expression data. We propose here an adaptation of the GENIE3 method, called dynamical GENIE3 (dynGENIE3), for handling both time series and steady-state expression data. The proposed method is evaluated extensively on the artificial DREAM4 benchmarks and on three real time series expression datasets. Although dynGENIE3 does not systematically yield the best performance on each and every network, it is competitive with diverse methods from the literature, while preserving the main advantages of GENIE3 in terms of scalability.
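
    A minimal GENIE3-style sketch with scikit-learn (steady-state case; dynGENIE3 additionally models temporal dynamics for time series, which is omitted here):

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      def genie3_scores(expr, n_trees=500, seed=0):
          """expr: (n_samples, n_genes). Returns W with W[i, j] the importance
          of gene i for predicting target gene j (candidate regulatory links)."""
          n_genes = expr.shape[1]
          W = np.zeros((n_genes, n_genes))
          for j in range(n_genes):
              rf = RandomForestRegressor(n_estimators=n_trees, random_state=seed)
              rf.fit(np.delete(expr, j, axis=1), expr[:, j])
              W[np.arange(n_genes) != j, j] = rf.feature_importances_
          return W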

  16. Assessment of the Effect of Climate Change on Grain Yields in China

    NASA Astrophysics Data System (ADS)

    Chou, J.

    2006-12-01

    The paper elaborates the social and research background and clarifies which key scientific issues need to be resolved and where the difficulties lie. In the area of appraising grain yield changes caused by climate change, extensive work has been done both in China and abroad. Our upcoming task is to evaluate how the countrywide climate change information provided by this model will influence economic and social development, and how to formulate related policies and countermeasures. The main idea of this paper is that grain yield change is by no means a linear composition of the effect of social-economic factors and the effect of climate change. This paper identifies the object of economic evaluation and proposes a new concept: climate change output. Grain yields change under the joint action of social factors and climate change; climate change influences grain yields through a nonlinear function of both climate change and social factor changes, not through climate change alone. Therefore, the appraisal object is defined as follows: with the social factors changing according to actual social conditions, the difference between the grain yield outputs under an invariable climate and under the actually varying climate is called the "climate change output". To solve this problem, we propose a method of analysis and simulation based on historical materials. Given an invariable climate, changes in social-economic factors cause a grain yield change; however, this grain yield change is a tentative quantity, not an actually observed one. We therefore use the existing historical materials to estimate the climate change output, based on the characteristic that social factors change more from year to year than from era to era, while climate factors change more from era to era than from year to year. The paper proposes and establishes an economy-climate model (the C-D-C model) to appraise the grain yield change caused by climate change, and a preliminary test of this model has been carried out. For the appraisal method, we take the Cobb-Douglas (C-D) production function model, which has proved relatively mature in economic research, as the fundamental model, and introduce a climate index (an aridity index) into the C-D model to develop a new model. This new model uses the climate change factor within the economic model to appraise how climate change influences grain yield, and should have good application prospects. The economy-climate model (the C-D-C model) has been applied to the eight Chinese regions into which we divide the country; its feasibility, rationality and application prospects have proved satisfactory. We can therefore provide theoretical foundations for policy-making under more complex and uncertain climate change, and we open a possible new channel for global climate change research oriented toward actual social and economic life.
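
    A hedged sketch of what a C-D-C style fit could look like (a Cobb-Douglas production function in logs augmented with an aridity index, estimated by ordinary least squares; the variable names and the exact specification used in the paper are assumptions):

      import numpy as np

      def fit_cdc(grain_yield, capital, labor, aridity):
          """OLS fit of ln Y = ln A + alpha*ln K + beta*ln L + gamma*ln(aridity)."""
          X = np.column_stack([np.ones(len(capital)),
                               np.log(capital), np.log(labor), np.log(aridity)])
          coef, *_ = np.linalg.lstsq(X, np.log(grain_yield), rcond=None)
          return coef   # [ln A, alpha, beta, gamma]

    The "climate change output" would then be the difference between yields predicted with the actual aridity series and with aridity held fixed.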

  17. A rapid and highly selective method for the estimation of pyro-, tri- and orthophosphates.

    PubMed

    Kamat, D R; Savant, V V; Sathyanarayana, D N

    1995-03-01

    A rapid, highly selective and simple method has been developed for the quantitative determination of pyro-, tri- and orthophosphates. The method is based on the formation of a solid complex of the bis(ethylenediamine)cobalt(III) species with pyrophosphate at pH 4.2-4.3, with triphosphate at pH 2.0-2.1 and with orthophosphate at pH 8.2-8.6. The proposed method for pyro- and triphosphates differs from the available method, which is based on the formation of an adduct with the tris(ethylenediamine)cobalt(III) species. The complexes have the compositions [Co(en)2HP2O7]·4H2O and [Co(en)2H2P3O10]·2H2O, respectively. The precipitation is instantaneous and quantitative under the recommended optimum conditions, giving a 99.5% gravimetric yield in both cases. There is no interference from orthophosphate, trimetaphosphate and pyrophosphate species in the triphosphate estimation up to 5% of each component. The efficacy of the method has been established by determining the pyrophosphate and triphosphate contents of various matrices. In the case of orthophosphate, the proposed method differs from the available methods, such as ammonium phosphomolybdate, vanadophosphomolybdate and quinoline phosphomolybdate, which are based on the formation of a precipitate followed by either titrimetry or gravimetry. The precipitation is instantaneous and the method is simple. Under the recommended pH and other reaction conditions, gravimetric yields of 99.6-100% are obtainable. The method is applicable to orthophosphoric acid and a variety of phosphate salts.

  18. Modeling a color-rendering operator for high dynamic range images using a cone-response function

    NASA Astrophysics Data System (ADS)

    Choi, Ho-Hyoung; Kim, Gi-Seok; Yun, Byoung-Ju

    2015-09-01

    Tone-mapping operators are the typical algorithms designed to produce visibility and the overall impression of brightness, contrast, and color of high dynamic range (HDR) images on low dynamic range (LDR) display devices. Although several new tone-mapping operators have been proposed in recent years, the results of these operators have not matched those of the psychophysical experiments based on the human visual system. A color-rendering model that is a combination of tone-mapping and cone-response functions using an XYZ tristimulus color space is presented. In the proposed method, the tone-mapping operator produces visibility and the overall impression of brightness, contrast, and color in HDR images when mapped onto relatively LDR devices. The tone-mapping resultant image is obtained using chromatic and achromatic colors to avoid well-known color distortions shown in the conventional methods. The resulting image is then processed with a cone-response function wherein emphasis is placed on human visual perception (HVP). The proposed method covers the mismatch between the actual scene and the rendered image based on HVP. The experimental results show that the proposed method yields an improved color-rendering performance compared to conventional methods.
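
    For context, a generic global tone-mapping sketch (a Reinhard-style operator, not the color-rendering model proposed here) of the HDR-to-LDR compression step that the paper combines with a cone-response function:

      import numpy as np

      def reinhard_tonemap(luminance, a=0.18, eps=1e-6):
          """Map HDR luminance to [0, 1) with the classic global operator."""
          Lw = np.exp(np.mean(np.log(luminance + eps)))   # log-average luminance
          L = a * luminance / Lw                          # key-scaled luminance
          return L / (1.0 + L)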

  19. A SPH elastic-viscoplastic model for granular flows and bed-load transport

    NASA Astrophysics Data System (ADS)

    Ghaïtanellis, Alex; Violeau, Damien; Ferrand, Martin; Abderrezzak, Kamal El Kadi; Leroy, Agnès; Joly, Antoine

    2018-01-01

    An elastic-viscoplastic model (Ulrich, 2013) is combined with a multi-phase SPH formulation (Hu and Adams, 2006; Ghaitanellis et al., 2015) to model granular flows and non-cohesive sediment transport. The soil is treated as a continuum exhibiting viscoplastic behaviour. Below a critical shear stress (i.e. the yield stress), the soil is assumed to behave as an isotropic linear-elastic solid; when the yield stress is exceeded, the soil flows and behaves as a shear-thinning fluid. A liquid-solid transition threshold based on the granular material properties is proposed, so as to make the model free of tunable numerical parameters. The yield stress is obtained from the Drucker-Prager criterion, which requires an accurate computation of the effective stress in the soil. A novel method is proposed to compute the effective stress in SPH by solving a Laplace equation. The model is applied to a two-dimensional soil collapse (Bui et al., 2008) and a dam break over mobile beds (Spinewine and Zech, 2007). Results are compared with experimental data, and good agreement is obtained.
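
    A minimal sketch of the transition logic (one common plane form of the Drucker-Prager yield stress; the cohesion c, friction angle phi and pressure p are illustrative inputs):

      import math

      def dp_yield_stress(p, c, phi_deg):
          """Drucker-Prager style yield stress tau_y = c*cos(phi) + p*sin(phi)."""
          phi = math.radians(phi_deg)
          return c * math.cos(phi) + p * math.sin(phi)

      def is_flowing(shear_stress, p, c=0.0, phi_deg=30.0):
          """Elastic below yield; shear-thinning viscoplastic flow above it."""
          return shear_stress > dp_yield_stress(p, c, phi_deg)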

  20. Analysis of drugs in human tissues by supercritical fluid extraction/immunoassay

    NASA Astrophysics Data System (ADS)

    Furton, Kenneth G.; Sabucedo, Alberta; Rein, Joseph; Hearn, W. L.

    1997-02-01

    A rapid, readily automated method has been developed for the quantitative analysis of phenobarbital from human liver tissues based on supercritical carbon dioxide extraction followed by fluorescence enzyme immunoassay. The method developed significantly reduces sample handling and utilizes the entire liver homogenate. The current method yields comparable recoveries and precision and does not require the use of an internal standard, although traditional GC/MS confirmation can still be performed on sample extracts. Additionally, the proposed method uses non-toxic, inexpensive carbon dioxide, thus eliminating the use of halogenated organic solvents.

  1. Optimization of diffusion-weighted single-refocused spin-echo EPI by reducing eddy-current artifacts and shortening the echo time.

    PubMed

    Shrestha, Manoj; Hok, Pavel; Nöth, Ulrike; Lienerth, Bianca; Deichmann, Ralf

    2018-03-30

    The purpose of this work was to optimize the acquisition of diffusion-weighted (DW) single-refocused spin-echo (srSE) data without intrinsic eddy-current compensation (ECC) for improved performance of ECC postprocessing. The rationale is that srSE sequences without ECC may yield shorter echo times (TE) and thus higher signal-to-noise ratios (SNR) than srSE or twice-refocused spin-echo (trSE) schemes with intrinsic ECC. The proposed method employs dummy scans with DW gradients to drive eddy currents into a steady state before data acquisition. Parameters of the ECC postprocessing algorithm were also optimized. Simulations were performed to obtain minimum TE values for the proposed sequence and for sequences with intrinsic ECC. Experimentally, the proposed method was compared with standard DW-trSE imaging, both in vitro and in vivo. Simulations showed substantially shorter TE for the proposed method than for methods with intrinsic ECC when using shortened echo readouts. Data acquired with the proposed method showed a marked increase in SNR. A dummy scan duration of at least 1.5 s improved the performance of the ECC postprocessing algorithm. The changes proposed for the DW-srSE sequence and for the parameter settings of the postprocessing ECC algorithm considerably reduced eddy-current artifacts and provided a higher SNR.

  2. Photovoltaic Module Soiling Map | Photovoltaic Research | NREL

    Science.gov Websites

    The methodology is proposed in: M. Deceglie, L. Micheli, and M. Muller, "Quantifying soiling loss directly from PV yield …"; described in: L. Micheli and M. Muller, "An investigation of the key parameters for predicting PV …"; and in: M. Muller, L. Micheli, and A.A. Martinez-Morales, "A Method to Extract Soiling Loss Data from …" (titles truncated in the source record).

  3. Constructing a Covariance Matrix that Yields a Specified Minimizer and a Specified Minimum Discrepancy Function Value.

    ERIC Educational Resources Information Center

    Cudeck, Robert; Browne, Michael W.

    1992-01-01

    A method is proposed for constructing a population covariance matrix as the sum of a particular model plus a nonstochastic residual matrix, with the stipulation that the model holds with a prespecified lack of fit. The procedure is considered promising for Monte Carlo studies. (SLD)

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    He, Hongmei; Zhang, Youjin, E-mail: zyj@ustc.edu.cn; Zhu, Wei

    Highlights: → Flower-like Sm₂(C₂O₄)₃·10H₂O was obtained with a Na₃Cit-assisted precipitation method. → The mechanism of the flower-like Sm₂(C₂O₄)₃·10H₂O formation is proposed. → The Sm₂(C₂O₄)₃·10H₂O and Sm₂O₃ samples exhibited clearly different PL spectra. → Ln₂(C₂O₄)₃·nH₂O (Ln = Gd, Dy, Lu, Y) were also achieved by the simple method. -- Abstract: Flower-like Sm₂(C₂O₄)₃·10H₂O was synthesized by a facile complexing-agent-assisted precipitation method and was characterized by X-ray diffraction, X-ray photoelectron spectroscopy, Fourier transform infrared spectroscopy, field-emission scanning electron microscopy, thermogravimetry-differential thermal analysis and photoluminescence. A possible growth mechanism of the flower-like Sm₂(C₂O₄)₃·10H₂O is proposed. To extend this method, other Ln₂(C₂O₄)₃·nH₂O (Ln = Gd, Dy, Lu, Y) with different morphologies were also prepared by adjusting the rare earth precursors. Further studies revealed that, besides the reaction conditions and the amount of complexing agent added, the morphologies of the as-synthesized lanthanide oxalates were also determined by the rare earth ions. The Sm₂(C₂O₄)₃·10H₂O and Sm₂O₃ samples exhibited different photoluminescence spectra, which is related to the 4f electron energy level structure of Sm³⁺. The method may be applied to the synthesis of other lanthanide compounds, and this work could help explore potential optical materials.

  5. Methods to enhance seismic faults and construct fault surfaces

    NASA Astrophysics Data System (ADS)

    Wu, Xinming; Zhu, Zhihui

    2017-10-01

    Faults are often apparent as reflector discontinuities in a seismic volume. Numerous types of fault attributes have been proposed to highlight fault positions within a seismic volume by measuring reflection discontinuities. These attribute volumes, however, can be sensitive to noise and to stratigraphic features that also appear as discontinuities in a seismic volume. We propose a matched filtering method to enhance a precomputed fault attribute volume and simultaneously estimate fault strikes and dips. In this method, a set of efficient 2D exponential filters, oriented by all possible combinations of strike and dip angles, are applied to the input attribute volume to find the maximum filtering responses at all samples in the volume. These maximum filtering responses are recorded to obtain the enhanced fault attribute volume, while the corresponding strike and dip angles that yield the maximum filtering responses are recorded to obtain volumes of fault strikes and dips. In doing so, we assume that a fault surface is locally planar, so that a 2D smoothing filter yields a maximum response when the smoothing plane coincides with a local fault plane. With the enhanced fault attribute volume and the estimated fault strike and dip volumes, we then compute oriented fault samples on the ridges of the enhanced fault attribute volume, each sample oriented by the estimated fault strike and dip. Fault surfaces can be constructed by directly linking the oriented fault samples with consistent fault strikes and dips. For complicated cases with missing or noisy fault samples, we further propose a perceptual grouping method to infer fault surfaces that reasonably fit the positions and orientations of the fault samples. We apply these methods to 3D synthetic and real examples and successfully extract multiple intersecting fault surfaces and complete fault surfaces without holes.
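
    As a rough illustration of the orientation scan described above, the following Python sketch records, at every sample, the maximum smoothing response over all strike/dip combinations. The helper smooth_plane is a hypothetical stand-in for the paper's oriented 2D exponential filters.

        import numpy as np

        def enhance_fault_attribute(attr, strikes, dips, smooth_plane):
            """attr: 3D fault-attribute volume; smooth_plane(attr, strike, dip)
            applies a 2D smoothing filter within the plane of that orientation."""
            best = np.full(attr.shape, -np.inf)
            best_strike = np.zeros(attr.shape)
            best_dip = np.zeros(attr.shape)
            for s in strikes:
                for d in dips:
                    resp = smooth_plane(attr, s, d)   # filtering-response volume
                    upd = resp > best                 # where this orientation wins
                    best[upd] = resp[upd]
                    best_strike[upd] = s
                    best_dip[upd] = d
            return best, best_strike, best_dip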

  6. Comparison of holographic and field theoretic complexities for time dependent thermofield double states

    NASA Astrophysics Data System (ADS)

    Yang, Run-Qiu; Niu, Chao; Zhang, Cheng-Yong; Kim, Keun-Young

    2018-02-01

    We compute the time-dependent complexity of thermofield double states using four different proposals: two holographic proposals based on the "complexity-action" (CA) and "complexity-volume" (CV) conjectures, and two quantum field theoretic proposals based on the Fubini-Study metric (FS) and Finsler geometry (FG). We find that the four proposals yield both similarities and differences, which will be useful for deepening our understanding of complexity and sharpening its definition. In particular, at early time the complexity increases linearly in the CV and FG proposals, decreases linearly in the FS proposal, and does not change in the CA proposal. In the late time limit, the CA, CV and FG proposals all show that the growth rate is 2E/(πℏ), saturating Lloyd's bound, while the FS proposal shows a growth rate of zero. It seems that the holographic CV conjecture and the field theoretic FG method are the most closely correlated.

  7. Walsh-Hadamard transform kernel-based feature vector for shot boundary detection.

    PubMed

    Lakshmi, Priya G G; Domnic, S

    2014-12-01

    Video shot boundary detection (SBD) is the first step of video analysis, summarization, indexing, and retrieval. In the SBD process, videos are segmented into basic units called shots. In this paper, a new SBD method is proposed using color, edge, texture, and motion strength as a vector of features (feature vector). Features are extracted by projecting the frames on selected basis vectors of the Walsh-Hadamard transform (WHT) kernel and the WHT matrix. After the features are extracted, weights are calculated based on the significance of each feature. The weighted features are combined to form a single continuity signal, used as input for the Procedure Based shot transition Identification process (PBI). Using this procedure, shot transitions are classified into abrupt and gradual transitions. Experimental results are examined using the large-scale test sets provided by TRECVID 2007, which evaluated hard cut and gradual transition detection. To evaluate the robustness of the proposed method, a system evaluation is performed. The proposed method yields an F1-score of 97.4% for cut, 78% for gradual, and 96.1% for overall transitions. We have also evaluated the proposed feature vector with a support vector machine classifier. The results show that WHT-based features perform better than other existing methods. In addition, a few more video sequences were taken from the Open Video Project, and the performance of the proposed method is compared with a recent existing SBD method.

  8. Application of Ionic Liquids in the Microwave-Assisted Extraction of Proanthocyanidins from Larix gmelini Bark

    PubMed Central

    Yang, Lei; Sun, Xiaowei; Yang, Fengjian; Zhao, Chunjian; Zhang, Lin; Zu, Yuangang

    2012-01-01

    Ionic-liquid-based, microwave-assisted extraction (ILMAE) was successfully applied to the extraction of proanthocyanidins from Larix gmelini bark. In this work, in order to evaluate the performance of ionic liquids in the microwave-assisted extraction process, a series of 1-alkyl-3-methylimidazolium ionic liquids with different cations and anions were evaluated for extraction yield, and 1-butyl-3-methylimidazolium bromide was selected as the optimal solvent. In addition, the ILMAE procedure for the proanthocyanidins was optimized and compared with other conventional extraction techniques. Under the optimized conditions, a satisfactory extraction yield of the proanthocyanidins was obtained. Relative to other methods, the proposed approach provided higher extraction yield and lower energy consumption. The Larix gmelini bark samples before and after extraction were analyzed by thermogravimetric analysis and Fourier-transform infrared spectroscopy and characterized by scanning electron microscopy. The results showed that the ILMAE method is a simple and efficient technique for sample preparation. PMID:22606036

  9. Novel Spectrophotometric Method for the Quantitation of Urinary Xanthurenic Acid and Its Application in Identifying Individuals with Hyperhomocysteinemia Associated with Vitamin B6 Deficiency

    PubMed Central

    Chen, Chi-Fen; Liu, Tsan-Zon; Lan, Wu-Hsiang; Wu, Li-An; Tsai, Chin-Hung; Chiou, Jeng-Fong; Tsai, Li-Yu

    2013-01-01

    A novel spectrophotometric method for the quantification of urinary xanthurenic acid (XA) is described. The direct acid ferric reduction (DAFR) procedure was used to quantify XA after it was purified on a solid-phase extraction column. The linearity of the proposed method extends from 2.5 to 100.0 mg/L. The method is precise, yielding day-to-day CVs for two pooled controls of 3.5% and 4.6%, respectively. Correlation studies with an established HPLC method and a fluorometric procedure both showed correlation coefficients of 0.98. Interference from various urinary metabolites was insignificant. In a small-scale screening of the elderly conducted in Penghu County, Taiwan (n = 80), we identified a group of twenty individuals with hyperhomocysteinemia (>15 μmol/L). Three of them were found to be positive for XA as analyzed by the proposed method, which correlated excellently with the results of the activation coefficient method for the RBC AST/B6 functional test. These data confirm the usefulness of the proposed method for identifying urinary XA as an indicator of vitamin B6 deficiency-associated hyperhomocysteinemia. PMID:24151616

  10. Simulating large-scale crop yield by using perturbed-parameter ensemble method

    NASA Astrophysics Data System (ADS)

    Iizumi, T.; Yokozawa, M.; Sakurai, G.; Nishimori, M.

    2010-12-01

    One of the concerns for food security under a changing climate is predicting the inter-annual variation of crop production induced by climate extremes and modulated climate. To secure the food supply for a growing world population, a methodology that can accurately predict crop yield on a large scale is needed. However, in developing a process-based large-scale crop model at the scale of general circulation models (GCMs), 100 km in latitude and longitude, researchers encounter difficulties with the spatial heterogeneity of available information on crop production, such as cultivated cultivars and management. This study proposed an ensemble-based simulation method that uses a process-based crop model and a systematic parameter perturbation procedure, taking maize in the U.S., China, and Brazil as examples. The crop model was developed by modifying the fundamental structure of the Soil and Water Assessment Tool (SWAT) to incorporate the effect of heat stress on yield. We called the new model PRYSBI: the Process-based Regional-scale Yield Simulator with Bayesian Inference. The posterior probability density function (PDF) of 17 parameters, which represents the crop- and grid-specific features of the crop and its uncertainty under the given data, was estimated by Bayesian inversion analysis. We then took 1500 ensemble members of simulated yield values, based on parameter sets sampled from the posterior PDF, to describe yearly changes of the yield, i.e. a perturbed-parameter ensemble method. The ensemble median for 27 years (1980-2006) was compared with data aggregated from county yields. On a country scale, the ensemble median of the simulated yield showed good correspondence with the reported yield: the Pearson's correlation coefficient is over 0.6 for all countries. On a grid scale, the correspondence remains high in most grids regardless of country. However, the model showed comparatively low reproducibility in sloped areas, such as around the Rocky Mountains in South Dakota, around the Great Xing'anling Mountains in Heilongjiang, and around the Brazilian Plateau. As local climate conditions range widely in complex terrain, the GCM grid-scale weather inputs are likely a major source of error. The results of this study highlight the benefits of the perturbed-parameter ensemble method in simulating crop yield on a GCM grid scale: (1) the posterior PDF of the parameters quantifies the uncertainty of the crop model's parameter values associated with local crop production; (2) the method can explicitly account for parameter uncertainty in the crop model simulations; (3) the method achieves a Monte Carlo approximation of the probability of sub-grid-scale yield, accounting for the nonlinear response of crop yield to weather and management; (4) the method is therefore appropriate for aggregating simulated sub-grid-scale yields to a grid-scale yield, which may explain the model's high performance in capturing inter-annual yield variation.
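
    The ensemble step itself is simple once a posterior parameter sample exists. A schematic Python sketch, where posterior_draws and simulate_yield are hypothetical stand-ins for the Bayesian posterior sample and the PRYSBI crop model:

        import numpy as np

        def ensemble_yield(posterior_draws, weather_by_year, simulate_yield,
                           n_members=1500, seed=0):
            rng = np.random.default_rng(seed)
            idx = rng.choice(len(posterior_draws), size=n_members)
            # Each member runs the crop model with one posterior parameter set
            members = np.array([[simulate_yield(posterior_draws[i], w)
                                 for w in weather_by_year] for i in idx])
            return np.median(members, axis=0)   # ensemble median per year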

  11. Trans-dimensional MCMC methods for fully automatic motion analysis in tagged MRI.

    PubMed

    Smal, Ihor; Carranza-Herrezuelo, Noemí; Klein, Stefan; Niessen, Wiro; Meijering, Erik

    2011-01-01

    Tagged magnetic resonance imaging (tMRI) is a well-known noninvasive method allowing quantitative analysis of regional heart dynamics. Its clinical use has so far been limited, in part due to the lack of robustness and accuracy of existing tag tracking algorithms in dealing with low (and intrinsically time-varying) image quality. In this paper, we propose a novel probabilistic method for tag tracking, implemented by means of Bayesian particle filtering and a trans-dimensional Markov chain Monte Carlo (MCMC) approach, which efficiently combines information about the imaging process and tag appearance with prior knowledge about the heart dynamics obtained by means of non-rigid image registration. Experiments using synthetic image data (with ground truth) and real data (with expert manual annotation) from preclinical (small animal) and clinical (human) studies confirm that the proposed method yields higher consistency, accuracy, and intrinsic tag reliability assessment in comparison with other frequently used tag tracking methods.

  12. 2D DOST based local phase pattern for face recognition

    NASA Astrophysics Data System (ADS)

    Moniruzzaman, Md.; Alam, Mohammad S.

    2017-05-01

    A new two-dimensional (2-D) Discrete Orthogonal Stockwell Transform (DOST) based Local Phase Pattern (LPP) technique has been proposed for efficient face recognition. The proposed technique uses the 2-D DOST as a preliminary preprocessing step and the local phase pattern to form a robust feature signature that can effectively accommodate various 3D facial distortions and illumination variations. The S-transform, an extension of the continuous wavelet transform (CWT), is known for its local spectral phase properties in time-frequency representation (TFR). It provides a frequency-dependent resolution of the time-frequency space and absolutely referenced local phase information while maintaining a direct relationship with the Fourier spectrum, which is unique in TFR. Using the 2-D S-transform as preprocessing and building the local phase pattern from the extracted phase information yields a fast and efficient technique for face recognition. The proposed technique shows better correlation discrimination compared to alternative pattern recognition techniques such as wavelet- or Gabor-based face recognition. The performance of the proposed method has been tested using the Yale and Extended Yale facial databases under different environments such as illumination variation and 3D changes in facial expressions. Test results show that the proposed technique yields better performance compared to alternative TFR-based face recognition techniques.

  13. Reflection full-waveform inversion using a modified phase misfit function

    NASA Astrophysics Data System (ADS)

    Cui, Chao; Huang, Jian-Ping; Li, Zhen-Chun; Liao, Wen-Yuan; Guan, Zhe

    2017-09-01

    Reflection full-waveform inversion (RFWI) updates the low- and high-wavenumber components and yields more accurate initial models than conventional full-waveform inversion (FWI). However, conventional RFWI is strongly nonlinear because of the lack of low-frequency data and the complexity of the amplitude. Separating phase and amplitude information makes RFWI more linear. Traditional phase-calculation methods suffer from severe phase wrapping. To solve this problem, we propose a modified phase-calculation method that uses phase-envelope data to obtain pseudo-phase information. We then establish a pseudo-phase-information-based objective function for RFWI, with the corresponding source and gradient terms. Numerical tests verify that the proposed calculation method using phase-envelope data guarantees the stability and accuracy of the phase information and the convergence of the objective function. An application to a portion of the Sigsbee2A model, and comparison with inversion results of the improved RFWI and conventional FWI methods, verifies that the pseudo-phase-based RFWI produces a highly accurate and efficient velocity model. Moreover, the proposed method is robust to noise and high frequencies.

  14. Excitation-resolved multispectral method for imaging pharmacokinetic parameters in dynamic fluorescent molecular tomography

    NASA Astrophysics Data System (ADS)

    Chen, Maomao; Zhou, Yuan; Su, Han; Zhang, Dong; Luo, Jianwen

    2017-04-01

    Imaging of pharmacokinetic parameters in dynamic fluorescence molecular tomography (DFMT) can provide three-dimensional metabolic information for biological studies and drug development. However, owing to the ill-posed nature of the FMT inverse problem, the relatively low quality of the parametric images makes it difficult to investigate the different metabolic processes of fluorescent targets separated by small distances. An excitation-resolved multispectral DFMT method is proposed; it is based on the fact that fluorescent targets with different concentrations show different variations in the excitation spectral domain and can be considered independent signal sources. With an independent component analysis method, the spatial locations of different fluorescent targets can be decomposed, and the fluorescence yields of the targets at different time points can be recovered. Therefore, the metabolic process of each component can be investigated independently. Simulations and phantom experiments were carried out to evaluate the performance of the proposed method. The results demonstrate that the proposed excitation-resolved multispectral method can effectively improve the reconstruction accuracy of the parametric images in DFMT.

  15. Drogue pose estimation for unmanned aerial vehicle autonomous aerial refueling system based on infrared vision sensor

    NASA Astrophysics Data System (ADS)

    Chen, Shanjun; Duan, Haibin; Deng, Yimin; Li, Cong; Zhao, Guozhi; Xu, Yan

    2017-12-01

    Autonomous aerial refueling is a key technology that can significantly extend the endurance of unmanned aerial vehicles. A reliable method that can accurately estimate the position and attitude of the probe relative to the drogue is the key to such a capability. A drogue pose estimation method based on an infrared vision sensor is introduced with the general goal of yielding an accurate and reliable drogue state estimate. First, by employing direct least-squares ellipse fitting and the convex hull in OpenCV, a feature point matching and interference point elimination method is proposed. In addition, considering conditions in which some infrared LEDs are damaged or occluded, a missing point estimation method based on perspective transformation and affine transformation is designed. Finally, an accurate and robust pose estimation algorithm improved by the runner-root algorithm is proposed. The feasibility of the designed visual measurement system is demonstrated by flight tests, and the results indicate that our proposed method enables precise and reliable pose estimation of the probe relative to the drogue, even under poor conditions.

  16. Low-redundancy linear arrays in mirrored interferometric aperture synthesis.

    PubMed

    Zhu, Dong; Hu, Fei; Wu, Liang; Li, Jun; Lang, Liang

    2016-01-15

    Mirrored interferometric aperture synthesis (MIAS) is a novel interferometry that can improve spatial resolution compared with that of conventional IAS. In one-dimensional (1-D) MIAS, an antenna array with low redundancy has the potential to achieve high spatial resolution. This Letter presents a technique for the direct construction of low-redundancy linear arrays (LRLAs) in MIAS and derives two regular analytical patterns that can yield various LRLAs in a short computation time. Moreover, for a better estimation of the observed scene, a bi-measurement method is proposed to handle the rank defect associated with the transfer matrix of those LRLAs. The results of imaging simulations demonstrate the effectiveness of the proposed method.

  17. Non-recursive augmented Lagrangian algorithms for the forward and inverse dynamics of constrained flexible multibodies

    NASA Technical Reports Server (NTRS)

    Bayo, Eduardo; Ledesma, Ragnar

    1993-01-01

    A technique is presented for solving the inverse dynamics of flexible planar multibody systems. This technique yields the non-causal joint efforts (inverse dynamics) as well as the internal states (inverse kinematics) that produce a prescribed nominal trajectory of the end effector. A non-recursive global Lagrangian approach is used in formulating the equations of motion as well as in solving the inverse dynamics equations. Contrary to the recursive method previously presented, the proposed method solves the inverse problem in a systematic and direct manner for both open-chain and closed-chain configurations. Numerical simulation shows that the proposed procedure provides excellent tracking of the desired end effector trajectory.

  18. Fast retinal layer segmentation of spectral domain optical coherence tomography images

    NASA Astrophysics Data System (ADS)

    Zhang, Tianqiao; Song, Zhangjun; Wang, Xiaogang; Zheng, Huimin; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Hu, Qingmao

    2015-09-01

    An approach to segment macular layer thicknesses from spectral domain optical coherence tomography has been proposed. The main contribution is to decrease computational costs while maintaining high accuracy via exploring Kalman filtering, customized active contour, and curve smoothing. Validation on 21 normal volumes shows that 8 layer boundaries could be segmented within 5.8 s with an average layer boundary error <2.35 μm. It has been compared with state-of-the-art methods for both normal and age-related macular degeneration cases to yield similar or significantly better accuracy and is 37 times faster. The proposed method could be a potential tool to clinically quantify the retinal layer boundaries.

  19. Multinomial mixture model with heterogeneous classification probabilities

    USGS Publications Warehouse

    Holland, M.D.; Gray, B.R.

    2011-01-01

    Royle and Link (Ecology 86(9):2505-2512, 2005) proposed an analytical method that allowed estimation of multinomial distribution parameters and classification probabilities from categorical data measured with error. While useful, we demonstrate algebraically and by simulations that this method yields biased multinomial parameter estimates when the probabilities of correct category classification vary among sampling units. We address this shortcoming by treating these probabilities as logit-normal random variables within a Bayesian framework. We use Markov chain Monte Carlo to compute Bayes estimates from a simulated sample from the posterior distribution. Based on simulations, this elaborated Royle-Link model yields nearly unbiased estimates of the multinomial and correct classification probabilities when classification probabilities are allowed to vary according to the normal distribution on the logit scale or according to the beta distribution. The method is illustrated using categorical submersed aquatic vegetation data. © 2010 Springer Science+Business Media, LLC.

  20. Improving Pharmaceutical Protein Production in Oryza sativa

    PubMed Central

    Kuo, Yu-Chieh; Tan, Chia-Chun; Ku, Jung-Ting; Hsu, Wei-Cho; Su, Sung-Chieh; Lu, Chung-An; Huang, Li-Fen

    2013-01-01

    Application of plant expression systems to the production of recombinant proteins has several advantages, such as low maintenance cost, absence of human pathogens, and the capability for complex post-translational glycosylation. Plants have been successfully used to produce recombinant cytokines, vaccines, antibodies, and other proteins, and rice (Oryza sativa) is a promising plant host for recombinant protein expression. After successful transformation, transgenic rice cells can be either regenerated into whole plants or grown as cell cultures that can be upscaled into bioreactors. This review summarizes recent advances in the production of different recombinant proteins in rice and describes their production methods as well as methods to improve protein yield and quality. Glycosylation and its impact on plant development and protein production are discussed, and several methods of improving yield and quality that have not yet been incorporated into rice expression systems are also proposed. Finally, different bioreactor options are explored and their advantages analyzed. PMID:23615467

  1. A Support Vector Machine-Based Gender Identification Using Speech Signal

    NASA Astrophysics Data System (ADS)

    Lee, Kye-Hwan; Kang, Sang-Ick; Kim, Deok-Hwan; Chang, Joon-Hyuk

    We propose an effective voice-based gender identification method using a support vector machine (SVM). The SVM is a binary classification algorithm that separates two groups by finding a nonlinear decision boundary in a feature space and is known to yield high classification performance. In the present work, we compare the identification performance of the SVM with that of a Gaussian mixture model (GMM)-based method using mel frequency cepstral coefficients (MFCC). A novel approach incorporating a feature fusion scheme based on a combination of the MFCC and the fundamental frequency is proposed with the aim of improving the performance of gender identification. Experimental results demonstrate that the gender identification performance using the SVM is significantly better than that of the GMM-based scheme. Moreover, the performance is substantially improved when the proposed feature fusion technique is applied.
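
    A minimal sketch of the feature-fusion idea with an SVM classifier, using synthetic stand-ins for real MFCC and fundamental-frequency features (not the authors' data or exact pipeline):

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Stand-ins for per-utterance features: 13 averaged MFCCs plus F0 (Hz)
        mfcc = rng.normal(size=(400, 13))
        f0 = np.where(rng.random(400) < 0.5, rng.normal(120, 20, 400),
                      rng.normal(210, 25, 400))
        labels = (f0 > 165).astype(int)          # toy gender labels

        X = np.hstack([mfcc, f0[:, None]])       # feature fusion: MFCC + F0
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
        clf.fit(X[:300], labels[:300])
        print("accuracy:", clf.score(X[300:], labels[300:]))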

  2. Improved neural network based scene-adaptive nonuniformity correction method for infrared focal plane arrays.

    PubMed

    Lai, Rui; Yang, Yin-tang; Zhou, Duan; Li, Yue-jin

    2008-08-20

    An improved scene-adaptive nonuniformity correction (NUC) algorithm for infrared focal plane arrays (IRFPAs) is proposed. This method simultaneously estimates the infrared detectors' parameters and eliminates the fixed pattern noise (FPN) caused by nonuniformity using a neural network (NN) approach. In the learning process of neuron parameter estimation, the traditional LMS algorithm is replaced with a newly presented variable step size (VSS) normalized least-mean-square (NLMS) adaptive filtering algorithm, which yields faster convergence, smaller misadjustment, and lower computational cost. In addition, a new NN structure is designed to estimate the desired target value, which considerably improves the calibration precision. The proposed NUC method achieves high correction performance, which is validated by experimental results quantitatively tested on a simulated test sequence and a real infrared image sequence.
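
    A simplified sketch of a scene-based NN NUC update with NLMS-style normalization. The fixed step size and the 3x3 local mean below are crude stand-ins for the paper's variable-step-size rule and improved target-estimation structure.

        import numpy as np
        from scipy.ndimage import uniform_filter

        def nuc_step(frame, gain, offset, mu=0.05, eps=1e-6):
            corrected = gain * frame + offset
            target = uniform_filter(corrected, size=3)  # desired (smooth) value
            err = corrected - target                    # residual fixed pattern
            norm = frame * frame + eps                  # NLMS normalization term
            gain -= mu * err * frame / norm
            offset -= mu * err / norm
            return corrected, gain, offset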

  3. Single image super-resolution via regularized extreme learning regression for imagery from microgrid polarimeters

    NASA Astrophysics Data System (ADS)

    Sargent, Garrett C.; Ratliff, Bradley M.; Asari, Vijayan K.

    2017-08-01

    The advantage of division-of-focal-plane imaging polarimeters is their ability to obtain temporally synchronized intensity measurements across a scene; however, they sacrifice spatial resolution in doing so because of the spatially modulated arrangement of the pixel-to-pixel polarizers, which often results in aliased imagery. Here, we propose a super-resolution method based upon two previously trained extreme learning machines (ELM) that attempt to recover missing high-frequency and low-frequency content beyond the spatial resolution of the sensor. This method yields a computationally fast and simple way of recovering lost high- and low-frequency content when demosaicing raw microgrid polarimetric imagery. The proposed method outperforms other state-of-the-art single-image super-resolution algorithms in terms of structural similarity and peak signal-to-noise ratio.
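
    The regularized extreme learning regression at the core of such a method can be sketched as follows: a generic ELM with a fixed random hidden layer and ridge-regularized output weights (illustrative, not the authors' trained models):

        import numpy as np

        def elm_fit(X, Y, n_hidden=200, lam=1e-3, seed=0):
            rng = np.random.default_rng(seed)
            W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (fixed)
            b = rng.normal(size=n_hidden)
            H = np.tanh(X @ W + b)                       # random nonlinear feature map
            # Ridge-regularized least squares for the output weights
            beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
            return W, b, beta

        def elm_predict(X, W, b, beta):
            return np.tanh(X @ W + b) @ beta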

  4. Low-dimensional approximation searching strategy for transfer entropy from non-uniform embedding

    PubMed Central

    2018-01-01

    Transfer entropy from non-uniform embedding is a popular tool for the inference of causal relationships among dynamical subsystems. In this study we present an approach that makes use of low-dimensional conditional mutual information quantities to decompose the original high-dimensional conditional mutual information in the searching procedure of non-uniform embedding for significant variables at different lags. We perform a series of simulation experiments to assess the sensitivity and specificity of our proposed method to demonstrate its advantage compared to previous algorithms. The results provide concrete evidence that low-dimensional approximations can help to improve the statistical accuracy of transfer entropy in multivariate causality analysis and yield a better performance over other methods. The proposed method is especially efficient as the data length grows. PMID:29547669

  5. Partial branch and bound algorithm for improved data association in multiframe processing

    NASA Astrophysics Data System (ADS)

    Poore, Aubrey B.; Yan, Xin

    1999-07-01

    A central problem in multitarget, multisensor, and multiplatform tracking remains that of data association. Lagrangian relaxation methods have been shown to yield near-optimal answers in real time. The need to improve the quality of these solutions warrants continuing interest in these methods. These problems are NP-hard; the only known methods for solving them optimally are enumerative in nature, with branch-and-bound being the most efficient. Thus, methods short of a full branch-and-bound are needed to improve solution quality. Methods such as K-best, local search, and randomized search have been proposed to improve the quality of the relaxation solution. Here, a partial branch-and-bound technique along with adequate branching and ordering rules is developed. Lagrangian relaxation is used as a branching method and as a method to calculate the lower bound for subproblems. The results show that the branch-and-bound framework greatly improves the resolution quality of the Lagrangian relaxation algorithm and yields better multiple solutions in less time than relaxation alone.

  6. Role of weakest links and system-size scaling in multiscale modeling of stochastic plasticity

    NASA Astrophysics Data System (ADS)

    Ispánovity, Péter Dusán; Tüzes, Dániel; Szabó, Péter; Zaiser, Michael; Groma, István

    2017-02-01

    Plastic deformation of crystalline and amorphous matter often involves intermittent local strain burst events. To understand the physical background of the phenomenon a minimal stochastic mesoscopic model was introduced, where details of the microstructure evolution are statistically represented in terms of a fluctuating local yield threshold. In the present paper we propose a method for determining the corresponding yield stress distribution for the case of crystal plasticity from lower scale discrete dislocation dynamics simulations which we combine with weakest link arguments. The success of scale linking is demonstrated by comparing stress-strain curves obtained from the resulting mesoscopic and the underlying discrete dislocation models in the microplastic regime. As shown by various scaling relations they are statistically equivalent and behave identically in the thermodynamic limit. The proposed technique is expected to be applicable to different microstructures and also to amorphous materials.

  7. 76 FR 805 - Self-Regulatory Organizations; NYSE Arca, Inc.; Notice of Filing of Proposed Rule Change Relating...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-06

    ... Trading Shares of the SPDR Nuveen S&P High Yield Municipal Bond ETF December 30, 2010. Pursuant to Section... Change The Exchange proposes to list and trade shares of the SPDR Nuveen S&P High Yield Municipal Bond... for, the Proposed Rule Change 1. Purpose The Exchange proposes to list and trade shares (``Shares...

  8. Model predictive controller design for boost DC-DC converter using T-S fuzzy cost function

    NASA Astrophysics Data System (ADS)

    Seo, Sang-Wha; Kim, Yong; Choi, Han Ho

    2017-11-01

    This paper proposes a Takagi-Sugeno (T-S) fuzzy method for selecting the cost function weights of finite control set model predictive DC-DC converter control algorithms. The proposed method updates the cost function weights at every sample time using T-S-type fuzzy rules derived from the common optimal-control engineering knowledge that a state or input variable with an excessively large magnitude can be penalised by increasing the weight corresponding to that variable. The best control input is determined via online optimisation of the T-S fuzzy cost function over all possible control input sequences. The paper implements the proposed model predictive control algorithm in real time on a Texas Instruments TMS320F28335 floating-point Digital Signal Processor (DSP). Experimental results are given to illustrate the practicality and effectiveness of the proposed control system under several operating conditions. The results verify that the method yields not only good transient and steady-state responses (fast recovery time, small overshoot, zero steady-state error, etc.) but also insensitivity to abrupt load or input voltage parameter variations.
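
    One step of a finite-control-set predictive controller with state-dependent weights might look like the sketch below; predict is a hypothetical one-step converter model, and the weight rule is a crude stand-in for the paper's T-S fuzzy inference.

        def fcs_mpc_step(x, v_ref, i_ref, predict):
            # Larger voltage/current error -> larger weight (T-S-like rule)
            w_v = 1.0 + abs(x["v"] - v_ref)
            w_i = 1.0 + abs(x["i"] - i_ref)
            best_u, best_cost = None, float("inf")
            for u in (0, 1):                    # boost switch: off / on
                xp = predict(x, u)              # predicted state at next sample
                cost = w_v * (xp["v"] - v_ref) ** 2 + w_i * (xp["i"] - i_ref) ** 2
                if cost < best_cost:
                    best_u, best_cost = u, cost
            return best_u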

  9. Trends in Spending on Training: An Analysis of the 1982 through 2008 Training Annual Industry Reports

    ERIC Educational Resources Information Center

    Carliner, Saul; Bakir, Ingy

    2010-01-01

    This article explores long-term trends in spending using data compiled from the "Training" magazine Annual Industry Survey from 1982 through 2008. It builds on literature that proposes spending on training is an investment that yields benefits--and that offers methods for demonstrating it. After adjusting for inflation, aggregate spending on…

  10. Optimization of processing parameters of UAV integral structural components based on yield response

    NASA Astrophysics Data System (ADS)

    Chen, Yunsheng

    2018-05-01

    To improve the overall strength of unmanned aerial vehicles (UAVs), it is necessary to optimize the machining parameters of UAV structural components, which are affected by initial residual stress during machining. Because machining errors occur easily, an optimization model for the machining parameters of UAV integral structural components based on yield response is proposed. The finite element method is used to simulate the machining parameters of UAV integral structural components. A prediction model of workpiece surface machining error is established, and the influence of the tool path on the residual stress of the UAV integral structure is studied according to the stress state of the component. The yield response of the time-varying stiffness is analyzed, together with the stress evolution mechanism of the UAV integral structure. The simulation results show that this method optimizes the machining parameters of UAV integral structural components and improves the precision of UAV milling; machining error is reduced, and deformation prediction and error compensation of UAV integral structural parts are realized, thus improving machining quality.

  11. A Metric for Reducing False Positives in the Computer-Aided Detection of Breast Cancer from Dynamic Contrast-Enhanced Magnetic Resonance Imaging Based Screening Examinations of High-Risk Women.

    PubMed

    Levman, Jacob E D; Gallego-Ortiz, Cristina; Warner, Ellen; Causer, Petrina; Martel, Anne L

    2016-02-01

    Magnetic resonance imaging (MRI)-enabled cancer screening has been shown to be a highly sensitive method for the early detection of breast cancer. Computer-aided detection systems have the potential to improve the screening process by standardizing radiologists to a high level of diagnostic accuracy. This retrospective study was approved by the institutional review board of Sunnybrook Health Sciences Centre. This study compares the performance of a proposed method for computer-aided detection (based on the second-order spatial derivative of the relative signal intensity) with the signal enhancement ratio (SER) on MRI-based breast screening examinations. Comparison is performed using receiver operating characteristic (ROC) curve analysis as well as free-response receiver operating characteristic (FROC) curve analysis. A modified computer-aided detection system combining the proposed approach with the SER method is also presented. The proposed method provides improvements in the rates of false positive markings over the SER method in the detection of breast cancer (as assessed by FROC analysis). The modified computer-aided detection system that incorporates both the proposed method and the SER method yields ROC results equal to that produced by SER while simultaneously providing improvements over the SER method in terms of false positives per noncancerous exam. The proposed method for identifying malignancies outperforms the SER method in terms of false positives on a challenging dataset containing many small lesions and may play a useful role in breast cancer screening by MRI as part of a computer-aided detection system.

  12. Legendre-tau approximation for functional differential equations. Part 2: The linear quadratic optimal control problem

    NASA Technical Reports Server (NTRS)

    Ito, K.; Teglas, R.

    1984-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  13. Legendre-tau approximation for functional differential equations. II - The linear quadratic optimal control problem

    NASA Technical Reports Server (NTRS)

    Ito, Kazufumi; Teglas, Russell

    1987-01-01

    The numerical scheme based on the Legendre-tau approximation is proposed to approximate the feedback solution to the linear quadratic optimal control problem for hereditary differential systems. The convergence property is established using Trotter ideas. The method yields very good approximations at low orders and provides an approximation technique for computing closed-loop eigenvalues of the feedback system. A comparison with existing methods (based on averaging and spline approximations) is made.

  14. Thermal conductivity of catalyst layer of polymer electrolyte membrane fuel cells: Part 1 - Experimental study

    NASA Astrophysics Data System (ADS)

    Ahadi, Mohammad; Tam, Mickey; Saha, Madhu S.; Stumper, Jürgen; Bahrami, Majid

    2017-06-01

    In this work, a new methodology is proposed for measuring the through-plane thermal conductivity of catalyst layers (CLs) in polymer electrolyte membrane fuel cells. The proposed methodology is based on deconvolving the bulk thermal conductivity of a CL from measurements at two thicknesses of the CL, where the CLs are sandwiched in a stack made of two catalyst-coated substrates. The effects of hot-pressing, compression, measurement method, and substrate on the through-plane thermal conductivity of the CL are studied. For this purpose, different thicknesses of catalyst are coated on ethylene tetrafluoroethylene (ETFE) and aluminum (Al) substrates by a conventional Mayer bar coater and measured by scanning electron microscopy (SEM). The through-plane thermal conductivity of the CLs is measured by the well-known guarded heat flow (GHF) method as well as a recently developed transient plane source (TPS) method for thin films, which modifies the original TPS thin-film method. Measurements show that none of the studied factors affects the through-plane thermal conductivity of the CL. GHF measurements of a non-hot-pressed CL on Al yield a thermal conductivity of 0.214 ± 0.005 W·m⁻¹·K⁻¹, and TPS measurements of a hot-pressed CL on ETFE yield a thermal conductivity of 0.218 ± 0.005 W·m⁻¹·K⁻¹.
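
    A common way to formulate such a two-thickness deconvolution, assuming the substrate and contact resistances are the same for both stacks (symbols here are illustrative, not the paper's):

        % R_i: measured area-specific thermal resistance with CL thickness t_i;
        % R_0: combined substrate and contact resistance, assumed common.
        R_i = \frac{t_i}{k_{\mathrm{CL}}} + R_0, \quad i = 1, 2
        \qquad\Longrightarrow\qquad
        k_{\mathrm{CL}} = \frac{t_2 - t_1}{R_2 - R_1}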

  15. Detect2Rank: Combining Object Detectors Using Learning to Rank.

    PubMed

    Karaoglu, Sezer; Yang Liu; Gevers, Theo

    2016-01-01

    Object detection is an important research area in the field of computer vision. Many detection algorithms have been proposed. However, each object detector relies on specific assumptions of the object appearance and imaging conditions. As a consequence, no algorithm can be considered universal. With the large variety of object detectors, the subsequent question is how to select and combine them. In this paper, we propose a framework to learn how to combine object detectors. The proposed method uses (single) detectors like Deformable Part Models, Color Names and Ensemble of Exemplar-SVMs, and exploits their correlation by high-level contextual features to yield a combined detection list. Experiments on the PASCAL VOC07 and VOC10 data sets show that the proposed method significantly outperforms single object detectors, DPM (8.4%), CN (6.8%) and EES (17.0%) on VOC07 and DPM (6.5%), CN (5.5%) and EES (16.2%) on VOC10. We show with an experiment that there are no constraints on the type of the detector. The proposed method outperforms (2.4%) the state-of-the-art object detector (RCNN) on VOC07 when Regions with Convolutional Neural Network is combined with other detectors used in this paper.

  16. Hippocampus Segmentation Based on Local Linear Mapping

    PubMed Central

    Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin

    2017-01-01

    We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF), to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively. PMID:28368016
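
    The local linear representation step can be sketched as ridge-regularized least squares over the k nearest dictionary atoms; the parameter choices here are illustrative, not the authors':

        import numpy as np

        def predict_df(x, mr_dict, df_dict, k=10, lam=1e-3):
            """x: test MR patch (dim,); mr_dict/df_dict: (N, dim)/(N, dim_df)."""
            d = np.linalg.norm(mr_dict - x, axis=1)
            nn = np.argsort(d)[:k]                 # k nearest MR atoms
            A = mr_dict[nn].T                      # (dim, k)
            w = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ x)
            return df_dict[nn].T @ w               # reuse coefficients on DF atoms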

  17. Hippocampus Segmentation Based on Local Linear Mapping.

    PubMed

    Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin

    2017-04-03

    We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF), to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively.

  18. A thermodynamically consistent discontinuous Galerkin formulation for interface separation

    DOE PAGES

    Versino, Daniele; Mourad, Hashem M.; Dávila, Carlos G.; ...

    2015-07-31

    Our paper describes the formulation of an interface damage model, based on the discontinuous Galerkin (DG) method, for the simulation of failure and crack propagation in laminated structures. The DG formulation avoids common difficulties associated with cohesive elements. Specifically, it does not introduce any artificial interfacial compliance and, in explicit dynamic analysis, it leads to a stable time increment size that is unaffected by the presence of stiff massless interfaces. The proposed method is implemented in a finite element setting. Convergence and accuracy are demonstrated in Mode I and mixed-mode delamination in both static and dynamic analyses. Significantly, numerical results obtained using the proposed interface model are found to be independent of the value of the penalty factor that characterizes the DG formulation. By contrast, numerical results obtained using a classical cohesive method are found to depend on the cohesive penalty stiffnesses. Because of this advantage, the proposed approach is shown to yield more accurate predictions of crack propagation under mixed-mode fracture. Furthermore, in explicit dynamic analysis, the stable time increment size calculated with the proposed method is found to be an order of magnitude larger than the maximum allowable value for classical cohesive elements.

  19. Hippocampus Segmentation Based on Local Linear Mapping

    NASA Astrophysics Data System (ADS)

    Pang, Shumao; Jiang, Jun; Lu, Zhentai; Li, Xueli; Yang, Wei; Huang, Meiyan; Zhang, Yu; Feng, Yanqiu; Huang, Wenhua; Feng, Qianjin

    2017-04-01

    We propose local linear mapping (LLM), a novel fusion framework for the distance field (DF), to perform automatic hippocampus segmentation. A k-means clustering method is proposed for constructing magnetic resonance (MR) and DF dictionaries. In LLM, we assume that the MR and DF samples are located on two nonlinear manifolds and that the mapping from the MR manifold to the DF manifold is differentiable and locally linear. We combine the MR dictionary using local linear representation to represent the test sample, and combine the DF dictionary using the corresponding coefficients derived from the local linear representation procedure to predict the DF of the test sample. We then merge the overlapping predicted DF patches to obtain the DF value of each point in the test image via a confidence-based weighted average method. This approach enables us to estimate the label of the test image according to the predicted DF. The proposed method was evaluated on brain images of 35 subjects obtained from the SATA dataset. Results indicate the effectiveness of the proposed method, which yields mean Dice similarity coefficients of 0.8697, 0.8770 and 0.8734 for the left, right and bilateral hippocampus, respectively.

  20. GM(1,N) method for the prediction of anaerobic digestion system and sensitivity analysis of influential factors.

    PubMed

    Ren, Jingzheng

    2018-01-01

    The anaerobic digestion process has been recognized as a promising way to treat waste and recover energy sustainably. Modelling of the anaerobic digestion system is important for effectively and accurately controlling, adjusting, and predicting the system for higher methane yield. The GM(1,N) approach, which requires neither a mechanistic model nor a large number of samples, was employed to model the anaerobic digestion system and predict methane yield. To illustrate the proposed model, a case study of anaerobic digestion of municipal solid waste for methane yield was conducted, and the results demonstrate that the GM(1,N) model can effectively simulate the anaerobic digestion system under poor-information conditions with little computational expense. Copyright © 2017 Elsevier Ltd. All rights reserved.
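
    For intuition, the single-variable special case GM(1,1) shows the accumulation and least-squares steps that GM(1,N) extends to multiple input series (a generic textbook sketch, not the paper's code):

        import numpy as np

        def gm11_forecast(x0, steps=1):
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                    # accumulated generating operation
            z1 = 0.5 * (x1[1:] + x1[:-1])         # background values
            B = np.column_stack([-z1, np.ones(len(z1))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(1, len(x0) + steps)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            x0_hat = np.diff(np.concatenate([[x0[0]], x1_hat]))
            return x0_hat[-steps:]                # forecasts beyond the data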

  1. Microwave-assisted extraction of coumarin and related compounds from Melilotus officinalis (L.) Pallas as an alternative to Soxhlet and ultrasound-assisted extraction.

    PubMed

    Martino, Emanuela; Ramaiola, Ilaria; Urbano, Mariangela; Bracco, Francesco; Collina, Simona

    2006-09-01

    Soxhlet extraction, ultrasound-assisted extraction (USAE) and microwave-assisted extraction (MAE) in a closed system were investigated to determine the content of coumarin, o-coumaric acid and melilotic acid in flowering tops of Melilotus officinalis. The extracts were analyzed with an appropriate HPLC procedure, and the reproducibility of the extraction and of the chromatographic analysis was verified. Taking into account the extraction yield, cost, and time, we studied the effects of the extraction variables on the yield of the above-mentioned compounds. The best results were obtained with MAE (50% v/v aqueous ethanol, two heating cycles of 5 min, 50 °C). On the basis of the ratio of extraction yield to extraction time, we therefore propose MAE as the most efficient method.

  2. Dipeptide Formation from Amino Acid Monomer Induced by keV Ion Irradiation: An Implication for Physicochemical Repair by Radiation Itself

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Yuan, Hang; Wang, Xiangqin; Yu, Zengliang

    2008-02-01

    An identification of Phe dipeptide from L-phenylalanine monomers after keV nitrogen and argon ion implantation, using HPLC (high-performance liquid chromatography) and LC-MS (liquid chromatography-mass spectrometry), is reported. The results showed similar yield behavior for both ion species, namely: 1) the yield of dipeptides under alkalescent conditions was distinctly higher than that under acidic or neutral conditions; 2) for different ion species, the dose-yield curves tracked a similar trend, called a counter-saddle curve. The dipeptide formation may implicate a recombination repair mechanism for the damaged biomolecules that energetic ions leave in their wake. Accordingly, a physicochemical self-repair mechanism by radiation itself is proposed for ion-beam radiobiological effects.

  3. Multiview echocardiography fusion using an electromagnetic tracking system.

    PubMed

    Punithakumar, Kumaradevan; Hareendranathan, Abhilash R; Paakkanen, Riitta; Khan, Nehan; Noga, Michelle; Boulanger, Pierre; Becher, Harald

    2016-08-01

    Three-dimensional ultrasound is an emerging modality for the assessment of complex cardiac anatomy and function. The advantages of this modality include lack of ionizing radiation, portability, low cost, and high temporal resolution. Major limitations include limited field-of-view, reliance on frequently limited acoustic windows, and poor signal to noise ratio. This study proposes a novel approach to combine multiple views into a single image using an electromagnetic tracking system in order to improve the field-of-view. The novel method has several advantages: 1) it does not rely on image information for alignment, and therefore, the method does not require image overlap; 2) the alignment accuracy of the proposed approach is not affected by any poor image quality as in the case of image registration based approaches; 3) in contrast to previous optical tracking based system, the proposed approach does not suffer from line-of-sight limitation; and 4) it does not require any initial calibration. In this pilot project, we were able to show that using a heart phantom, our method can fuse multiple echocardiographic images and improve the field-of view. Quantitative evaluations showed that the proposed method yielded a nearly optimal alignment of image data sets in three-dimensional space. The proposed method demonstrates the electromagnetic system can be used for the fusion of multiple echocardiography images with a seamless integration of sensors to the transducer.

  4. Numerical simulations of incompressible laminar flows using viscous-inviscid interaction procedures

    NASA Astrophysics Data System (ADS)

    Shatalov, Alexander V.

    The present method is based on the Helmholtz velocity decomposition, in which the velocity is written as the sum of an irrotational component (the gradient of a potential) and a rotational component (a correction due to vorticity). Substituting the decomposition into the continuity equation yields an equation for the potential, while substituting it into the momentum equations yields equations for the velocity corrections. A continuation approach is used to relate the pressure to the gradient of the potential through a modified Bernoulli's law, which allows the pressure variable to be eliminated from the momentum equations. The present work considers steady and unsteady two-dimensional incompressible flows over an infinite cylinder and a NACA 0012 airfoil. The numerical results are compared against standard methods (stream function-vorticity and SMAC methods) and data available in the literature. The results demonstrate that the proposed formulation leads to a good approximation, with some possible benefits compared to the available formulations. The method is not restricted to two-dimensional flows and can be used for viscous-inviscid domain decomposition calculations.
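
    In schematic form (notation illustrative), the decomposition and the potential equation obtained from continuity read:

        % u = irrotational part + rotational correction
        \mathbf{u} = \nabla\phi + \mathbf{u}'
        % incompressibility then yields a Poisson equation for the potential
        \nabla\cdot\mathbf{u} = 0 \;\Longrightarrow\; \nabla^{2}\phi = -\nabla\cdot\mathbf{u}'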

  5. Using pre-screening methods for an effective and reliable site characterization at megasites.

    PubMed

    Algreen, Mette; Kalisz, Mariusz; Stalder, Marcel; Martac, Eugeniu; Krupanek, Janusz; Trapp, Stefan; Bartke, Stephan

    2015-10-01

    This paper illustrates the usefulness of pre-screening methods for an effective characterization of polluted sites. We applied a sequence of site characterization methods to a former Soviet military airbase with likely fuel and benzene, toluene, ethylbenzene, and xylene (BTEX) contamination in shallow groundwater and subsoil. The methods were (i) phytoscreening with tree cores; (ii) soil gas measurements for CH4, O2, and photoionization detector (PID); (iii) direct-push with membrane interface probe (MIP) and laser-induced fluorescence (LIF) sensors; (iv) direct-push sampling; and (v) sampling from soil and from groundwater monitoring wells. Phytoscreening and soil gas measurements are rapid and inexpensive pre-screening methods. Both indicated subsurface pollution and hot spots successfully. The direct-push sensors yielded 3D information about the extension and the volume of the subsurface plume. This study also expanded the applicability of tree coring to BTEX compounds and tested the use of high-resolution direct-push sensors for light hydrocarbons. Comparison of screening results to results from conventional soil and groundwater sampling yielded in most cases high rank correlation and confirmed the findings. The large-scale application of non- or low-invasive pre-screening can be of help in directing and focusing the subsequent, more expensive investigation methods. The rapid pre-screening methods also yielded useful information about potential remediation methods. Overall, we see several benefits of a stepwise screening and site characterization scheme, which we propose in conclusion.

  6. Validation of a T1 and T2* leakage correction method based on multi-echo DSC-MRI using MION as a reference standard

    PubMed Central

    Stokes, Ashley M.; Semmineh, Natenael; Quarles, C. Chad

    2015-01-01

    Purpose A combined biophysical- and pharmacokinetic-based method is proposed to separate, quantify, and correct for both T1 and T2* leakage effects using dual-echo DSC acquisitions to provide more accurate hemodynamic measures, as validated by a reference intravascular contrast agent (CA). Methods Dual-echo DSC-MRI data were acquired in two rodent glioma models. The T1 leakage effects were removed and also quantified in order to subsequently correct for the remaining T2* leakage effects. Pharmacokinetic, biophysical, and combined biophysical and pharmacokinetic models were used to obtain corrected cerebral blood volume (CBV) and cerebral blood flow (CBF), and these were compared with CBV and CBF from an intravascular CA. Results T1-corrected CBV was significantly overestimated compared to MION CBV, while T1+T2*-correction yielded CBV values closer to the reference values. The pharmacokinetic and simplified biophysical methods showed similar results and underestimated CBV in tumors exhibiting strong T2* leakage effects. The combined method was effective for correcting T1 and T2* leakage effects across tumor types. Conclusions Correcting for both T1 and T2* leakage effects yielded more accurate measures of CBV. The combined correction method yields more reliable CBV measures than either correction method alone, but for certain brain tumor types (e.g., gliomas) the simplified biophysical method may provide a robust and computationally efficient alternative. PMID:26362714

  7. Hierarchical semi-numeric method for pairwise fuzzy group decision making.

    PubMed

    Marimin, M; Umano, M; Hatono, I; Tamura, H

    2002-01-01

    Gradual improvements to a single-level semi-numeric method, i.e., linguistic label preference representation by fuzzy-set computation for pairwise fuzzy group decision making, are summarized. The method is extended to solve multiple-criteria hierarchical-structure pairwise fuzzy group decision-making problems. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria and of alternatives under each criterion by using linguistic labels. The labels are converted into triangular fuzzy numbers (TFNs) and processed in that form. Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives, under each criterion, yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, the solutions obtained under each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified, applied to some real cases, and compared to Saaty's (1996) analytic hierarchy process (AHP) method.
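
    The TFN arithmetic underlying such methods reduces to component-wise operations, which makes the aggregation step easy to illustrate. The sketch below uses a hypothetical five-label scale, equal decision-maker weights, and centroid defuzzification; it is not the paper's exact operator set.

        import numpy as np

        # Triangular fuzzy number (TFN) as (low, mode, high); a hypothetical
        # five-point linguistic scale on [0, 1].
        LABELS = {
            "very low": (0.0, 0.0, 0.25), "low": (0.0, 0.25, 0.5),
            "medium": (0.25, 0.5, 0.75), "high": (0.5, 0.75, 1.0),
            "very high": (0.75, 1.0, 1.0),
        }

        def fuzzy_weighted_average(tfns, weights):
            # Component-wise weighted average; weights are nonnegative, sum to 1.
            return tuple((np.array(tfns) * np.asarray(weights)[:, None]).sum(axis=0))

        def defuzzify(tfn):
            # Centroid of a triangular fuzzy number.
            return sum(tfn) / 3.0

        # Three decision makers rate one alternative on one criterion:
        ratings = [LABELS["high"], LABELS["medium"], LABELS["very high"]]
        agg = fuzzy_weighted_average(ratings, [1 / 3] * 3)
        print(agg, defuzzify(agg))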

  8. Multistrategy Self-Organizing Map Learning for Classification Problems

    PubMed Central

    Hasan, S.; Shamsuddin, S. M.

    2011-01-01

    Multistrategy learning of the Self-Organizing Map (SOM) and Particle Swarm Optimization (PSO) is commonly implemented in the clustering domain due to its capability in handling complex data characteristics. However, some of these multistrategy learning architectures have weaknesses, such as slow convergence and a tendency to become trapped in local minima. This paper proposes multistrategy learning of the SOM lattice structure with Particle Swarm Optimisation, called ESOMPSO, for solving various classification problems. The enhancement of the SOM lattice structure is implemented by introducing a new hexagon formulation for better mapping quality in data classification and labeling. The weights of the enhanced SOM are optimised using PSO to obtain better output quality. The proposed method has been tested on various standard datasets with substantial comparisons against existing SOM networks and various distance measurements. The results show that our proposed method yields promising results, with better average accuracy and lower quantisation errors than the other methods, supported by significance testing. PMID:21876686

  9. Quasi-closed phase forward-backward linear prediction analysis of speech for accurate formant detection and estimation.

    PubMed

    Gowda, Dhananjaya; Airaksinen, Manu; Alku, Paavo

    2017-09-01

    Recently, a quasi-closed phase (QCP) analysis of speech signals for accurate glottal inverse filtering was proposed. However, the QCP analysis, which belongs to the family of temporally weighted linear prediction (WLP) methods, uses the conventional forward type of sample prediction. This may not be the best choice, especially when computing WLP models with a hard-limiting weighting function. A sample-selective minimization of the prediction error in WLP reduces the effective number of samples available within a given window frame. To counter this problem, a modified quasi-closed phase forward-backward (QCP-FB) analysis is proposed, wherein each sample is predicted from its past as well as its future samples, thereby utilizing the available samples more effectively. Formant detection and estimation experiments on synthetic vowels generated using a physical modeling approach, as well as on natural speech utterances, show that the proposed QCP-FB method yields statistically significant improvements over the conventional linear prediction and QCP methods.
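
    The forward-backward idea is compact enough to sketch: every window sample contributes a forward and a backward prediction error to the weighted normal equations, roughly doubling the data used per frame. The snippet below uses uniform weights on a toy AR(2) signal rather than the QCP weighting function of the paper.

        import numpy as np

        def wlp_forward_backward(x, order, w):
            # Minimize sum_n w[n] * (e_f[n]^2 + e_b[n]^2) over coefficients a:
            #   e_f[n] = x[n]       - sum_k a[k] * x[n-k]
            #   e_b[n] = x[n-order] - sum_k a[k] * x[n-order+k]
            R = np.zeros((order, order))
            r = np.zeros(order)
            for n in range(order, len(x)):
                f = x[n - order:n][::-1]       # x[n-1], ..., x[n-order]
                b = x[n - order + 1:n + 1]     # x[n-order+1], ..., x[n]
                R += w[n] * (np.outer(f, f) + np.outer(b, b))
                r += w[n] * (x[n] * f + x[n - order] * b)
            return np.linalg.solve(R, r)

        rng = np.random.default_rng(8)
        x = np.zeros(4000)
        for n in range(2, len(x)):             # stable AR(2) test signal
            x[n] = 1.6 * x[n - 1] - 0.8 * x[n - 2] + rng.normal()
        print(wlp_forward_backward(x, 2, np.ones(len(x))))   # ~ [1.6, -0.8]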

  10. A quantitative method for risk assessment of agriculture due to climate change

    NASA Astrophysics Data System (ADS)

    Dong, Zhiqiang; Pan, Zhihua; An, Pingli; Zhang, Jingting; Zhang, Jun; Pan, Yuying; Huang, Lei; Zhao, Hui; Han, Guolin; Wu, Dong; Wang, Jialin; Fan, Dongliang; Gao, Lin; Pan, Xuebiao

    2018-01-01

    Climate change has greatly affected agriculture. Agriculture faces increasing risks owing to its sensitivity and vulnerability to climate change. Scientific assessment of climate change-induced agricultural risks could help society deal actively with climate change and ensure food security. However, quantitative assessment of risk is a difficult issue. Here, based on the IPCC assessment reports, a quantitative method for risk assessment of agriculture due to climate change is proposed. Risk is described as the product of the degree of loss and its probability of occurrence. The degree of loss can be expressed by the yield change amplitude. The probability of occurrence can be calculated using the new concept of climate change effect-accumulated frequency (CCEAF). Specific steps of this assessment method are suggested. The method is shown to be feasible and practical using spring wheat in Wuchuan County of Inner Mongolia as a test example. The results show that the fluctuation of spring wheat yield increased with the warming and drying climatic trend in Wuchuan County. For the maximum temperature increase (88.3%), the maximum yield decrease and its probability were 3.5% and 64.6%, respectively, giving a risk of 2.2%. For the maximum precipitation decrease (35.2%), the maximum yield decrease and its probability were 14.1% and 56.1%, respectively, giving a risk of 7.9%. For the combined impacts of temperature and precipitation, the maximum yield decrease and its probability were 17.6% and 53.4%, respectively, and the risk increased to 9.4%. Without appropriate adaptation strategies, both the degree of loss from the negative impacts of multiple climatic factors and its probability of occurrence will increase, and so will the risk.
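
    The risk definition is simple to reproduce; the toy calculation below multiplies the abstract's loss amplitudes by their probabilities of occurrence and recovers the quoted risks up to rounding.

        # Risk = (degree of loss) x (probability of occurrence); the numbers
        # are the spring-wheat values quoted in the abstract.
        cases = {
            "temperature": (0.035, 0.646),     # max yield decrease, probability
            "precipitation": (0.141, 0.561),
            "combined": (0.176, 0.534),
        }
        for name, (loss, prob) in cases.items():
            print(f"{name}: risk = {loss * prob:.1%}")
        # Prints 2.3%, 7.9%, 9.4%; the abstract rounds the first to 2.2%.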

  11. Classification of EEG Signals Based on Pattern Recognition Approach.

    PubMed

    Amin, Hafeez Ullah; Mumtaz, Wajid; Subhani, Ahmad Rauf; Saad, Mohamad Naufal Mohamad; Malik, Aamir Saeed

    2017-01-01

    Feature extraction is an important step in the process of electroencephalogram (EEG) signal classification. The authors propose a "pattern recognition" approach that discriminates EEG signals recorded during different cognitive conditions. Wavelet-based features, such as multi-resolution decompositions into detail and approximation coefficients as well as relative wavelet energy, were computed. Extracted relative wavelet energy features were normalized to zero mean and unit variance and then optimized using Fisher's discriminant ratio (FDR) and principal component analysis (PCA). The proposed method was validated on a high-density (128-channel) EEG dataset comprising two classes: (1) EEG signals recorded during complex cognitive tasks using the Raven's Advanced Progressive Matrices (RAPM) test; and (2) EEG signals recorded during a baseline task (eyes open). Classifiers such as K-nearest neighbors (KNN), support vector machine (SVM), multi-layer perceptron (MLP), and naïve Bayes (NB) were then employed. The SVM classifier yielded 99.11% accuracy for the approximation coefficients (A5) covering the low frequencies from 0 to 3.90 Hz. For the detail coefficients (D5), derived from the 3.90-7.81 Hz sub-band, accuracy rates were 98.57% and 98.39% for SVM and KNN, respectively. Accuracy rates for the MLP and NB classifiers were 97.11-89.63% and 91.60-81.07% for the A5 and D5 coefficients, respectively. In addition, the proposed approach was applied to a public dataset for classification of two cognitive tasks and achieved comparable results, i.e., 93.33% accuracy with KNN. The proposed scheme yielded significantly higher classification performance with machine learning classifiers than existing quantitative feature extraction methods. These results suggest that the proposed feature extraction method reliably classifies EEG signals recorded during cognitive tasks with a high degree of accuracy.
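
    A minimal version of the relative-wavelet-energy feature can be written with the PyWavelets package; the wavelet choice (db4) and the epoch length below are illustrative assumptions, not the paper's settings.

        import numpy as np
        import pywt

        def relative_wavelet_energy(signal, wavelet="db4", level=5):
            # Decompose into [A5, D5, D4, D3, D2, D1] and return each band's
            # share of the total energy.
            coeffs = pywt.wavedec(signal, wavelet, level=level)
            energies = np.array([np.sum(c ** 2) for c in coeffs])
            return energies / energies.sum()

        rng = np.random.default_rng(0)
        eeg = rng.standard_normal(1024)        # stand-in for one EEG epoch
        print(relative_wavelet_energy(eeg))    # sums to 1 across the bands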

  12. Does photodissociation of molecular oxygen from myoglobin and hemoglobin yield singlet oxygen?

    PubMed

    Lepeshkevich, Sergei V; Stasheuski, Alexander S; Parkhats, Marina V; Galievsky, Victor A; Dzhagarov, Boris M

    2013-03-05

    Time-resolved luminescence measurements in the near-infrared region indicate that photodissociation of molecular oxygen from myoglobin and hemoglobin does not produce detectable quantities of singlet oxygen. A simple and highly sensitive method of luminescence quantification is developed and used to determine the upper limit for the quantum yield of singlet oxygen production. The proposed method was preliminarily evaluated using model data sets and confirmed with experimental data for aqueous solutions of 5,10,15,20-tetrakis(4-N-methylpyridyl) porphyrin. A general procedure for error estimation is suggested. The method is shown to provide a determination of the integral luminescence intensity over a wide range of values, even for kinetics with an extremely low signal-to-noise ratio. The present experimental data do not rule out the possibility of singlet oxygen generation during the photodissociation of molecular oxygen from myoglobin and hemoglobin. However, the photodissociation is not efficient enough to yield singlet oxygen that escapes from the proteins into the surrounding medium. The upper limits for the quantum yields of singlet oxygen production in the surrounding medium after photodissociation do not exceed 3.4×10⁻³ for oxyhemoglobin and 2.3×10⁻³ for oxymyoglobin. On average, no more than one molecule of singlet oxygen from every hundred photodissociated oxygen molecules can escape from the protein matrix.

  13. Standard addition with internal standardisation as an alternative to using stable isotope labelled internal standards to correct for matrix effects - Comparison and validation using liquid chromatography-tandem mass spectrometric assay of vitamin D.

    PubMed

    Hewavitharana, Amitha K; Abu Kassim, Nur Sofiah; Shaw, Paul Nicholas

    2018-06-08

    With mass spectrometric detection in liquid chromatography, co-eluting impurities affect the analyte response through ion suppression/enhancement. Internal standard calibration, using a co-eluting stable isotope labelled analogue of each analyte as the internal standard, is the most appropriate technique available to correct for these matrix effects. However, this technique is not without drawbacks: it is expensive because a separate internal standard is required for each analyte, and the labelled compounds are costly or require synthesis. The standard addition method has traditionally been used to overcome matrix effects in atomic spectroscopy and is well established there. This paper proposes the same for mass spectrometric detection and demonstrates that the results are comparable to those of the internal standard method using labelled analogues, for a vitamin D assay. As the conventional standard addition procedure does not address procedural errors, we propose the inclusion of an additional internal standard (not co-eluting). Recoveries determined on human serum samples show that the proposed method of standard addition yields more accurate results than internal standardisation using stable isotope labelled analogues. The precision of the proposed method of standard addition is also superior to that of the conventional standard addition method.
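
    The extrapolation behind standard addition fits a line to the responses of the spiked sample and reads the unknown concentration off the x-intercept; with the extra internal standard, the response is taken as the analyte/IS peak-area ratio. A minimal sketch with made-up numbers:

        import numpy as np

        added = np.array([0.0, 5.0, 10.0, 20.0])       # ng/mL analyte spiked
        response = np.array([0.42, 0.63, 0.85, 1.27])  # analyte/IS area ratio

        slope, intercept = np.polyfit(added, response, 1)
        print(f"estimated concentration: {intercept / slope:.2f} ng/mL")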

  14. Alternating steady state free precession for estimation of current-induced magnetic flux density: A feasibility study.

    PubMed

    Lee, Hyunyeol; Jeong, Woo Chul; Kim, Hyung Joong; Woo, Eung Je; Park, Jaeseok

    2016-05-01

    To develop a novel, current-controlled alternating steady-state free precession (SSFP)-based conductivity imaging method and corresponding MR signal models to estimate current-induced magnetic flux density (Bz) and conductivity distribution. In the proposed method, an SSFP pulse sequence, which is in sync with alternating current pulses, produces dual oscillating steady states while yielding a nonlinear relation between signal phase and Bz. A ratiometric signal model between the states was analytically derived using the Bloch equation, wherein Bz was estimated by solving a nonlinear inverse problem for conductivity estimation. A theoretical analysis of the signal-to-noise ratio of Bz is given. Numerical and experimental studies were performed using SSFP-FID and SSFP-ECHO with current pulses positioned either before or after signal encoding to investigate the feasibility of the proposed method for conductivity estimation. Of all the SSFP variants considered, SSFP-FID with alternating current pulses applied before signal encoding exhibits the highest Bz signal-to-noise ratio and conductivity contrast. Additionally, compared with conventional conductivity imaging, the proposed method benefits from rapid SSFP acquisition without apparent loss of conductivity contrast. We successfully demonstrated the feasibility of the proposed method in estimating current-induced Bz and conductivity distribution. It can be a promising, rapid imaging strategy for quantitative conductivity imaging.

  15. Bayesian Normalization Model for Label-Free Quantitative Analysis by LC-MS

    PubMed Central

    Nezami Ranjbar, Mohammad R.; Tadesse, Mahlet G.; Wang, Yue; Ressom, Habtom W.

    2016-01-01

    We introduce a new method for the normalization of data acquired by liquid chromatography coupled with mass spectrometry (LC-MS) in label-free differential expression analysis. Normalization of LC-MS data is desired prior to subsequent statistical analysis to adjust for variability in ion intensities that is caused not by biological differences but by experimental bias. There are various sources of bias, including variability during sample collection and storage, poor experimental design, and noise. In addition, instrument variability in experiments involving a large number of LC-MS runs leads to a significant drift in intensity measurements. Although various methods have been proposed for normalization of LC-MS data, there is no universally applicable approach. In this paper, we propose a Bayesian normalization model (BNM) that utilizes scan-level information from LC-MS data. Specifically, the proposed method uses peak shapes to model the scan-level data acquired from extracted ion chromatograms (EIC), with the parameters treated in a linear mixed effects model. We extended the model into BNM with drift (BNMD) to compensate for the variability in intensity measurements due to long LC-MS runs. We evaluated the performance of our method using synthetic and experimental data. In comparison with several existing methods, the proposed BNM and BNMD yielded significant improvement. PMID:26357332

  16. Optimized multiple linear mappings for single image super-resolution

    NASA Astrophysics Data System (ADS)

    Zhang, Kaibing; Li, Jie; Xiong, Zenggang; Liu, Xiuping; Gao, Xinbo

    2017-12-01

    Learning piecewise linear regression has been recognized as an effective approach to example learning-based single image super-resolution (SR) in the literature. In this paper, we employ an expectation-maximization (EM) algorithm to further improve the SR performance of our previous multiple linear mappings (MLM) based SR method. In the training stage, the proposed method starts with a set of linear regressors obtained by the MLM-based method, and then jointly optimizes the clustering results and the low- and high-resolution subdictionary pairs for the regression functions using the metric of reconstruction error. In the test stage, we select the optimal regressor for SR reconstruction by accumulating the reconstruction errors of the m nearest neighbors in the training set. Thorough experiments carried out on six publicly available datasets demonstrate that the proposed SR method can yield high-quality images with finer details and sharper edges in terms of both quantitative and perceptual image quality assessments.
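
    Stripped to its essentials, the multiple-linear-mappings idea is cluster-then-regress: partition the LR patch features and fit one linear map per cluster. The sketch below uses synthetic patches, k-means, and ridge regression as stand-ins; the paper's EM refinement and error-based regressor selection are not reproduced.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        lr = rng.standard_normal((5000, 25))            # 5x5 LR patch features
        hr = (lr @ rng.standard_normal((25, 81))
              + 0.01 * rng.standard_normal((5000, 81)))  # 9x9 HR patches

        k = 8
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(lr)
        maps = [Ridge(alpha=1e-2).fit(lr[km.labels_ == c], hr[km.labels_ == c])
                for c in range(k)]

        def super_resolve(patch):
            c = km.predict(patch[None])[0]              # regressor chosen by cluster
            return maps[c].predict(patch[None])[0]

        print(super_resolve(lr[0]).shape)               # (81,) HR patch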

  17. A Method for Counting Moving People in Video Surveillance Videos

    NASA Astrophysics Data System (ADS)

    Conte, Donatello; Foggia, Pasquale; Percannella, Gennaro; Tufano, Francesco; Vento, Mario

    2010-12-01

    People counting is an important problem in video surveillance applications. This problem has been faced either by trying to detect people in the scene and then counting them, or by establishing a mapping between some scene feature and the number of people (avoiding the complex detection problem). This paper presents a novel method, following the second approach, that is based on the use of SURF features and an ε-SVR regressor to provide an estimate of the people count. The algorithm specifically takes into account problems due to partial occlusions and perspective. In the experimental evaluation, the proposed method has been compared with the algorithm by Albiol et al., winner of the PETS 2009 contest on people counting, using the same PETS 2009 database. The results confirm that the proposed method yields improved accuracy, while retaining the robustness of Albiol's algorithm.
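
    The mapping-based strategy can be illustrated independently of the specific features: a scalar scene feature (here a synthetic stand-in for the number of foreground SURF points) is regressed onto the person count with an ε-SVR.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(1)
        people = rng.integers(0, 40, size=300)
        feature_count = 12 * people + rng.normal(0, 15, size=300)  # noisy proxy

        svr = SVR(kernel="rbf", C=100.0, epsilon=1.0)
        svr.fit(feature_count[:, None], people)
        print(svr.predict([[240.0]]))          # roughly 20 people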

  18. Pulse combustion reactor as a fast and scalable synthetic method for preparation of Li-ion cathode materials

    NASA Astrophysics Data System (ADS)

    Križan, Gregor; Križan, Janez; Dominko, Robert; Gaberšček, Miran

    2017-09-01

    In this work a novel pulse combustion reactor method for the preparation of Li-ion cathode materials is introduced. Its advantages and potential challenges are demonstrated on two widely studied cathode materials, LiFePO4/C and Li-rich NMC. By exploiting the nature of pulse combustion, we successfully established the slightly reductive or oxidative environment necessary for synthesis. As a whole, the proposed method is fast, environmentally friendly, and easy to scale. An important advantage of the proposed method is that it preferentially yields small-sized powders (in the nanometric range) with a production time of only 2 s. A potential disadvantage is the relatively high degree of disorder of the synthesized active material, which, however, can be removed by a post-annealing step. This additional step also allows further tuning of the material's morphology, as shown and discussed in some detail.

  19. Delaunay Triangulation as a New Coverage Measurement Method in Wireless Sensor Network

    PubMed Central

    Chizari, Hassan; Hosseini, Majid; Poston, Timothy; Razak, Shukor Abd; Abdullah, Abdul Hanan

    2011-01-01

    Sensing and communication coverage are among the most important trade-offs in Wireless Sensor Network (WSN) design. A minimum bound on sensing coverage is vital in the scheduling, target tracking, and redeployment phases, as well as in providing communication coverage. Some methods measure coverage as a percentage value, but such a value misses detailed information. Two scenarios with equal coverage percentages may not have the same Quality of Coverage (QoC). In this paper, we propose a new coverage measurement method using Delaunay Triangulation (DT). It supplies the percentage value reported by existing coverage measurement tools together with more detailed information. Moreover, it categorizes sensors as 'fat', 'healthy' or 'thin' to indicate dense, optimal, and scattered areas. It can also yield the largest sensor-free area in the field. Simulation results show that the proposed DT method can achieve accurate coverage information and provides many tools to compare the QoC of different scenarios. PMID:22163792
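
    The geometric test is easy to reproduce with scipy: triangulate the sensor positions and compare each triangle's circumradius with the sensing range, since a circumcenter farther than the range from its three sensors exposes an uncovered disk. The sensing range below is an assumed value.

        import numpy as np
        from scipy.spatial import Delaunay

        rng = np.random.default_rng(2)
        sensors = rng.uniform(0, 100, size=(60, 2))
        sensing_range = 12.0

        def circumradius(p):
            a, b, c = (np.linalg.norm(p[i] - p[j])
                       for i, j in ((0, 1), (1, 2), (2, 0)))
            s = (a + b + c) / 2.0
            area = max(np.sqrt(s * (s - a) * (s - b) * (s - c)), 1e-12)
            return a * b * c / (4.0 * area)            # R = abc / (4K)

        tri = Delaunay(sensors)
        radii = np.array([circumradius(sensors[s]) for s in tri.simplices])
        print("holes:", int((radii > sensing_range).sum()),
              "largest empty radius:", radii.max().round(2))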

  1. A Fast Gradient Method for Nonnegative Sparse Regression With Self-Dictionary

    NASA Astrophysics Data System (ADS)

    Gillis, Nicolas; Luce, Robert

    2018-01-01

    A nonnegative matrix factorization (NMF) can be computed efficiently under the separability assumption, which asserts that all the columns of the given input data matrix belong to the cone generated by a (small) subset of them. The provably most robust methods to identify these conic basis columns are based on nonnegative sparse regression and self dictionaries, and require the solution of large-scale convex optimization problems. In this paper we study a particular nonnegative sparse regression model with self dictionary. As opposed to previously proposed models, this model yields a smooth optimization problem where the sparsity is enforced through linear constraints. We show that the Euclidean projection on the polyhedron defined by these constraints can be computed efficiently, and propose a fast gradient method to solve our model. We compare our algorithm with several state-of-the-art methods on synthetic data sets and real-world hyperspectral images.
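
    A stripped-down sketch of such a model: an accelerated projected-gradient loop for min 0.5*||M - MW||_F^2 with W >= 0. The paper's linear sparsity constraints and the exact polyhedral projection are omitted here for brevity.

        import numpy as np

        def self_dictionary_fgm(M, n_iter=200):
            # Fast (Nesterov) projected gradient; the gradient of the
            # objective is M^T (M W - M) = G W - G with G = M^T M.
            n = M.shape[1]
            G = M.T @ M
            step = 1.0 / np.linalg.norm(G, 2)       # 1 / Lipschitz constant
            W = np.zeros((n, n)); Y = W.copy(); t = 1.0
            for _ in range(n_iter):
                W_new = np.maximum(Y - step * (G @ Y - G), 0.0)
                t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
                Y = W_new + ((t - 1) / t_new) * (W_new - W)
                W, t = W_new, t_new
            return W

        rng = np.random.default_rng(3)
        M = np.abs(rng.standard_normal((30, 40)))
        W = self_dictionary_fgm(M)
        print(np.linalg.norm(M - M @ W) / np.linalg.norm(M))   # small residual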

  2. A novel one-class SVM based negative data sampling method for reconstructing proteome-wide HTLV-human protein interaction networks.

    PubMed

    Mei, Suyu; Zhu, Hao

    2015-01-26

    Protein-protein interaction (PPI) prediction is generally treated as a problem of binary classification, wherein negative data sampling remains an open problem. The commonly used random sampling is prone to yield less representative negative data with considerable false negatives. Meanwhile, rational constraints are seldom imposed on model selection to reduce the risk of false positive predictions in most existing computational methods. In this work, we propose a novel negative data sampling method based on a one-class support vector machine (SVM) to predict proteome-wide protein interactions between the HTLV retrovirus and Homo sapiens, wherein the one-class SVM is used to choose reliable and representative negative data, and a two-class SVM is used to yield proteome-wide outcomes as predictive feedback for rational model selection. Computational results suggest that a one-class SVM is better suited to negative data sampling than a two-class PPI predictor, and that the feedback-constrained model selection helps to yield a rational predictive model that reduces the risk of false positive predictions. Some predictions have been validated by the recent literature. Lastly, gene ontology based clustering of the predicted PPI networks is conducted to provide valuable cues for the pathogenesis of the HTLV retrovirus.
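
    The sampling idea can be sketched with scikit-learn: a one-class SVM trained on the positive pairs scores the unlabeled pairs, and the least positive-like among them are kept as reliable negatives for the two-class predictor. The feature vectors below are synthetic placeholders, not protein-pair encodings.

        import numpy as np
        from sklearn.svm import OneClassSVM, SVC

        rng = np.random.default_rng(3)
        positives = rng.normal(1.0, 1.0, size=(200, 16))
        unlabeled = rng.normal(0.0, 1.5, size=(2000, 16))

        ocs = OneClassSVM(kernel="rbf", nu=0.1).fit(positives)
        scores = ocs.decision_function(unlabeled)        # lower = less positive-like
        negatives = unlabeled[np.argsort(scores)[:200]]  # most reliable negatives

        X = np.vstack([positives, negatives])
        y = np.r_[np.ones(200), np.zeros(200)]
        print(SVC(kernel="rbf").fit(X, y).score(X, y))   # two-class predictor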

  3. Measuring 3D point configurations in pictorial space

    PubMed Central

    Wagemans, Johan; van Doorn, Andrea J; Koenderink, Jan J

    2011-01-01

    We propose a novel method to probe the depth structure of the pictorial space evoked by paintings. The method involves an exocentric pointing paradigm that allows one to find the slope of the geodesic connection between any pair of points in pictorial space. Since the locations of the points in the picture plane are known, this immediately yields the depth difference between the points. A set of depth differences between all pairs of points from an N-point (N > 2) configuration then yields the configuration in depth up to an arbitrary depth offset. Since an N-point configuration implies N(N−1) (ordered) pairs, the number of observations typically far exceeds the number of inferred depths. This yields a powerful check on the geometrical consistency of the results. We report that the remaining inconsistencies are fully accounted for by the spread encountered in repeated observations. This implies that the concept of ‘pictorial space’ indeed has an empirical significance. The method is analyzed and empirically verified in considerable detail. We report large quantitative interobserver differences, though the results of all observers agree modulo a certain affine transformation that describes the basic cue ambiguities. This is expected on the basis of a formal analysis of monocular optical structure. The method will prove useful in a variety of potential applications. PMID:23145227
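
    The recovery of depths from pairwise differences is an ordinary overdetermined linear system, which is what makes the consistency check possible; a synthetic 6-point sketch:

        import numpy as np

        rng = np.random.default_rng(4)
        N = 6
        true_z = rng.uniform(0, 10, N)
        pairs = [(i, j) for i in range(N) for j in range(N) if i != j]
        d = np.array([true_z[j] - true_z[i] + rng.normal(0, 0.05)
                      for i, j in pairs])               # noisy depth differences

        A = np.zeros((len(pairs), N))                   # rows encode z[j] - z[i] = d
        for row, (i, j) in enumerate(pairs):
            A[row, i], A[row, j] = -1.0, 1.0

        z_hat, *_ = np.linalg.lstsq(A, d, rcond=None)   # depths up to an offset
        z_hat += true_z.mean() - z_hat.mean()           # resolve the free offset
        print(np.abs(z_hat - true_z).max())             # small -> consistent data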

  4. Recursive feature selection with significant variables of support vectors.

    PubMed

    Tsai, Chen-An; Huang, Chien-Hsun; Chang, Ching-Wei; Chen, Chun-Houh

    2012-01-01

    The development of DNA microarrays allows researchers to screen thousands of genes simultaneously and helps determine high- and low-expression genes in normal and disease tissues. Selecting relevant genes for cancer classification is an important issue. Most gene selection methods use univariate ranking criteria and arbitrarily choose a threshold for selecting genes. However, the parameter setting may not be compatible with the selected classification algorithm. In this paper, we propose a new gene selection method (SVM-t) based on t-statistics embedded in a support vector machine. We compared its performance to two similar SVM-based methods: SVM recursive feature elimination (SVMRFE) and recursive support vector machine (RSVM). The three methods were compared based on extensive simulation experiments and analyses of two published microarray datasets. In the simulation experiments, we found that the proposed method is more robust in selecting informative genes than SVMRFE and RSVM, and capable of attaining good classification performance when the variations of informative and noninformative genes differ. In the analysis of the two microarray datasets, the proposed method yields better performance in identifying fewer genes with good prediction accuracy, compared to SVMRFE and RSVM.

  5. Towards a formal genealogical classification of the Lezgian languages (North Caucasus): testing various phylogenetic methods on lexical data.

    PubMed

    Kassian, Alexei

    2015-01-01

    A lexicostatistical classification is proposed for 20 languages and dialects of the Lezgian group of the North Caucasian family, based on meticulously compiled 110-item wordlists, published as part of the Global Lexicostatistical Database project. The lexical data have been subsequently analyzed with the aid of the principal phylogenetic methods, both distance-based and character-based: Starling neighbor joining (StarlingNJ), Neighbor joining (NJ), Unweighted pair group method with arithmetic mean (UPGMA), Bayesian Markov chain Monte Carlo (MCMC), and Unweighted maximum parsimony (UMP). Cognation indexes within the input matrix were marked by two different algorithms: the traditional etymological approach and phonetic similarity, i.e., the automatic method of consonant classes (Levenshtein distances). For several reasons (above all, the high lexicographic quality of the wordlists and a consensus about Lezgian phylogeny among Caucasologists), the Lezgian database is a perfect testing ground for the appraisal of phylogenetic methods. For the etymology-based input matrix, all the phylogenetic methods, with the possible exception of UMP, yielded trees that are sufficiently compatible with each other to generate a consensus phylogenetic tree of the Lezgian lects. The obtained consensus tree agrees with the traditional expert classification as well as some of the previously proposed formal classifications of this linguistic group. Contrary to theoretical expectations, the UMP method suggested the least plausible tree of all. In the case of the phonetic similarity-based input matrix, the distance-based methods (StarlingNJ, NJ, UPGMA) produced trees that are rather close to the consensus etymology-based tree and the traditional expert classification, whereas the character-based methods (Bayesian MCMC, UMP) yielded less likely topologies.

  6. A Marker-Based Approach for the Automated Selection of a Single Segmentation from a Hierarchical Set of Image Segmentations

    NASA Technical Reports Server (NTRS)

    Tarabalka, Y.; Tilton, J. C.; Benediktsson, J. A.; Chanussot, J.

    2012-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which combines region object finding with region object clustering, has given good performance for multi- and hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. Two classification-based approaches for automatic marker selection are adapted and compared for this purpose. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. Three different implementations of the M-HSEG method are proposed and their performances in terms of classification accuracy are compared. The experimental results, presented for three hyperspectral airborne images, demonstrate that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for remote sensing image analysis.

  7. Image reconstruction from few-view CT data by gradient-domain dictionary learning.

    PubMed

    Hu, Zhanli; Liu, Qiegen; Zhang, Na; Zhang, Yunwan; Peng, Xi; Wu, Peter Z; Zheng, Hairong; Liang, Dong

    2016-05-21

    Decreasing the number of projections is an effective way to reduce the radiation dose to which patients are exposed in medical computed tomography (CT) imaging. However, incomplete projection data for CT reconstruction will result in artifacts and distortions. In this paper, a novel dictionary learning algorithm operating in the gradient domain (Grad-DL) is proposed for few-view CT reconstruction. Specifically, the dictionaries are trained from the horizontal and vertical gradient images, respectively, and the desired image is subsequently reconstructed from the sparse representations of both gradients by solving a least-squares problem. Since the gradient images are sparser than the image itself, the proposed approach can lead to sparser representations than conventional DL methods operating in the image domain, and thus better reconstruction quality is achieved. To evaluate the proposed Grad-DL algorithm, both qualitative and quantitative studies were carried out through computer simulations as well as real-data experiments with fan-beam and cone-beam geometry. The results show that the proposed algorithm yields better images than existing algorithms.
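
    The fusion step, recovering an image from estimated horizontal and vertical gradients in a least-squares sense, can be sketched with an FFT-based Poisson solve under periodic boundary assumptions (the sparse-coding stage that produces the gradients is treated as given):

        import numpy as np

        def image_from_gradients(gx, gy):
            # Minimize ||Dx u - gx||^2 + ||Dy u - gy||^2: take the divergence
            # of the gradient field and invert the discrete Laplacian via FFT.
            h, w = gx.shape
            div = (gx - np.roll(gx, 1, axis=1)) + (gy - np.roll(gy, 1, axis=0))
            fy = np.fft.fftfreq(h)[:, None]
            fx = np.fft.fftfreq(w)[None, :]
            denom = (2 * np.cos(2 * np.pi * fx) - 2) + (2 * np.cos(2 * np.pi * fy) - 2)
            denom[0, 0] = 1.0                   # avoid division by zero at DC
            U = np.fft.fft2(div) / denom
            U[0, 0] = 0.0                       # fix the arbitrary mean to zero
            return np.real(np.fft.ifft2(U))

        img = np.random.default_rng(9).standard_normal((64, 64))
        gx = np.roll(img, -1, axis=1) - img     # forward-difference gradients
        gy = np.roll(img, -1, axis=0) - img
        print(np.allclose(image_from_gradients(gx, gy), img - img.mean()))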

  8. Learning to Rapidly Re-Contact the Lost Plume in Chemical Plume Tracing

    PubMed Central

    Cao, Meng-Li; Meng, Qing-Hao; Wang, Jia-Ying; Luo, Bing; Jing, Ya-Qi; Ma, Shu-Gen

    2015-01-01

    Maintaining contact between the robot and the plume is important in chemical plume tracing (CPT). In the time immediately following the loss of chemical detection during CPT, Track-Out activities bias the robot heading relative to the upwind direction in the expectation of rapidly re-contacting the plume. To determine the bias angle used in the Track-Out activity, we propose an online instance-based reinforcement learning method, namely virtual trail following (VTF). In VTF, the action-value is generalized from recently stored instances of successful Track-Out activities. We also propose a collaborative VTF (cVTF) method, in which multiple robots store their own instances in, and learn from, the same database. The proposed VTF and cVTF methods are compared with the biased upwind surge (BUS) method, in which all Track-Out activities use an offline-optimized universal bias angle, in an indoor environment with three different airflow fields. Under our experimental conditions, VTF and cVTF show stronger adaptability to different airflow environments than BUS, and cVTF yields higher success rates and time-efficiency than VTF. PMID:25825974

  9. Microstructure based model for sound absorption predictions of perforated closed-cell metallic foams.

    PubMed

    Chevillotte, Fabien; Perrot, Camille; Panneton, Raymond

    2010-10-01

    Closed-cell metallic foams are known for their rigidity, lightness, and thermal conductivity, as well as their low production cost compared to open-cell metallic foams. However, they are also poor sound absorbers. As with a rigid solid, one way to enhance their sound absorption is to perforate them. This approach has shown good preliminary results but has not yet been analyzed from a microstructural point of view. The objective of this work is to better understand how perforations interact with the closed-cell foam microstructure and how they modify the sound absorption of the foam. A simple two-dimensional microstructural model of the perforated closed-cell metallic foam is presented and numerically solved. A rough three-dimensional conversion of the two-dimensional results is proposed. The results obtained with the calculation method show that the perforated closed-cell foam behaves similarly to a perforated solid; however, its sound absorption is modulated by the foam microstructure, most particularly by the diameters of both the perforations and the pores. A comparison with measurements demonstrates that the proposed calculation method yields realistic trends. Some design guidelines are also proposed.

  10. A supervoxel-based segmentation method for prostate MR images

    NASA Astrophysics Data System (ADS)

    Tian, Zhiqiang; Liu, LiZhi; Fei, Baowei

    2015-03-01

    Accurate segmentation of the prostate has many applications in prostate cancer diagnosis and therapy. In this paper, we propose a "supervoxel"-based method for prostate segmentation. The prostate segmentation problem is cast as assigning a label to each supervoxel. An energy function with data and smoothness terms is used to model the labeling process. The data term estimates the likelihood that a supervoxel belongs to the prostate according to a shape feature. The geometric relationship between two neighboring supervoxels is used to construct the smoothness term. A three-dimensional (3D) graph cut method is used to minimize the energy function in order to segment the prostate. A 3D level set is then used to obtain a smooth surface based on the output of the graph cut. The performance of the proposed segmentation algorithm was evaluated with respect to manual segmentation ground truth. Experimental results on 12 prostate volumes showed that the proposed algorithm yields a mean Dice similarity coefficient of 86.9% ± 3.2%. The segmentation method can be used not only for the prostate but also for other organs.
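
    The energy-minimization step can be sketched with the PyMaxflow package (the paper does not name its solver, so this is an assumed choice): unary terms come from per-supervoxel prostate probabilities, pairwise terms from the supervoxel adjacency.

        import numpy as np
        import maxflow   # PyMaxflow

        probs = np.array([0.9, 0.8, 0.6, 0.3, 0.2])    # toy P(prostate) values
        edges = [(0, 1), (1, 2), (2, 3), (3, 4)]       # neighboring supervoxels
        lam, eps = 0.5, 1e-6                           # smoothness weight

        g = maxflow.Graph[float]()
        nodes = g.add_nodes(len(probs))
        for i, p in enumerate(probs):
            # Unary costs: -log likelihood of each label.
            g.add_tedge(nodes[i], -np.log(1 - p + eps), -np.log(p + eps))
        for i, j in edges:
            g.add_edge(nodes[i], nodes[j], lam, lam)   # Potts smoothness

        g.maxflow()
        print([g.get_segment(nodes[i]) for i in range(len(probs))])  # 0/1 labels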

  11. Subglottal Impedance-Based Inverse Filtering of Voiced Sounds Using Neck Surface Acceleration

    PubMed Central

    Zañartu, Matías; Ho, Julio C.; Mehta, Daryush D.; Hillman, Robert E.; Wodicka, George R.

    2014-01-01

    A model-based inverse filtering scheme is proposed for accurate, non-invasive estimation of the aerodynamic source of voiced sounds at the glottis. The approach, referred to as subglottal impedance-based inverse filtering (IBIF), takes as input the signal from a lightweight accelerometer placed on the skin over the extrathoracic trachea and yields estimates of the glottal airflow and its time derivative, offering important advantages over traditional methods that deal with the supraglottal vocal tract. The proposed scheme is based on mechano-acoustic impedance representations from a physiologically based transmission line model and a lumped skin surface representation. A subject-specific calibration protocol is used to account for individual adjustments of the subglottal impedance parameters and the mechanical properties of the skin. Preliminary results for sustained vowels with various voice qualities show that the subglottal IBIF scheme yields estimates comparable with those of current aerodynamics-based methods of clinical vocal assessment. A mean absolute error of less than 10% was observed for two glottal airflow measures (maximum flow declination rate and amplitude of the modulation component) that have been associated with the pathophysiology of some common voice disorders caused by faulty and/or abusive patterns of vocal behavior (i.e., vocal hyperfunction). The proposed method further advances the ambulatory assessment of vocal function based on the neck acceleration signal, which had previously been limited to the estimation of phonation duration, loudness, and pitch. Subglottal IBIF is also suitable for other ambulatory applications in speech communication, for which further evaluation is underway. PMID:25400531

  12. Estimation of biomedical optical properties by simultaneous use of diffuse reflectometry and photothermal radiometry: investigation of light propagation models

    NASA Astrophysics Data System (ADS)

    Fonseca, E. S. R.; de Jesus, M. E. P.

    2007-07-01

    The estimation of the optical properties of highly turbid and opaque biological tissue is a difficult task, since conventional purely optical methods rapidly lose sensitivity as the mean photon path length decreases. Photothermal methods, such as pulsed or frequency-domain photothermal radiometry (FD-PTR), on the other hand, show remarkable sensitivity under experimental conditions that produce very feeble optical signals. Photothermal radiometry is primarily sensitive to the absorption coefficient, yielding considerably higher estimation errors for scattering coefficients. Conversely, purely optical methods such as Local Diffuse Reflectance (LDR) depend mainly on the scattering coefficient and yield much better estimates of this parameter. Therefore, at moderate transport albedos, the combination of photothermal and reflectance methods can considerably improve the sensitivity of detection of tissue optical properties. The authors have recently proposed a novel method that combines FD-PTR with LDR, aimed at improving sensitivity in the determination of both optical properties. Signal analysis was performed by globally fitting the experimental data to forward models based on Monte-Carlo simulations. Although this approach is accurate, the associated computational burden often limits its use as a forward model. Therefore, the application of analytical models based on the diffusion approximation offers a faster alternative. In this work, we propose the calculation of the diffuse reflectance and fluence rate profiles under the δ-P1 approximation. This approach is known to approximate fluence rate expressions close to collimated sources and boundaries better than the standard diffusion approximation (SDA). We extend this study to the calculation of the diffuse reflectance profiles. The ability of the δ-P1-based model to provide good estimates of the absorption, scattering, and anisotropy coefficients is tested against Monte-Carlo simulations over a wide range of scattering-to-absorption ratios. Experimental validation of the proposed method is accomplished by a set of measurements on solid absorbing and scattering phantoms.

  13. Robust hepatic vessel segmentation using multi deep convolution network

    NASA Astrophysics Data System (ADS)

    Kitrungrotsakul, Titinunt; Han, Xian-Hua; Iwamoto, Yutaro; Foruzan, Amir Hossein; Lin, Lanfen; Chen, Yen-Wei

    2017-03-01

    Extraction of the blood vessels of an organ is a challenging task in medical image processing. It is difficult to obtain accurate vessel segmentation results even with manual labeling by human experts. The difficulty of vessel segmentation lies in the complicated structure of blood vessels and their large variations, which make them hard to recognize. In this paper, we present a deep artificial neural network architecture to automatically segment hepatic vessels from computed tomography (CT) images. We propose a novel deep neural network (DNN) architecture for vessel segmentation from a medical CT volume, which consists of three deep convolutional neural networks that extract features from different planes of the CT data. The three networks share features at the first convolution layer but separately learn their own features in the second layer; all three networks join again at the top layer. To validate the effectiveness and efficiency of our proposed method, we conducted experiments on 12 CT volumes, in which the training data were randomly generated from 5 CT volumes and the remaining 7 were used for testing. Our network yields an average Dice coefficient of 0.830, whereas a 3D deep convolutional neural network yields around 0.7 and a multi-scale network only 0.6.
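
    A toy PyTorch rendering of the tri-planar sharing scheme is given below; the layer sizes, patch size, and two-class head are illustrative guesses rather than the paper's configuration.

        import torch
        import torch.nn as nn

        class TriPlanarNet(nn.Module):
            # One convolution shared across the three planes, branch-specific
            # second convolutions, and a joint classification head.
            def __init__(self, patch=32):
                super().__init__()
                self.shared = nn.Sequential(nn.Conv2d(1, 16, 5, padding=2),
                                            nn.ReLU(), nn.MaxPool2d(2))
                self.branches = nn.ModuleList(
                    nn.Sequential(nn.Conv2d(16, 32, 3, padding=1),
                                  nn.ReLU(), nn.MaxPool2d(2))
                    for _ in range(3))
                feat = 32 * (patch // 4) ** 2
                self.head = nn.Sequential(nn.Flatten(), nn.Linear(3 * feat, 2))

            def forward(self, planes):          # three (B, 1, H, W) patches
                outs = [br(self.shared(p)) for p, br in zip(planes, self.branches)]
                return self.head(torch.cat(outs, dim=1))

        net = TriPlanarNet()
        x = [torch.randn(4, 1, 32, 32) for _ in range(3)]
        print(net(x).shape)                     # (4, 2) vessel/background logits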

  14. Hybrid simulated annealing and its application to optimization of hidden Markov models for visual speech recognition.

    PubMed

    Lee, Jong-Seok; Park, Cheol Hoon

    2010-08-01

    We propose a novel stochastic optimization algorithm, hybrid simulated annealing (SA), to train hidden Markov models (HMMs) for visual speech recognition. In our algorithm, SA is combined with a local optimization operator that substitutes a better solution for the current one to improve the convergence speed and the quality of solutions. We mathematically prove that the sequence of the objective values converges in probability to the global optimum in the algorithm. The algorithm is applied to train HMMs that are used as visual speech recognizers. While the popular training method of HMMs, the expectation-maximization algorithm, achieves only local optima in the parameter space, the proposed method can perform global optimization of the parameters of HMMs and thereby obtain solutions yielding improved recognition performance. The superiority of the proposed algorithm to the conventional ones is demonstrated via isolated word recognition experiments.
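
    The hybrid scheme can be sketched generically: a Metropolis acceptance loop in which every candidate is first passed through a local-optimization operator. The toy objective, neighborhood, and cooling schedule below are assumptions, not the paper's HMM setting.

        import math
        import random

        def hybrid_sa(f, neighbor, local_opt, x0, t0=1.0, alpha=0.99, iters=5000):
            x, fx = x0, f(x0)
            best, fbest, t = x, fx, t0
            for _ in range(iters):
                y = local_opt(neighbor(x))      # perturb, then locally improve
                fy = f(y)
                if fy < fx or random.random() < math.exp((fx - fy) / t):
                    x, fx = y, fy
                    if fx < fbest:
                        best, fbest = x, fx
                t *= alpha                      # geometric cooling
            return best, fbest

        obj = lambda x: math.sin(5 * x) + 0.1 * x * x      # multimodal toy
        nb = lambda x: x + random.gauss(0, 0.5)
        lo = lambda x: min((x + d for d in (-0.05, 0.0, 0.05)), key=obj)
        print(hybrid_sa(obj, nb, lo, x0=3.0))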

  15. Superstatistics analysis of the ion current distribution function: Met3PbCl influence study.

    PubMed

    Miśkiewicz, Janusz; Trela, Zenon; Przestalski, Stanisław; Karcz, Waldemar

    2010-09-01

    A novel analysis of ion current time series is proposed. It is shown that the higher (second, third, and fourth) statistical moments of the ion current probability distribution function (PDF) can yield new information about ion channel properties. The method is illustrated on a two-state model in which the PDFs of the component states are given by normal distributions. The proposed method was applied to the analysis of the SV cation channels of the vacuolar membrane of Beta vulgaris and the influence of trimethyllead chloride (Met3PbCl) on the ion current probability distribution. Ion currents were measured by the patch-clamp technique. It was shown that Met3PbCl influences the variance of the open-state ion current but does not alter the PDF of the closed-state ion current. Incorporation of higher statistical moments into the standard investigation of ion channel properties is proposed.
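
    The moments themselves are one-liners with scipy; the toy two-state trace below (normal open- and closed-state components with made-up parameters) mimics the setting of the abstract.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(5)
        closed = rng.normal(0.0, 0.4, size=8000)   # pA, toy parameters
        opened = rng.normal(5.0, 1.1, size=2000)
        current = np.concatenate([closed, opened])

        print("variance :", current.var())
        print("skewness :", stats.skew(current))
        print("kurtosis :", stats.kurtosis(current))   # excess kurtosis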

  16. Comparing optical test methods for a lightweight primary mirror of a space-borne Cassegrain telescope

    NASA Astrophysics Data System (ADS)

    Lin, Wei-Cheng; Chang, Shenq-Tsong; Yu, Zong-Ru; Lin, Yu-Chuan; Ho, Cheng-Fong; Huang, Ting-Ming; Chen, Cheng-Huan

    2014-09-01

    A Cassegrain telescope with a 450 mm clear aperture was developed for use in a spaceborne optical remote-sensing instrument. To limit self-weight deformation and thermal distortion, Zerodur was used to manufacture the primary mirror. The lightweight scheme adopted a hexagonal cell structure, yielding a lightweight ratio of 50%. Optical testing of a lightweight mirror is a critical technique during both the manufacturing and assembly processes. To prevent unexpected measurement errors that cause erroneous judgment, this paper proposes a novel and reliable analytical method for optical testing, called the bench test. The proposed algorithm was used to separate the manufacturing form error from surface deformation caused by mounting, support, and gravity effects during optical testing. The performance of the proposed bench test was compared with a conventional vertical setup for optical testing during the manufacturing process of the lightweight mirror.

  17. Accurate Time/Frequency Transfer Method Using Bi-Directional WDM Transmission

    NASA Technical Reports Server (NTRS)

    Imaoka, Atsushi; Kihara, Masami

    1996-01-01

    An accurate time transfer method is proposed using bi-directional wavelength division multiplexing (WDM) signal transmission along a single optical fiber. This method is intended for digital telecommunication networks and yields a time synchronization accuracy of better than 1 ns for long transmission lines of several tens of kilometers. The method can accurately measure the difference in delay between the two wavelength signals caused by the chromatic dispersion of the fiber, which is present in conventional simple bi-directional dual-wavelength frequency transfer methods. We describe the characteristics of this delay difference and then show that a delay measurement accuracy below 0.1 ns can be obtained by transmitting 156 Mb/s time reference signals at 1.31 μm and 1.55 μm along a 50 km fiber using the proposed method. Sub-nanosecond delay measurement using simple bi-directional dual-wavelength transmission along a 100 km fiber with a wavelength spacing of 1 nm in the 1.55 μm range is also shown.

  18. Direct Regularized Estimation of Retinal Vascular Oxygen Tension Based on an Experimental Model

    PubMed Central

    Yildirim, Isa; Ansari, Rashid; Yetik, I. Samil; Shahidi, Mahnaz

    2014-01-01

    Phosphorescence lifetime imaging is commonly used to generate oxygen tension maps of retinal blood vessels via the classical least squares (LS) estimation method. A spatial regularization method was later proposed and provided improved results. However, both methods obtain oxygen tension values from estimates of intermediate variables and do not yield an optimal estimate of the oxygen tension values, owing to their nonlinear dependence on the ratio of the intermediate variables. In this paper, we provide an improved solution by devising a regularized direct least squares (RDLS) method that exploits knowledge available in studies that provide models of oxygen tension in retinal arteries and veins, unlike the earlier regularized LS approach, where knowledge about the intermediate variables is limited. The performance of the proposed RDLS method is evaluated by investigating and comparing the bias, variance, oxygen tension maps, 1-D profiles of arterial oxygen tension, and mean absolute error with those of the earlier methods, and its superior performance is demonstrated both quantitatively and qualitatively. PMID:23732915

  19. Estimation of Qualitative and Quantitative Parameters of Air Cleaning by a Pulsed Corona Discharge Using Multicomponent Standard Mixtures

    NASA Astrophysics Data System (ADS)

    Filatov, I. E.; Uvarin, V. V.; Kuznetsov, D. L.

    2018-05-01

    The efficiency of removal of volatile organic impurities from air by a pulsed corona discharge is investigated using model mixtures. Based on the method of competing reactions, an approach to estimating the qualitative and quantitative parameters of the employed electrophysical technique is proposed. The concept of a "toluene coefficient", characterizing the relative reactivity of a component as compared to toluene, is introduced. It is proposed that the energy efficiency of the electrophysical method be estimated using the concept of the diversified yield of the removal process. Such an approach makes it possible to substantially speed up the determination of the energy parameters of impurity removal, and it can also serve as a criterion for assessing the effectiveness of various methods in which a nonequilibrium plasma is used to clean air of volatile impurities.

  1. Magnetic quadrupoles lens for hot spot proton imaging in inertial confinement fusion

    NASA Astrophysics Data System (ADS)

    Teng, J.; Gu, Y. Q.; Chen, J.; Zhu, B.; Zhang, B.; Zhang, T. K.; Tan, F.; Hong, W.; Zhang, B. H.; Wang, X. Q.

    2016-08-01

    Imaging of DD-produced protons from an implosion hot spot region by a miniature permanent magnetic quadrupole (PMQ) lens is proposed. The corresponding object-image relation is deduced, and an adjustment method for this imaging system is discussed. Ideal point-to-point imaging demands a monoenergetic proton source; nevertheless, we show that the image blur induced by the proton energy spread is a second-order effect and therefore controllable. A proton imaging system based on a miniature PMQ lens is designed for 2.8 MeV DD-protons, and an adjustment method for the case of a proton energy shift is proposed. The spatial resolution of this system is better than 10 μm when the proton yield is above 10⁹ and the spectral width is within 10%.

  2. Minimal-Drift Heading Measurement using a MEMS Gyro for Indoor Mobile Robots.

    PubMed

    Hong, Sung Kyung; Park, Sungsu

    2008-11-17

    To meet the challenges of using low-cost MEMS yaw-rate gyros for the precise self-localization of indoor mobile robots, this paper examines a practical and effective method of minimizing drift in the heading angle that relies solely on integration of the rate signals from a gyro. The proposed approach consists of two parts: 1) self-identification of the calibration coefficients that affect long-term performance; and 2) a threshold filter to reject the broadband noise component that affects short-term performance. Experimental results with the proposed method applied to an Epson XV3500 gyro demonstrate that it effectively yields minimal-drift heading angle measurements, overcoming the major error sources in the MEMS gyro output.
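
    Both ingredients, calibration against long-term drift and a dead-band threshold against short-term noise, fit in a few lines; the bias, scale, and threshold values below are illustrative, not the identified XV3500 coefficients.

        import numpy as np

        def integrate_heading(rate, dt, bias, scale=1.0, threshold=0.15):
            corrected = (rate - bias) * scale          # identified calibration
            corrected[np.abs(corrected) < threshold] = 0.0   # dead-band filter
            return np.cumsum(corrected) * dt           # heading, deg

        # Stationary gyro: noise plus a constant bias should integrate to ~0.
        rng = np.random.default_rng(6)
        rate = 0.02 + rng.normal(0, 0.05, size=6000)   # deg/s, 100 s at 60 Hz
        print(abs(integrate_heading(rate, 1 / 60, bias=0.02)[-1]))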

  3. Hypothesis testing for band size detection of high-dimensional banded precision matrices.

    PubMed

    An, Baiguo; Guo, Jianhua; Liu, Yufeng

    2014-06-01

    Many statistical analysis procedures require a good estimator of a high-dimensional covariance matrix or its inverse, the precision matrix. When the precision matrix is banded, the Cholesky-based method often yields a good estimator of the precision matrix. One important aspect of this method is the determination of the band size of the precision matrix. In practice, cross-validation is commonly used; however, we show that cross-validation is not only computationally intensive but can also be very unstable. In this paper, we propose a new hypothesis testing procedure to determine the band size in high dimensions. Our proposed test statistic is shown to be asymptotically normal under the null hypothesis, and its theoretical power is studied. Numerical examples demonstrate the effectiveness of our testing procedure.

  4. Proposal for a Standard Greenhouse Method for Assessing Soybean Cyst Nematode Resistance in Soybean: SCE08 (Standardized Cyst Evaluation 2008)

    USDA-ARS?s Scientific Manuscript database

    The soybean cyst nematode (SCN) remains the most economically important pathogen of soybean in North America. Most farmers do not sample for SCN believing instead that the use of SCN-resistant varieties is sufficient to avoid yield losses due to the nematode according to surveys conducted in Illino...

  5. Ensemble density variational methods with self- and ghost-interaction-corrected functionals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pastorczak, Ewa; Pernal, Katarzyna, E-mail: pernalk@gmail.com

    2014-05-14

    Ensemble density functional theory (DFT) offers a way of predicting excited-state energies of atomic and molecular systems without referring to a density response function. Despite significant theoretical work, practical applications of the proposed approximations have been scarce, and they do not allow for a fair judgement of the potential usefulness of ensemble DFT with available functionals. In this paper, we investigate two forms of ensemble density functionals formulated within the ensemble DFT framework: the Gross, Oliveira, and Kohn (GOK) functional proposed by Gross et al. [Phys. Rev. A 37, 2809 (1988)], alongside the orbital-dependent eDFT form of the functional introduced by Nagy [J. Phys. B 34, 2363 (2001)] (the acronym eDFT is proposed in analogy to eHF, the ensemble Hartree-Fock method). Local and semi-local ground-state density functionals are employed in both approaches. Approximate ensemble density functionals contain not only spurious self-interaction but also the so-called ghost-interaction, which has no counterpart in ground-state DFT. We propose how to correct the GOK functional for both kinds of interaction in approximations that go beyond the exact-exchange functional. Numerical applications lead to the conclusion that functionals free of the ghost-interaction by construction, i.e., eDFT, yield much more reliable results than the approximate self- and ghost-interaction-corrected GOK functional. Additionally, a local density functional corrected for self-interaction and employed in the eDFT framework yields excitation energies of accuracy comparable to that of the uncorrected semi-local eDFT functional.

  6. Accurate paleointensities - the multi-method approach

    NASA Astrophysics Data System (ADS)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results was emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  7. Variable screening via quantile partial correlation

    PubMed Central

    Ma, Shujie; Tsai, Chih-Ling

    2016-01-01

    In quantile linear regression with ultra-high dimensional data, we propose an algorithm for screening all candidate variables and subsequently selecting relevant predictors. Specifically, we first employ quantile partial correlation for screening, and then we apply the extended Bayesian information criterion (EBIC) for best subset selection. Our proposed method can successfully select predictors when the variables are highly correlated, and it can also identify variables that make a contribution to the conditional quantiles but are marginally uncorrelated or weakly correlated with the response. Theoretical results show that the proposed algorithm can yield the sure screening set. By controlling the false selection rate, model selection consistency can be achieved theoretically. In practice, we propose using EBIC for best subset selection so that the resulting model is screening consistent. Simulation studies demonstrate that the proposed algorithm performs well, and an empirical example is presented. PMID:28943683
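    A minimal sketch of the screening step, using the marginal quantile correlation of Li, Li and Tsai as a stand-in for the full quantile partial correlation (confounder adjustment omitted); the cutoff d is an assumption:

    ```python
    import numpy as np

    def quantile_correlation(y, x, tau=0.5):
        """Sample quantile correlation qcor_tau(x, y): the correlation between
        psi_tau(y - Q_tau(y)) and x, where psi_tau(u) = tau - 1{u < 0}."""
        psi = tau - (y < np.quantile(y, tau)).astype(float)
        return np.cov(psi, x)[0, 1] / np.sqrt(tau * (1 - tau) * np.var(x))

    def qc_screen(X, y, tau=0.5, d=None):
        """Keep the top-d predictors ranked by |qcor|; default d = n / log(n)."""
        n, p = X.shape
        d = d if d is not None else int(n / np.log(n))
        scores = np.abs([quantile_correlation(y, X[:, j], tau) for j in range(p)])
        return np.argsort(scores)[::-1][:d]

    # Demo: y depends on the first two of 1000 predictors
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 1000))
    y = 2 * X[:, 0] - 1.5 * X[:, 1] + rng.standard_normal(200)
    print(qc_screen(X, y, tau=0.5)[:5])
    ```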

  8. Hyperspectral interventional imaging for enhanced tissue visualization and discrimination combining band selection methods.

    PubMed

    Nouri, Dorra; Lucas, Yves; Treuillet, Sylvie

    2016-12-01

    Hyperspectral imaging is an emerging technology recently introduced in medical applications, as it provides a powerful tool for noninvasive tissue characterization. In this context, a new system was designed to be easily integrated in the operating room in order to detect anatomical tissues hardly noticed by the surgeon's naked eye. Our LCTF-based spectral imaging system is operative over the visible, near- and middle-infrared spectral ranges (400-1700 nm). It is designed to enhance the visibility of critical biological tissues such as the ureter and the facial nerve. We aim to find the three most relevant bands to create an RGB image to display during the intervention with maximal contrast between the target tissue and its surroundings. A comparative study is carried out between band selection methods and band transformation methods. Combined band selection methods are proposed. All methods are compared using different evaluation criteria. Experimental results show that the proposed combined band selection methods provide the best performance, with rich information, high tissue separability and short computational time. These methods yield a significant discrimination between biological tissues. We developed a hyperspectral imaging system in order to enhance the visualization of certain biological tissues. The proposed methods provided an acceptable trade-off between the evaluation criteria, especially in the SWIR spectral band, which exceeds the capabilities of the naked eye.

  9. Automatic Method for Building Indoor Boundary Models from Dense Point Clouds Collected by Laser Scanners

    PubMed Central

    Valero, Enrique; Adán, Antonio; Cerrada, Carlos

    2012-01-01

    In this paper we present a method that automatically yields Boundary Representation Models (B-rep) for indoors after processing dense point clouds collected by laser scanners from key locations through an existing facility. Our objective is particularly focused on providing single models which contain the shape, location and relationship of primitive structural elements of inhabited scenarios such as walls, ceilings and floors. We propose a discretization of the space in order to accurately segment the 3D data and generate complete B-rep models of indoors in which faces, edges and vertices are coherently connected. The approach has been tested in real scenarios with data from laser scanners, yielding promising results. We have thoroughly evaluated the results by analyzing how reliably these elements can be detected and how accurately they are modeled. PMID:23443369

  10. Anisotropy of the neutron fluence from a plasma focus.

    NASA Technical Reports Server (NTRS)

    Lee, J. H.; Shomo, L. P.; Kim, K. H.

    1972-01-01

    The fluence of neutrons from a plasma focus was measured by gamma spectrometry of an activated silver target. This method results in a significant increase in accuracy over the beta-counting method. Multiple detectors were used in order to measure the anisotropy of the fluence of neutrons. The fluence was found to be concentrated in a cone with a half-angle of 30 deg about the axis, and to drop off rapidly outside of this cone; the anisotropy was found to depend upon the total yield of neutrons. This dependence was strongest on the axis. Neither the axial concentration of the fluence of neutrons nor its dependence on the total yield of neutrons is explained by any of the currently proposed models. Some other explanations, including the possibility of an axially distributed source, are considered.

  11. Toward optimal feature and time segment selection by divergence method for EEG signals classification.

    PubMed

    Wang, Jie; Feng, Zuren; Lu, Na; Luo, Jing

    2018-06-01

    Feature selection plays an important role in EEG-signal-based motor imagery pattern classification. It is a process that aims to select an optimal feature subset from the original set. Two significant advantages are involved: lowering the computational burden so as to speed up the learning procedure, and removing redundant and irrelevant features so as to improve the classification performance. Therefore, feature selection is widely employed in the classification of EEG signals in practical brain-computer interface systems. In this paper, we present a novel statistical model to select the optimal feature subset based on the Kullback-Leibler divergence measure, and automatically select the optimal subject-specific time segment. The proposed method comprises four successive stages: broad frequency-band filtering and common spatial pattern enhancement as preprocessing, feature extraction by an autoregressive model and log-variance, Kullback-Leibler divergence based optimal feature and time segment selection, and linear discriminant analysis classification. More importantly, this paper provides a potential framework for combining other feature extraction models and classification algorithms with the proposed method for EEG signals classification. Experiments on single-trial EEG signals from two public competition datasets not only demonstrate that the proposed method is effective in selecting discriminative features and time segments, but also show that the proposed method yields relatively better classification results in comparison with other competitive methods. Copyright © 2018 Elsevier Ltd. All rights reserved.
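    As a hedged sketch of the divergence idea, the following ranks features by a symmetrized KL divergence between class-conditional Gaussians fitted to each feature; the paper's full statistical model and time-segment search are not reproduced:

    ```python
    import numpy as np

    def gauss_kl(mu1, var1, mu2, var2):
        """KL divergence between univariate Gaussians N(mu1,var1) || N(mu2,var2)."""
        return 0.5 * (np.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / var2 - 1.0)

    def rank_features_by_kl(F, labels):
        """Rank features by symmetrized class-conditional Gaussian KL divergence.
        F: (n_trials, n_features) log-variance features; labels: binary array."""
        A, B = F[labels == 0], F[labels == 1]
        scores = []
        for j in range(F.shape[1]):
            m1, v1 = A[:, j].mean(), A[:, j].var() + 1e-12
            m2, v2 = B[:, j].mean(), B[:, j].var() + 1e-12
            scores.append(gauss_kl(m1, v1, m2, v2) + gauss_kl(m2, v2, m1, v1))
        return np.argsort(scores)[::-1]

    # Demo: 2 of 10 features carry class information
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 120)
    F = rng.standard_normal((120, 10))
    F[:, 3] += 1.5 * labels
    F[:, 7] -= 1.0 * labels
    print(rank_features_by_kl(F, labels)[:3])
    ```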

  12. Mysteries of TOPSe revealed: insights into quantum dot nucleation.

    PubMed

    Evans, Christopher M; Evans, Meagan E; Krauss, Todd D

    2010-08-18

    We have investigated the reaction mechanism responsible for QD nucleation using optical absorption and nuclear magnetic resonance spectroscopies. For typical II-VI and IV-VI quantum dot (QD) syntheses, pure tertiary phosphine selenide sources (e.g., trioctylphosphine selenide (TOPSe)) were surprisingly found to be unreactive with metal carboxylates and incapable of yielding QDs. Rather, small quantities of secondary phosphines, which are impurities in tertiary phosphines, are entirely responsible for the nucleation of QDs; their low concentrations account for poor synthetic conversion yields. QD yields increase to nearly quantitative levels when replacing TOPSe with a stoichiometric amount of a secondary phosphine chalcogenide such as diphenylphosphine selenide. Based on our observations, we have proposed potential monomer identities, reaction pathways, and transition states and believe this mechanism to be universal to all II-VI and IV-VI QDs synthesized using phosphine based methods.

  13. Mysteries of TOPSe Revealed: Insights into Quantum Dot Nucleation

    PubMed Central

    Evans, Christopher M.; Evans, Meagan E.

    2010-01-01

    We have investigated the reaction mechanism responsible for QD nucleation using optical absorption and nuclear magnetic resonance spectroscopies. For typical II-VI and IV-VI quantum dot (QD) syntheses, pure tertiary phosphine selenide sources (e.g. trioctylphosphine selenide (TOPSe)) were surprisingly found to be unreactive with metal carboxylates and incapable of yielding QDs. Rather, small quantities of secondary phosphines, which are impurities in tertiary phosphines, are entirely responsible for the nucleation of QDs; their low concentrations account for poor synthetic conversion yields. QD yields increase to nearly quantitative levels when replacing TOPSe with a stoichiometric amount of a secondary phosphine chalcogenide such as diphenylphosphine selenide. Based on our observations, we have proposed potential monomer identities, reaction pathways and transition states, and believe this mechanism to be universal to all II-VI and IV-VI QDs synthesized using phosphine based methods. PMID:20698646

  14. Prostate segmentation: an efficient convex optimization approach with axial symmetry using 3-D TRUS and MR images.

    PubMed

    Qiu, Wu; Yuan, Jing; Ukwatta, Eranga; Sun, Yue; Rajchl, Martin; Fenster, Aaron

    2014-04-01

    We propose a novel global optimization-based approach to segmentation of 3-D prostate transrectal ultrasound (TRUS) and T2-weighted magnetic resonance (MR) images, enforcing the inherent axial symmetry of prostate shapes to simultaneously adjust a series of 2-D slice-wise segmentations in a "global" 3-D sense. We show that the introduced challenging combinatorial optimization problem can be solved globally and exactly by means of convex relaxation. In this regard, we propose a novel coherent continuous max-flow model (CCMFM), which derives a new and efficient duality-based algorithm, leading to a GPU-based implementation to achieve high computational speeds. Experiments with 25 3-D TRUS images and 30 3-D T2w MR images from our dataset, and 50 3-D T2w MR images from a public dataset, demonstrate that the proposed approach can segment a 3-D prostate TRUS/MR image within 5-6 s including 4-5 s for initialization, yielding a mean Dice similarity coefficient of 93.2%±2.0% for 3-D TRUS images and 88.5%±3.5% for 3-D MR images. The proposed method also yields relatively low intra- and inter-observer variability introduced by manual user initialization, suggesting high reproducibility, independent of observers.

  15. Semi-supervised prediction of gene regulatory networks using machine learning algorithms.

    PubMed

    Patel, Nihir; Wang, Jason T L

    2015-10-01

    Use of computational methods to predict gene regulatory networks (GRNs) from gene expression data is a challenging task. Many studies have been conducted using unsupervised methods to fulfill the task; however, such methods usually yield low prediction accuracies due to the lack of training data. In this article, we propose semi-supervised methods for GRN prediction by utilizing two machine learning algorithms, namely, support vector machines (SVM) and random forests (RF). The semi-supervised methods make use of unlabelled data for training. We investigated inductive and transductive learning approaches, both of which adopt an iterative procedure to obtain reliable negative training data from the unlabelled data. We then applied our semi-supervised methods to gene expression data of Escherichia coli and Saccharomyces cerevisiae, and evaluated the performance of our methods using the expression data. Our analysis indicated that the transductive learning approach outperformed the inductive learning approach for both organisms. However, there was no conclusive difference identified in the performance of SVM and RF. Experimental results also showed that the proposed semi-supervised methods performed better than existing supervised methods for both organisms.

  16. Marker-free motion correction in weight-bearing cone-beam CT of the knee joint

    PubMed Central

    Berger, M.; Müller, K.; Aichert, A.; Unberath, M.; Thies, J.; Choi, J.-H.; Fahrig, R.; Maier, A.

    2016-01-01

    Purpose: To allow for a purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint. Methods: Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method that is based on 2D/3D registration of left and right femur and tibia segmented from a prior, motion-free reconstruction acquired in supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of individual bones is combined into global motion fields using thin-plate-spline extrapolation. These can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method using gradient correlation (GC) and normalized gradient information (NGI) as similarity measures for the 2D/3D registration. Results: The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality at the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs, where the authors observed slight streaking of the NGI and GC methods. For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The authors' quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction, 53.3 and 56.1 for the proposed method using GC and NGI, respectively, and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan. Conclusions: The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination, which will improve the clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline, which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management. PMID:26936708
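    A minimal sketch of the thin-plate-spline extrapolation step, using scipy's RBFInterpolator to turn sparse per-bone displacement samples into a dense motion field; the point counts, coordinates, and smoothing value are synthetic placeholders:

    ```python
    import numpy as np
    from scipy.interpolate import RBFInterpolator

    # Sparse motion samples: points on the segmented bone surfaces (mm) and the
    # rigid-motion displacement vectors estimated for them by 2D/3D registration.
    rng = np.random.default_rng(1)
    bone_pts = rng.uniform(0, 100, size=(60, 3))
    bone_disp = rng.normal(0, 1.0, size=(60, 3))

    tps = RBFInterpolator(bone_pts, bone_disp,
                          kernel='thin_plate_spline', smoothing=0.0)

    # Query the dense motion field on the voxel grid used in the backprojection step
    grid = np.stack(np.meshgrid(np.arange(0, 100, 10),
                                np.arange(0, 100, 10),
                                np.arange(0, 100, 10),
                                indexing='ij'), axis=-1).reshape(-1, 3)
    dense_field = tps(grid.astype(float))
    print(dense_field.shape)  # (1000, 3): one displacement vector per grid point
    ```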

  17. Integrated bioethanol production to boost low-concentrated cellulosic ethanol without sacrificing ethanol yield.

    PubMed

    Xu, Youjie; Zhang, Meng; Roozeboom, Kraig; Wang, Donghai

    2018-02-01

    Four integrated designs were proposed to boost cellulosic ethanol titer and yield. Results indicated that co-fermentation of corn flour with hydrolysate liquor from saccharified corn stover was the best integration scheme, boosting ethanol titers from 19.9 to 123.2 g/L at a biomass loading of 8% and from 36.8 to 130.2 g/L at a biomass loading of 16%, while meeting the minimal ethanol distillation requirement of 40 g/L and achieving high ethanol yields of above 90%. These results indicated that integration of first- and second-generation ethanol production could significantly accelerate the commercialization of cellulosic biofuel production. Co-fermentation of a starchy substrate with hydrolysate liquor from saccharified biomass is able to significantly enhance ethanol concentration and reduce the energy cost of distillation without sacrificing ethanol yields. This novel method could be extended to any pretreatment of biomass, from low- to high-pH pretreatment, as demonstrated in this study. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Temperature-dependent regioselectivity of nucleophilic aromatic photosubstitution. Evidence that activation energy controls reactivity.

    PubMed

    Wubbels, Gene G; Tamura, Ryo; Gannon, Emmett J

    2013-05-17

    Irradiation (λ > 330 nm) of 2-chloro-4-nitroanisole (1) at 25 °C in aqueous NaOH forms three substitution photoproducts: 2-methoxy-5-nitrophenol (2), 2-chloro-4-nitrophenol (3), and 3-chloro-4-methoxyphenol (4), in chemical yields of 69.2%, 14.3%, and 16.5%. The activation energies for the elementary steps from the triplet state at 25 °C were determined to be 1.8, 2.4, and 2.7 kcal/mol, respectively. The chemical yields of each of the three products were determined for exhaustive irradiations at 0, 35, and 70 °C. The variation with temperature of the experimental yields is reproduced almost exactly by the yields calculated with the Arrhenius equation. This indicates that activation energy is the fundamental property related to regioselectivity in nucleophilic aromatic photosubstitution of the S(N)2 Ar* type. The many methods proposed for predicting regioselectivity in reactions of this type have had limited success and have not been related to activation energy.
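    A small worked example of the Arrhenius argument reported here: back out relative pre-exponential factors from the 25 °C yields, then predict the branching at the other temperatures from k_i = A_i exp(-Ea_i/RT). This assumes the yields are proportional to the rate constants of the competing elementary steps, which is the paper's premise:

    ```python
    import numpy as np

    R = 1.987e-3                             # kcal/(mol K)
    Ea = np.array([1.8, 2.4, 2.7])           # kcal/mol, products 2, 3, 4
    y25 = np.array([0.692, 0.143, 0.165])    # chemical yields at 25 C

    # Relative pre-exponential factors consistent with the 25 C branching ratios
    T0 = 298.15
    A = y25 / np.exp(-Ea / (R * T0))

    for T in (273.15, 308.15, 343.15):       # 0, 35, 70 C
        k = A * np.exp(-Ea / (R * T))
        print(f"{T - 273.15:.0f} C:", np.round(k / k.sum(), 3))
    ```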

  19. Wind power error estimation in resource assessments.

    PubMed

    Rodríguez, Osvaldo; Del Río, Jesús A; Jaramillo, Oscar A; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies.
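    A hedged numpy sketch of the propagation step: fit a polynomial to a power curve (a generic illustrative curve stands in for the study's 28 Lagrange-fitted curves) and push a 10% wind-speed error through its first derivative:

    ```python
    import numpy as np

    # Illustrative power curve of a generic turbine (kW vs m/s), not study data
    v_tab = np.array([3, 5, 7, 9, 11, 13, 15], dtype=float)
    p_tab = np.array([25, 150, 450, 900, 1400, 1800, 2000], dtype=float)
    power = np.poly1d(np.polyfit(v_tab, p_tab, 4))
    dpower = power.deriv()

    def power_with_error(v, rel_speed_err=0.10):
        """First-order propagation of a wind-speed error through the power curve."""
        dv = rel_speed_err * v
        P = power(v)
        dP = abs(dpower(v)) * dv
        return P, dP, dP / P

    for v in (6.0, 9.0, 12.0):
        P, dP, rel = power_with_error(v)
        print(f"v={v} m/s: P={P:.0f} kW, dP={dP:.0f} kW ({100 * rel:.1f}%)")
    ```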

  20. Wind Power Error Estimation in Resource Assessments

    PubMed Central

    Rodríguez, Osvaldo; del Río, Jesús A.; Jaramillo, Oscar A.; Martínez, Manuel

    2015-01-01

    Estimating the power output is one of the elements that determine the techno-economic feasibility of a renewable project. At present, there is a need to develop reliable methods that achieve this goal, thereby contributing to wind power penetration. In this study, we propose a method for wind power error estimation based on the wind speed measurement error, probability density function, and wind turbine power curves. This method uses the actual wind speed data without prior statistical treatment, based on 28 wind turbine power curves fitted by Lagrange's method, to calculate the estimated wind power output and the corresponding error propagation. We found that wind speed percentage errors of 10% were propagated into the power output estimates, thereby yielding an error of 5%. The proposed error propagation complements traditional power resource assessments. The wind power estimation error also allows us to estimate intervals for the levelized cost of power production or the investment return time. The implementation of this method increases the reliability of techno-economic resource assessment studies. PMID:26000444

  1. Marker-free motion correction in weight-bearing cone-beam CT of the knee joint.

    PubMed

    Berger, M; Müller, K; Aichert, A; Unberath, M; Thies, J; Choi, J-H; Fahrig, R; Maier, A

    2016-03-01

    To allow for a purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint. Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method that is based on 2D/3D registration of left and right femur and tibia segmented from a prior, motion-free reconstruction acquired in supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of individual bones is combined into global motion fields using thin-plate-spline extrapolation. These can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method using gradient correlation (GC) and normalized gradient information (NGI) as similarity measures for the 2D/3D registration. The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality at the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs, where the authors observed slight streaking of the NGI and GC methods. For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The authors' quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction, 53.3 and 56.1 for the proposed method using GC and NGI, respectively, and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan. The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination, which will improve the clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline, which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management.

  2. Mammogram segmentation using maximal cell strength updation in cellular automata.

    PubMed

    Anitha, J; Peter, J Dinesh

    2015-08-01

    Breast cancer is the most frequently diagnosed type of cancer among women. Mammogram is one of the most effective tools for early detection of breast cancer. Various computer-aided systems have been introduced to detect breast cancer from mammogram images. In a computer-aided diagnosis system, detection and segmentation of breast masses from the background tissues is an important issue. In this paper, an automatic segmentation method is proposed to identify and segment the suspicious mass regions of mammograms using a modified transition rule named maximal cell strength updation in cellular automata (CA). In the coarse-level segmentation, the proposed method performs adaptive global thresholding based on histogram peak analysis to obtain the rough region of interest. An automatic seed point selection is proposed using a gray-level co-occurrence matrix-based sum average feature in the coarse segmented image. Finally, the method utilizes CA with the identified initial seed point and the modified transition rule to segment the mass region. The proposed approach is evaluated on a dataset of 70 mammograms with masses from the mini-MIAS database. Experimental results show that the proposed approach yields promising results for segmenting the mass region in mammograms, with a sensitivity of 92.25% and an accuracy of 93.48%.

  3. Supervised segmentation of microelectrode recording artifacts using power spectral density.

    PubMed

    Bakstein, Eduard; Schneider, Jakub; Sieger, Tomas; Novak, Daniel; Wild, Jiri; Jech, Robert

    2015-08-01

    Appropriate detection of clean signal segments in extracellular microelectrode recordings (MER) is vital for maintaining a high signal-to-noise ratio in MER studies. Existing alternatives to manual signal inspection are based on unsupervised change-point detection. We present a method of supervised MER artifact classification based on power spectral density (PSD) and evaluate its performance on a database of 95 labelled MER signals. The proposed method yielded a test-set accuracy of 90%, which was close to the accuracy of annotation (94%). The unsupervised methods achieved an accuracy of about 77% on both training and testing data.
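    A minimal sketch of such a supervised pipeline, Welch PSD band powers plus an off-the-shelf classifier; the band count, sampling rate, and synthetic stand-in data are assumptions, not details from the paper:

    ```python
    import numpy as np
    from scipy.signal import welch
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    def psd_features(segments, fs=24000, n_bands=20):
        """Log mean PSD in n_bands frequency bands per MER segment."""
        feats = []
        for seg in segments:
            _, pxx = welch(seg, fs=fs, nperseg=1024)
            feats.append(np.log([b.mean() for b in np.array_split(pxx, n_bands)]))
        return np.array(feats)

    # Synthetic stand-in for 95 labelled MER signals (1 = artifact, 0 = clean)
    rng = np.random.default_rng(0)
    labels = rng.integers(0, 2, 95)
    segments = [rng.standard_normal(24000) * (1.0 + lbl) for lbl in labels]

    X = psd_features(segments)
    Xtr, Xte, ytr, yte = train_test_split(X, labels, test_size=0.3, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(Xtr, ytr)
    print(f"test-set accuracy: {clf.score(Xte, yte):.2f}")
    ```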

  4. Heuristic algorithm for optical character recognition of Arabic script

    NASA Astrophysics Data System (ADS)

    Yarman-Vural, Fatos T.; Atici, A.

    1996-02-01

    In this paper, a heuristic method is developed for segmentation, feature extraction and recognition of the Arabic script. The study is part of a large project for the transcription of the documents in Ottoman Archives. A geometrical and topological feature analysis method is developed for segmentation and feature extraction stages. Chain code transformation is applied to main strokes of the characters which are then classified by the hidden Markov model (HMM) in the recognition stage. Experimental results indicate that the performance of the proposed method is impressive, provided that the thinning process does not yield spurious branches.

  5. Efficient method of protein extraction from Theobroma cacao L. roots for two-dimensional gel electrophoresis and mass spectrometry analyses.

    PubMed

    Bertolde, F Z; Almeida, A-A F; Silva, F A C; Oliveira, T M; Pirovani, C P

    2014-07-04

    Theobroma cacao is a woody and recalcitrant plant with a very high level of interfering compounds. Standard protocols for protein extraction have been proposed for various types of samples, but the presence of interfering compounds in many samples prevented the isolation of proteins suitable for two-dimensional gel electrophoresis (2-DE). An efficient method to extract root proteins for 2-DE was established to overcome these problems. The main features of this protocol are: i) precipitation with trichloroacetic acid/acetone overnight to prepare the acetone dry powder (ADP), ii) several additional steps of sonication in the ADP preparation and extractions with dense sodium dodecyl sulfate and phenol, and iii) adding two stages of phenol extraction. Proteins were extracted from roots using this new protocol (Method B) and a protocol described in the literature for T. cacao leaves and meristems (Method A). Using these methods, we obtained protein yields of about 0.7 and 2.5 mg per 1.0 g lyophilized root, and totals of 60 and 400 spots could be separated, respectively. Through Method B, it was possible to isolate high-quality protein at high yield from T. cacao roots for high-quality 2-DE gels. To demonstrate the quality of the proteins extracted from T. cacao roots using Method B, several protein spots were cut from the 2-DE gels, analyzed by tandem mass spectrometry, and identified. Method B was further tested on Citrus roots, with a protein yield of about 2.7 mg per 1.0 g lyophilized root and 800 detected spots.

  6. A novel framework to evaluate pedestrian safety at non-signalized locations.

    PubMed

    Fu, Ting; Miranda-Moreno, Luis; Saunier, Nicolas

    2018-02-01

    This paper proposes a new framework to evaluate pedestrian safety at non-signalized crosswalk locations. In the proposed framework, the yielding maneuver of a driver in response to a pedestrian is split into reaction time and braking time. Hence, the relationship between the distance required for a yielding maneuver and the approaching vehicle speed depends on the reaction time of the driver and the deceleration rate that the vehicle can achieve. The proposed framework is represented in the distance-velocity (DV) diagram and referred to as the DV model. The interactions between approaching vehicles and pedestrians showing the intention to cross are divided into three categories: i) situations where the vehicle cannot make a complete stop, ii) situations where the vehicle's ability to stop depends on the driver reaction time, and iii) situations where the vehicle can make a complete stop. Based on these classifications, non-yielding maneuvers are classified as "non-infraction non-yielding" maneuvers, "uncertain non-yielding" maneuvers and "non-yielding" violations, respectively, as sketched in the example below. From the pedestrian perspective, crossing decisions are classified as dangerous crossings, risky crossings and safe crossings accordingly. The yielding compliance and yielding rate, as measures of the yielding behavior, are redefined based on these categories. Time to crossing and the deceleration rate required for the vehicle to stop are used to measure the probability of collision. Finally, the framework is demonstrated through a case study evaluating pedestrian safety at three different types of non-signalized crossings: a painted crosswalk, an unprotected crosswalk, and a crosswalk controlled by stop signs. Results from the case study suggest that the proposed framework works well in describing pedestrian-vehicle interactions, which helps in evaluating pedestrian safety at non-signalized crosswalk locations. Copyright © 2017 Elsevier Ltd. All rights reserved.
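    A small sketch of the DV-diagram classification: compare the available distance with the stopping distance v·t_r + v²/(2a) under fast and slow reaction times. The numeric bounds are illustrative assumptions, not the paper's calibrated values:

    ```python
    def classify_interaction(v, d, a_max=3.4, t_fast=0.5, t_slow=2.5):
        """Classify a vehicle-pedestrian interaction in the distance-velocity plane.

        v: approach speed (m/s); d: distance to the crosswalk (m).
        a_max (m/s^2), t_fast, t_slow (s) are illustrative bounds on braking
        and driver reaction time.
        """
        d_best = v * t_fast + v**2 / (2 * a_max)   # alert driver, hard braking
        d_worst = v * t_slow + v**2 / (2 * a_max)  # slow driver reaction
        if d < d_best:
            return "cannot stop -> non-infraction non-yielding"
        if d < d_worst:
            return "stop depends on reaction time -> uncertain non-yielding"
        return "can stop -> not yielding would be a violation"

    print(classify_interaction(v=13.9, d=40.0))  # ~50 km/h, 40 m away
    ```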

  7. Characterization of lignins isolated with alkali from the hydrothermal or dilute-acid pretreated rapeseed straw during bioethanol production.

    PubMed

    Chen, Bo-Yang; Zhao, Bao-Cheng; Li, Ming-Fei; Sun, Run-Cang

    2018-01-01

    A better understanding of the lignin in the straw of rapeseed, Brassica campestris L., is a prerequisite for promoting the rapeseed biorefinery industry. Two different methods for fractionating lignin from rapeseed straw were proposed in this study. Lignin in the raw material was isolated with an alkaline solution and recovered by acid precipitation. A comparison between the two lignin preparations obtained from the two different methods was made in terms of yield and purity. The structural features were investigated by gel permeation chromatography, FT-IR spectroscopy, 2D-HSQC NMR and 31P NMR. Taking the yield and purity into consideration, the proposed methods are effective for extracting lignin. NMR results showed that syringyl (S) was the predominant unit over guaiacyl (G) or p-hydroxyphenyl (H) units in the lignin preparations, and linkages β-O-4', β-β' and β-5' were also identified and quantified by NMR techniques. This study demonstrated that the combination of hydrothermal or dilute-acid pretreatment and an alkaline process could efficiently isolate lignins from rapeseed straw for further industrial applications. It was found that the enzymatic hydrolysis of the two-step pretreated rapeseed straw increased 5.9-fold compared with the untreated straw, which is beneficial for bioethanol production from rapeseed straw. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Three-dimensional nonrigid landmark-based magnetic resonance to transrectal ultrasound registration for image-guided prostate biopsy.

    PubMed

    Sun, Yue; Qiu, Wu; Yuan, Jing; Romagnoli, Cesare; Fenster, Aaron

    2015-04-01

    Registration of three-dimensional (3-D) magnetic resonance (MR) to 3-D transrectal ultrasound (TRUS) prostate images is an important step in the planning and guidance of 3-D TRUS guided prostate biopsy. In order to accurately and efficiently perform the registration, a nonrigid landmark-based registration method is required to account for the different deformations of the prostate when using these two modalities. We describe a nonrigid landmark-based method for registration of 3-D TRUS to MR prostate images. The landmark-based registration method first makes use of an initial rigid registration of 3-D MR to 3-D TRUS images using six manually placed approximately corresponding landmarks in each image. Following manual initialization, the two prostate surfaces are segmented from 3-D MR and TRUS images and then nonrigidly registered using the following steps: (1) rotationally reslicing corresponding segmented prostate surfaces from both 3-D MR and TRUS images around a specified axis, (2) an approach to find point correspondences on the surfaces of the segmented surfaces, and (3) deformation of the surface of the prostate in the MR image to match the surface of the prostate in the 3-D TRUS image and the interior using a thin-plate spline algorithm. The registration accuracy was evaluated using 17 patient prostate MR and 3-D TRUS images by measuring the target registration error (TRE). Experimental results showed that the proposed method yielded an overall mean TRE of [Formula: see text] for the rigid registration and [Formula: see text] for the nonrigid registration, which is favorably comparable to a clinical requirement for an error of less than 2.5 mm. A landmark-based nonrigid 3-D MR-TRUS registration approach is proposed, which takes into account the correspondences on the prostate surface, inside the prostate, as well as the centroid of the prostate. Experimental results indicate that the proposed method yields clinically sufficient accuracy.

  9. Direct optical band gap measurement in polycrystalline semiconductors: A critical look at the Tauc method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dolgonos, Alex; Mason, Thomas O.; Poeppelmeier, Kenneth R., E-mail: krp@northwestern.edu

    2016-08-15

    The direct optical band gap of semiconductors is traditionally measured by extrapolating the linear region of the square of the absorption curve to the x-axis, and a variation of this method, developed by Tauc, has also been widely used. The application of the Tauc method to crystalline materials is rooted in misconception, and traditional linear extrapolation methods are inappropriate for use on degenerate semiconductors, where the occupation of conduction band energy states cannot be ignored. A new method is proposed for extracting a direct optical band gap from absorption spectra of degenerately-doped bulk semiconductors. This method was applied to pseudo-absorption spectra of Sn-doped In2O3 (ITO), converted from diffuse-reflectance measurements on bulk specimens. The results of this analysis were corroborated by room-temperature photoluminescence excitation measurements, which yielded values of optical band gap and Burstein-Moss shift that are consistent with previous studies on In2O3 single crystals and thin films. - Highlights: • The Tauc method of band gap measurement is re-evaluated for crystalline materials. • Graphical method proposed for extracting optical band gaps from absorption spectra. • The proposed method incorporates an energy broadening term for energy transitions. • Values for ITO were self-consistent between two different measurement methods.
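    For reference, a minimal numpy sketch of the traditional linear-extrapolation construction that the paper critiques, applied to a synthetic direct-gap absorption edge (illustrative data, not ITO measurements):

    ```python
    import numpy as np

    def direct_gap_extrapolation(hv, alpha, window):
        """Classical linear extrapolation of (alpha*h*nu)^2 to the energy axis.

        hv: photon energies (eV); alpha: absorption coefficient; window: (lo, hi)
        energy range judged to be linear. Returns the apparent optical gap.
        """
        y = (alpha * hv) ** 2
        m = (hv >= window[0]) & (hv <= window[1])
        slope, intercept = np.polyfit(hv[m], y[m], 1)
        return -intercept / slope  # x-axis intercept

    # Synthetic direct-gap edge with Eg = 3.6 eV
    hv = np.linspace(3.0, 4.2, 200)
    alpha = np.sqrt(np.clip(hv - 3.6, 0.0, None)) / hv
    print(f"extrapolated gap: {direct_gap_extrapolation(hv, alpha, (3.7, 4.0)):.3f} eV")
    ```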

  10. Low Energy Sputtering Experiments for Ion Engine Lifetime Assessment

    NASA Technical Reports Server (NTRS)

    Duchemin, Olivier B.; Polk, James E.

    1999-01-01

    The sputtering yield of molybdenum under xenon ion bombardment was measured using a Quartz Crystal Microbalance. The measurements were made for ion kinetic energies in the range 100 eV-1 keV on molybdenum films deposited by magnetron sputtering in conditions optimized to reproduce or approach bulk-like properties. SEM micrographs for different anode bias voltages during the deposition are compared, and four different methods were implemented to estimate the density of the molybdenum films. A careful discussion of the Quartz Crystal Microbalance method is presented, and it is shown that this method can be used to measure mass changes that are distributed unevenly on the crystal electrode surface, if an analytical expression is known for the differential mass-sensitivity of the crystal and the erosion profile. Finally, results are presented that are in good agreement with previously published data, and it is concluded that this method holds the promise of enabling sputtering yield measurements at energies closer to the threshold energy in the very short term.

  11. Discussion on the installation checking method of precast composite floor slab with lattice girders

    NASA Astrophysics Data System (ADS)

    Chen, Li; Jin, Xing; Wang, Yahui; Zhou, Hele; Gu, Jianing

    2018-03-01

    Based on the installation checking requirements of China's current standards and the international norms for prefabricated structural precast components, this paper proposes an installation checking method for precast composite floor slabs with lattice girders. Taking as the checking object an equivalent composite beam consisting of a single lattice girder and the precast concrete slab, the method checks the compression instability stress of the upper chords and the yield stress of the slab distribution reinforcement at the maximum positive moment, and the tensile yield stress of the upper chords, the normal compression stress of the slab section, and the shear instability stress of the diagonal bars at the maximum negative moment. It also checks the bending stress and deflection of the support beams, the strength and compression-stability bearing capacity of the vertical supports, the shear bearing capacity of the bolts, and the compression bearing capacity of the steel tube wall at the bolts. Each checking object is assigned a specific load value and load combination. The application of the installation checking method is presented and verified by a worked example.

  12. A new method to derive electronegativity from resonant inelastic x-ray scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carniato, S.; Journel, L.; Guillemin, R.

    2012-10-14

    Electronegativity is a well-known property of atoms and substituent groups. Because there is no direct way to measure it, establishing a useful scale for electronegativity often entails correlating it to another chemical parameter; a wide variety of methods have been proposed over the past 80 years to do just that. This work reports a new approach that connects electronegativity to a spectroscopic parameter derived from resonant inelastic x-ray scattering. The new method is demonstrated using a series of chlorine-containing compounds, focusing on the Cl 2p⁻¹LUMO¹ electronic states reached after Cl 1s → LUMO core excitation and subsequent KL radiative decay. Based on an electron-density analysis of the LUMOs, the relative weights of the Cl 2pz atomic orbital contributing to the Cl 2p3/2 molecular spin-orbit components are shown to yield a linear electronegativity scale consistent with previous approaches.

  13. Effect of surfactant assisted sonic pretreatment on liquefaction of fruits and vegetable residue: Characterization, acidogenesis, biomethane yield and energy ratio.

    PubMed

    Shanthi, M; Rajesh Banu, J; Sivashanmugam, P

    2018-05-15

    The present study explored the disintegration potential of fruit and vegetable residue through sodium dodecyl sulphate (SDS) assisted sonic pretreatment (SSP). In the SSP method, the biomass barrier (lignin) was first removed using SDS at different dosages, and the residue was subsequently disintegrated sonically. The effects of SSP were assessed based on the dissolved organic release (DOR) of the fruit and vegetable waste and the specific energy input. The SSP method achieved a higher DOR rate and suspended solids reduction (26% and 16%) at the optimum SDS dosage of 0.035 g/g SS with the lowest specific energy input of 5400 kJ/kg TS, compared to ultrasonic pretreatment (UP) (16% and 10%). Fermentation and biomethane potential assays revealed the highest volatile fatty acid production and methane yield for SSP (1950 mg/L, 0.6 g/g COD) compared with UP. The energy ratio obtained was 0.9 for SSP, indicating that the proposed method is energetically efficient. Copyright © 2018 Elsevier Ltd. All rights reserved.

  14. WE-G-18C-07: Accelerated Water/fat Separation in MRI for Radiotherapy Planning Using Multi-Band Imaging Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Crijns, S; Stemkens, B; Sbrizzi, A

    Purpose: Dixon sequences are used to characterize disease processes, obtain good fat or water separation in cases where fat suppression fails, and obtain pseudo-CT datasets. Dixon's method uses at least two images acquired with different echo times and thus requires prolonged acquisition times. To overcome the associated problems (e.g., for DCE/cine-MRI), we propose to use a method for water/fat separation based on spectrally selective RF pulses. Methods: Two alternating RF pulses were used that impose a fat-selective phase cycling over the phase-encoding lines, which results in a spatial shift for fat in the reconstructed image, identical to that in CAIPIRINHA. The associated aliasing artefacts were resolved using the encoding power of a multi-element receiver array, analogous to SENSE. In vivo measurements were performed on a 1.5T clinical MR-scanner in a healthy volunteer's legs, using a four channel receiver coil. Gradient echo images were acquired with TE/TR = 2.3/4.7ms, flip angle 20°, FOV 45×22.5 cm², matrix 480×216, slice thickness 5mm. Dixon images were acquired with TE,1/TE,2/TR=2.2/4.6/7ms. All image reconstructions were done in Matlab using the ReconFrame toolbox (Gyrotools, Zurich, CH). Results: RF pulse alternation yields a fat image offset from the water image. Hence the water and fat images fold over, which is resolved using in-plane SENSE reconstruction. Using the proposed technique, we achieved excellent water/fat separation comparable to Dixon images, while acquiring images at only one echo time. Conclusion: The proposed technique yields both in-phase water and fat images at arbitrary echo times and requires only one measurement, thereby shortening the acquisition time by a factor of 2. In future work the technique may be extended to a multi-band water/fat separation sequence that is able to achieve single-point water/fat separation in multiple slices at once and hence yields higher speed-up factors.

  15. Efficient estimation of the maximum metabolic productivity of batch systems.

    PubMed

    St John, Peter C; Crowley, Michael F; Bomble, Yannick J

    2017-01-01

    Production of chemicals from engineered organisms in a batch culture involves an inherent trade-off between productivity, yield, and titer. Existing strategies for strain design typically focus on designing mutations that achieve the highest yield possible while maintaining growth viability. While these methods are computationally tractable, an optimum productivity could be achieved by a dynamic strategy in which the intracellular division of resources is permitted to change with time. New methods for the design and implementation of dynamic microbial processes, both computational and experimental, have therefore been explored to maximize productivity. However, solving for the optimal metabolic behavior under the assumption that all fluxes in the cell are free to vary is a challenging numerical task. Previous studies have therefore typically focused on simpler strategies that are more feasible to implement in practice, such as the time-dependent control of a single flux or control variable. This work presents an efficient method for the calculation of a maximum theoretical productivity of a batch culture system using a dynamic optimization framework. The proposed method follows traditional assumptions of dynamic flux balance analysis: first, that internal metabolite fluxes are governed by a pseudo-steady state, and secondly that external metabolite fluxes are dynamically bounded. The optimization is achieved via collocation on finite elements, and accounts explicitly for an arbitrary number of flux changes. The method can be further extended to calculate the complete Pareto surface of productivity as a function of yield. We apply this method to succinate production in two engineered microbial hosts, Escherichia coli and Actinobacillus succinogenes, and demonstrate that maximum productivities can be more than doubled under dynamic control regimes. The maximum theoretical yield is a measure that is well established in the metabolic engineering literature and whose use helps guide strain and pathway selection. We present a robust, efficient method to calculate the maximum theoretical productivity: a metric that will similarly help guide and evaluate the development of dynamic microbial bioconversions. Our results demonstrate that nearly optimal yields and productivities can be achieved with only two discrete flux stages, indicating that near-theoretical productivities might be achievable in practice.
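    A toy sketch of the two-stage intuition: Euler-integrate a growth stage followed by a production stage and scan the switch time for the best batch productivity. The rates are invented illustrative numbers; the paper's collocation-based dynamic optimization is not reproduced:

    ```python
    import numpy as np

    mu, qp = 0.4, 0.8   # growth rate (1/h) and specific production rate (g/gDW/h)

    def batch_productivity(t_switch, t_end=30.0, x0=0.05, dt=0.01):
        """Euler-integrate growth then production; return titer/time at harvest."""
        x, p = x0, 0.0
        for t in np.arange(0.0, t_end, dt):
            if t < t_switch:
                x += mu * x * dt      # stage 1: all flux to biomass
            else:
                p += qp * x * dt      # stage 2: all flux to product
        return p / t_end

    switches = np.linspace(0.0, 25.0, 51)
    prods = [batch_productivity(ts) for ts in switches]
    best = switches[int(np.argmax(prods))]
    print(f"best switch time: {best:.1f} h, productivity {max(prods):.2f} g/L/h")
    ```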

  16. Dose-volume histogram prediction using density estimation.

    PubMed

    Skarpman Munter, Johanna; Sjölund, Jens

    2015-09-07

    Knowledge of what dose-volume histograms can be expected for a previously unseen patient could increase consistency and quality in radiotherapy treatment planning. We propose a machine learning method that uses previous treatment plans to predict such dose-volume histograms. The key to the approach is the framing of dose-volume histograms in a probabilistic setting. The training consists of estimating, from the patients in the training set, the joint probability distribution of some predictive features and the dose. The joint distribution immediately provides an estimate of the conditional probability of the dose given the values of the predictive features. The prediction consists of estimating, from the new patient, the distribution of the predictive features and marginalizing the conditional probability from the training over this. Integrating the resulting probability distribution for the dose yields an estimate of the dose-volume histogram. To illustrate how the proposed method relates to previously proposed methods, we use the signed distance to the target boundary as a single predictive feature. As a proof-of-concept, we predicted dose-volume histograms for the brainstems of 22 acoustic schwannoma patients treated with stereotactic radiosurgery, and for the lungs of 9 lung cancer patients treated with stereotactic body radiation therapy. Comparing with two previous attempts at dose-volume histogram prediction we find that, given the same input data, the predictions are similar. In summary, we propose a method for dose-volume histogram prediction that exploits the intrinsic probabilistic properties of dose-volume histograms. We argue that the proposed method makes up for some deficiencies in previously proposed methods, thereby potentially increasing ease of use, flexibility and ability to perform well with small amounts of training data.
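    A hedged proof-of-concept sketch of the probabilistic framing: estimate the joint density of the signed-distance feature and dose with a KDE, condition on a new patient's distances, and marginalize to a cumulative DVH. All data here are synthetic placeholders, not clinical plans:

    ```python
    import numpy as np
    from scipy.stats import gaussian_kde

    rng = np.random.default_rng(0)
    s_train = rng.uniform(-5, 30, 5000)                    # mm, training voxels
    d_train = 60 * np.exp(-np.clip(s_train, 0, None) / 8) + rng.normal(0, 2, 5000)
    joint = gaussian_kde(np.vstack([s_train, d_train]))    # p(distance, dose)

    dose_grid = np.linspace(0, 70, 141)

    def predicted_dvh(s_new):
        """Cumulative DVH: fraction of volume receiving at least each dose level."""
        dvh = np.zeros_like(dose_grid)
        for s in s_new:
            pts = np.vstack([np.full_like(dose_grid, s), dose_grid])
            cond = joint(pts)
            cond /= cond.sum()                             # conditional p(dose | s)
            dvh += 1.0 - np.cumsum(cond) + cond            # P(dose >= d | s)
        return dvh / len(s_new)

    s_new = rng.uniform(-2, 25, 200)                       # new patient's voxels
    dvh = predicted_dvh(s_new)
    print(f"V20 estimate: {dvh[np.searchsorted(dose_grid, 20)]:.2f}")
    ```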

  17. A model-updating procedure to simulate piezoelectric transducers accurately.

    PubMed

    Piranda, B; Ballandras, S; Steichen, W; Hecart, B

    2001-09-01

    The use of numerical calculations based on finite element methods (FEM) has yielded significant improvements in the simulation and design of piezoelectric transducers utilized in acoustic imaging. However, the ultimate precision of such models is directly controlled by the accuracy of material characterization. The present work is dedicated to the development of a model-updating technique adapted to the problem of piezoelectric transducers. The updating process is applied using the experimental admittance of a given structure for which a finite element analysis is performed. The mathematical developments are reported and then applied to update the entries of a FEM of a two-layer structure (a PbZrTi (PZT) ridge glued on a backing) for which measurements were available. The efficiency of the proposed approach is demonstrated, yielding the definition of a new set of constants well adapted to predicting the structure response accurately. An improvement of the proposed approach, consisting of updating the material coefficients not only on the admittance but also on the impedance data, is finally discussed.

  18. Redundant via insertion in self-aligned double patterning

    NASA Astrophysics Data System (ADS)

    Song, Youngsoo; Jung, Jinwook; Shin, Youngsoo

    2017-03-01

    Redundant via (RV) insertion is employed to enhance via manufacturability and has been extensively studied. The self-aligned double patterning (SADP) process brings a new challenge to RV insertion, since the newly created cut for each inserted RV has to be taken care of. Specifically, when a cut for an RV, which we simply call an RV-cut, is formed, a cut conflict may occur with nearby line-end cuts, which results in a decrease in RV candidates. We introduce cut merging to reduce the number of cut conflicts; merged cuts are processed with a stitch using the litho-etch-litho-etch (LELE) multi-patterning method. In this paper, we propose a new RV insertion method with cut merging in SADP for the first time. In our experiments, a simple RV insertion allows 55.3% of vias to receive RVs; our proposed method, which considers cut merging, increases that number to 69.6% on average over the test circuits.

  19. A 2.5D Computational Method to Simulate Cylindrical Fluidized Beds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Tingwen; Benyahia, Sofiane; Dietiker, Jeff

    2015-02-17

    In this paper, the limitations of axisymmetric and Cartesian two-dimensional (2D) simulations of cylindrical gas-solid fluidized beds are discussed. A new method has been proposed to carry out pseudo-two-dimensional (2.5D) simulations of a cylindrical fluidized bed by appropriately combining the computational domains of Cartesian 2D and axisymmetric simulations. The proposed method was implemented in the open-source code MFIX and applied to the simulation of a lab-scale bubbling fluidized bed with the necessary sensitivity study. After a careful grid study to ensure that the numerical results are grid independent, detailed comparisons of the flow hydrodynamics were presented against axisymmetric and Cartesian 2D simulations. Furthermore, the 2.5D simulation results have been compared to a three-dimensional (3D) simulation for evaluation. This new approach yields better agreement with the 3D simulation results than either the axisymmetric or the Cartesian 2D simulations.

  20. Accurate Modeling Method for Cu Interconnect

    NASA Astrophysics Data System (ADS)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what have conventionally been treated as random variations, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  1. Fast Estimation of Dietary Fiber Content in Apple.

    PubMed

    Le Gall, Sophie; Even, Sonia; Lahaye, Marc

    2016-02-17

    Dietary fibers (DF) are one of the nutritional benefits of fleshy fruit consumption and are becoming a quality criterion for genetic selection by breeders. However, the AOAC total DF content determination is not readily amenable to screening large fruit collections. A new screening method for DF content in an apple collection, based on the automated preparation of cell wall material as an alcohol-insoluble residue (AIR), is proposed. The yield of AIR from 27 apple genotypes was compared with DF measured according to AOAC method 985.29. Although the residual protein content in AIRs did not affect DF measurement, subtraction of starch content above 3% dry weight in AIRs was needed for agreement with the AOAC-measured DF. A fast colorimetric screening of starch in AIR was developed to detect samples needing correction. The proposed method may prove useful for the rapid determination of DF in collections of other fleshy fruit besides apple.

  2. Element free Galerkin formulation of composite beam with longitudinal slip

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmad, Dzulkarnain; Mokhtaram, Mokhtazul Haizad; Badli, Mohd Iqbal

    2015-05-15

    The behaviour between the two materials in a composite beam is assumed to be partially interacting when longitudinal slip at their interfacial surfaces is considered. While such problems are commonly analysed with mesh-based formulations, this study used a meshless formulation, known as the Element Free Galerkin (EFG) method, for the numerical partial-interaction analysis of the beam. As a meshless formulation implies that the problem domain is discretised only by nodes, the EFG method is based on the Moving Least Squares (MLS) approach for the formulation of shape functions, with its weak form developed using the variational method. The essential boundary conditions are enforced by Lagrange multipliers. The proposed EFG formulation gives comparable results, as verified against the analytical solution, thus confirming its applicability to partial-interaction problems. Based on numerical test results, the Cubic Spline and Quartic Spline weight functions yield better accuracy for the EFG formulation compared to the other proposed weight functions.

  3. Virus Particle Detection by Convolutional Neural Network in Transmission Electron Microscopy Images.

    PubMed

    Ito, Eisuke; Sato, Takaaki; Sano, Daisuke; Utagawa, Etsuko; Kato, Tsuyoshi

    2018-06-01

    A new computational method for the detection of virus particles in transmission electron microscopy (TEM) images is presented. Our approach is to use a convolutional neural network that transforms a TEM image into a probabilistic map indicating where virus particles exist in the image. The proposed approach automatically and simultaneously learns both the discriminative features and the classifier for virus particle detection, in contrast to existing methods based on handcrafted features that yield many false positives and require several postprocessing steps. The detection performance of the proposed method was assessed on a dataset of TEM images containing feline calicivirus particles and compared with several existing detection methods, demonstrating state-of-the-art performance for virus detection. Since our method is based on supervised learning, which requires both the input images and their corresponding annotations, it is primarily suited to the detection of already-known viruses. However, the method is highly flexible, and the convolutional networks can adapt to any virus particle by learning automatically from an annotated dataset.
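
    A minimal sketch of the image-to-probability-map idea, written here with PyTorch: a small fully convolutional network whose output has the same spatial size as the input, with a per-pixel sigmoid. The layer sizes and depth are placeholders, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class VirusMap(nn.Module):
    """Toy fully convolutional net: grayscale TEM image in, probability map out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=1),            # per-pixel logit
        )

    def forward(self, x):
        return torch.sigmoid(self.features(x))          # P("virus here")

model = VirusMap()
image = torch.rand(1, 1, 256, 256)    # one synthetic grayscale image
prob_map = model(image)               # same spatial size, values in (0, 1)
```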

  4. Efficient methods for joint estimation of multiple fundamental frequencies in music signals

    NASA Astrophysics Data System (ADS)

    Pertusa, Antonio; Iñesta, José M.

    2012-12-01

    This study presents efficient techniques for multiple fundamental frequency estimation in music signals. The proposed methodology can infer harmonic patterns from a mixture, considering interactions with other sources, and evaluate them in a joint estimation scheme. For this purpose, a set of fundamental frequency candidates is first selected at each frame, and several hypothetical combinations of them are generated. Combinations are independently evaluated, and the most likely one is selected taking into account the intensity and spectral smoothness of its inferred patterns. The method is extended to consider adjacent frames in order to smooth the detection in time, and a pitch tracking stage is finally performed to increase the temporal coherence. The proposed algorithms were evaluated in MIREX contests, yielding state-of-the-art results with a very low computational burden.
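
    The joint evaluation step can be pictured with a toy salience function: each candidate combination is scored by the summed intensity of its inferred harmonic patterns plus a smoothness term. The second-difference smoothness proxy and the weighting below are assumptions for illustration, not the paper's exact measures.

```python
import numpy as np

def smoothness(envelope):
    # Smaller squared second differences -> smoother spectral envelope.
    return -np.sum(np.diff(envelope, n=2) ** 2)

def score_combination(patterns, w_smooth=0.1):
    """Toy salience for one hypothetical F0 combination.

    patterns : list of harmonic-amplitude vectors, one per F0 candidate.
    """
    intensity = sum(float(np.sum(p)) for p in patterns)
    smooth = sum(smoothness(p) for p in patterns)
    return intensity + w_smooth * smooth

# at each frame: best = max(candidate_combinations, key=score_combination)
```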

  5. A generalized simplest equation method and its application to the Boussinesq-Burgers equation.

    PubMed

    Sudao, Bilige; Wang, Xiaomin

    2015-01-01

    In this paper, a generalized simplest equation method is proposed to seek exact solutions of nonlinear evolution equations (NLEEs). In the method, we choose a solution expression with variable coefficients and a variable-coefficient ordinary differential auxiliary equation. This method can yield a Bäcklund transformation between NLEEs and a related constraint equation. By dealing with the constraint equation, we can derive an infinite number of exact solutions for NLEEs. These solutions include traveling wave solutions, non-traveling wave solutions, multi-soliton solutions, rational solutions, and other types of solutions. As applications, we obtained wide classes of exact solutions for the Boussinesq-Burgers equation by using the generalized simplest equation method.
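
    For readers unfamiliar with simplest-equation methods, the generic shape of such an ansatz is sketched below; the generalized version in the paper allows variable coefficients, and the exact auxiliary equation used there may differ from this Riccati-type example.

```latex
% Generic sketch of a simplest-equation ansatz (illustrative only):
% u is expanded in powers of an auxiliary function F, and F obeys a
% low-order auxiliary ODE such as a Riccati equation.
\begin{align*}
  u(x,t) &= \sum_{i=0}^{N} a_i(x,t)\, F^{i}\big(\xi(x,t)\big), \\
  F'(\xi) &= A\,F^{2}(\xi) + B\,F(\xi) + C .
\end{align*}
```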

  6. A Generalized Simplest Equation Method and Its Application to the Boussinesq-Burgers Equation

    PubMed Central

    Sudao, Bilige; Wang, Xiaomin

    2015-01-01

    In this paper, a generalized simplest equation method is proposed to seek exact solutions of nonlinear evolution equations (NLEEs). In the method, we choose a solution expression with variable coefficients and a variable-coefficient ordinary differential auxiliary equation. This method can yield a Bäcklund transformation between NLEEs and a related constraint equation. By dealing with the constraint equation, we can derive an infinite number of exact solutions for NLEEs. These solutions include traveling wave solutions, non-traveling wave solutions, multi-soliton solutions, rational solutions, and other types of solutions. As applications, we obtained wide classes of exact solutions for the Boussinesq-Burgers equation by using the generalized simplest equation method. PMID:25973605

  7. A new method for measuring low resistivity contacts between silver and YBa2Cu3O(7-x) superconductor

    NASA Technical Reports Server (NTRS)

    Hsi, Chi-Shiung; Haertling, Gene H.; Sherrill, Max D.

    1991-01-01

    Several methods of measuring contact resistivity between silver electrodes and YBa2Cu3O(7-x) superconductors were investigated, including the two-point, the three-point, and the lap-joint methods. The lap-joint method was found to yield the most consistent and reliable results and is proposed as a new technique for this measurement. Painting, embedding, and melting methods were used to apply the electrodes to the superconductor. Silver electrodes produced good ohmic contacts to YBa2Cu3O(7-x) superconductors, with contact resistivities as low as 1.9 × 10⁻⁹ Ω cm².

  8. Experimental study on secondary electron emission characteristics of Cu

    NASA Astrophysics Data System (ADS)

    Liu, Shenghua; Liu, Yudong; Wang, Pengcheng; Liu, Weibin; Pei, Guoxi; Zeng, Lei; Sun, Xiaoyang

    2018-02-01

    Secondary electron emission (SEE) from a surface is the origin of the multipacting effect, which can seriously deteriorate beam quality and even perturb the normal operation of particle accelerators. Experimental measurements of the secondary electron yield (SEY) for different materials and coatings have been developed in many accelerator laboratories. In fact, the SEY is just one parameter of the secondary electron emission characteristics, which also include the spatial and energy distributions of the emitted electrons. A novel experimental apparatus was set up at the China Spallation Neutron Source, and an innovative method was applied to obtain the full characteristics of SEE. Taking Cu as the sample, the secondary electron yield, its dependence on beam injection angle, and the spatial and energy distributions of secondary electrons were measured with this device. The method for measuring the spatial distribution was proposed here for the first time and verified experimentally. This contribution also provides a theoretical analysis and explanation of the experimental results.

  9. Factorizing the factorization - a spectral-element solver for elliptic equations with linear operation count

    NASA Astrophysics Data System (ADS)

    Huismann, Immo; Stiller, Jörg; Fröhlich, Jochen

    2017-10-01

    The paper proposes a novel factorization technique for static condensation of a spectral-element discretization matrix that yields a linear operation count of just 13N multiplications for the residual evaluation, where N is the total number of unknowns. In comparison to previous work it saves more than a factor of 3 in operations and outpaces unfactored variants for all polynomial degrees. Using the new technique as a building block for a preconditioned conjugate gradient method yields linear scaling of the runtime with N, which is demonstrated for polynomial degrees from 2 to 32. This makes the spectral-element method cost-effective even for low polynomial degrees. Moreover, the dependence of the iterative solution on the element aspect ratio is addressed, showing only a slight increase in the number of iterations for aspect ratios up to 128. Hence, the solver is very robust for practical applications.

  10. Geometric analysis and restitution of digital multispectral scanner data arrays

    NASA Technical Reports Server (NTRS)

    Baker, J. R.; Mikhail, E. M.

    1975-01-01

    An investigation was conducted to define causes of geometric defects within digital multispectral scanner (MSS) data arrays, to analyze the resulting geometric errors, and to investigate restitution methods to correct or reduce these errors. Geometric transformation relationships for scanned data, from which collinearity equations may be derived, served as the basis of parametric methods of analysis and restitution of MSS digital data arrays. The linearization of these collinearity equations is presented. Algorithms considered for use in analysis and restitution included the MSS collinearity equations, piecewise polynomials based on linearized collinearity equations, and nonparametric algorithms. A proposed system for geometric analysis and restitution of MSS digital data arrays was used to evaluate these algorithms, utilizing actual MSS data arrays. It was shown that collinearity equations and nonparametric algorithms both yield acceptable results, but nonparametric algorithms possess definite advantages in computational efficiency. Piecewise polynomials were found to yield inferior results.

  11. Image enhancement based on in vivo hyperspectral gastroscopic images: a case study

    NASA Astrophysics Data System (ADS)

    Gu, Xiaozhou; Han, Zhimin; Yao, Liqing; Zhong, Yunshi; Shi, Qiang; Fu, Ye; Liu, Changsheng; Wang, Xiguang; Xie, Tianyu

    2016-10-01

    Hyperspectral imaging (HSI) has been recognized as a powerful tool for noninvasive disease detection in the gastrointestinal field. However, most of the studies on HSI in this field have involved ex vivo biopsies or resected tissues. We propose an image enhancement method based on in vivo hyperspectral gastroscopic images. First, we developed a flexible gastroscopy system capable of obtaining in vivo hyperspectral images of the mucosa for different types of stomach disease. Then, depending on the specific object, an appropriate band-selection algorithm based on information dependence was employed to determine a subset of spectral bands that would yield useful spatial information. Finally, these bands were assigned as the color components of an enhanced image of the object. A gastric ulcer case study demonstrated that our method yields higher color-tone contrast, which enhances the display of the gastric ulcer regions, and that it will be valuable in clinical applications.
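
    The final band-to-color assignment amounts to stacking three selected bands into the display channels. A minimal numpy sketch follows, with hand-picked band indices standing in for the paper's automatic selection step.

```python
import numpy as np

def enhance(cube, bands):
    """Map three selected spectral bands to an RGB display image.

    cube  : hyperspectral image, shape (H, W, n_bands)
    bands : indices of the three chosen bands (here supplied by hand;
            the paper selects them automatically per imaging target)
    """
    img = cube[:, :, list(bands)].astype(float)
    # stretch each channel independently to [0, 1] for display
    mn = img.min(axis=(0, 1), keepdims=True)
    mx = img.max(axis=(0, 1), keepdims=True)
    return (img - mn) / (mx - mn + 1e-12)

# rgb = enhance(hyperspectral_cube, bands=(42, 17, 5))  # hypothetical indices
```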

  12. Colors of the Sublunar

    PubMed Central

    van Doorn, Andrea

    2017-01-01

    Generic red, green, and blue images can be regarded as data sources of coarse (three-bin) local spectra; typical data volumes are 10⁴ to 10⁷ spectra. Image databases often yield hundreds or thousands of images, yielding data sources of 10⁹ to 10¹⁰ spectra. There is usually no calibration, and there are often various nonlinear image transformations involved. However, we argue that sheer numbers make up for such ambiguity. We propose a model of spectral data mining that applies to the sublunar realm: spectra due to the scattering of daylight by objects from the generic terrestrial environment. The model involves colorimetry and ecological physics. Whereas the colorimetry is readily dealt with, the ecological physics must be handled with heuristic methods. The results suggest evolutionary causes of the human visual system. We also suggest effective methods to generate red, green, and blue color gamuts for various terrains. PMID:28989697

  13. Space-time encoding for high frame rate ultrasound imaging.

    PubMed

    Misaridis, Thanassis X; Jensen, Jørgen A

    2002-05-01

    Frame rate in ultrasound imaging can be dramatically increased by using sparse synthetic transmit aperture (STA) beamforming techniques. The two main drawbacks of the method are the low signal-to-noise ratio (SNR) and the motion artifacts, which degrade the image quality. In this paper we propose a spatio-temporal encoding for STA imaging based on simultaneous transmission of two quasi-orthogonal tapered linear FM signals. The excitation signals are an up- and a down-chirp with frequency division and a cross-talk of -55 dB. The received signals are first cross-correlated with the appropriate code, then spatially decoded and finally beamformed for each code, yielding two images per emission. The spatial encoding is a Hadamard encoding previously suggested by Chiao et al. [in: Proceedings of the IEEE Ultrasonics Symposium, 1997, p. 1679]. The Hadamard matrix has half the size of the transmit element groups, due to the orthogonality of the temporally encoded wavefronts. Thus, with this method, the frame rate is doubled compared to previous systems. Another advantage is the utilization of temporal codes, which are more robust to attenuation. With the proposed technique it is possible to obtain images dynamically focused in both transmit and receive with only two firings. This reduces the problem of motion artifacts. The method has been tested with extensive simulations using Field II. Resolution and SNR are compared with uncoded STA imaging and conventional phased-array imaging. The range resolution remains the same for coded STA imaging with four emissions and is slightly degraded for STA imaging with two emissions due to the -55 dB cross-talk between the signals. The additional proposed temporal encoding adds more than 15 dB to the SNR gain, yielding an SNR of the same order as in phased-array imaging.
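
    The two excitation codes and their pulse compression can be sketched with scipy. The sampling rate, bandwidths, pulse length and Hanning taper below are assumptions for illustration, not the paper's design values.

```python
import numpy as np
from scipy.signal import chirp, fftconvolve

fs, T = 40e6, 20e-6                       # sampling rate, pulse length (assumed)
t = np.arange(0, T, 1 / fs)
taper = np.hanning(t.size)                # amplitude taper lowers range sidelobes

up = taper * chirp(t, f0=2e6, t1=T, f1=8e6)     # up-chirp code
down = taper * chirp(t, f0=8e6, t1=T, f1=2e6)   # quasi-orthogonal down-chirp

rx = up + down        # toy received signal: both codes arrive simultaneously

# Pulse compression: correlating with each code separates the two emissions.
comp_up = fftconvolve(rx, up[::-1], mode="same")
comp_down = fftconvolve(rx, down[::-1], mode="same")
```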

  14. Bearing Fault Diagnosis under Variable Speed Using Convolutional Neural Networks and the Stochastic Diagonal Levenberg-Marquardt Algorithm

    PubMed Central

    Tra, Viet; Kim, Jaeyoung; Kim, Jong-Myon

    2017-01-01

    This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that the variation of a bearing’s speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds. PMID:29211025

  15. Linear and non-linear dynamic models of a geared rotor-bearing system

    NASA Technical Reports Server (NTRS)

    Kahraman, Ahmet; Singh, Rajendra

    1990-01-01

    A three-degree-of-freedom non-linear model of a geared rotor-bearing system with gear backlash and radial clearances in rolling element bearings is proposed here. This reduced-order model can be used to describe the transverse-torsional motion of the system. It is justified by comparing the eigensolutions yielded by the corresponding linear model with finite element method results. The nature of the nonlinearities in the bearings is examined and two approximate nonlinear stiffness functions are proposed. These approximate bearing models are verified by comparing their frequency responses with the results given by the exact form of the nonlinearity. The proposed nonlinear dynamic model of the geared rotor-bearing system can be used to investigate the dynamic behavior and chaos.

  16. Thraustochytrids can be grown in low-salt media without affecting PUFA production.

    PubMed

    Shabala, Lana; McMeekin, Tom; Shabala, Sergey

    2013-08-01

    Marine microheterotrophs, thraustochytrids, are emerging as a potential source for commercial production of polyunsaturated fatty acids (PUFA) that have nutritional and pharmacological value. With prospective demand for PUFAs increasing, biotechnological companies are looking for potential increases in these valuable products. However, the high level of NaCl in the culture media required for optimal thraustochytrid growth and PUFA production poses a significant problem to the biotechnological industry because it corrodes fermenters, creating a need to reduce the amount of NaCl in the culture media without imposing penalties on the growth and yield of the cultured organisms. Earlier, as reported by Shabala et al. (Environ Microbiol 11:1835-1843, 2009), we showed that thraustochytrids use sodium predominantly for osmotic adjustment purposes and, as such, can be grown in a low-salt environment without growth penalties, provided the media osmolality is adjusted. In this study, we verify whether that conclusion, made for one specific strain and osmolyte only, is applicable to a larger number of strains and organic osmotica, and address the issue of yield quality (e.g., PUFA production in low-saline media). Using mannitol and sucrose for osmotic adjustment of the growth media enabled us to reduce the NaCl concentration down to 1 mM; this is 15- to 100-fold lower than any method proposed so far. At the same time, the yield of essential PUFAs was increased by 15 to 20%. Taken together, these results suggest that the proposed method can be used in industrial fermenters for commercial PUFA production.

  17. SCS-CN based time-distributed sediment yield model

    NASA Astrophysics Data System (ADS)

    Tyagi, J. V.; Mishra, S. K.; Singh, Ranvir; Singh, V. P.

    2008-05-01

    A sediment yield model is developed to estimate the temporal rates of sediment yield from rainfall events on natural watersheds. The model utilizes the SCS-CN based infiltration model for computation of the rainfall-excess rate, and the SCS-CN-inspired proportionality concept for computation of sediment-excess. For computation of sedimentographs, the sediment-excess is routed to the watershed outlet using a single linear reservoir technique. Analytical development of the model shows that the ratio of the potential maximum erosion (A) to the potential maximum retention (S) of the SCS-CN method is constant for a watershed. The model is calibrated and validated on a number of events using data from seven watersheds in India and the USA. Representative values of the A/S ratio computed for the watersheds from calibration are used for validation of the model. The encouraging results of the proposed simple four-parameter model exhibit its potential in field application.
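
    The rainfall-excess component relies on the standard SCS-CN relation, sketched below in its textbook form with the usual initial-abstraction ratio λ = 0.2; this is shown for orientation only and is not the paper's full sediment model.

```python
def scs_cn_runoff(p_mm, cn, lam=0.2):
    """Rainfall-excess depth (mm) from the standard SCS-CN relation.

    Q = (P - lam*S)^2 / (P - lam*S + S) for P > lam*S, else 0,
    with potential maximum retention S = 25400/CN - 254 (mm).
    """
    s = 25400.0 / cn - 254.0
    ia = lam * s                      # initial abstraction
    return (p_mm - ia) ** 2 / (p_mm - ia + s) if p_mm > ia else 0.0

# scs_cn_runoff(60.0, cn=75)  -> runoff depth for a 60 mm storm
```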

  18. Measuring Gravitation Using Polarization Spectroscopy

    NASA Technical Reports Server (NTRS)

    Matsko, Andrey; Yu, Nan; Maleki, Lute

    2004-01-01

    A proposed method of measuring gravitational acceleration would involve the application of polarization spectroscopy to an ultracold, vertically moving cloud of atoms (an atomic fountain). A related proposed method, involving measurements of absorption of light pulses like those used in conventional atomic interferometry, would yield an estimate of the number of atoms participating in the interferometric interaction. The basis of the first-mentioned proposed method is that the rotation of polarization of light is affected by the acceleration of atoms along the path of propagation of the light. The rotation of polarization is associated with a phase shift: when an atom moving in a laboratory reference frame interacts with an electromagnetic wave, the energy levels of the atom are Doppler-shifted relative to where they would be if the atom were stationary. The Doppler shift gives rise to changes in the detuning of the light from the corresponding atomic transitions. This detuning, in turn, causes the electromagnetic wave to undergo a phase shift that can be measured by conventional means. One would infer the gravitational acceleration and/or its gradient from the phase measurements.

  19. Optimal placement of water-lubricated rubber bearings for vibration reduction of flexible multistage rotor systems

    NASA Astrophysics Data System (ADS)

    Liu, Shibing; Yang, Bingen

    2017-10-01

    Flexible multistage rotor systems with water-lubricated rubber bearings (WLRBs) have a variety of engineering applications. Filling a technical gap in the literature, this effort proposes a method of optimal bearing placement that minimizes the vibration amplitude of a WLRB-supported flexible rotor system with a minimum number of bearings. In the development, a new model of WLRBs and a distributed transfer function formulation are used to define a mixed continuous-and-discrete optimization problem. To deal with the case of an uncertain number of WLRBs in rotor design, a virtual bearing method is devised. Solution of the optimization problem by a real-coded genetic algorithm yields the locations and lengths of the water-lubricated rubber bearings by which the prescribed operational requirements for the rotor system are satisfied. The proposed method is applicable either to the preliminary design of a new rotor system, where the number of bearings is not known in advance, or to the redesign of an existing rotor system with a given number of bearings. Numerical examples show that the proposed optimal bearing placement is efficient, accurate and versatile in different design cases.

  20. Effective-field renormalization-group method for Ising systems

    NASA Astrophysics Data System (ADS)

    Fittipaldi, I. P.; De Albuquerque, D. F.

    1992-02-01

    A new applicable effective-field renormalization-group (EFRG) scheme for computing critical properties of Ising spin systems is proposed and used to study the phase diagrams of a quenched bond-mixed spin Ising model on square and Kagomé lattices. The present EFRG approach yields results which improve substantially on those obtained from the standard mean-field renormalization-group (MFRG) method. In particular, it is shown that the EFRG scheme correctly distinguishes the geometry of the lattice structure even when working with the smallest possible clusters, namely N'=1 and N=2.

  1. Epigenome-wide association studies without the need for cell-type composition.

    PubMed

    Zou, James; Lippert, Christoph; Heckerman, David; Aryee, Martin; Listgarten, Jennifer

    2014-03-01

    In epigenome-wide association studies, cell-type composition often differs between cases and controls, yielding associations that simply tag cell type rather than reveal fundamental biology. Current solutions require actual or estimated cell-type composition--information not easily obtainable for many samples of interest. We propose a method, FaST-LMM-EWASher, that automatically corrects for cell-type composition without the need for explicit knowledge of it, and then validate our method by comparison with the state-of-the-art approach. Corresponding software is available from http://www.microsoft.com/science/.

  2. How can we improve crop genotypes to increase stress resilience and productivity in a future climate? A new crop screening method based on productivity and resistance to abiotic stress

    PubMed Central

    Thiry, Arnauld A.; Chavez Dulanto, Perla N.; Reynolds, Matthew P.; Davies, William J.

    2016-01-01

    The need to accelerate the selection of crop genotypes that are both resistant to and productive under abiotic stress is enhanced by global warming and the increase in demand for food by a growing world population. In this paper, we propose a new method for evaluation of wheat genotypes in terms of their resilience to stress and their production capacity. The method quantifies the components of a new index related to yield under abiotic stress based on previously developed stress indices, namely the stress susceptibility index, the stress tolerance index, the mean production index, the geometric mean production index, and the tolerance index, which were created originally to evaluate drought adaptation. The method, based on a scoring scale, offers simple and easy visualization and identification of resilient, productive and/or contrasting genotypes according to grain yield. This new selection method could help breeders and researchers by defining clear and strong criteria to identify genotypes with high resilience and high productivity and provide a clear visualization of contrasts in terms of grain yield production under stress. It is also expected that this methodology will reduce the time required for first selection and the number of first-selected genotypes for further evaluation by breeders and provide a basis for appropriate comparisons of genotypes that would help reveal the biology behind high stress productivity of crops. PMID:27677299
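
    The underlying indices are standard in the stress-tolerance literature; a sketch of their commonly used forms follows (Fischer-Maurer SSI, Fernandez STI, and so on). How the paper scores and combines them into its new index is not reproduced here.

```python
import numpy as np

def stress_indices(yp, ys):
    """Common stress indices for genotype yields (widely used forms).

    yp, ys : arrays of genotype grain yields under non-stress and stress.
    """
    yp, ys = np.asarray(yp, float), np.asarray(ys, float)
    si = 1.0 - ys.mean() / yp.mean()          # stress intensity of the trial
    return {
        "SSI": (1.0 - ys / yp) / si,          # stress susceptibility index
        "STI": yp * ys / yp.mean() ** 2,      # stress tolerance index
        "MP":  (yp + ys) / 2.0,               # mean productivity
        "GMP": np.sqrt(yp * ys),              # geometric mean productivity
        "TOL": yp - ys,                       # tolerance
    }
```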

  3. Quiescent period respiratory gating for PET/CT

    PubMed Central

    Liu, Chi; Alessio, Adam; Pierce, Larry; Thielemans, Kris; Wollenweber, Scott; Ganin, Alexander; Kinahan, Paul

    2010-01-01

    Purpose: To minimize respiratory motion artifacts, this work proposes quiescent period gating (QPG) methods that extract PET data from the end-expiration quiescent period and form a single PET frame with reduced motion and improved signal-to-noise properties. Methods: Two QPG methods are proposed and evaluated. Histogram-based quiescent period gating (H-QPG) extracts a fraction of PET data determined by a window of the respiratory displacement signal histogram. Cycle-based quiescent period gating (C-QPG) extracts data with a respiratory displacement signal below a specified threshold of the maximum amplitude of each individual respiratory cycle. Performances of both QPG methods were compared to ungated and five-bin phase-gated images across 21 FDG-PET/CT patient data sets containing 31 thorax and abdomen lesions as well as with computer simulations driven by 1295 different patient respiratory traces. Image quality was evaluated in terms of the lesion SUVmax and the fraction of counts included in each gate as a surrogate for image noise. Results: For all the gating methods, image noise artifactually increases SUVmax when the fraction of counts included in each gate is less than 50%. While simulation data show that H-QPG is superior to C-QPG, the H-QPG and C-QPG methods lead to similar quantification-noise tradeoffs in patient data. Compared to ungated images, both QPG methods yield significantly higher lesion SUVmax. Compared to five-bin phase gating, the QPG methods yield significantly larger fraction of counts with similar SUVmax improvement. Both QPG methods result in increased lesion SUVmax for patients whose lesions have longer quiescent periods. Conclusions: Compared to ungated and phase-gated images, the QPG methods lead to images with less motion blurring and an improved compromise between SUVmax and fraction of counts. The QPG methods for respiratory motion compensation could effectively improve tumor quantification with minimal noise increase. PMID:20964223
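
    A minimal sketch of the histogram-based variant: grow an amplitude window around the most occupied (end-expiration) displacement bin until it holds the desired fraction of counts. The bin count, default fraction and growth rule are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def histogram_qpg(displacement, fraction=0.35, bins=100):
    """Histogram-based quiescent-period gate (illustrative sketch).

    Returns a boolean mask of accepted samples and the amplitude window.
    """
    hist, edges = np.histogram(displacement, bins=bins)
    lo = hi = hist.argmax()                 # start at the modal bin
    while hist[lo:hi + 1].sum() < fraction * displacement.size:
        # grow the window toward the side holding more counts
        if lo > 0 and (hi == bins - 1 or hist[lo - 1] >= hist[hi + 1]):
            lo -= 1
        else:
            hi += 1
    window = (edges[lo], edges[hi + 1])
    gate = (displacement >= window[0]) & (displacement <= window[1])
    return gate, window
```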

  4. Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest-Posttest Study.

    PubMed

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A

    2008-09-01

    The pretest-posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest-posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175).

  5. Empirical Likelihood-Based Estimation of the Treatment Effect in a Pretest–Posttest Study

    PubMed Central

    Huang, Chiung-Yu; Qin, Jing; Follmann, Dean A.

    2013-01-01

    The pretest–posttest study design is commonly used in medical and social science research to assess the effect of a treatment or an intervention. Recently, interest has been rising in developing inference procedures that improve efficiency while relaxing assumptions used in the pretest–posttest data analysis, especially when the posttest measurement might be missing. In this article we propose a semiparametric estimation procedure based on empirical likelihood (EL) that incorporates the common baseline covariate information to improve efficiency. The proposed method also yields an asymptotically unbiased estimate of the response distribution. Thus functions of the response distribution, such as the median, can be estimated straightforwardly, and the EL method can provide a more appealing estimate of the treatment effect for skewed data. We show that, compared with existing methods, the proposed EL estimator has appealing theoretical properties, especially when the working model for the underlying relationship between the pretest and posttest measurements is misspecified. A series of simulation studies demonstrates that the EL-based estimator outperforms its competitors when the working model is misspecified and the data are missing at random. We illustrate the methods by analyzing data from an AIDS clinical trial (ACTG 175). PMID:23729942

  6. Probabilistic peak detection in CE-LIF for STR DNA typing.

    PubMed

    Woldegebriel, Michael; van Asten, Arian; Kloosterman, Ate; Vivó-Truyols, Gabriel

    2017-07-01

    In this work, we present a novel probabilistic peak detection algorithm based on a Bayesian framework for forensic DNA analysis. The proposed method aims at exhaustive use of the raw electropherogram data from a laser-induced fluorescence multi-CE system. As the raw data are informative down to the single data point, conventional threshold-based approaches discard relevant forensic information early in the data analysis pipeline. Our proposed method assigns each data point a posterior probability reflecting its relevance with respect to the peak detection criteria. Peaks of low intensity generated from a truly existing allele can thus contribute evidential value, instead of being fully discarded and treated as a potential allele drop-out. This way of working utilizes the information available within each individual data point and thus avoids making early (binary) decisions in the data analysis that can lead to error propagation. The proposed method was tested and compared to the application of a set threshold, as is current practice in forensic STR DNA profiling. The new method was found to yield a significant improvement in the number of alleles identified, regardless of peak height and deviation from Gaussian shape. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  7. Causal inference with measurement error in outcomes: Bias analysis and estimation methods.

    PubMed

    Shu, Di; Yi, Grace Y

    2017-01-01

    Inverse probability weighting estimation is widely used to consistently estimate the average treatment effect. Its validity, however, is challenged by the presence of error-prone variables. In this paper, we explore inverse probability weighting estimation with mismeasured outcome variables. We study the impact of measurement error for both continuous and discrete outcome variables and reveal interesting consequences of the naive analysis which ignores measurement error. When a continuous outcome variable is mismeasured under an additive measurement error model, the naive analysis may still yield a consistent estimator; when the outcome is binary, we derive the asymptotic bias in closed form. Furthermore, we develop consistent estimation procedures for practical scenarios where either validation data or replicates are available. With validation data, we propose an efficient method for estimation of the average treatment effect; the efficiency gain is substantial relative to usual methods of using validation data. To provide protection against model misspecification, we further propose a doubly robust estimator which is consistent even when either the treatment model or the outcome model is misspecified. Simulation studies are reported to assess the performance of the proposed methods. An application to a smoking cessation dataset is presented.
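
    The baseline estimator being analyzed is ordinary inverse probability weighting; a compact sketch with a logistic propensity model, and with no measurement-error correction, is given below for orientation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def ipw_ate(x, treat, y):
    """Plain IPW estimate of the average treatment effect.

    x : (n, p) covariates; treat : 0/1 treatment indicator; y : outcome.
    This naive form ignores any measurement error in y, which is exactly
    the situation whose bias the paper characterizes.
    """
    ps = LogisticRegression().fit(x, treat).predict_proba(x)[:, 1]
    return np.mean(treat * y / ps) - np.mean((1 - treat) * y / (1 - ps))
```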

  8. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods.

    PubMed

    Vizcaíno, Iván P; Carrera, Enrique V; Muñoz-Romero, Sergio; Cumbal, Luis H; Rojo-Álvarez, José Luis

    2017-10-16

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem.
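
    Scikit-learn's SVR accepts a callable kernel, which makes it easy to sketch the idea of plugging an autocorrelation-shaped kernel into the regression. The exponential decay and correlation length below are placeholders for the empirically estimated autocorrelation function used in the paper.

```python
import numpy as np
from sklearn.svm import SVR

def autocorr_kernel(X, Y):
    """Stationary kernel shaped like an assumed autocorrelation function.

    X, Y: rows are (space, time) sample coordinates. The exp(-lag/5)
    form and the 5-unit correlation length are illustrative assumptions.
    """
    lags = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)
    return np.exp(-lags / 5.0)

# coords: (n, 2) array of (river position, day); vals: quality parameter
# model = SVR(kernel=autocorr_kernel, C=10.0).fit(coords, vals)
# field = model.predict(grid_coords)    # statistically interpolated map
```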

  9. Water Quality Sensing and Spatio-Temporal Monitoring Structure with Autocorrelation Kernel Methods

    PubMed Central

    Vizcaíno, Iván P.; Muñoz-Romero, Sergio; Cumbal, Luis H.

    2017-01-01

    Pollution of water resources is usually analyzed with monitoring campaigns, which consist of programmed sampling, measurement, and recording of the most representative water quality parameters. These campaign measurements yield a non-uniformly sampled spatio-temporal data structure for characterizing complex dynamic phenomena. In this work, we propose an enhanced statistical interpolation method to provide water quality managers with statistically interpolated representations of spatio-temporal dynamics. Specifically, our proposal makes efficient use of the a priori available information on the quality parameter measurements through Support Vector Regression (SVR) based on Mercer's kernels. The methods are benchmarked against previously proposed methods in three segments of the Machángara River and one segment of the San Pedro River in Ecuador, and their different dynamics are shown by statistically interpolated spatio-temporal maps. The best interpolation performance in terms of mean absolute error was obtained by the SVR with Mercer's kernel given by either the Mahalanobis spatio-temporal covariance matrix or the bivariate estimated autocorrelation function. In particular, the autocorrelation kernel provides a significant improvement in estimation quality, consistently for all six water quality variables, which points out the relevance of including a priori knowledge of the problem. PMID:29035333

  10. Genomic Bayesian functional regression models with interactions for predicting wheat grain yield using hyper-spectral image data.

    PubMed

    Montesinos-López, Abelardo; Montesinos-López, Osval A; Cuevas, Jaime; Mata-López, Walter A; Burgueño, Juan; Mondal, Sushismita; Huerta, Julio; Singh, Ravi; Autrique, Enrique; González-Pérez, Lorena; Crossa, José

    2017-01-01

    Modern agriculture uses hyperspectral cameras that provide hundreds of reflectance measurements at discrete narrow bands in many environments. These bands often cover the whole visible light spectrum and part of the infrared and ultraviolet spectra. From the bands, vegetation indices are constructed for predicting agronomically important traits such as grain yield and biomass. However, since vegetation indices use only some wavelengths (referred to as bands), we propose using all bands simultaneously as predictor variables for the primary trait grain yield; results of several multi-environment maize (Aguate et al. in Crop Sci 57(5):1-8, 2017) and wheat (Montesinos-López et al. in Plant Methods 13(4):1-23, 2017) breeding trials indicated that using all bands produced better prediction accuracy than vegetation indices. However, until now, these prediction models have not accounted for the effects of genotype × environment (G × E) and band × environment (B × E) interactions incorporating genomic or pedigree information. In this study, we propose Bayesian functional regression models that take into account all available bands, genomic or pedigree information, the main effects of lines and environments, as well as G × E and B × E interaction effects. The data set used comprises 976 wheat lines evaluated for grain yield in three environments (Drought, Irrigated and Reduced Irrigation). The reflectance data were measured in 250 discrete narrow bands ranging from 392 to 851 nm. The proposed Bayesian functional regression models were implemented using two types of basis: B-splines and Fourier. Results of the proposed Bayesian functional regression models, including all the wavelengths for predicting grain yield, were compared with results from conventional models with and without bands. We observed that the models with B × E interaction terms were the most accurate, whereas the functional regression models (with B-spline and Fourier bases) and the conventional models performed similarly in terms of prediction accuracy. However, the functional regression models are more parsimonious and computationally more efficient because only 21 beta coefficients (the number of basis functions) need to be estimated, rather than the 250 regression coefficients for all bands. In this study, adding pedigree or genomic information did not increase prediction accuracy.
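
    The parsimony argument is just a basis projection: each 250-band reflectance curve is represented by roughly 21 basis coefficients. The sketch below builds a 21-column Fourier design matrix over the 392-851 nm bands; the exact basis construction used in the paper may differ in detail.

```python
import numpy as np

def fourier_basis(wavelengths, n_basis=21):
    """Fourier design matrix over the band wavelengths (sketch)."""
    s = (wavelengths - wavelengths.min()) / np.ptp(wavelengths)  # map to [0, 1]
    cols = [np.ones_like(s)]
    for k in range(1, (n_basis - 1) // 2 + 1):
        cols += [np.sin(2 * np.pi * k * s), np.cos(2 * np.pi * k * s)]
    return np.column_stack(cols)[:, :n_basis]

# B = fourier_basis(np.linspace(392, 851, 250))   # (250, 21) design matrix
# per-line basis scores replace the 250 raw band coefficients:
# scores, *_ = np.linalg.lstsq(B, reflectance.T, rcond=None)
```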

  11. Improvement of red pepper yield and soil environment by summer catch aquatic crops in greenhouses

    NASA Astrophysics Data System (ADS)

    Du, X. F.; Wang, L. Z.; Peng, J.; Wang, G. L.; Guo, X. S.; Wen, T. G.; Gu, D. L.; Wang, W. Z.; Wu, C. W.

    2016-08-01

    To investigate the effects of the rotation of summer catch crops on remediation of soils degraded by continuous cropping, a field experiment was conducted. Rice, water spinach, or cress were selected as summer catch crops; bare fallow during the summer fallow period was used as the control. Results showed that aquatic crops grown in the summer fallow period could effectively reduce soil bulk density and pH, facilitate soil nutrient release, and improve soil physical and chemical properties compared with bare fallow. Paddy-upland rotation could improve soil microbial communities and increase bacterial and actinomycete populations; by contrast, it could reduce fungal populations and enhance the bacterium-to-fungus ratio. Paddy-upland rotation could also actively promote the activities of soil enzymes, such as urease, phosphatase, invertase, and catalase. The proposed paddy-upland rotation significantly affected the growth of red pepper; the yield and quality of the grown red pepper were enhanced. Summer catch crops of rice, water spinach, and cress significantly increased pepper yield in the following growing season by 15.4%, 10.2% and 14.0%, respectively, compared with fallow treatment. Therefore, the proposed paddy-upland crop rotation could be a useful method to alleviate continuous cropping problems in cultivating red pepper in greenhouses.

  12. Dynamic non-equilibrium wall-modeling for large eddy simulation at high Reynolds numbers

    NASA Astrophysics Data System (ADS)

    Kawai, Soshi; Larsson, Johan

    2013-01-01

    A dynamic non-equilibrium wall-model for large-eddy simulation at arbitrarily high Reynolds numbers is proposed and validated on equilibrium boundary layers and a non-equilibrium shock/boundary-layer interaction problem. The proposed method builds on the prior non-equilibrium wall-models of Balaras et al. [AIAA J. 34, 1111-1119 (1996)], 10.2514/3.13200 and Wang and Moin [Phys. Fluids 14, 2043-2051 (2002)], 10.1063/1.1476668: the failure of these wall-models to accurately predict the skin friction in equilibrium boundary layers is shown and analyzed, and an improved wall-model that solves this issue is proposed. The improvement stems directly from reasoning about how the turbulence length scale changes with wall distance in the inertial sublayer, the grid resolution, and the resolution-characteristics of numerical methods. The proposed model yields accurate resolved turbulence, both in terms of structure and statistics for both the equilibrium and non-equilibrium flows without the use of ad hoc corrections. Crucially, the model accurately predicts the skin friction, something that existing non-equilibrium wall-models fail to do robustly.

  13. EEG Sleep Stages Classification Based on Time Domain Features and Structural Graph Similarity.

    PubMed

    Diykh, Mohammed; Li, Yan; Wen, Peng

    2016-11-01

    The electroencephalogram (EEG) signals are commonly used in diagnosing and treating sleep disorders. Many existing methods for sleep stage classification mainly depend on the analysis of EEG signals in the time or frequency domain to obtain a high classification accuracy. In this paper, statistical features in the time domain, structural graph similarity and K-means clustering (SGSKM) are combined to identify six sleep stages using single-channel EEG signals. Firstly, each EEG segment is partitioned into sub-segments; the size of a sub-segment is determined empirically. Secondly, statistical features are extracted, sorted into different sets of features and forwarded to the SGSKM to classify EEG sleep stages. We have also investigated the relationships between sleep stages and the time-domain features of the EEG data used in this paper. The experimental results show that the proposed method yields better classification results than four other existing methods and the support vector machine (SVM) classifier. A 95.93% average classification accuracy is achieved using the proposed method.
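
    The time-domain feature step can be pictured as computing a handful of statistics per sub-segment; the particular statistics and the sub-segment count below are assumptions (the paper sets the sub-segment size empirically).

```python
import numpy as np
from scipy.stats import skew, kurtosis

def time_domain_features(segment, n_sub=8):
    """Statistical time-domain features per EEG sub-segment (sketch)."""
    feats = []
    for sub in np.array_split(np.asarray(segment, dtype=float), n_sub):
        feats += [sub.mean(), sub.std(), sub.min(), sub.max(),
                  skew(sub), kurtosis(sub)]
    return np.array(feats)   # feature vector handed to the SGSKM stage
```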

  14. The expectancy-value muddle in the theory of planned behaviour - and some proposed solutions.

    PubMed

    French, David P; Hankins, Matthew

    2003-02-01

    The authors of the Theories of Reasoned Action and Planned Behaviour recommended a method for statistically analysing the relationships between beliefs and the Attitude, Subjective Norm, and Perceived Behavioural Control constructs. This method has been used in the overwhelming majority of studies using these theories. However, there is a growing awareness that this method yields statistically uninterpretable results (Evans, 1991). Despite this, the use of this method is continuing, as is uninformed interpretation of this problematic research literature. This is probably due to the lack of a simple account of where the problem lies, and the large number of alternatives available. This paper therefore summarizes the problem as simply as possible, gives consideration to the conclusions that can be validly drawn from studies that contain this problem, and critically reviews the many alternatives that have been proposed to address this problem. Different techniques are identified as being suitable, according to the purpose of the specific research project.

  15. Particle detection for patterned wafers of 100nm design rule by evanescent light illumination: analysis of evanescent light scattering using Finite-Difference Time-Domain (FDTD) method

    NASA Astrophysics Data System (ADS)

    Yoshioka, Toshie; Miyoshi, Takashi; Takaya, Yasuhiro

    2005-12-01

    To realize high productivity and reliability in semiconductor manufacturing, patterned-wafer inspection technology that maintains high yield has become essential. As circuit features are scaled below 100nm, conventional imaging and light scattering methods cannot be applied to patterned-wafer inspection because of the diffraction limit and low S/N ratio. We therefore propose a new particle detection method using annular evanescent light illumination. In this method, a converging annular light beam is incident on a micro-hemispherical lens. When the converging angle is larger than the critical angle, annular evanescent light is generated under the bottom surface of the hemispherical lens. The evanescent light is localized near the bottom surface and decays exponentially away from it. Because it cannot illuminate the patterned wafer surface itself, the evanescent light selectively illuminates the particles on the surface. The proposed method detects particles on a patterned wafer by measuring the scattered evanescent light distribution from the particles. To analyze the fundamental characteristics of the proposed method, computer simulations were performed using the FDTD method. The simulation results show that the proposed method is effective for detecting 100nm particles on a patterned wafer of 100nm lines and spaces, particularly under evanescent light illumination with p-polarization and incidence parallel to the line orientation. Finally, experimental results suggest that 220nm particles on a patterned wafer of about 200nm lines and spaces can be detected.

  16. Sampling for Patient Exit Interviews: Assessment of Methods Using Mathematical Derivation and Computer Simulations.

    PubMed

    Geldsetzer, Pascal; Fink, Günther; Vaikath, Maria; Bärnighausen, Till

    2018-02-01

    (1) To evaluate the operational efficiency of various sampling methods for patient exit interviews; (2) to discuss under what circumstances each method yields an unbiased sample; and (3) to propose a new, operationally efficient, and unbiased sampling method. Literature review, mathematical derivation, and Monte Carlo simulations. Our simulations show that in patient exit interviews it is most operationally efficient if the interviewer, after completing an interview, selects the next patient exiting the clinical consultation. We demonstrate mathematically that this method yields a biased sample: patients who spend a longer time with the clinician are overrepresented. This bias can be removed by selecting the next patient who enters, rather than exits, the consultation room. We show that this sampling method is operationally more efficient than alternative methods (systematic and simple random sampling) in most primary health care settings. Under the assumption that the order in which patients enter the consultation room is unrelated to the length of time spent with the clinician and the interviewer, selecting the next patient entering the consultation room tends to be the operationally most efficient unbiased sampling method for patient exit interviews. © 2016 The Authors. Health Services Research published by Wiley Periodicals, Inc. on behalf of Health Research and Educational Trust.
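
    The bias mechanism is easy to reproduce with a toy Monte Carlo simulation like the one below: with back-to-back consultations in parallel rooms, 'interview the next patient to exit' picks consultations that are in progress when the interviewer becomes free, overrepresenting long ones, while 'interview the next patient to enter' does not. All settings (room count, exponential durations, interview length) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy clinic: 5 rooms run back-to-back consultations with exponential
# durations; an interview takes a fixed 8 time units.
rooms, n, interview = 5, 20_000, 8.0
dur = rng.exponential(10.0, size=(rooms, n))
end = np.cumsum(dur, axis=1)          # exit time of each consultation
start = end - dur                     # entry time of each consultation

def mean_selected(select_times, resume_times, durations):
    """Mean consultation length under 'wait for the next event' selection.

    After finishing an interview at time t, the interviewer selects the
    patient with the smallest select_time >= t, then resumes once that
    patient's consultation ends (resume_time) plus the interview length.
    """
    order = np.argsort(select_times.ravel())
    sel = select_times.ravel()[order]
    res = resume_times.ravel()[order]
    d = durations.ravel()[order]
    t, picked = 0.0, []
    while True:
        i = np.searchsorted(sel, t)
        if i >= sel.size:
            break
        picked.append(d[i])
        t = res[i] + interview
    return np.mean(picked)

print("true mean duration :", dur.mean())                      # ~10
print("next-exit sampling :", mean_selected(end, end, dur))    # biased high
print("next-entry sampling:", mean_selected(start, end, dur))  # ~unbiased
```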

  17. Methodology for constructing a colour-difference acceptability scale.

    PubMed

    Laborie, Baptiste; Viénot, Françoise; Langlois, Sabine

    2010-09-01

    Observers were invited to report their degree of satisfaction on a 6-point semantic scale with respect to the conformity of a test colour with a white reference colour, simultaneously presented on a PDP display. Eight test patches were chosen along each of the +a*, -a*, +b*, -b* axes of the CIELAB chromaticity plane, at Y = 80 ± 2 cd·m⁻². Experimental conditions reliably represented the automotive environment (patch size, angular distance between patches), and observers could move their head and eyes freely. We compared several methods of category scaling: the Torgerson-DMT method (Torgerson, W. S. (1958). Theory and methods of scaling. Wiley, New York, USA); two versions of the regression method, i.e., Bonnet's (Bonnet, C. (1986). Manuel pratique de psychophysique. Armand Colin, Paris, France) and logistic regression; and the medians method. We describe in detail a case where all methods yield substantial but slightly different results. The solution proposed by the regression method, which works with incomplete matrices and yields results directly on a colorimetric scale, is probably the most useful in this industrial context. Finally we summarize the implementation of the logistic regression method over four hues and for one experimental condition. © 2010 The Authors, Ophthalmic and Physiological Optics © 2010 The College of Optometrists.

  18. Reducing Interpolation Artifacts for Mutual Information Based Image Registration

    PubMed Central

    Soleimani, H.; Khosravifard, M.A.

    2011-01-01

    Medical image registration methods which use mutual information as a similarity measure have been improved in recent decades. Mutual information is a basic concept of information theory which indicates the dependency of two random variables (or two images). In order to evaluate the mutual information of two images, their joint probability distribution is required. Several interpolation methods, such as Partial Volume (PV) and bilinear, are used to estimate the joint probability distribution. Both methods introduce artifacts in the mutual information function. The Partial Volume-Hanning window (PVH) and Generalized Partial Volume (GPV) methods were introduced to remove such artifacts. In this paper we show that the acceptable performance of these methods is not due to their kernel function; it is due to the number of pixels that participate in the interpolation. Since using more pixels requires a more complex and time-consuming interpolation process, we propose a new interpolation method which uses only four pixels (the same as PV and bilinear interpolation) and removes most of the artifacts. Experimental results on the registration of Computed Tomography (CT) images show the superiority of the proposed scheme. PMID:22606673
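
    For reference, plain PV interpolation builds the joint histogram by spreading each fixed-image sample over the four neighbouring moving-image intensities with bilinear weights; a minimal numpy sketch, assuming images normalized to [0, 1] and a pure sub-pixel shift, follows. The paper's proposed four-pixel method differs in how these weights are chosen.

```python
import numpy as np

def pv_joint_hist(fixed, moving, dx, dy, bins=32):
    """Joint histogram via plain Partial Volume interpolation (sketch).

    Each fixed-image pixel spreads a unit count over the histogram rows
    of the four neighbouring moving-image intensities with bilinear
    weights, instead of synthesizing a new interpolated intensity.
    """
    assert 0.0 <= dx < 1.0 and 0.0 <= dy < 1.0
    fq = (fixed * (bins - 1)).astype(int)     # quantize intensities to bins
    mq = (moving * (bins - 1)).astype(int)

    hist = np.zeros((bins, bins))
    f = fq[:-1, :-1]                # drop last row/col so 4 neighbours exist
    weights = {(0, 0): (1 - dx) * (1 - dy), (1, 0): dx * (1 - dy),
               (0, 1): (1 - dx) * dy,       (1, 1): dx * dy}
    for (ox, oy), w in weights.items():
        m = mq[ox:ox + f.shape[0], oy:oy + f.shape[1]]
        np.add.at(hist, (f, m), w)  # accumulate fractional counts
    return hist / hist.sum()
```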

  19. An Eclectic Professional Development Proposal for English Language Teachers (Una propuesta ecléctica de formación docente para profesores de inglés)

    ERIC Educational Resources Information Center

    Chaves, Orlando; Guapacha, Maria Eugenia

    2016-01-01

    This article reports a mixed-method research project aimed at improving the practices of public sector English teachers in Cali (Colombia) through a professional development program. At the diagnostic stage, surveys, documentary analysis, and a focus group yielded the teachers' profile and professional needs. The action phase measured the program's…

  20. The use of magnetic resonance sounding for quantifying specific yield and transmissivity in hard rock aquifers: The example of Benin

    NASA Astrophysics Data System (ADS)

    Vouillamoz, J. M.; Lawson, F. M. A.; Yalo, N.; Descloitres, M.

    2014-08-01

    Hundreds of thousands of boreholes have been drilled in the hard rocks of Africa and Asia to supply human communities with drinking water. Despite the common use of geophysics to improve the siting of boreholes, a significant number of drilled holes do not deliver enough water to be equipped (e.g. 40% on average in Benin). As compared to other non-invasive geophysical methods, magnetic resonance sounding (MRS) is selective to groundwater. However, this distinctive feature has not been fully used in previously published studies for quantifying the drainable groundwater in hard rocks (i.e. the specific yield) and the short-term productivity of the aquifer (i.e. the transmissivity). We present in this paper a comparison of MRS results (i.e. the water content and pore-size parameter) with both specific yield and transmissivity calculated from long-duration pumping tests. We conducted our experiments at six sites located in different hard rock groups in Benin, thus providing a unique data set to assess the usefulness of MRS in hard rock aquifers. We found that the MRS water content is about twice the specific yield. We also found that the MRS pore-size parameter is well correlated with the specific yield. Thus we proposed two linear equations for calculating the specific yield from the MRS water content (with an uncertainty of about 10%) and from the pore-size parameter (with an uncertainty of about 20%). The latter has the advantage of defining a so-called MRS cutoff time for identifying non-drainable MRS water content and thus low groundwater reserves. We also propose a nonlinear equation for calculating the specific yield using jointly the MRS water content and the pore-size parameter, but this approach has to be confirmed by further investigations. This study also confirmed that aquifer transmissivity can be estimated from MRS results with an uncertainty of about 70%. We conclude that MRS can be usefully applied to estimating aquifer specific yield and transmissivity in weathered hard rock aquifers. Our results will contribute to the improvement of well siting and groundwater management in hard rocks.

  1. Crop yield monitoring in the Sahel using root zone soil moisture anomalies derived from SMOS soil moisture data assimilation

    NASA Astrophysics Data System (ADS)

    Gibon, François; Pellarin, Thierry; Alhassane, Agali; Traoré, Seydou; Baron, Christian

    2017-04-01

    West Africa is highly vulnerable, especially in terms of food sustainability. Because agriculture there is mainly rainfed, the high variability of the rainy season strongly impacts crop production, which is driven by soil water availability. To monitor this water availability, classical methods are based on daily precipitation measurements. However, the raingauge network in Africa suffers from poor density (about one gauge per 10,000 km²). Alternatively, real-time satellite-derived precipitation estimates can be used, but they are known to carry large uncertainties which produce significant errors in crop yield estimation. The present study proposes to use root-zone soil moisture rather than precipitation to evaluate crop yield variations. First, a local analysis of the spatiotemporal impact of water deficit on millet crop production in Niger was performed, using in-situ soil moisture measurements (AMMA-CATCH/OZCAR (French Critical Zone exploration network)) and an in-situ millet yield survey. Crop yield measurements were obtained for 10 villages located in the Niamey region from 2005 to 2012. The mean production (over 8 years) is 690 kg/ha, and ranges from 381 to 872 kg/ha during this period. Various statistical relationships based on soil moisture estimates were tested, and the most promising one (R>0.9) linked the 30-cm soil moisture anomalies from mid-August to mid-September (the grain filling period) to the crop yield anomalies. Based on this local study, it was proposed to derive regional statistical relationships using 30-cm soil moisture maps over West Africa. The selected approach was to use a simple hydrological model, the Antecedent Precipitation Index (API), forced by real-time satellite-based precipitation (CMORPH, PERSIANN, TRMM3B42). To reduce uncertainties related to the quality of real-time satellite rainfall products, SMOS soil moisture measurements were assimilated into the API model through a particle filter algorithm. The obtained soil moisture anomalies were then compared to 17 years of crop yield estimates from the FAOSTAT database (1998-2014). Results showed that the 30-cm soil moisture anomalies explained 89% of the crop yield variation in Niger, 72% in Burkina Faso, 82% in Mali and 84% in Senegal.
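
    The API store referenced above is a one-line recursion; the sketch below shows it with an assumed recession coefficient, omitting the SMOS particle-filter assimilation step.

```python
import numpy as np

def api_series(precip, k=0.9, api0=0.0):
    """Antecedent Precipitation Index: API_t = k * API_{t-1} + P_t.

    precip : daily rainfall forcing (mm); k : recession coefficient
    (the 0.9 default is an assumption, not the study's calibrated value).
    """
    api = np.empty(len(precip))
    prev = api0
    for i, p in enumerate(precip):
        prev = k * prev + p
        api[i] = prev
    return api

# anomaly over the grain-filling window (mid-August to mid-September):
# z = (api[window].mean() - climatology_mean) / climatology_std
```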

  2. A PCA aided cross-covariance scheme for discriminative feature extraction from EEG signals.

    PubMed

    Zarei, Roozbeh; He, Jing; Siuly, Siuly; Zhang, Yanchun

    2017-07-01

    Feature extraction of EEG signals plays a significant role in brain-computer interfaces (BCI), as it can significantly affect the performance and the computational time of the system. The main aim of the current work is to introduce an innovative algorithm for acquiring reliable discriminating features from EEG signals, to improve classification performance and to reduce time complexity. This study develops a robust feature extraction method combining principal component analysis (PCA) and the cross-covariance technique (CCOV) for extracting discriminatory information from mental states based on EEG signals in BCI applications. We apply correlation-based variable selection with best-first search on the extracted features to identify the best feature set for characterizing the distribution of mental-state signals. To verify the robustness of the proposed feature extraction method, three machine learning techniques, multilayer perceptron neural networks (MLP), least squares support vector machine (LS-SVM), and logistic regression (LR), are employed on the obtained features. The proposed methods are evaluated on two publicly available datasets, and their performance is compared with some recently reported algorithms. The experimental results show that all three classifiers achieve high performance (above 99% overall classification accuracy) with the proposed feature set. Among these classifiers, MLP and LS-SVM yield the best performance; their average sensitivity, specificity, and classification accuracy are the same, namely 99.32%, 100%, and 99.66%, respectively, on BCI competition dataset IVa, and 100%, 100%, and 100% on BCI competition dataset IVb. The results also indicate that the proposed methods outperform the most recently reported methods by at least 0.25% average accuracy on dataset IVa. The execution time results show that the proposed method has lower time complexity after feature selection. The proposed feature extraction method is very effective for extracting representative information from mental-state EEG signals in BCI applications and reduces the computational complexity of classifiers by reducing the number of extracted features.
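
    The following is a loose sketch of the PCA-plus-cross-covariance idea, not the authors' exact pipeline: trials are projected onto principal components, and each trial's channel-averaged signal is cross-covaried with a class template. The trial layout and the template are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

def pca_ccov_features(trials, template, n_components=10):
    """Combine PCA scores with a cross-covariance feature per EEG trial.

    trials:   (n_trials, n_channels, n_samples) EEG segments (assumed layout)
    template: (n_samples,) reference time course, e.g. a class mean
    """
    n_trials, n_ch, n_s = trials.shape
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(trials.reshape(n_trials, n_ch * n_s))
    # zero-lag cross-covariance of each trial's channel mean with the template
    ccov = np.array([
        [np.correlate(trials[i].mean(axis=0) - trials[i].mean(),
                      template - template.mean(), mode="valid")[0]]
        for i in range(n_trials)
    ])
    return np.hstack([scores, ccov])   # (n_trials, n_components + 1)

rng = np.random.default_rng(1)
trials = rng.normal(size=(40, 8, 128))        # synthetic EEG
template = np.sin(np.linspace(0, 6, 128))     # hypothetical class template
print(pca_ccov_features(trials, template).shape)
```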

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Jialin, E-mail: 2004pjl@163.com; Zhang, Hongbo; Hu, Peijun

    Purpose: Efficient and accurate 3D liver segmentation from contrast-enhanced computed tomography (CT) images plays an important role in therapeutic strategies for hepatic diseases. However, inhomogeneous appearances, ambiguous boundaries, and large variance in shape often make it a challenging task. The existence of liver abnormalities poses further difficulty: despite the significant intensity difference, liver tumors should be segmented as part of the liver. This study aims to address these challenges, especially when the target livers contain subregions with distinct appearances. Methods: The authors propose a novel multiregion-appearance based approach with graph cuts to delineate the liver surface. For livers with multiple subregions, a geodesic distance based appearance selection scheme is introduced to utilize a proper appearance constraint for each subregion. A special case of the proposed method, which uses only one appearance constraint to segment the liver, is also presented. The segmentation process is modeled with energy functions incorporating both boundary and region information. Rather than a simple fixed combination, an adaptive balancing weight is introduced and learned from training sets. The proposed method only requires initialization inside the liver surface; no additional constraints from user interaction are needed. Results: The proposed method was validated on 50 3D CT images from three datasets: the Medical Image Computing and Computer Assisted Intervention (MICCAI) training and testing sets, and a local dataset. On the MICCAI testing set, the proposed method achieved a total score of 83.4 ± 3.1, outperforming nonexpert manual segmentation (average score of 75.0). When applied to the MICCAI training set and the local dataset, it yielded mean Dice similarity coefficients (DSC) of 97.7% ± 0.5% and 97.5% ± 0.4%, respectively. These results demonstrate the accuracy of the method on different CT datasets, and user-operator variability experiments showed good reproducibility. Conclusions: A multiregion-appearance based method is proposed and evaluated to segment the liver. This approach does not require prior model construction and so eliminates the burdens associated with model construction and matching. The proposed method provides results comparable with state-of-the-art methods, and the validation results suggest that it may be suitable for clinical use.
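
    For reference, the generic graph-cut energy with region and boundary terms has the form below; the adaptive balancing weight λ is what the authors learn from training data, but the exact terms here are the textbook ones, not necessarily the paper's.

```latex
% Labeling L over pixels/voxels p, with neighborhood system N:
E(L) \;=\; \sum_{p \in \mathcal{P}} R_p(L_p)
       \;+\; \lambda \sum_{(p,q) \in \mathcal{N}} B_{p,q}\,\delta(L_p \neq L_q)
% R_p: region (appearance) cost of assigning label L_p to p;
% B_{p,q}: boundary cost, large for similar neighboring intensities;
% \lambda: balancing weight, learned adaptively rather than fixed.
```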

  4. Morphable Word Clouds for Time-Varying Text Data Visualization.

    PubMed

    Chi, Ming-Te; Lin, Shih-Syun; Chen, Shiang-Yi; Lin, Chao-Hung; Lee, Tong-Yee

    2015-12-01

    A word cloud is a visual representation of a collection of text documents that uses various font sizes, colors, and spaces to arrange and depict significant words. The majority of previous studies on time-varying word clouds focus on layout optimization and temporal trend visualization. However, they do not fully consider the spatial shapes and temporal motions of word clouds, which are important factors for attracting people's attention and are also important cues for human visual systems in capturing information from time-varying text data. This paper presents a novel method that uses rigid body dynamics to arrange multi-temporal word-tags in a specific shape sequence under various constraints. Each word-tag is regarded as a rigid body in the dynamics. With the aid of geometric, aesthetic, and temporal coherence constraints, the proposed method can generate a temporally morphable word cloud that not only arranges word-tags in their corresponding shapes but also smoothly transforms the shapes of word clouds over time, yielding a pleasing time-varying visualization. Using the proposed frame-by-frame and morphable word clouds, people can observe the overall story of time-varying text data from the shape transitions, and can also observe the details from the word clouds in individual frames. Experimental results on various data demonstrate the feasibility and flexibility of the proposed method for morphable word cloud generation. In addition, an application that uses the proposed word clouds in a simulated exhibition demonstrates the usefulness of the proposed method.

  5. An efficient depth map preprocessing method based on structure-aided domain transform smoothing for 3D view generation

    PubMed Central

    Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei

    2017-01-01

    Depth image-based rendering (DIBR), which is used to render virtual views from a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process inevitably leads to holes in the resulting 3D image where newly-exposed areas appear. In this paper, we propose a structure-aided depth map preprocessing framework in the transformed domain, inspired by the recently proposed domain transform, chosen for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints, including scene structure, edge consistency and visual saliency information, in the transformed domain to implicitly improve the performance of depth map preprocessing. Then, adaptive smoothing localization is incorporated into the proposed framework to further reduce over-smoothing and enhance optimization in the non-hole regions. Unlike other similar methods, the proposed method simultaneously achieves hole filling, edge correction and local smoothing for typical depth maps in a unified framework. Thanks to these advantages, it yields visually satisfactory results with less computational complexity for high quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performance of the proposed method. PMID:28407027

  6. Apply lightweight recognition algorithms in optical music recognition

    NASA Astrophysics Data System (ADS)

    Pham, Viet-Khoi; Nguyen, Hai-Dang; Nguyen-Khac, Tung-Anh; Tran, Minh-Triet

    2015-02-01

    Digitizing musical scores and transforming them into a machine-readable format helps people enjoy, learn, and conserve music, and can even assist music composers; however, the results of existing methods still require improvement for higher accuracy. The authors therefore propose lightweight algorithms for optical music recognition to help people recognize and automatically play musical scores. In our proposal, after removing staff lines and extracting symbols, each music symbol is represented as a grid of identical M × N cells, and the features are extracted and classified with multiple lightweight SVM classifiers. Through experiments, the authors find that a grid of 10 × 12 cells yields the highest precision. Experimental results on a dataset of 4929 music symbols taken from 18 modern music sheets in the Synthetic Score Database show that the proposed method is able to classify printed musical scores with accuracy up to 99.56%.
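
    A minimal sketch of the cell-grid representation and SVM classification: the 10 × 12 grid size comes from the abstract, while the per-cell density feature and all data here are assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def grid_features(symbol, rows=10, cols=12):
    """Mean ink density of each cell in a rows x cols grid over a binary symbol."""
    h, w = symbol.shape
    feats = np.empty(rows * cols)
    for r in range(rows):
        for c in range(cols):
            cell = symbol[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            feats[r * cols + c] = cell.mean() if cell.size else 0.0
    return feats

rng = np.random.default_rng(2)
symbols = [rng.integers(0, 2, size=(40, 24)) for _ in range(20)]  # dummy crops
labels = rng.integers(0, 4, size=20)                              # dummy classes
X = np.array([grid_features(s) for s in symbols])
clf = SVC(kernel="linear").fit(X, labels)   # one of several lightweight SVMs
print(clf.score(X, labels))
```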

  7. An interactive medical image segmentation framework using iterative refinement.

    PubMed

    Kalshetti, Pratik; Bundele, Manas; Rahangdale, Parag; Jangra, Dinesh; Chattopadhyay, Chiranjoy; Harit, Gaurav; Elhence, Abhay

    2017-04-01

    Segmentation is often performed on medical images to identify diseases in clinical evaluation, and it has therefore become one of the major research areas. Conventional image segmentation techniques are unable to provide satisfactory results for medical images because these contain irregularities and need to be pre-processed before segmentation. To obtain a suitable method for medical image segmentation, we propose MIST (Medical Image Segmentation Tool), a two-stage algorithm. The first stage automatically generates a binary marker image of the region of interest using mathematical morphology. This marker serves as the mask image for the second stage, which uses GrabCut to yield an efficient segmented result. The obtained result can be further refined through user interaction in the proposed graphical user interface (GUI). Experimental results show that the proposed method is accurate and provides satisfactory segmentation results with minimal user interaction on medical as well as natural images.
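
    A compact OpenCV sketch of the two-stage idea: a morphology-derived marker seeds GrabCut. The exact morphology recipe (Otsu plus opening) is an assumption, not necessarily MIST's.

```python
import cv2
import numpy as np

def mist_like_segment(image_bgr):
    """Stage 1: binary marker via morphology; Stage 2: mask-initialized GrabCut."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (9, 9))
    marker = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # remove specks

    mask = np.where(marker > 0, cv2.GC_PR_FGD, cv2.GC_PR_BGD).astype(np.uint8)
    bgd = np.zeros((1, 65), np.float64)   # GrabCut's internal GMM buffers
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, mask, None, bgd, fgd, 5, cv2.GC_INIT_WITH_MASK)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)

img = np.full((120, 160, 3), 40, np.uint8)
cv2.circle(img, (80, 60), 30, (200, 200, 200), -1)   # synthetic bright region
print(mist_like_segment(img).sum())
```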

  8. Isobaric yield ratio difference in heavy-ion collisions, and comparison to isoscaling

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Wang, Shan-Shan; Zhang, Yan-Li; Wei, Hui-Ling

    2013-03-01

    An isobaric yield ratio difference (IBD) method is proposed to study the ratio of the neutron-proton chemical potential difference to the temperature (Δμ/T) in heavy-ion collisions. The Δμ/T determined by the IBD method (IB-Δμ/T) is compared to the results of the isoscaling method (IS-Δμ/T), which uses the isotopic or the isotonic yield ratio. Similar distributions of the IB- and IS-Δμ/T are found in the measured 140A MeV 40,48Ca+9Be and 58,64Ni+9Be reactions. The IB- and IS-Δμ/T both show a plateau for small-mass fragments and an increasing part for fragments of relatively larger mass, and the plateaus depend on the n/p ratio of the projectile. It is suggested that the height of the plateau is determined by the difference between the neutron density (ρn) and proton density (ρp) distributions of the projectiles, and that the width reflects the overlapping volume of the projectiles in which ρn and ρp change very little. The difference between the IB- and IS-Δμ/T is explained by the isoscaling parameters being constrained by many isotopes and isotones, while the IBD method only uses the yields of two isobars. It is suggested that the IB-Δμ/T is more reasonable than the IS-Δμ/T, especially when the isotopic or isotonic ratio disobeys isoscaling. As to the question of whether Δμ/T depends on the density or the temperature, the density dependence is preferred, since low density can result in low temperature in peripheral reactions.
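
    For orientation, the two constructions can be written side by side (a grand-canonical sketch; the cancellation of binding-energy terms in the difference is the usual argument, not a result quoted from this abstract):

```latex
% Isoscaling: yield ratio of one fragment (N,Z) between two reactions,
% reaction 2 more neutron-rich than reaction 1:
R_{21}(N,Z) \;=\; \frac{Y_2(N,Z)}{Y_1(N,Z)} \;=\; C\, e^{\alpha N + \beta Z},
\qquad \alpha = \frac{\Delta\mu_n}{T},\;\; \beta = \frac{\Delta\mu_p}{T}.

% IBD: difference of the logarithms of isobaric yield ratios
% (fixed A, isospin I = N - Z), using only two isobars per reaction:
\ln\frac{Y_2(I{+}2,A)}{Y_2(I,A)} \;-\; \ln\frac{Y_1(I{+}2,A)}{Y_1(I,A)}
\;=\; \frac{(\mu_n - \mu_p)_2 - (\mu_n - \mu_p)_1}{T} \;\equiv\; \frac{\Delta\mu}{T}.
```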

  9. Texture analysis with statistical methods for wheat ear extraction

    NASA Astrophysics Data System (ADS)

    Bakhouche, M.; Cointault, F.; Gouton, P.

    2007-01-01

    In the agronomic domain, the simplification of crop counting, which is necessary for yield prediction and agronomic studies, is an important project for technical institutes such as Arvalis. Although the main objective of our global project is to design a mobile robot for natural image acquisition directly in the field, Arvalis first asked us to detect wheat ears in images by image processing before counting them, which provides the first component of the yield. In this paper we compare different texture-based image segmentation techniques that rely on feature extraction by first- and higher-order statistical methods applied to our images. The extracted features are used for unsupervised pixel classification to obtain the different classes in the image; the K-means algorithm is applied before choosing a threshold to highlight the ears. Three methods have been tested in this feasibility study, with an average error of 6%. Although the quality of the detection is currently evaluated visually, automatic evaluation algorithms are being implemented. Moreover, other higher-order statistical methods will be implemented in the future, jointly with methods based on spatio-frequential transforms and specific filtering.
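
    A small sketch of higher-order (co-occurrence) texture features followed by K-means clustering, as described above; the patch size, GLCM settings, and two-class assumption are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.cluster import KMeans

def glcm_features(patch, levels=32):
    """Second-order statistics of one gray-level patch via its GLCM."""
    q = (patch.astype(float) / 256 * levels).astype(np.uint8)   # quantize
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    return [graycoprops(glcm, p).mean()
            for p in ("contrast", "homogeneity", "energy", "correlation")]

rng = np.random.default_rng(3)
image = rng.integers(0, 256, size=(128, 128))     # stand-in field image
patches = [image[r:r + 16, c:c + 16]
           for r in range(0, 128, 16) for c in range(0, 128, 16)]
X = np.array([glcm_features(p) for p in patches])
classes = KMeans(n_clusters=2, n_init=10).fit_predict(X)  # ear vs background
print(np.bincount(classes))
```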

  10. On NUFFT-based gridding for non-Cartesian MRI

    NASA Astrophysics Data System (ADS)

    Fessler, Jeffrey A.

    2007-10-01

    For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.

  11. Visual conspicuity: a new simple standard, its reliability, validity and applicability.

    PubMed

    Wertheim, A H

    2010-03-01

    A general standard for quantifying conspicuity is described. It derives from a simple and easy method for quantitatively measuring the visual conspicuity of an object. The method stems from the theoretical view that the conspicuity of an object is not a property of that object, but describes the degree to which the object is perceptually embedded in, i.e. laterally masked by, its visual environment. First, three variations of a simple method to measure the strength of such lateral masking are described, and empirical evidence for the method's reliability and validity is presented, as are several tests of predictions concerning the effects of viewing distance and ambient light. It is then shown how this method yields a conspicuity standard, expressed as a number, which can be made part of a rule of law, and which can be used to test whether or not, and to what extent, the conspicuity of a particular object, e.g. a traffic sign, meets a predetermined criterion. An additional feature is that, when used under different ambient light conditions, the method may also yield an index of the amount of visual clutter in the environment. Taken together, the evidence illustrates the method's applicability in both the laboratory and real-life situations. STATEMENT OF RELEVANCE: This paper proposes a new method to measure visual conspicuity, yielding a numerical index that can be used in a rule of law. It is of importance to ergonomists and human factors specialists who are asked to measure the conspicuity of an object, such as a traffic or railroad sign, or any other object. The new method is simple and circumvents the need to perform elaborate (search) experiments, and thus has great relevance as a simple tool for applied research.

  12. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    PubMed

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
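
    Since the abstract maps interpolation onto Viterbi decoding, a generic Viterbi sketch may help; the state space (candidate interpolation functions), transition model, and likelihoods are all assumptions here.

```python
import numpy as np

def viterbi(log_prior, log_trans, log_like):
    """Most probable state sequence under a Markov chain.

    log_prior: (S,) log prior over states (interpolation functions)
    log_trans: (S, S) log transition matrix between consecutive pixels
    log_like:  (T, S) log-likelihood of each state at each missing pixel
    """
    T, S = log_like.shape
    score = log_prior + log_like[0]
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # (prev_state, next_state)
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(S)] + log_like[t]
    path = np.empty(T, dtype=int)
    path[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):                  # backtrack
        path[t - 1] = back[t, path[t]]
    return path

rng = np.random.default_rng(4)
print(viterbi(np.log(np.full(3, 1 / 3)),           # 3 hypothetical functions
              np.log(np.full((3, 3), 1 / 3)),
              rng.normal(size=(12, 3))))           # 12 missing pixels
```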

  13. User-friendly freehand ultrasound calibration using Lego bricks and automatic registration.

    PubMed

    Xiao, Yiming; Yan, Charles Xiao Bo; Drouin, Simon; De Nigris, Dante; Kochanowska, Anna; Collins, D Louis

    2016-09-01

    As an inexpensive, noninvasive, and portable clinical imaging modality, ultrasound (US) has been widely employed in many interventional procedures for monitoring potential tissue deformation, tracking surgical tool placement, and locating surgical targets. These applications require a spatial mapping between 2D US images and the 3D coordinates of the patient. Although the positions of the devices (i.e., the ultrasound transducer) and the patient can easily be recorded by a motion tracking system, the spatial relationship between the US image and the tracker attached to the US transducer needs to be estimated through a US calibration procedure. Various calibration techniques have previously been proposed, in which a spatial transformation is computed to match the coordinates of corresponding features in a physical phantom with those seen in the US scans. However, most of these methods are difficult for novice users. We propose an ultrasound calibration method that constructs a phantom from simple Lego bricks and applies an automated multi-slice 2D-3D registration scheme without volumetric reconstruction. The method was validated for calibration accuracy and reproducibility, yielding a calibration accuracy of [Formula: see text] mm and a calibration reproducibility of 1.29 mm. We have thus proposed a robust, inexpensive, and easy-to-use ultrasound calibration method.

  14. Segmentation of Hyperacute Cerebral Infarcts Based on Sparse Representation of Diffusion Weighted Imaging.

    PubMed

    Zhang, Xiaodong; Jing, Shasha; Gao, Peiyi; Xue, Jing; Su, Lu; Li, Weiping; Ren, Lijie; Hu, Qingmao

    2016-01-01

    Segmentation of infarcts at the hyperacute stage is challenging, as they exhibit substantial variability and may even be hard for experts to delineate manually. In this paper, a sparse representation based classification method is explored. For each patient, four volumetric data items, three volumes of diffusion weighted imaging and a computed asymmetry map, are employed to extract patch features, which are then fed to dictionary learning and classification based on sparse representation. The elastic net is adopted to replace the traditional L0-norm/L1-norm constraints on the sparse representation to stabilize the sparse code. To decrease computation cost and reduce false positives, regions-of-interest are determined to confine candidate infarct voxels. The proposed method has been validated on 98 consecutive patients recruited within 6 hours of onset. It is shown that the proposed method handles infarcts with intensity variability and ill-defined edges well, yielding a significantly higher Dice coefficient (0.755 ± 0.118) than the other two methods and their enhanced versions that confine their segmentations within the regions-of-interest (average Dice coefficient less than 0.610). The proposed method could provide a tool to quantify infarcts from diffusion weighted imaging at the hyperacute stage with the accuracy and speed needed to assist decision making, especially for thrombolytic therapy.
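
    A common way to use dictionaries with an elastic-net sparse code is to classify by smallest reconstruction residual; the sketch below follows that standard recipe (the per-class-dictionary setup, dictionary sizes, and parameters are assumptions, not necessarily the authors' pipeline).

```python
import numpy as np
from sklearn.linear_model import ElasticNet

def src_classify(patch, dictionaries, alpha=0.01, l1_ratio=0.5):
    """Assign a patch feature to the class whose dictionary best reconstructs it.

    dictionaries: {label: (n_features, n_atoms) array}; the elastic net
    stabilizes the sparse code, as in the abstract.
    """
    best_label, best_res = None, np.inf
    for label, D in dictionaries.items():
        enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio,
                          fit_intercept=False, max_iter=5000)
        enet.fit(D, patch)                         # sparse code over D's atoms
        residual = np.linalg.norm(patch - D @ enet.coef_)
        if residual < best_res:
            best_label, best_res = label, residual
    return best_label

rng = np.random.default_rng(5)
dicts = {c: rng.normal(size=(64, 32)) for c in ("infarct", "normal")}
code = np.zeros(32)
code[[3, 10, 20]] = (1.5, -2.0, 1.0)               # sparse ground-truth code
patch = dicts["infarct"] @ code + rng.normal(scale=0.05, size=64)
print(src_classify(patch, dicts))                  # expected: 'infarct'
```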

  15. A Bayesian hierarchical model to detect differentially methylated loci from single nucleotide resolution sequencing data

    PubMed Central

    Feng, Hao; Conneely, Karen N.; Wu, Hao

    2014-01-01

    DNA methylation is an important epigenetic modification that has essential roles in cellular processes including gene regulation, development and disease, and is widely dysregulated in most types of cancer. Recent advances in sequencing technology have enabled the measurement of DNA methylation at single nucleotide resolution through methods such as whole-genome bisulfite sequencing and reduced representation bisulfite sequencing. In DNA methylation studies, a key task is to identify differences under distinct biological contexts, for example, between tumor and normal tissue. A challenge in sequencing studies is that the number of biological replicates is often limited by the cost of sequencing, and the small number of replicates leads to unstable variance estimation, which in turn reduces the accuracy of detecting differentially methylated loci (DML). Here we propose a novel statistical method to detect DML when comparing two treatment groups. The sequencing counts are described by a lognormal-beta-binomial hierarchical model, which provides a basis for information sharing across different CpG sites, and a Wald test is developed for hypothesis testing at each CpG site. Simulation results show that the proposed method yields improved DML detection compared to existing methods, particularly when the number of replicates is low. The proposed method is implemented in the Bioconductor package DSS. PMID:24561809
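
    The Wald statistic at a single CpG site has the generic form below (schematic only; the hierarchical model enters through the shrunken variance estimates, whose exact expressions are not reproduced here):

```latex
% Group-mean methylation estimates \hat\mu_1, \hat\mu_2 at one CpG site:
W \;=\; \frac{\hat{\mu}_1 - \hat{\mu}_2}
             {\sqrt{\widehat{\mathrm{Var}}(\hat{\mu}_1)
                  + \widehat{\mathrm{Var}}(\hat{\mu}_2)}}
% Under H_0 : \mu_1 = \mu_2, W is approximately standard normal; sites
% with |W| above a chosen cutoff are called differentially methylated.
```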

  16. Optical configuration with fixed transverse magnification for self-interference incoherent digital holography.

    PubMed

    Imbe, Masatoshi

    2018-03-20

    The optical configuration proposed in this paper consists of a 4-f optical setup with a wavefront modulation device, such as a concave mirror or a spatial light modulator, on the Fourier plane. The transverse magnification of images reconstructed with the proposed configuration is independent of the locations of the object and the image sensor; therefore, reconstructed images of objects at different distances can be scaled with a fixed transverse magnification. This property is derived from Fourier optics and verified mathematically with the optical matrix method. Numerical simulation results and experimental results are also given to confirm the fixed magnification of the reconstructed images.

  17. A new method of creating high intensity neutron source

    NASA Astrophysics Data System (ADS)

    Masuda, T.; Yoshimi, A.; Yoshimura, M.

    We propose a new scheme for producing an intense neutron beam whose yield may exceed those of existing facilities by a few to several orders of magnitude in the sub-eV region. The scheme employs a MeV gamma beam extracted from circulating quantum ions, which has recently been proposed. The gamma beam is directed at a deuteron target, and the photo-disintegration process generates a neutron beam. The calculated neutron energy spectrum is nearly flat down to the neV range, so there is a possibility of utilizing good-quality neutrons, especially in the sub-eV energy region, without using a moderator.

  18. Synthesis of Commercial Products from Copper Wire-Drawing Waste

    NASA Astrophysics Data System (ADS)

    Ayala, J.; Fernández, B.

    2014-06-01

    Copper powder and copper sulfate pentahydrate were obtained from copper wire-drawing scale. The hydrometallurgical recycling process proposed in this article yields high-purity copper powder and analytical grade copper sulfate pentahydrate. In the first stage of the process, the copper is dissolved in sulfuric acid media via dismutation of the scale; in the second stage, copper sulfate pentahydrate is precipitated using ethanol. The effects of pH, reaction time, stirring speed, initial copper concentration, and ethanol/solution volume ratio on the precipitation reaction were studied. The proposed method is technically straightforward and provides efficient recovery of Cu from wire-drawing scale.

  19. Stable and low diffusive hybrid upwind splitting methods

    NASA Technical Reports Server (NTRS)

    Coquel, Frederic; Liou, Meng-Sing

    1992-01-01

    We introduce in this paper a new concept for upwinding: Hybrid Upwind Splitting (HUS). This strategy for upwinding is achieved by combining the two previously existing approaches, Flux Vector Splitting (FVS) and Flux Difference Splitting (FDS), while retaining their individual strengths. Our approach yields upwind methods that share the robustness of FVS schemes in capturing nonlinear waves and the accuracy of some FDS schemes in capturing linear waves. We describe examples of such HUS methods obtained by hybridizing the Osher approach with FVS schemes. Numerical illustrations are presented and demonstrate, in particular, the relevance of the proposed HUS methods for viscous calculations.

  20. 3D optic disc reconstruction via a global fundus stereo algorithm.

    PubMed

    Bansal, M; Sizintsev, M; Eledath, J; Sawhney, H; Pearson, D J; Stone, R A

    2013-01-01

    This paper presents a novel method to recover the 3D structure of the optic disc in the retina from two uncalibrated fundus images. Retinal images are commonly uncalibrated when acquired clinically, creating rectification challenges as well as significant radiometric and blur differences within the stereo pair. By exploiting structural peculiarities of the retina, we modified the Graph Cuts computational stereo method (one of the current state-of-the-art methods) to yield a high quality algorithm for fundus stereo reconstruction. Extensive qualitative and quantitative experimental evaluation (where OCT scans are used as 3D ground truth) on our own and publicly available datasets shows the superiority of the proposed method in comparison to alternatives.

  1. Determination of niobium in rocks by an isotope dilution spectrophotometric method

    USGS Publications Warehouse

    Greenland, L.P.; Campbell, E.Y.

    1970-01-01

    Rocks and minerals are fused with sodium peroxide in the presence of carrier-free 95Nb. The fusion cake is leached with water and the precipitate dissolved in a hydrofluoric-sulfuric acid mixture. Niobium is extracted into methyl isobutyl ketone and further purified by ion exchange. The amount of niobium is determined spectrophotometrically with 4-(2-pyridylazo)-resorcinol, and the chemical yield of the separations is determined by counting 95Nb. This procedure is faster and less sensitive to interferences than previously proposed methods for determining niobium in rocks. The high purity of the separated niobium makes the method applicable to nearly all matrices.

  2. Aerosol analysis with the Coastal Zone Color Scanner - A simple method for including multiple scattering effects

    NASA Technical Reports Server (NTRS)

    Gordon, Howard R.; Castano, Diego J.

    1989-01-01

    A method for studying aerosols over the ocean using Nimbus-7 CZCS data is proposed which circumvents having to perform radiative transfer computations involving the aerosol properties. The method is applied to CZCS band 4 at 670 nm, and yields the total radiance Lt backscattered from the top of a stratified atmosphere containing both stratospheric and tropospheric aerosols, together with the Rayleigh-scattered radiance Lr. The radiance which the aerosol would produce in the single scattering approximation is retrieved from Lt - Lr with an error of no greater than 5-7 percent.

  3. Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data.

    PubMed

    Abram, Samantha V; Helwig, Nathaniel E; Moodie, Craig A; DeYoung, Colin G; MacDonald, Angus W; Waller, Niels G

    2016-01-01

    Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks.
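
    The paper ships annotated R code; purely to illustrate the selection principle (here and in the duplicate record below), a bootstrap-quantile lasso sketch in Python follows. The penalty strength, bootstrap count, and percentile band are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso

def bootstrap_qnt_selection(X, y, alpha=0.1, n_boot=500, q=(2.5, 97.5)):
    """Keep predictors whose bootstrap coefficient interval excludes zero."""
    rng = np.random.default_rng(0)
    n, p = X.shape
    coefs = np.empty((n_boot, p))
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)       # resample rows with replacement
        coefs[b] = Lasso(alpha=alpha, max_iter=10000).fit(X[idx], y[idx]).coef_
    lo, hi = np.percentile(coefs, q, axis=0)
    return (lo > 0) | (hi < 0)                 # interval excludes zero

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 200))                 # stand-in connectivity predictors
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=50)
print(np.flatnonzero(bootstrap_qnt_selection(X, y)))   # ideally [0 1]
```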

  4. Bootstrap Enhanced Penalized Regression for Variable Selection with Neuroimaging Data

    PubMed Central

    Abram, Samantha V.; Helwig, Nathaniel E.; Moodie, Craig A.; DeYoung, Colin G.; MacDonald, Angus W.; Waller, Niels G.

    2016-01-01

    Recent advances in fMRI research highlight the use of multivariate methods for examining whole-brain connectivity. Complementary data-driven methods are needed for determining the subset of predictors related to individual differences. Although commonly used for this purpose, ordinary least squares (OLS) regression may not be ideal due to multi-collinearity and over-fitting issues. Penalized regression is a promising and underutilized alternative to OLS regression. In this paper, we propose a nonparametric bootstrap quantile (QNT) approach for variable selection with neuroimaging data. We use real and simulated data, as well as annotated R code, to demonstrate the benefits of our proposed method. Our results illustrate the practical potential of our proposed bootstrap QNT approach. Our real data example demonstrates how our method can be used to relate individual differences in neural network connectivity with an externalizing personality measure. Also, our simulation results reveal that the QNT method is effective under a variety of data conditions. Penalized regression yields more stable estimates and sparser models than OLS regression in situations with large numbers of highly correlated neural predictors. Our results demonstrate that penalized regression is a promising method for examining associations between neural predictors and clinically relevant traits or behaviors. These findings have important implications for the growing field of functional connectivity research, where multivariate methods produce numerous, highly correlated brain networks. PMID:27516732

  5. A digital image-based method for determining of total acidity in red wines using acid-base titration without indicator.

    PubMed

    Tôrres, Adamastor Rodrigues; Lyra, Wellington da Silva; de Andrade, Stéfani Iury Evangelista; Andrade, Renato Allan Navarro; da Silva, Edvan Cirino; Araújo, Mário César Ugulino; Gaião, Edvaldo da Nóbrega

    2011-05-15

    This work proposes a digital image-based method for determining total acidity in red wines by acid-base titration without using an external indicator or any pre-treatment of the sample. Digital images capture the colour of the emergent radiation, which is complementary to the radiation absorbed by the anthocyanins present in wine. Anthocyanins change colour depending on the pH of the medium, and from the variation of colour in the images obtained during titration, the end point can be localized with accuracy and precision. RGB-based values were employed to build titration curves, and end points were localized using second derivative curves. The official method recommends potentiometric titration with a NaOH standard solution and sample dilution until the pH reaches 8.2-8.4. To illustrate the feasibility of the proposed method, titrations of ten red wines were carried out. Results were compared with the reference method, and no statistically significant difference was observed between them when applying the paired t-test at the 95% confidence level. The proposed method yielded more precise results than the official method, owing to the trivariate (RGB) nature of the measurements associated with digital images.
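
    A minimal numeric sketch of locating the end point from an RGB titration curve; the scalar signal built from R, G, B and the synthetic data are assumptions. The second-derivative curve the paper uses crosses zero at the steepest colour change, which the sketch locates via the first-derivative maximum.

```python
import numpy as np

def end_point_from_rgb(volumes, rgb):
    """Find the titrant volume at the steepest change of an RGB-based curve."""
    signal = np.linalg.norm(rgb, axis=1)       # collapse R,G,B to one curve
    d1 = np.gradient(signal, volumes)
    i = np.argmax(np.abs(d1))                  # second derivative crosses zero here
    return volumes[i]

v = np.linspace(0.0, 25.0, 120)                # synthetic: colour jump at 12.4 mL
rgb = np.stack([120 + 80 * np.tanh((v - 12.4) * 2.0),
                60 - 30 * np.tanh((v - 12.4) * 2.0),
                90 + 0 * v], axis=1)
print(round(end_point_from_rgb(v, rgb), 2))
```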

  6. Electrical resistivity of mechanically stabilized earth wall backfill

    NASA Astrophysics Data System (ADS)

    Snapp, Michael; Tucker-Kulesza, Stacey; Koehn, Weston

    2017-06-01

    Mechanically stabilized earth (MSE) retaining walls used in transportation projects are typically backfilled with coarse aggregate. One of the current testing procedures for selecting backfill material for MSE wall construction is the American Association of State Highway and Transportation Officials standard T 288, "Standard Method of Test for Determining Minimum Laboratory Soil Resistivity." T 288 is designed to test a soil sample's electrical resistivity, which correlates with its corrosion potential. The test is run on soil material passing the No. 10 sieve and is believed to be inappropriate for coarse aggregate. Researchers have therefore proposed new methods to measure the electrical resistivity of coarse aggregate samples in the laboratory. There is a need to verify that the proposed methods yield results representative of in situ conditions; however, no in situ measurement of the electrical resistivity of MSE wall backfill is established. Electrical resistivity tomography (ERT) provides a two-dimensional (2D) profile of the bulk resistivity of backfill material in situ. The objective of this study was to characterize the bulk resistivity of in-place MSE wall backfill aggregate using ERT. Five MSE walls were tested via ERT to determine the bulk resistivity of the backfill: three walls reinforced with polymeric geogrid, one wall reinforced with metallic strips, and one gravity retaining wall with no reinforcement. Variability in the measured resistivity distribution within the backfill may result from non-uniform particle sizes, the thoroughness of compaction, and the presence of water. A quantitative post-processing algorithm was developed to calculate the mean bulk resistivity of in-situ backfill. The study recommends that the ERT data be used to verify proposed testing methods for coarse aggregate that are designed to yield data representative of in situ conditions. A preliminary analysis suggests that ERT may also be utilized for construction quality assurance of compaction thoroughness in MSE construction; however, more data are needed at this time.

  7. Derivative free Davidon-Fletcher-Powell (DFP) for solving symmetric systems of nonlinear equations

    NASA Astrophysics Data System (ADS)

    Mamat, M.; Dauda, M. K.; Mohamed, M. A. bin; Waziri, M. Y.; Mohamad, F. S.; Abdullah, H.

    2018-03-01

    Problems arising from the work of engineers, economists, modelling, industry, and scientific computing are mostly nonlinear in nature, and numerical solution of such systems is widely applied in those areas of mathematics. Over the years there has been significant theoretical study to develop methods for solving such systems; despite these efforts, the methods developed still have deficiencies. As a contribution to solving systems of the form F(x) = 0, x ∈ Rn, a derivative-free method based on the classical Davidon-Fletcher-Powell (DFP) update is presented. This is achieved by simply approximating the inverse Hessian matrix Q_{k+1}^{-1} by θ_k I. The modified method satisfies the descent condition and possesses local superlinear convergence properties. Interestingly, without computing any derivative, the proposed method never failed to converge throughout the numerical experiments. Performance is reported in terms of the number of iterations and CPU time for 40 benchmark test problems solved from different initial starting points. With the aid of the squared-norm merit function and a derivative-free line search technique, the approach yields a method for solving symmetric systems of nonlinear equations that is capable of significantly reducing the CPU time and number of iterations compared to its counterparts. A comparison between the proposed method and the classical DFP update shows that the proposed method is the top performer, outperforming the existing method in almost all cases. In terms of the number of iterations, the proposed method performed best on 38 of the 40 problems (95%), while classical DFP did on 2 (5%). In terms of CPU time, the proposed method performed best on 29 of the 40 problems (72.5%), whereas classical DFP did on 11 (27.5%). The method is valid in its derivation, reliable in terms of the number of iterations, and efficient in terms of CPU time, and thus achieves its objective.
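
    A sketch of the simplified update described above, where the inverse Hessian is replaced by a scalar multiple of the identity; the backtracking rule for θ_k is a plain assumption standing in for the paper's derivative-free line search.

```python
import numpy as np

def df_dfp_solve(F, x0, theta=1.0, tol=1e-8, max_iter=500):
    """Iterate x_{k+1} = x_k - theta_k * F(x_k), shrinking theta_k until the
    squared-norm merit function f(x) = ||F(x)||^2 decreases."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        merit = Fx @ Fx
        if np.sqrt(merit) < tol:
            break
        t = theta
        while t > 1e-12:
            trial = F(x - t * Fx)
            if trial @ trial < merit:          # merit decreased: accept step
                break
            t *= 0.5
        x = x - t * Fx
    return x

# Toy symmetric system: F(x) = x^3 + x - b has a diagonal (symmetric) Jacobian
b = np.array([1.0, 2.0])
print(df_dfp_solve(lambda x: x**3 + x - b, np.zeros(2)))  # ~ [0.682, 1.0]
```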

  8. Estimating rice yield from MODIS-Landsat fusion data in Taiwan

    NASA Astrophysics Data System (ADS)

    Chen, C. R.; Chen, C. F.; Nguyen, S. T.

    2017-12-01

    Rice production monitoring with remote sensing is an important activity in Taiwan due to official initiatives. Yield estimation is a challenge in Taiwan because rice fields are small and fragmented, so high spatiotemporal resolution satellite data providing phenological information on rice crops are required for this monitoring purpose. This research aims to develop data fusion approaches that integrate daily Moderate Resolution Imaging Spectroradiometer (MODIS) and Landsat data for rice yield estimation in Taiwan. In this study, the low-resolution MODIS LST and emissivity data are used as reference data sources to obtain high-resolution LST from Landsat data using a mixed-pixel analysis technique, and the time-series EVI data are derived from the fusion of MODIS and Landsat spectral band data using the STARFM method. The simulated LST and EVI showed close agreement with the reference data. The rice-yield model was established using EVI and LST data, based on rice crop phenology information collected from 371 ground survey sites across the country in 2014. The results achieved with the fusion datasets indicated a close relationship with the reference data, with a coefficient of determination (R2) of 0.75 and a root mean square error (RMSE) of 338.7 kg, more accurate than those using the coarse-resolution MODIS LST data (R2 = 0.71 and RMSE = 623.82 kg). For the comparison of total production, 64 towns located in the western part of Taiwan were used; the results confirmed that the model using the fusion datasets produced more accurate results (R2 = 0.95 and RMSE = 1,243 tons) than that using the coarse-resolution MODIS data (R2 = 0.91 and RMSE = 1,749 tons). This study demonstrates the application of MODIS-Landsat fusion data for rice yield estimation at the township level in Taiwan. The results could be useful to policymakers, and the methods are transferable to other regions of the world for rice yield estimation.
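
    The abstract does not state the functional form of the rice-yield model; purely as a hedged placeholder, a simple multivariate regression of yield on the fused EVI and LST would look like this (all numbers synthetic):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score, mean_squared_error

rng = np.random.default_rng(6)
evi = rng.uniform(0.3, 0.8, size=371)          # stand-in fused EVI per site
lst = rng.uniform(295.0, 310.0, size=371)      # stand-in fused LST (K)
yield_kg = 2000 + 6000 * evi - 40 * (lst - 300) + rng.normal(0, 300, 371)

X = np.column_stack([evi, lst])
model = LinearRegression().fit(X, yield_kg)    # hypothetical EVI+LST model
pred = model.predict(X)
print(r2_score(yield_kg, pred), mean_squared_error(yield_kg, pred) ** 0.5)
```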

  9. Object detection system based on multimodel saliency maps

    NASA Astrophysics Data System (ADS)

    Guo, Ya'nan; Luo, Chongfan; Ma, Yide

    2017-03-01

    Detection of visually salient image regions is extensively applied in computer vision and computer graphics, for tasks such as object detection, adaptive compression, and object recognition, but any single model has its limitations on various images. In our work, we therefore establish an object detection method based on multimodel saliency maps, which absorbs the merits of various individual saliency detection models to achieve promising results. The method can be roughly divided into three steps: first, we propose a decision-making system that evaluates saliency maps obtained by seven competitive methods and selects only the three most valuable ones; second, we introduce a heterogeneous PCNN algorithm to obtain three prime foregrounds, and a self-designed nonlinear fusion method then merges these saliency maps; finally, an adaptive improved and simplified PCNN (SPCNN) model is used to detect the object. Our proposed method constitutes an object detection system for different occasions that requires no training, is simple, and is highly efficient. The proposed saliency fusion technique shows better performance over a broad range of images and broadens the applicability by fusing different individual saliency models. Moreover, the proposed adaptive improved SPCNN model stems from Eckhorn's neuron model, which is well suited to image segmentation because of its biological background, and all of its parameters adapt to the image information. We extensively evaluate our algorithm on a classical salient object detection database; the experimental results demonstrate that the aggregation of saliency maps outperforms the best single saliency model in all cases, yielding the highest precision of 89.90%, a recall rate of 98.20%, the greatest F-measure of 91.20%, and the lowest mean absolute error of 0.057, while the proposed saliency evaluation measure EHA reaches 215.287. We believe our method can be applied to diverse applications in the future.

  10. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data

    PubMed Central

    Ran, Bin; Song, Li; Cheng, Yang; Tan, Huachun

    2016-01-01

    Traffic state estimation from the floating car system is a challenging problem. Because of the low penetration rate and random distribution, the available floating car samples usually cover only part of the space and time points of the road network. To obtain wide-coverage traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links; however, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent the spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimates from very sparse floating car data, particularly when the floating car penetration rate is below 1%. PMID:27448326

  11. Using Tensor Completion Method to Achieving Better Coverage of Traffic State Estimation from Sparse Floating Car Data.

    PubMed

    Ran, Bin; Song, Li; Zhang, Jian; Cheng, Yang; Tan, Huachun

    2016-01-01

    Traffic state estimation from the floating car system is a challenging problem. Because of the low penetration rate and random distribution, the available floating car samples usually cover only part of the space and time points of the road network. To obtain wide-coverage traffic states from the floating car system, many methods have been proposed to estimate the traffic state for the uncovered links; however, these methods cannot provide the traffic state of the entire road network. In this paper, traffic state estimation is transformed into a missing data imputation problem, and a tensor completion framework is proposed to estimate the missing traffic states. A tensor is constructed to model the traffic state, in which observed entries are directly derived from the floating car system and unobserved traffic states are modeled as missing entries of the constructed tensor. The constructed traffic state tensor can represent the spatial and temporal correlations of traffic data and encode the multi-way properties of the traffic state. The advantage of the proposed approach is that it can fully mine and utilize the multi-dimensional inherent correlations of the traffic state. We tested the proposed approach on a well calibrated simulation network. Experimental results demonstrated that the proposed approach yields reliable traffic state estimates from very sparse floating car data, particularly when the floating car penetration rate is below 1%.
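
    Full tensor completion operates on all modes at once; as a much-reduced stand-in for the framework in the two records above, the sketch below fills missing entries of a single unfolding by iterated truncated-SVD projection, keeping the observed speeds fixed.

```python
import numpy as np

def complete_unfolding(M, observed, rank=3, n_iter=200):
    """Low-rank imputation of a (links x day-time) traffic-state unfolding.

    M: matrix with arbitrary values at missing entries
    observed: boolean mask, True where a floating-car measurement exists
    """
    X = np.where(observed, M, M[observed].mean())
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        X = np.where(observed, M, low_rank)     # never overwrite observations
    return X

# Synthetic speeds (km/h): one daily pattern shared across 4 links
pattern = np.tile(50 + 20 * np.sin(np.linspace(0, 6, 48)), 20)
M = np.vstack([pattern * f for f in (0.8, 1.0, 1.2, 0.9)])
mask = np.random.default_rng(7).random(M.shape) < 0.05   # ~5% observed
print(np.abs(complete_unfolding(M, mask) - M)[~mask].mean())
```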

  12. Traffic Sign Detection System for Locating Road Intersections and Roundabouts: The Chilean Case.

    PubMed

    Villalón-Sepúlveda, Gabriel; Torres-Torriti, Miguel; Flores-Calero, Marco

    2017-05-25

    This paper presents a traffic sign detection method for signs close to road intersections and roundabouts, such as stop and yield (give way) signs. The proposed method relies on statistical templates built using color information for both segmentation and classification. The segmentation method uses the RGB-normalized (ErEgEb) color space for ROIs (Regions of Interest) generation based on a chromaticity filter, where templates at 10 scales are applied to the entire image. Templates consider the mean and standard deviation of normalized color of the traffic signs to build thresholding intervals where the expected color should lie for a given sign. The classification stage employs the information of the statistical templates over YCbCr and ErEgEb color spaces, for which the background has been previously removed by using a probability function that models the probability that the pixel corresponds to a sign given its chromaticity values. This work includes an analysis of the detection rate as a function of the distance between the vehicle and the sign. Such information is useful to validate the robustness of the approach and is often not included in the existing literature. The detection rates, as a function of distance, are compared to those of the well-known Viola-Jones method. The results show that for distances less than 48 m, the proposed method achieves a detection rate of 87.5% and 95.4% for yield and stop signs, respectively. For distances less than 30 m, the detection rate is 100% for both signs. The Viola-Jones approach has detection rates below 20% for distances between 30 and 48 m, and barely improves in the 20-30 m range with detection rates of up to 60%. Thus, the proposed method provides a robust alternative for intersection detection that relies on statistical color-based templates instead of shape information. The experiments employed videos of traffic signs taken in several streets of Santiago, Chile, using a research platform implemented at the Robotics and Automation Laboratory of PUC to develop driver assistance systems.

  13. Traffic Sign Detection System for Locating Road Intersections and Roundabouts: The Chilean Case

    PubMed Central

    Villalón-Sepúlveda, Gabriel; Torres-Torriti, Miguel; Flores-Calero, Marco

    2017-01-01

    This paper presents a traffic sign detection method for signs close to road intersections and roundabouts, such as stop and yield (give way) signs. The proposed method relies on statistical templates built using color information for both segmentation and classification. The segmentation method uses the RGB-normalized (ErEgEb) color space for ROIs (Regions of Interest) generation based on a chromaticity filter, where templates at 10 scales are applied to the entire image. Templates consider the mean and standard deviation of normalized color of the traffic signs to build thresholding intervals where the expected color should lie for a given sign. The classification stage employs the information of the statistical templates over YCbCr and ErEgEb color spaces, for which the background has been previously removed by using a probability function that models the probability that the pixel corresponds to a sign given its chromaticity values. This work includes an analysis of the detection rate as a function of the distance between the vehicle and the sign. Such information is useful to validate the robustness of the approach and is often not included in the existing literature. The detection rates, as a function of distance, are compared to those of the well-known Viola–Jones method. The results show that for distances less than 48 m, the proposed method achieves a detection rate of 87.5% and 95.4% for yield and stop signs, respectively. For distances less than 30 m, the detection rate is 100% for both signs. The Viola–Jones approach has detection rates below 20% for distances between 30 and 48 m, and barely improves in the 20–30 m range with detection rates of up to 60%. Thus, the proposed method provides a robust alternative for intersection detection that relies on statistical color-based templates instead of shape information. The experiments employed videos of traffic signs taken in several streets of Santiago, Chile, using a research platform implemented at the Robotics and Automation Laboratory of PUC to develop driver assistance systems. PMID:28587071
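
    A numpy sketch of the chromaticity filter used for ROI generation in the two records above: pixels are kept when their RGB-normalized values fall inside the template's mean ± k·std intervals. The template statistics and interval width k are placeholders.

```python
import numpy as np

def chromaticity_mask(image_rgb, mean, std, k=2.5):
    """Threshold pixels in the RGB-normalized (Er, Eg, Eb) space."""
    rgb = image_rgb.astype(float)
    chroma = rgb / (rgb.sum(axis=2, keepdims=True) + 1e-6)   # Er, Eg, Eb
    lo = np.asarray(mean) - k * np.asarray(std)
    hi = np.asarray(mean) + k * np.asarray(std)
    return np.all((chroma >= lo) & (chroma <= hi), axis=2)

img = np.random.default_rng(8).integers(0, 256, (120, 160, 3))
# Hypothetical red-dominant template for a stop sign:
mask = chromaticity_mask(img, mean=(0.55, 0.25, 0.20), std=(0.05, 0.04, 0.04))
print(mask.mean())   # fraction of candidate pixels
```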

  14. Improved Satellite-based Crop Yield Mapping by Spatially Explicit Parameterization of Crop Phenology

    NASA Astrophysics Data System (ADS)

    Jin, Z.; Azzari, G.; Lobell, D. B.

    2016-12-01

    Field-scale mapping of crop yields with satellite data often relies on crop simulation models. However, these approaches can be hampered by inaccuracies in the simulation of crop phenology. Here we present and test an approach that uses dense time series of Landsat 7 and 8 acquisitions to calibrate various parameters related to crop phenology simulation, such as leaf number and leaf appearance rates. These parameters are then mapped across the Midwestern United States for maize and soybean, and for two different simulation models. We then implement our recently developed Scalable satellite-based Crop Yield Mapper (SCYM) with simulations reflecting the improved phenology parameterizations, and compare with prior estimates based on default phenology routines. Our preliminary results show that the proposed method can effectively alleviate the underestimation of early-season LAI by the default Agricultural Production Systems sIMulator (APSIM), and that spatially explicit parameterization of the phenology model substantially improves SCYM's ability to capture the spatiotemporal variation in maize and soybean yield. The scheme presented in our study thus preserves the scalability of SCYM while significantly reducing its uncertainty.

  15. Revealing how network structure affects accuracy of link prediction

    NASA Astrophysics Data System (ADS)

    Yang, Jin-Xuan; Zhang, Xiao-Dong

    2017-08-01

    Link prediction plays an important role in network reconstruction and network evolution. How the network structure affects the accuracy of link prediction is an interesting problem. In this paper we use common neighbors and the Gini coefficient to reveal the relation between them, which can provide a good reference for choosing a suitable link prediction algorithm according to the network structure. Moreover, statistical analysis reveals correlations between the common neighbors index, the Gini coefficient index, and other indices that describe the network structure, such as Laplacian eigenvalues, clustering coefficient, degree heterogeneity, and assortativity. Furthermore, a new method to predict missing links is proposed. The experimental results show that the proposed algorithm yields better prediction accuracy and greater robustness to the network structure than currently used methods on a variety of real-world networks.
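
    A small sketch of the two quantities the study relates: common-neighbor link scores and the Gini coefficient of the degree sequence. Pairing them as predictor and structure index follows the abstract; the example graph is arbitrary.

```python
import numpy as np
import networkx as nx

def common_neighbor_scores(G):
    """Score every non-edge by its number of common neighbors (CN index)."""
    return {(u, v): len(list(nx.common_neighbors(G, u, v)))
            for u, v in nx.non_edges(G)}

def gini(degrees):
    """Gini coefficient of a degree sequence (structural heterogeneity)."""
    x = np.sort(np.asarray(degrees, dtype=float))
    n = x.size
    return (2 * np.arange(1, n + 1) - n - 1) @ x / (n * x.sum())

G = nx.karate_club_graph()
scores = common_neighbor_scores(G)
best = max(scores, key=scores.get)             # most likely missing link
print(best, scores[best], gini([d for _, d in G.degree()]))
```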

  16. Automatic 3D liver segmentation based on deep learning and globally optimized surface evolution

    NASA Astrophysics Data System (ADS)

    Hu, Peijun; Wu, Fa; Peng, Jialin; Liang, Ping; Kong, Dexing

    2016-12-01

    The detection and delineation of the liver from abdominal 3D computed tomography (CT) images are fundamental tasks in computer-assisted liver surgery planning. However, automatic and accurate segmentation, especially liver detection, remains challenging due to complex backgrounds, ambiguous boundaries, heterogeneous appearances and highly varied shapes of the liver. To address these difficulties, we propose an automatic segmentation framework based on 3D convolutional neural network (CNN) and globally optimized surface evolution. First, a deep 3D CNN is trained to learn a subject-specific probability map of the liver, which gives the initial surface and acts as a shape prior in the following segmentation step. Then, both global and local appearance information from the prior segmentation are adaptively incorporated into a segmentation model, which is globally optimized in a surface evolution way. The proposed method has been validated on 42 CT images from the public Sliver07 database and local hospitals. On the Sliver07 online testing set, the proposed method can achieve an overall score of 80.3 ± 4.5, yielding a mean Dice similarity coefficient of 97.25 ± 0.65%, and an average symmetric surface distance of 0.84 ± 0.25 mm. The quantitative validations and comparisons show that the proposed method is accurate and effective for clinical application.

  17. A method of minimum volume simplex analysis constrained unmixing for hyperspectral image

    NASA Astrophysics Data System (ADS)

    Zou, Jinlin; Lan, Jinhui; Zeng, Yiliang; Wu, Hongtao

    2017-07-01

    The signal recorded from a given pixel by a low-resolution hyperspectral remote sensor, even setting aside the effects of complex terrain, is a mixture of substances. To improve the accuracy of classification and sub-pixel object detection, hyperspectral unmixing (HU) is an active research frontier in remote sensing. Geometry-based unmixing algorithms have become popular because hyperspectral images possess abundant spectral information and the underlying mixing model is easy to understand. However, most of these algorithms rely on the pure-pixel assumption, and since the nonlinear mixing model is complex, it is hard to obtain the optimal endmembers, especially for highly mixed spectral data. To provide a simple but accurate method, we propose a minimum volume simplex analysis constrained (MVSAC) unmixing algorithm. The proposed approach combines the algebraic constraints inherent to the minimum-volume simplex with a soft abundance constraint. By considering the abundance fractions, we can obtain the pure endmember set and the corresponding abundance fractions, and the final unmixing result is closer to reality and has better accuracy. We illustrate the performance of the proposed algorithm in unmixing simulated data and real hyperspectral data, and the results indicate that the proposed method can recover the distinct signatures correctly without redundant endmembers and yields much better performance than pure-pixel-based algorithms.
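
    The abstract describes a linear mixing model with abundance constraints; below is a minimal sketch of only the abundance-estimation step under that model, using nonnegative least squares with a crude renormalization toward sum-to-one. The MVSAC endmember search itself is not reproduced here.

      import numpy as np
      from scipy.optimize import nnls

      def abundances(E, x):
          """Nonnegative abundance estimate for pixel spectrum x given
          endmember matrix E (bands x endmembers), linear mixing model x ~ E a."""
          a, _ = nnls(E, x)
          s = a.sum()
          return a / s if s > 0 else a   # crude push toward sum-to-one

      rng = np.random.default_rng(0)
      E = rng.random((50, 3))               # 50 bands, 3 endmembers
      a_true = np.array([0.6, 0.3, 0.1])
      x = E @ a_true + 0.01 * rng.standard_normal(50)
      print(abundances(E, x))               # should be close to a_true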

  18. Binarization of Gray-Scaled Digital Images Via Fuzzy Reasoning

    NASA Technical Reports Server (NTRS)

    Dominquez, Jesus A.; Klinko, Steve; Voska, Ned (Technical Monitor)

    2002-01-01

    A new fast-computational technique based on a fuzzy entropy measure has been developed to find an optimal binary image threshold. In this method, the image pixel membership functions depend on the threshold value and reflect the distribution of pixel values in two classes; thus, the technique minimizes the classification error. The new method is compared with two of the best-known threshold selection techniques, Otsu and Huang-Wang. The performance of the proposed method surpasses that of the Huang-Wang and Otsu methods when the image consists of a textured background and poor printing quality. All three methods perform well but yield different binarization results if the background and foreground of the image have well-separated gray-level ranges.
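
    The fuzzy-entropy criterion itself is not spelled out in the abstract, but the Otsu baseline it is compared against is standard; here is a compact numpy sketch of Otsu's between-class-variance search, with a synthetic bimodal image as a stand-in.

      import numpy as np

      def otsu_threshold(image, nbins=256):
          """Otsu's method: pick the gray level maximizing between-class variance."""
          hist, edges = np.histogram(image.ravel(), bins=nbins)
          p = hist.astype(float) / hist.sum()
          centers = 0.5 * (edges[:-1] + edges[1:])
          w0 = np.cumsum(p)                  # class-0 probability
          m = np.cumsum(p * centers)         # cumulative first moment
          mg = m[-1]                         # global mean
          w1 = 1.0 - w0
          valid = (w0 > 0) & (w1 > 0)
          between = np.zeros_like(w0)
          between[valid] = (mg * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
          return centers[np.argmax(between)]

      img = np.concatenate([np.random.normal(60, 10, 5000),
                            np.random.normal(180, 12, 5000)]).reshape(100, 100)
      t = otsu_threshold(img)
      binary = img > t
      print(t)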

  20. An alternative approach for modeling strength differential effect in sheet metals with symmetric yield functions

    NASA Astrophysics Data System (ADS)

    Kurukuri, Srihari; Worswick, Michael J.

    2013-12-01

    An alternative approach is proposed to utilize symmetric yield functions for modeling the tension-compression asymmetry commonly observed in hcp materials. In this work, the strength differential (SD) effect is modeled by choosing separate symmetric plane stress yield functions (for example, Barlat Yld 2000-2d) for tension, i.e., in the first quadrant of principal stress space, and for compression, i.e., in the third quadrant. In the second and fourth quadrants, the yield locus is constructed by adopting interpolating functions between the uniaxial tensile and compressive stress states. Different interpolating functions are chosen and the predictive capability of each is discussed. The main advantage of the proposed approach is that the yield locus parameters are deterministic and relatively easy to identify compared to the Cazacu family of yield functions commonly used for modeling the SD effect observed in hcp materials.
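
    The abstract does not give the interpolating functions; one plausible form (an assumption for illustration, not the authors' scheme) blends the two loci in a mixed quadrant with a weight driven by the principal-stress angle:

      % One plausible mixed-quadrant interpolation (an assumption, not the
      % authors' scheme): in the second quadrant, blend the tensile locus
      % \phi_t and the compressive locus \phi_c with a weight following the
      % principal-stress angle \theta = \operatorname{atan2}(\sigma_2, \sigma_1).
      \phi(\boldsymbol{\sigma}) = w(\theta)\,\phi_t(\boldsymbol{\sigma})
          + \bigl(1 - w(\theta)\bigr)\,\phi_c(\boldsymbol{\sigma}),
      \qquad w(\theta) = \sin^{2}\theta
      \quad \text{for } \theta \in [90^\circ, 180^\circ].

    With this choice, w = 1 at uniaxial tension along the second axis (θ = 90°) and w = 0 at uniaxial compression along the first axis (θ = 180°); the fourth quadrant would be handled symmetrically.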

  1. Improved protein-protein interactions prediction via weighted sparse representation model combining continuous wavelet descriptor and PseAA composition.

    PubMed

    Huang, Yu-An; You, Zhu-Hong; Chen, Xing; Yan, Gui-Ying

    2016-12-23

    Protein-protein interactions (PPIs) are essential to most biological processes. Since bioscience entered the era of the genome and proteome, there has been a growing demand for knowledge about PPI networks. High-throughput biological technologies can be used to identify new PPIs, but they are expensive, time-consuming, and tedious. Therefore, computational methods for predicting PPIs have an important role. In recent years, an increasing number of computational methods, such as protein structure-based approaches, have been proposed for predicting PPIs. The principal limitation of these methods is that they require prior information about the protein to infer PPIs. It is therefore of much significance to develop computational methods that use only the amino acid sequence of a protein. Here, we report a highly efficient approach for predicting PPIs. The main improvements come from a novel protein sequence representation, combining a continuous wavelet descriptor with Chou's pseudo amino acid composition (PseAAC), and from adopting a weighted sparse representation based classifier (WSRC). This method, cross-validated on the PPI datasets of Saccharomyces cerevisiae, Human and H. pylori, achieves excellent results, with accuracies as high as 92.50%, 95.54% and 84.28%, respectively, significantly better than previously proposed methods. Extensive experiments are performed to compare the proposed method with the state-of-the-art Support Vector Machine (SVM) classifier. The outstanding results yielded by our model indicate that the proposed feature extraction method, combining two kinds of descriptors, has strong expressive ability and is expected to provide comprehensive and effective information for machine learning-based classification models. In addition, the prediction performance in the comparison experiments shows that the combined feature and WSRC cooperate well. Thus, the proposed method is a very efficient way to predict PPIs and may be a useful supplementary tool for future proteomics studies.

  2. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    PubMed

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  3. Accurate formula for gaseous transmittance in the infrared.

    PubMed

    Gibson, G A; Pierluissi, J H

    1971-07-01

    By considering the infrared transmittance model of Zachor as the equation for an elliptic cone, a quadratic generalization is proposed that yields significantly greater computational accuracy. The strong-band parameters are obtained by iterative nonlinear curve-fitting methods using a digital computer. The remaining parameters are determined with a linear least-squares technique and a weighting function that yields better results than the one adopted by Zachor. The model is applied to CO2 over intervals of 50 cm⁻¹ between 550 cm⁻¹ and 9150 cm⁻¹ and to water vapor over similar intervals between 1050 cm⁻¹ and 9950 cm⁻¹, with mean rms deviations from the original data of 2.30 × 10⁻³ and 1.83 × 10⁻³, respectively.
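
    The strong-band parameters are said to be obtained by iterative nonlinear curve fitting; generically, that step looks like the scipy sketch below. The transmittance form used here is an illustrative strong-line-style stand-in, not Zachor's actual model.

      import numpy as np
      from scipy.optimize import curve_fit

      # Illustrative band-transmittance form (NOT Zachor's model):
      # tau(u) = exp(-a u / sqrt(1 + b u)), with absorber amount u.
      def tau(u, a, b):
          return np.exp(-a * u / np.sqrt(1.0 + b * u))

      u = np.linspace(0.1, 10, 50)
      rng = np.random.default_rng(9)
      data = tau(u, 0.8, 2.0) + 0.002 * rng.standard_normal(50)
      (a_fit, b_fit), _ = curve_fit(tau, u, data, p0=(1.0, 1.0))
      print(a_fit, b_fit)    # recovered band parameters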

  4. Brain tumor segmentation with Vander Lugt correlator based active contour.

    PubMed

    Essadike, Abdelaziz; Ouabida, Elhoussaine; Bouzid, Abdenbi

    2018-07-01

    The manual segmentation of brain tumors from medical images is an error-prone, sensitive, and time-consuming process. This paper presents an automatic and fast method of brain tumor segmentation. In the proposed method, a numerical simulation of the optical Vander Lugt correlator is used to automatically detect the abnormal tissue region. The tumor filter used in the simulated optical correlation is tailored to all brain tumor types, and especially to glioblastoma, which is considered the most aggressive cancer. The simulated optical correlation, computed between Magnetic Resonance Images (MRI) and this filter, precisely and automatically estimates an initial contour inside the tumorous tissue. In the segmentation part, the detected initial contour is used to define an active contour model, posing the task as an energy minimization problem. As a result, this initial contour assists the algorithm in evolving an active contour model towards the exact tumor boundaries. Equally important, for comparison purposes, we considered different active contour models and investigated their impact on the performance of the segmentation task. Several images from the BRATS database, with tumors anywhere in the image and of different sizes, contrasts, and shapes, are used to test the proposed system. Furthermore, several performance metrics are computed to present an aggregate overview of the advantages of the proposed method. The proposed method achieves high accuracy in detecting the tumorous tissue via a parameter returned by the simulated optical correlation. In addition, the proposed method yields better performance than the active contour based methods, with averages of Sensitivity = 0.9733, Dice coefficient = 0.9663, Hausdorff distance = 2.6540, Specificity = 0.9994, and is faster, with an average computational time of 0.4119 s per image. Results reported on the BRATS database reveal that our proposed system improves over recently published state-of-the-art methods in brain tumor detection and segmentation. Copyright © 2018 Elsevier B.V. All rights reserved.
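
    A Vander Lugt correlator is, in essence, an optical matched filter. A minimal digital analogue correlates the image with a tumor-like template in the Fourier domain and takes the correlation peak as the seed for the initial contour; the template and image below are toy stand-ins, not the paper's tailored filter.

      import numpy as np

      def correlation_peak(image, template):
          """Digital analogue of a matched-filter (Vander Lugt-style) correlation:
          correlate in the Fourier domain and return the peak location."""
          F = np.fft.fft2(image)
          H = np.conj(np.fft.fft2(template, s=image.shape))  # matched filter
          corr = np.real(np.fft.ifft2(F * H))
          peak = np.unravel_index(np.argmax(corr), corr.shape)
          return peak, corr[peak]

      rng = np.random.default_rng(1)
      img = rng.standard_normal((128, 128))
      tmpl = np.ones((9, 9))
      img[40:49, 70:79] += 5.0            # bright blob standing in for a lesion
      print(correlation_peak(img, tmpl))  # peak near (40, 70)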

  5. Vectorial finite elements for solving the radiative transfer equation

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Le Corre, S.; Digonnet, H.; Favennec, Y.

    2018-06-01

    The discrete ordinate method coupled with the finite element method is often used for the spatio-angular discretization of the radiative transfer equation. In this paper we attempt to improve upon such a discretization technique. Instead of using standard finite elements, we reformulate the radiative transfer equation using vectorial finite elements. In comparison to standard finite elements, this reformulation yields faster timings for the linear system assemblies, as well as for the solution phase when using scattering media. The proposed vectorial finite element discretization for solving the radiative transfer equation is cross-validated against a benchmark problem available in the literature. In addition, we have used the method of manufactured solutions to verify the order of accuracy of our discretization technique within different absorbing, scattering, and emitting media. For solving large problems of radiation on parallel computers, the vectorial finite element method is parallelized using domain decomposition. The proposed domain decomposition method scales to a large number of processes, and its performance is unaffected by changes in the optical thickness of the medium. Our parallel solver is used to solve a large-scale radiative transfer problem involving Kelvin-cell radiation.

  6. Influence of the distribution of PWO crystal radiation hardness on electromagnetic calorimeter performance

    NASA Astrophysics Data System (ADS)

    Drobychev, Gleb Yu.; Borisevich, Andrei E.; Korjik, Mikhail V.; Lecoq, Paul; Moroz, Valeri I.; Peigneux, Jean-Pierre

    2002-06-01

    The distribution of about 5000 mass-produced PWO crystals by their light yield and radiation hardness is analysed. The correlation between results of radiation hardness measurements at low and saturating dose rates is refined. A method for the evaluation of the energy resolution of the electromagnetic calorimeter accounting for a distribution of individual PWO crystal characteristics is proposed. A preliminary analysis of the PWO crystal recovery kinetics is also performed.

  7. A controlled genetic algorithm by fuzzy logic and belief functions for job-shop scheduling.

    PubMed

    Hajri, S; Liouane, N; Hammadi, S; Borne, P

    2000-01-01

    Most scheduling problems are highly complex combinatorial problems. However, stochastic methods such as genetic algorithms yield good solutions. In this paper, we present a controlled genetic algorithm (CGA) based on fuzzy logic and belief functions to solve job-shop scheduling problems. For better performance, we propose an efficient representational scheme, heuristic rules for creating the initial population, and a new methodology for mixing and computing genetic operator probabilities.
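
    As a sketch of the underlying machinery, here is a generic permutation GA for a toy scheduling fitness; fixed operator probabilities stand in for the paper's fuzzy-logic/belief-function control, and the fitness function is a placeholder for a real makespan model.

      import random

      def genetic_algorithm(jobs, fitness, pop_size=50, generations=200,
                            p_cross=0.8, p_mut=0.2):
          """Generic GA over job permutations; lower fitness is better."""
          pop = [random.sample(jobs, len(jobs)) for _ in range(pop_size)]
          for _ in range(generations):
              pop.sort(key=fitness)
              survivors = pop[:pop_size // 2]
              children = []
              while len(children) < pop_size - len(survivors):
                  a, b = random.sample(survivors, 2)
                  child = a[:]
                  if random.random() < p_cross:       # order crossover (OX-like)
                      cut = random.randrange(1, len(jobs))
                      head = a[:cut]
                      child = head + [j for j in b if j not in head]
                  if random.random() < p_mut:         # swap mutation
                      i, k = random.sample(range(len(jobs)), 2)
                      child[i], child[k] = child[k], child[i]
                  children.append(child)
              pop = survivors + children
          return min(pop, key=fitness)

      # toy fitness: total weighted completion time as a stand-in for makespan
      durations = {0: 4, 1: 3, 2: 7, 3: 2, 4: 5}
      def fitness(seq):
          t, total = 0, 0
          for j in seq:
              t += durations[j]
              total += t
          return total

      print(genetic_algorithm(list(durations), fitness))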

  8. Low-Resistivity Zinc Selenide for Heterojunctions

    NASA Technical Reports Server (NTRS)

    Stirn, R. J.

    1986-01-01

    Magnetron reactive sputtering enables doping of this semiconductor. The proposed method of reactive sputtering combined with doping shows potential for yielding low-resistivity zinc selenide films. Zinc selenide is an attractive material for forming heterojunctions with other semiconductor compounds such as zinc phosphide, cadmium telluride, and gallium arsenide. Such semiconductor junctions are promising for future optoelectronic devices, including solar cells and electroluminescent displays. Resistivities of zinc selenide layers deposited by evaporation or chemical vapor deposition are too high to form practical heterojunctions.

  9. Estimating nonrigid motion from inconsistent intensity with robust shape features

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Wenyang; Ruan, Dan, E-mail: druan@mednet.ucla.edu; Department of Radiation Oncology, University of California, Los Angeles, California 90095

    2013-12-15

    Purpose: To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Methods: Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. Results: To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. Conclusions: The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.

  10. Novel inter-crystal scattering event identification method for PET detectors

    NASA Astrophysics Data System (ADS)

    Lee, Min Sun; Kang, Seung Kwan; Lee, Jae Sung

    2018-06-01

    Here, we propose a novel method to identify inter-crystal scattering (ICS) events from a PET detector that is even applicable to light-sharing designs. In the proposed method, the detector observation is considered as a linear problem and ICS events are identified by solving this problem. Two ICS identification methods were suggested for solving the linear problem: pseudoinverse matrix calculation and convex constrained optimization. The proposed method was evaluated based on simulation and experimental studies. For the simulation study, an 8 × 8 photosensor was coupled to 8 × 8, 10 × 10 and 12 × 12 crystal arrays to simulate a one-to-one coupling detector and two light-sharing detectors, respectively. The identification rate (the rate at which the identified ICS events correctly include the true first-interaction position) and the energy linearity were evaluated for the proposed ICS identification methods. For the experimental study, a digital silicon photomultiplier was coupled with 8 × 8 and 10 × 10 arrays of 3 × 3 × 20 mm³ LGSO crystals to construct the one-to-one coupling and light-sharing detectors, respectively. Intrinsic spatial resolutions were measured for the two detector types. The proposed ICS identification methods were implemented, and intrinsic resolutions were compared with and without ICS recovery. As a result, the simulation study showed that the proposed convex optimization method yielded robust energy estimation and high ICS identification rates of 0.93 and 0.87 for the one-to-one and light-sharing detectors, respectively. The experimental study showed a resolution improvement after recovering the identified ICS events into the first interaction position. The average intrinsic spatial resolutions for the one-to-one and light-sharing detectors were 1.95 and 2.25 mm FWHM without ICS recovery, respectively. These values improved to 1.72 and 1.83 mm after ICS recovery, respectively. In conclusion, our proposed method showed good ICS identification in both one-to-one coupling and light-sharing detectors. We experimentally validated that ICS recovery based on the proposed identification method led to an improved resolution.
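
    Treating the detector observation as a linear problem, as described above, the two suggested solvers can be sketched with numpy/scipy. The light-sharing matrix below is a random stand-in, and NNLS stands in for the paper's convex constrained optimization.

      import numpy as np
      from scipy.optimize import nnls

      def identify_ics(A, y):
          """Solve the detector model y = A x for per-crystal energies x:
          A maps crystal energy deposits to photosensor signals. The
          pseudoinverse gives a fast estimate; NNLS enforces nonnegativity."""
          x_pinv = np.linalg.pinv(A) @ y
          x_nnls, _ = nnls(A, y)
          return x_pinv, x_nnls

      rng = np.random.default_rng(2)
      A = np.abs(rng.standard_normal((64, 64)))   # hypothetical light-sharing matrix
      x = np.zeros(64); x[10], x[11] = 0.7, 0.3   # ICS: energy split over two crystals
      y = A @ x + 0.01 * rng.standard_normal(64)
      xp, xn = identify_ics(A, y)
      print(np.argsort(xn)[-2:])                  # two crystals with largest energy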

  11. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    PubMed Central

    Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang

    2014-01-01

    Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model nonGaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. 
Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images. PMID:24989402

  12. Accuracy-enhanced constitutive parameter identification using virtual fields method and special stereo-digital image correlation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhongya; Pan, Bing; Grédiac, Michel; Song, Weidong

    2018-04-01

    The virtual fields method (VFM) is generally used with two-dimensional digital image correlation (2D-DIC) or the grid method (GM) for identifying constitutive parameters. However, when a small out-of-plane translation/rotation occurs to the test specimen, 2D-DIC and GM are prone to yield inaccurate measurements, which further lessens the accuracy of the parameter identification using VFM. In this work, an easy-to-implement but effective "special" stereo-DIC (SS-DIC) method is proposed for accuracy-enhanced VFM identification. The SS-DIC not only delivers accurate deformation measurement unaffected by the unavoidable out-of-plane movement/rotation of a test specimen, but also ensures evenly distributed calculation data in space, which leads to simple data processing. Based on the accurate kinematic fields with evenly distributed measurement points determined by the SS-DIC method, constitutive parameters can be identified by VFM with enhanced accuracy. Uniaxial tensile tests of a perforated aluminum plate and pure shear tests of a prismatic aluminum specimen verified the effectiveness and accuracy of the proposed method. Experimental results show that the constitutive parameters identified by VFM using SS-DIC are more accurate and stable than those identified by VFM using 2D-DIC. It is suggested that the proposed SS-DIC can be used as a standard measuring tool for mechanical identification using VFM.

  13. 77 FR 71860 - Self-Regulatory Organizations; EDGX Exchange, Inc.; Notice of Filing and Immediate Effectiveness...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-04

    ... RLP Approval Order, which account for the difference of assumed information and sophistication level... Members that utilize Retail Orders. Flag ZA is proposed to be yielded for those Members that use Retail... is proposed to be yielded for those Members that use Retail Orders that remove liquidity from EDGX...

  14. Prediction of submarine scattered noise by the acoustic analogy

    NASA Astrophysics Data System (ADS)

    Testa, C.; Greco, L.

    2018-07-01

    The prediction of the noise scattered by a submarine subject to propeller tonal noise is here addressed through a non-standard frequency-domain formulation that extends the use of the acoustic analogy to scattering problems. A boundary element method yields the scattered pressure upon the hull surface by the solution of a boundary integral equation, whereas the noise radiated in the fluid domain is evaluated by the corresponding boundary integral representation. The propeller-induced incident pressure field on the scatterer is detected by combining an unsteady three-dimensional panel method with the Bernoulli equation. For each frequency of interest, the numerical results concern sound pressure levels upon the hull and in the flowfield. The validity of the results is established by comparison with a time-marching hydrodynamic panel method that solves propeller and hull jointly. Within the framework of potential-flow hydrodynamics, it is found that the scattering formulation proposed herein successfully captures noise magnitude and directivity both on the hull surface and in the flowfield, yielding a computationally efficient solution procedure that may be useful in preliminary design/multidisciplinary optimization applications.

  15. Accounting for range uncertainties in the optimization of intensity modulated proton therapy.

    PubMed

    Unkelbach, Jan; Chan, Timothy C Y; Bortfeld, Thomas

    2007-05-21

    Treatment plans optimized for intensity modulated proton therapy (IMPT) may be sensitive to range variations. The dose distribution may deteriorate substantially when the actual range of a pencil beam does not match the assumed range. We present two treatment planning concepts for IMPT which incorporate range uncertainties into the optimization. The first method is a probabilistic approach. The range of a pencil beam is assumed to be a random variable, which makes the delivered dose and the value of the objective function random variables, too. We then propose to optimize the expectation value of the objective function. The second approach is a robust formulation that applies methods developed in the field of robust linear programming. This approach optimizes the worst case dose distribution that may occur, assuming that the ranges of the pencil beams may vary within some interval. Both methods yield treatment plans that are considerably less sensitive to range variations compared to conventional treatment plans optimized without accounting for range uncertainties. In addition, both approaches, although conceptually different, yield very similar results on a qualitative level.
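
    A minimal sketch of the two objectives, assuming a small set of discrete range scenarios with scenario dose-influence matrices; all matrices and numbers below are placeholders, not clinical data.

      import numpy as np

      def expected_objective(weights, dose_scenarios, probs, target):
          """Probabilistic approach: expectation of a quadratic dose objective
          over discrete range scenarios."""
          return sum(p * np.sum((D @ weights - target) ** 2)
                     for D, p in zip(dose_scenarios, probs))

      def worst_case_objective(weights, dose_scenarios, target):
          """Robust approach: value of the worst scenario."""
          return max(np.sum((D @ weights - target) ** 2) for D in dose_scenarios)

      rng = np.random.default_rng(3)
      nominal = np.abs(rng.standard_normal((30, 10)))       # voxels x pencil beams
      scenarios = [nominal * s for s in (0.95, 1.0, 1.05)]  # range over/undershoot
      probs = [0.25, 0.5, 0.25]
      w = np.full(10, 0.5)
      t = np.ones(30)
      print(expected_objective(w, scenarios, probs, t))
      print(worst_case_objective(w, scenarios, t))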

  16. Optimization of microwave-assisted enzymatic extraction of polysaccharides from the fruit of Schisandra chinensis Baill.

    PubMed

    Cheng, Zhenyu; Song, Haiyan; Yang, Yingjie; Liu, Yan; Liu, Zhigang; Hu, Haobin; Zhang, Yang

    2015-05-01

    A microwave-assisted enzymatic extraction (MAEE) method was developed and optimized by response surface methodology (RSM) and an orthogonal test design to enhance the extraction of crude polysaccharides (CPS) from the fruit of Schisandra chinensis Baill. The optimum conditions were as follows: microwave irradiation time of 10 min, extraction pH of 4.21, extraction temperature of 47.58°C, extraction time of 3 h, and enzyme concentration of 1.5% (wt% of S. chinensis powder) for cellulase, papain and pectinase, respectively. Under these conditions, the extraction yield of CPS was 7.38 ± 0.21%, in close agreement with the value predicted by the model. Three other methods, heat-refluxing extraction (HRE), ultrasonic-assisted extraction (UAE) and enzyme-assisted extraction (EAE), were further compared for extracting CPS. The results indicated that the MAEE method gave the highest extraction yield of CPS at a lower temperature, showing that the proposed approach is a simple and efficient technique for the extraction of CPS from S. chinensis Baill. Copyright © 2015 Elsevier B.V. All rights reserved.
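
    RSM optimization of this kind typically fits a second-order polynomial response surface to designed experiments and locates its optimum; a hedged sketch with hypothetical two-factor data (pH and temperature only, invented values):

      import numpy as np
      from sklearn.preprocessing import PolynomialFeatures
      from sklearn.linear_model import LinearRegression

      # Hypothetical design data: (pH, temperature) -> extraction yield (%).
      X = np.array([[3.5, 40], [3.5, 55], [4.2, 47], [5.0, 40], [5.0, 55],
                    [4.2, 40], [4.2, 55], [3.5, 47], [5.0, 47]])
      y = np.array([5.1, 5.6, 7.4, 5.0, 5.4, 6.2, 6.5, 6.0, 5.8])

      # Second-order response surface, the usual RSM model form.
      poly = PolynomialFeatures(degree=2, include_bias=True)
      model = LinearRegression().fit(poly.fit_transform(X), y)

      # Locate the optimum on a grid over the experimental region.
      ph, temp = np.meshgrid(np.linspace(3.5, 5.0, 61), np.linspace(40, 55, 61))
      grid = np.column_stack([ph.ravel(), temp.ravel()])
      pred = model.predict(poly.transform(grid))
      print(grid[np.argmax(pred)], pred.max())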

  17. A singular-value method for reconstruction of nonradial and lossy objects.

    PubMed

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
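
    The reduced-rank representation at the core of the method can be sketched with a truncated SVD; the operator below is a random stand-in with a fast-decaying spectrum, not a physical scattering operator.

      import numpy as np

      def reduced_rank(S, k):
          """Rank-k approximation of a scattering operator S via truncated SVD."""
          U, s, Vh = np.linalg.svd(S, full_matrices=False)
          return U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]

      rng = np.random.default_rng(4)
      # Hypothetical discretized operator with a fast-decaying spectrum.
      A = rng.standard_normal((80, 40)); B = rng.standard_normal((40, 80))
      S = A @ np.diag(0.5 ** np.arange(40)) @ B
      S5 = reduced_rank(S, 5)
      print(np.linalg.norm(S - S5) / np.linalg.norm(S))  # small relative error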

  18. Application of a transonic potential flow code to the static aeroelastic analysis of three-dimensional wings

    NASA Technical Reports Server (NTRS)

    Whitlow, W., Jr.; Bennett, R. M.

    1982-01-01

    Since the aerodynamic theory is nonlinear, the method requires the coupling of two iterative processes: an aerodynamic analysis and a structural analysis. A full potential analysis code, FLO22, is combined with a linear structural analysis to yield aerodynamic load distributions on and deflections of elastic wings. This method was used to analyze an aeroelastically-scaled wind tunnel model of a proposed executive-jet transport wing and an aeroelastic research wing. The results are compared with the corresponding rigid-wing analyses, and some effects of elasticity on the aerodynamic loading are noted.

  19. Phonon-magnon interaction in low dimensional quantum magnets observed by dynamic heat transport measurements.

    PubMed

    Montagnese, Matteo; Otter, Marian; Zotos, Xenophon; Fishman, Dmitry A; Hlubek, Nikolai; Mityashkin, Oleg; Hess, Christian; Saint-Martin, Romuald; Singh, Surjeet; Revcolevschi, Alexandre; van Loosdrecht, Paul H M

    2013-04-05

    Thirty-five years ago, Sanders and Walton [Phys. Rev. B 15, 1489 (1977)] proposed a method to measure the phonon-magnon interaction in antiferromagnets through thermal transport, which so far has not been verified experimentally. We show that a dynamical variant of this approach allows direct extraction of the phonon-magnon equilibration time, yielding 400 μs for the cuprate spin-ladder system Ca₉La₅Cu₂₄O₄₁. The present work provides a general method to directly address the spin-phonon interaction by means of dynamical transport experiments.

  20. Online Farsi digit recognition using their upper half structure

    NASA Astrophysics Data System (ADS)

    Ghods, Vahid; Sohrabi, Mohammad Karim

    2015-03-01

    In this paper, we investigated the efficiency of using the upper-half structure of Farsi numerical digits. In other words, half of the data (the upper half of the digit shapes) was exploited for the recognition of Farsi numerical digits. This method can be used for both offline and online recognition. Using half of the data is more efficient for processing speed and data transfer and, in this application, for accuracy. A hidden Markov model (HMM) was used to classify online Farsi digits. Evaluation was performed on the TMU dataset, which contains more than 1200 samples of online handwritten Farsi digits. The proposed method yielded a higher recognition rate.
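
    A per-class HMM classifier of the kind described can be sketched with hmmlearn (assumed available; pip install hmmlearn). The toy trajectories below stand in for upper-half pen strokes from the TMU data.

      import numpy as np
      from hmmlearn import hmm

      def train_digit_models(sequences_by_digit, n_states=4):
          """One GaussianHMM per digit class, trained on (x, y) pen trajectories."""
          models = {}
          for digit, seqs in sequences_by_digit.items():
              X = np.vstack(seqs)
              lengths = [len(s) for s in seqs]
              m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                                  n_iter=50)
              m.fit(X, lengths)
              models[digit] = m
          return models

      def classify(models, seq):
          """Pick the class whose HMM gives the highest log-likelihood."""
          return max(models, key=lambda d: models[d].score(seq))

      rng = np.random.default_rng(5)
      toy = {0: [rng.standard_normal((20, 2)) for _ in range(5)],
             1: [rng.standard_normal((20, 2)) + 2.0 for _ in range(5)]}
      models = train_digit_models(toy)
      print(classify(models, rng.standard_normal((20, 2)) + 2.0))  # likely 1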

  1. Enantioselective Synthesis of α-Mercapto-β-amino Esters via Rh(II)/Chiral Phosphoric Acid-Cocatalyzed Three-Component Reaction of Diazo Compounds, Thiols, and Imines.

    PubMed

    Xiao, Guolan; Ma, Chaoqun; Xing, Dong; Hu, Wenhao

    2016-12-02

    An enantioselective method for the synthesis of α-mercapto-β-amino esters has been developed via a rhodium(II)/chiral phosphoric acid-cocatalyzed three-component reaction of diazo compounds, thiols, and imines. This transformation is proposed to proceed through enantioselective trapping of the sulfonium ylide intermediate generated in situ from the diazo compound and thiol by the phosphoric acid-activated imine. With this method, a series of α-mercapto-β-amino esters were obtained in good yields with moderate to good stereoselectivities.

  2. Alcoholism detection in magnetic resonance imaging by Haar wavelet transform and back propagation neural network

    NASA Astrophysics Data System (ADS)

    Yu, Yali; Wang, Mengxia; Lima, Dimas

    2018-04-01

    In order to develop a novel alcoholism detection method, we proposed a magnetic resonance imaging (MRI)-based computer vision approach. We first use contrast equalization to increase the contrast of brain slices. Then, we perform the Haar wavelet transform and principal component analysis. Finally, we use a back propagation neural network (BPNN) as the classification tool. Our method yields a sensitivity of 81.71±4.51%, a specificity of 81.43±4.52%, and an accuracy of 81.57±2.18%. The Haar wavelet gives better performance than the db4 and sym3 wavelets.
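
    A rough sketch of the pipeline, with sklearn's backprop-trained MLPClassifier standing in for the BPNN and random arrays standing in for real MRI slices (PyWavelets assumed available):

      import numpy as np
      import pywt
      from sklearn.decomposition import PCA
      from sklearn.neural_network import MLPClassifier

      def haar_features(slice2d, level=2):
          """Flatten a multilevel 2D Haar wavelet decomposition into one vector."""
          coeffs = pywt.wavedec2(slice2d, "haar", level=level)
          parts = [coeffs[0].ravel()]
          for (cH, cV, cD) in coeffs[1:]:
              parts += [cH.ravel(), cV.ravel(), cD.ravel()]
          return np.concatenate(parts)

      rng = np.random.default_rng(6)
      slices = rng.standard_normal((40, 64, 64))    # stand-in brain slices
      labels = rng.integers(0, 2, 40)               # 1 = alcoholism, 0 = control
      X = np.array([haar_features(s) for s in slices])
      X = PCA(n_components=20).fit_transform(X)     # reduce wavelet features
      clf = MLPClassifier(hidden_layer_sizes=(30,), max_iter=500).fit(X, labels)
      print(clf.score(X, labels))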

  3. First-principles method for calculating the rate constants of internal-conversion and intersystem-crossing transitions.

    PubMed

    Valiev, R R; Cherepanov, V N; Baryshnikov, G V; Sundholm, D

    2018-02-28

    A method for calculating the rate constants for internal-conversion (k_IC) and intersystem-crossing (k_ISC) processes within the adiabatic and Franck-Condon (FC) approximations is proposed. The applicability of the method is demonstrated by calculation of k_IC and k_ISC for a set of organic and organometallic compounds with experimentally known spectroscopic properties. The studied molecules were pyrromethene-567 dye, psoralene, hetero[8]circulenes, free-base porphyrin, naphthalene, and larger polyacenes. We also studied fac-Alq3 and fac-Ir(ppy)3, which are important molecules in organic light emitting diodes (OLEDs). The excitation energies were calculated at the multi-configuration quasi-degenerate second-order perturbation theory (XMC-QDPT2) level, which is found to yield excitation energies in good agreement with experimental data. Spin-orbit coupling matrix elements, non-adiabatic coupling matrix elements, Huang-Rhys factors, and vibrational energies were calculated at the time-dependent density functional theory (TDDFT) and complete active space self-consistent field (CASSCF) levels. The computed fluorescence quantum yields for the pyrromethene-567 dye, psoralene, hetero[8]circulenes, fac-Alq3 and fac-Ir(ppy)3 agree well with experimental data, whereas for the free-base porphyrin, naphthalene, and the polyacenes, the obtained quantum yields significantly differ from the experimental values, because the FC and adiabatic approximations are not accurate for these molecules.

  4. Ion pair-based dispersive liquid-liquid microextraction followed by high performance liquid chromatography as a new method for determining five folate derivatives in foodstuffs.

    PubMed

    Nojavan, Yones; Kamankesh, Marzieh; Shahraz, Farzaneh; Hashemi, Maryam; Mohammadi, Abdorreza

    2015-05-01

    A novel technique for the simultaneous determination of five folate derivatives in various food matrices was developed using ion pair-based dispersive liquid-liquid microextraction (IP-DLLME) combined with high-performance liquid chromatography (HPLC). In the proposed method, N-methyl-N,N-dioctyloctan-1-ammonium chloride (aliquat-336) was used as an ion-pair reagent. The effective variables of the microextraction process were optimized. Under optimum conditions, the method yielded a linear calibration curve ranging from 1 to 200 ng g⁻¹ with correlation coefficients (r²) higher than 0.98. The relative standard deviation for seven analyses was 5.2-7.4%. Enrichment factors for the five folates ranged between 108 and 135. Limits of detection were 2-4.1 ng g⁻¹. A comparison with other methods showed that the newly proposed method is rapid and accurate, and gives very good enrichment factors and detection limits for determining the five folate derivatives. The newly developed method was successfully applied to the determination of five folate derivatives in wheat flour, egg yolk and orange juice samples. Copyright © 2015 Elsevier B.V. All rights reserved.

  5. Bulk nuclear properties from dynamical description of heavy-ion collisions

    NASA Astrophysics Data System (ADS)

    Hong, Jun

    Mapping out the equation of state (EOS) of nuclear matter is a long-standing problem in nuclear physics. Both experimentalists and theoretical physicists spare no effort in improving understanding of the EOS. In this thesis, we examine observables sensitive to the EOS within the pBUU transport model based on the Boltzmann equation. By comparing theoretical predictions with experimental data, we arrive at new constraints for the EOS. Further, we propose novel, promising observables for the analysis of future experimental data. One set of observables that we examine within the pBUU model is pion yields. First, we find that net pion yields in central heavy-ion collisions (HIC) are strongly sensitive to the momentum dependence of the isoscalar nuclear mean field. We reexamine the momentum dependence that is assumed in the Boltzmann equation model for the collisions and optimize that dependence to describe the FOPI measurements of pion yields from the Au+Au collisions at different beam energies. Alas, such an optimized dependence yields a somewhat weaker baryonic elliptic flow than seen in measurements. Subsequently, we use the same pBUU model to generate predictions for the baryonic elliptic flow observable in HIC, while varying the incompressibility of nuclear matter. In parallel, we test the sensitivity of pion multiplicity to the density dependence of the EOS, and in particular to incompressibility, and optimize that dependence to describe both the elliptic flow and pion yields. Upon arriving at acceptable regions of the density dependence of pressure and energy, we compare our constraints on the EOS with those recently arrived at by the joint experiment and theory effort FOPI-IQMD. We should mention that, for the more advanced observables from HIC, there remain discrepancies of up to 30%, depending on energy, between theory and experiment, indicating the limitations of the transport theory. Next, we explore the impact of the density dependence of the symmetry energy on observables, motivated by experiments aiming at constraining the symmetry energy. In contradiction to the IBUU and ImIQMD models in the literature, which claim sensitivity of net charged pion yields to the density dependence of the symmetry energy, albeit in directions opposite from each other, we find practically no such sensitivity in pBUU. However, we find a rather dramatic sensitivity of the differential high-energy charged-pion yield ratio to that density dependence, which can be qualitatively understood, and we propose that this differential ratio be used in future experiments to constrain the symmetry energy. Finally, we present the Gaussian phase-space representation method for studying strongly correlated systems. This approach allows one to follow the time evolution of quantum many-body systems with large Hilbert spaces through stochastic sampling, provided the interactions are two-body in nature. We demonstrate the advantage of the Gaussian phase-space representation method in coping with the notorious numerical sign problem for fermion systems. Lastly, we discuss the difficulty of stabilizing the system during its time evolution within the Gaussian phase-space method.

  6. A method of searching for related literature on protein structure analysis by considering a user's intention

    PubMed Central

    2015-01-01

    Background In recent years, with advances in techniques for protein structure analysis, knowledge about protein structure and function has been published in a vast number of articles. A method to search for specific publications from such a large pool of articles is needed. In this paper, we propose a method to search for related articles on protein structure analysis by using an article itself as a query. Results Each article is represented as a set of concepts in the proposed method. Then, by using similarities among concepts formulated from databases such as Gene Ontology, similarities between articles are evaluated. In this framework, the desired search results vary depending on the user's search intention, because a variety of information is included in a single article. Therefore, the proposed method takes as input not only one article (the primary article) but also additional articles related to it, in order to determine the search intention of the user based on the relationship between the query articles. In other words, based on the concepts contained in the primary and additional articles, we realize a relevant-literature search that considers user intention by varying the degree of attention given to each concept and modifying the concept hierarchy graph. Conclusions We performed an experiment to retrieve relevant papers from articles on protein structure analysis registered in the Protein Data Bank by using three query datasets. The experimental results yielded search results with better accuracy than when user intention was not considered, confirming the effectiveness of the proposed method. PMID:25952498
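
    A minimal sketch of intention-weighted concept matching, assuming articles are reduced to concept sets and the user's intention is expressed as per-concept attention weights; all identifiers below are hypothetical.

      def weighted_similarity(concepts_a, concepts_b, weights):
          """Overlap of two articles' concept sets, with per-concept attention
          weights standing in for the user's inferred intention."""
          shared = concepts_a & concepts_b
          union = concepts_a | concepts_b
          num = sum(weights.get(c, 1.0) for c in shared)
          den = sum(weights.get(c, 1.0) for c in union)
          return num / den if den else 0.0

      a = {"GO:0003677", "GO:0005634", "kinase"}
      b = {"GO:0003677", "kinase", "GO:0016301"}
      w = {"kinase": 2.0}    # boosted because both query articles share it
      print(weighted_similarity(a, b, w))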

  7. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning.

    PubMed

    Guo, Yanrong; Gao, Yaozong; Shao, Yeqin; Price, True; Oto, Aytekin; Shen, Dinggang

    2014-07-01

    Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. Traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach takes three strategies. First, two dictionaries for prostate and nonprostate tissues are built, respectively, using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on the randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model nonGaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than other methods under comparison. 
A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.

  8. Estimating nonrigid motion from inconsistent intensity with robust shape features.

    PubMed

    Liu, Wenyang; Ruan, Dan

    2013-12-01

    To develop a nonrigid motion estimation method that is robust to heterogeneous intensity inconsistencies amongst the image pairs or image sequence. Intensity and contrast variations, as in dynamic contrast enhanced magnetic resonance imaging, present a considerable challenge to registration methods based on general discrepancy metrics. In this study, the authors propose and validate a novel method that is robust to such variations by utilizing shape features. The geometry of interest (GOI) is represented with a flexible zero level set, segmented via well-behaved regularized optimization. The optimization energy drives the zero level set to high image gradient regions, and regularizes it with area and curvature priors. The resulting shape exhibits high consistency even in the presence of intensity or contrast variations. Subsequently, a multiscale nonrigid registration is performed to seek a regular deformation field that minimizes shape discrepancy in the vicinity of GOIs. To establish the working principle, realistic 2D and 3D images were subject to simulated nonrigid motion and synthetic intensity variations, so as to enable quantitative evaluation of registration performance. The proposed method was benchmarked against three alternative registration approaches, specifically, optical flow, B-spline based mutual information, and multimodality demons. When intensity consistency was satisfied, all methods had comparable registration accuracy for the GOIs. When intensities among registration pairs were inconsistent, however, the proposed method yielded pronounced improvement in registration accuracy, with an approximate fivefold reduction in mean absolute error (MAE = 2.25 mm, SD = 0.98 mm), compared to optical flow (MAE = 9.23 mm, SD = 5.36 mm), B-spline based mutual information (MAE = 9.57 mm, SD = 8.74 mm) and multimodality demons (MAE = 10.07 mm, SD = 4.03 mm). Applying the proposed method on a real MR image sequence also provided qualitatively appealing results, demonstrating good feasibility and applicability of the proposed method. The authors have developed a novel method to estimate the nonrigid motion of GOIs in the presence of spatial intensity and contrast variations, taking advantage of robust shape features. Quantitative analysis and qualitative evaluation demonstrated good promise of the proposed method. Further clinical assessment and validation is being performed.

  9. Improving nondestructive characterization of dual phase steels using data fusion

    NASA Astrophysics Data System (ADS)

    Kahrobaee, Saeed; Haghighi, Mehdi Salkhordeh; Akhlaghi, Iman Ahadi

    2018-07-01

    The aim of this paper is to introduce a novel methodology for the nondestructive determination of microstructural and mechanical properties (due to various heat treatments), as well as thickness variations (as a result of corrosion), of dual phase steels. The characterizations are based on variations in the electromagnetic properties extracted from magnetic hysteresis loop and eddy current methods, coupled with a data fusion system. The study was conducted on six groups of samples (with different thicknesses, from 1 mm to 4 mm) subjected to various intercritical annealing processes to produce different fractions of martensite/ferrite phases and, consequently, changes in hardness, yield strength and ultimate tensile strength (UTS). The study proposes a novel soft computing technique to increase the accuracy of nondestructive measurements and to resolve overlapping NDE outputs from the various samples. The empirical results indicate that applying the proposed data fusion technique to the two electromagnetic NDE data sets increases the accuracy and reliability of nondestructively determining material features including ferrite fraction, hardness, yield strength, UTS, and thickness variations.

  10. Inverse probability weighting for covariate adjustment in randomized studies

    PubMed Central

    Li, Xiaochun; Li, Lingling

    2013-01-01

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a “favorable” model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a “favorable” model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. PMID:24038458
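
    A compact sketch of the two-stage idea on simulated data, with sklearn's logistic regression as the weight model; in the paper's design the weight model would be fixed before outcomes are examined.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      def ipw_effect(treat, outcome, covariates):
          """Two-stage IPW sketch: model treatment given covariates, then take
          the difference of inverse-probability-weighted outcome means."""
          ps = LogisticRegression().fit(covariates, treat).predict_proba(covariates)[:, 1]
          w1 = treat / ps
          w0 = (1 - treat) / (1 - ps)
          mu1 = np.sum(w1 * outcome) / np.sum(w1)
          mu0 = np.sum(w0 * outcome) / np.sum(w0)
          return mu1 - mu0

      rng = np.random.default_rng(7)
      X = rng.standard_normal((500, 3))
      A = rng.integers(0, 2, 500)                 # randomized treatment
      Y = 1.0 * A + X[:, 0] + rng.standard_normal(500)
      print(ipw_effect(A, Y, X))                  # close to the true effect 1.0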

  11. Inverse probability weighting for covariate adjustment in randomized studies.

    PubMed

    Shen, Changyu; Li, Xiaochun; Li, Lingling

    2014-02-20

    Covariate adjustment in randomized clinical trials has the potential benefit of precision gain. It also has the potential pitfall of reduced objectivity, as it opens the possibility of selecting a 'favorable' model that yields a strong treatment benefit estimate. Although there is a large volume of statistical literature targeting the first aspect, realistic solutions to enforce objective inference and improve precision are rare. As a typical randomized trial needs to accommodate many implementation issues beyond statistical considerations, maintaining objectivity is at least as important as precision gain, if not more, particularly from the perspective of the regulatory agencies. In this article, we propose a two-stage estimation procedure based on inverse probability weighting to achieve better precision without compromising objectivity. The procedure is designed in a way such that the covariate adjustment is performed before seeing the outcome, effectively reducing the possibility of selecting a 'favorable' model that yields a strong intervention effect. Both theoretical and numerical properties of the estimation procedure are presented. Application of the proposed method to a real data example is presented. Copyright © 2013 John Wiley & Sons, Ltd.

  12. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
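
    For contrast with the rawdata-based approach, the simpler image-based variant discussed above reduces to solving a 2 × 2 linear system per pixel; a sketch with hypothetical basis-material attenuation values:

      import numpy as np

      def decompose(mu_low, mu_high, M):
          """Image-based two-material decomposition: at each pixel solve
          [mu_low, mu_high]^T = M @ [a1, a2]^T, where M holds the basis
          materials' attenuation at the two effective energies (assumed known)."""
          Minv = np.linalg.inv(M)
          stacked = np.stack([mu_low.ravel(), mu_high.ravel()])   # 2 x npix
          a = Minv @ stacked
          return a[0].reshape(mu_low.shape), a[1].reshape(mu_low.shape)

      # hypothetical attenuation of water and bone at low/high effective energies
      M = np.array([[0.227, 0.573],
                    [0.184, 0.305]])
      rng = np.random.default_rng(8)
      a_water, a_bone = rng.random((2, 16, 16))
      mu_low = 0.227 * a_water + 0.573 * a_bone
      mu_high = 0.184 * a_water + 0.305 * a_bone
      w, b = decompose(mu_low, mu_high, M)
      print(np.allclose(w, a_water), np.allclose(b, a_bone))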

  13. 4D numerical observer for lesion detection in respiratory-gated PET

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lorsakul, Auranuch; Li, Quanzheng; Ouyang, Jinsong

    2014-10-15

    Purpose: Respiratory-gated positron emission tomography (PET)/computed tomography protocols reduce lesion smearing and improve lesion detection through a synchronized acquisition of emission data. However, objective assessment of the image-quality improvement gained from respiratory-gated PET has mainly been limited to a three-dimensional (3D) approach. This work proposes a 4D numerical observer that incorporates both spatial and temporal information for detection tasks in pulmonary oncology. Methods: The authors propose a 4D numerical observer constructed with a 3D channelized Hotelling observer for the spatial domain followed by a Hotelling observer for the temporal domain. Realistic 18F-fluorodeoxyglucose activity distributions were simulated using a 4D extended cardiac torso anthropomorphic phantom including 12 spherical lesions at different anatomical locations (lower, upper, anterior, and posterior) within the lungs. Simulated data based on Monte Carlo simulation were obtained using the GEANT4 application for tomographic emission (GATE). Fifty noise realizations of six respiratory-gated PET frames were simulated by GATE using a model of the Siemens Biograph mMR scanner geometry. PET sinograms of the thorax background and pulmonary lesions, simulated separately, were merged to generate different conditions of the lesions relative to the background (e.g., lesion contrast and motion). A conventional ordered subset expectation maximization (OSEM) reconstruction (5 iterations and 6 subsets) was used to obtain: (1) gated, (2) nongated, and (3) motion-corrected image volumes (a total of 3200 subimage volumes: 2400 gated, 400 nongated, and 400 motion-corrected). Lesion-detection signal-to-noise ratios (SNRs) were measured for different lesion-to-background contrast levels (3.5, 8.0, 9.0, and 20.0), lesion diameters (10.0, 13.0, and 16.0 mm), and respiratory motion displacements (17.6-31.3 mm). The proposed 4D numerical observer applied to multiple-gated images was compared to the conventional 3D approach applied to the nongated and motion-corrected images. Results: On average, the proposed 4D numerical observer improved the detection SNR by 48.6% (p < 0.005), whereas the 3D method on motion-corrected images improved it by 31.0% (p < 0.005), as compared to the nongated method. For all lesion conditions, the relative SNR measurement (Gain = SNR_Observed/SNR_Nongated) of the 4D method was significantly higher than that of the motion-corrected 3D method, by 13.8% (p < 0.02), where Gain_4D was 1.49 ± 0.21 and Gain_3D was 1.31 ± 0.15. For the lesion with the highest amplitude of motion, the 4D numerical observer yielded the highest observer-performance improvement (176%). For the lesion undergoing the smallest motion amplitude, the 4D method provided superior lesion detectability compared with the 3D method, which gave a detection SNR close to that of the nongated method. An investigation of the structure of the 4D numerical observer showed that a Laguerre-Gaussian channel matrix with a volumetric 3D function yielded higher lesion-detection performance than one with a 2D-stack-channelized function, whereas a different kind of channel designed to mimic the human visual system, i.e., difference-of-Gaussians, showed similar performance in detecting uniform and spherical lesions.
Increasing the noise level decreased the detection SNR by 27.6% and 41.5% for the nongated and gated methods, respectively. The investigation of lesion contrast and diameter showed that the proposed 4D observer preserved the linearity property of an optimal linear observer while motion was present. Furthermore, the investigation of the iteration and subset numbers of the OSEM algorithm demonstrated that these parameters had an impact on lesion detectability and that selecting optimal parameters could provide the maximum lesion-detection performance. The proposed 4D numerical observer outperformed the other observers for the lesion-detection task under various lesion conditions and motions. Conclusions: The 4D numerical observer shows substantial improvement in lesion detectability over the 3D observer method. The proposed 4D approach could potentially provide a more reliable objective assessment of the improvement from respiratory-gated PET for lesion-detection tasks. On the other hand, the 4D approach may be used as an upper bound to investigate the performance of motion correction methods. In future work, the authors will validate the proposed 4D approach on clinical data for detection tasks in pulmonary oncology.
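
    To fix ideas on the spatial building block, the following is a minimal sketch of a Hotelling observer detection SNR computed from channel outputs. The synthetic "channelized" feature vectors stand in for real channel responses (the Laguerre-Gauss channelization itself is omitted), so all sizes and distributions are illustrative assumptions.

```python
# Minimal sketch of a (channelized) Hotelling observer detection SNR:
# template w = S^{-1} (m1 - m0) and SNR^2 = (m1 - m0)^T S^{-1} (m1 - m0).
# The channel outputs below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_images = 10, 400
signal_absent = rng.normal(0.0, 1.0, size=(n_images, n_channels))
signal_present = rng.normal(0.3, 1.0, size=(n_images, n_channels))

dmean = signal_present.mean(axis=0) - signal_absent.mean(axis=0)
cov = 0.5 * (np.cov(signal_present.T) + np.cov(signal_absent.T))  # pooled
template = np.linalg.solve(cov, dmean)
snr = np.sqrt(dmean @ template)
print(f"Hotelling detection SNR: {snr:.2f}")
```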

  14. Sparse kernel methods for high-dimensional survival data.

    PubMed

    Evers, Ludger; Messow, Claudia-Martina

    2008-07-15

    Sparse kernel methods like support vector machines (SVM) have been applied with great success to classification and (standard) regression settings. Existing support vector classification and regression techniques, however, are not suitable for partly censored survival data, which are typically analysed using Cox's proportional hazards model. As the partial likelihood of the proportional hazards model depends on the covariates only through inner products, it can be 'kernelized'. The kernelized proportional hazards model, however, yields a solution that is dense, i.e. the solution depends on all observations. One of the key features of an SVM is that it yields a sparse solution, depending only on a small fraction of the training data. We propose two methods. One is based on a geometric idea, where, akin to support vector classification, the margin between the failed observation and the observations currently at risk is maximised. The other approach is based on obtaining a sparse model by adding observations one after another, akin to the Import Vector Machine (IVM). The data examples studied suggest that both methods can outperform competing approaches. Software is available under the GNU Public License as an R package and can be obtained from the first author's website http://www.maths.bris.ac.uk/~maxle/software.html.

  15. Modified dwell time optimization model and its applications in subaperture polishing.

    PubMed

    Dong, Zhichao; Cheng, Haobo; Tam, Hon-Yuen

    2014-05-20

    The optimization of dwell time is an important procedure in deterministic subaperture polishing. We present a modified dwell time optimization model based on an iterative numerical method, assisted by extended surface forms and tool paths for suppressing the edge effect. Compared with discrete convolution and linear equation models, the proposed model has essential compatibility with arbitrary tool paths, multiple tool influence functions (TIFs) in one optimization, and asymmetric TIFs. The emulated fabrication of a Φ200 mm workpiece by the proposed model yields a smooth, continuous, and non-negative dwell time map with a root-mean-square (RMS) convergence rate of 99.6%, and the optimization takes much less time. Using the proposed model, the influences of TIF size and path interval on convergence rate and polishing time are optimized for typical low and middle spatial-frequency errors. Results show that (1) the TIF size is nonlinearly inversely proportional to convergence rate and polishing time; a TIF size of ~1/7 of the workpiece size is preferred; and (2) the polishing time is less sensitive to path interval, but increasing the interval markedly reduces the convergence rate; a path interval of ~1/8-1/10 of the TIF size is deemed appropriate. The proposed model is deployed on JR-1800 and MRF-180 machines. Figuring results for a Φ920 mm Zerodur paraboloid and a Φ100 mm Zerodur plane yield RMS errors of 0.016λ and 0.013λ (λ=632.8 nm), respectively, thereby validating the feasibility of the proposed dwell time model for subaperture polishing.
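
    The core computation can be sketched as an iterative non-negative deconvolution: the removal map is the convolution of the tool influence function (TIF) with the dwell time, and the dwell map is updated against the residual. The 1-D toy below, with an assumed Gaussian TIF and a made-up target, only illustrates that idea; the paper's model works on 2-D surfaces and arbitrary tool paths.

```python
# Toy 1-D sketch of iterative dwell-time optimization with a non-negativity
# constraint; illustrative only, not the paper's full 2-D model.
import numpy as np

x = np.linspace(-1, 1, 200)
tif = np.exp(-(np.linspace(-0.2, 0.2, 41) / 0.05) ** 2)  # assumed Gaussian TIF
target = 1.0 + 0.3 * np.cos(3 * np.pi * x)               # desired removal map

dwell = np.zeros_like(target)
gain = 1.0 / tif.sum() ** 2                              # conservative step size
for _ in range(500):
    removal = np.convolve(dwell, tif, mode="same")
    residual = target - removal
    # Gradient step (correlation of residual with the TIF), clipped to >= 0.
    dwell = np.maximum(dwell + gain * np.convolve(residual, tif[::-1], mode="same"), 0.0)

rms = np.sqrt(np.mean((target - np.convolve(dwell, tif, mode="same")) ** 2))
print(f"residual RMS after optimization: {rms:.4f}")
```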

  16. An efficient data mining framework for the characterization of symptomatic and asymptomatic carotid plaque using bidimensional empirical mode decomposition technique.

    PubMed

    Molinari, Filippo; Raghavendra, U; Gudigar, Anjan; Meiburger, Kristen M; Rajendra Acharya, U

    2018-02-23

    Atherosclerosis is a type of cardiovascular disease that may cause stroke. It is caused by the deposition of fatty plaque in the artery walls, which gradually reduces their elasticity and hence restricts blood flow. Early prediction of carotid plaque deposition is therefore important, as it can save lives. This paper proposes a novel data mining framework for the assessment of atherosclerosis in its early stage using ultrasound images. In this work, we used 1353 symptomatic and 420 asymptomatic carotid plaque ultrasound images. Our proposed method classifies the symptomatic and asymptomatic carotid plaques using bidimensional empirical mode decomposition (BEMD) and entropy features. The unbalanced data samples are compensated using adaptive synthetic sampling (ADASYN), and the developed method yielded a promising accuracy of 91.43%, sensitivity of 97.26%, and specificity of 83.22% using fourteen features. Hence, the proposed method can be used as an assisting tool during the regular screening of carotid arteries in hospitals. Graphical abstract: Outline of our efficient data mining framework for the characterization of symptomatic and asymptomatic carotid plaques.
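
    The class-imbalance step can be sketched with the imbalanced-learn implementation of ADASYN; the random 14-dimensional features below are placeholders for the BEMD/entropy features, and the SVM classifier is an assumption, not necessarily the classifier used in the paper.

```python
# Sketch: ADASYN oversampling of the minority class followed by a classifier.
# Feature extraction (BEMD + entropy) is omitted; data are placeholders.
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(8)
X = np.vstack([rng.normal(0.0, 1.0, (1353, 14)),   # "symptomatic" stand-ins
               rng.normal(0.5, 1.0, (420, 14))])   # "asymptomatic" stand-ins
y = np.array([1] * 1353 + [0] * 420)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
X_res, y_res = ADASYN(random_state=0).fit_resample(X_tr, y_tr)  # rebalance

clf = SVC().fit(X_res, y_res)
print("test accuracy:", clf.score(X_te, y_te))
```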

  17. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, Brian; Scherzinger, William

    2017-01-19

    Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  18. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, Brian T.; Scherzinger, William M.

    2017-01-19

    A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.
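
    As a loose illustration of why a trust-region root finder can be preferable to plain Newton-Raphson on such stiff systems, the sketch below solves a made-up two-equation "return-mapping-like" residual with a high-exponent nonlinearity using SciPy's 'hybr' solver (MINPACK's Powell hybrid method, a dogleg trust-region scheme). The residual is a stand-in, not the constitutive equations from the paper.

```python
# Sketch: trust-region (dogleg) solution of a stiff nonlinear system, loosely
# mimicking a return-mapping update with a high-exponent yield function.
import numpy as np
from scipy.optimize import root

def residual(z):
    dgamma, sigma = z
    f1 = sigma + 50.0 * dgamma - 1.0                # toy stress update
    f2 = np.sign(sigma) * abs(sigma) ** 8 - 0.5     # toy yield condition
    return [f1, f2]

sol = root(residual, x0=[0.0, 1.0], method="hybr")  # Powell dogleg trust region
print(sol.success, sol.x)
```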

  19. Restoration of a single superresolution image from several blurred, noisy, and undersampled measured images.

    PubMed

    Elad, M; Feuer, A

    1997-01-01

    The three main tools in the single image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set theoretic approach using projection onto convex sets (POCS). This paper utilizes the above known tools to propose a unified methodology toward the more complicated problem of superresolution restoration. In the superresolution restoration problem, an improved resolution image is restored from several geometrically warped, blurred, noisy and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, the MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of the ML and the incorporation of nonellipsoid constraints is presented, giving improved restoration performance, compared with the ML and the POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.

  20. Evaluation of genotype x environment interactions in cotton using the method proposed by Eberhart and Russell and reaction norm models.

    PubMed

    Alves, R S; Teodoro, P E; Farias, F C; Farias, F J C; Carvalho, L P; Rodrigues, J I S; Bhering, L L; Resende, M D V

    2017-08-17

    Cotton produces one of the most important textile fibers of the world and has great relevance in the world economy. It is an economically important crop in Brazil, which is the world's fifth largest producer. However, studies evaluating the genotype x environment (G x E) interactions in cotton are scarce in this country. Therefore, the goal of this study was to evaluate the G x E interactions for two important traits in cotton (fiber yield and fiber length) using the method proposed by Eberhart and Russell (simple linear regression) and reaction norm models (random regression). Eight trials with sixteen upland cotton genotypes, conducted in a randomized block design, were used. It was possible to identify a genotype with wide adaptability and stability for both traits. Reaction norm models have excellent theoretical and practical properties and led to more informative and accurate results than the method proposed by Eberhart and Russell; they should, therefore, be preferred. Curves of genotypic values as a function of the environmental gradient, which predict the behavior of the genotypes along the environmental gradient, were generated. These curves make it possible to recommend genotypes for untested environmental levels.
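
    For reference, the Eberhart and Russell analysis amounts to regressing each genotype's mean yield on an environmental index; a minimal sketch with placeholder data follows, where the slope measures adaptability and the deviation mean square measures stability. The data and dimensions are illustrative assumptions.

```python
# Sketch of Eberhart & Russell stability analysis on placeholder yield data:
# regress each genotype's mean yield on the environmental index.
import numpy as np

rng = np.random.default_rng(2)
n_gen, n_env = 16, 8
yields = rng.normal(3.0, 0.4, size=(n_gen, n_env))  # placeholder means, t/ha

env_index = yields.mean(axis=0) - yields.mean()     # environmental index I_j
for g in range(n_gen):
    b, a = np.polyfit(env_index, yields[g], 1)      # slope b_i and intercept
    resid = yields[g] - (a + b * env_index)
    s2d = resid @ resid / (n_env - 2)               # deviation mean square
    print(f"genotype {g:2d}: slope = {b:5.2f}, s2d = {s2d:.3f}")
```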

  1. Evolutionary selection growth of two-dimensional materials on polycrystalline substrates

    NASA Astrophysics Data System (ADS)

    Vlassiouk, Ivan V.; Stehle, Yijing; Pudasaini, Pushpa Raj; Unocic, Raymond R.; Rack, Philip D.; Baddorf, Arthur P.; Ivanov, Ilia N.; Lavrik, Nickolay V.; List, Frederick; Gupta, Nitant; Bets, Ksenia V.; Yakobson, Boris I.; Smirnov, Sergei N.

    2018-03-01

    There is a demand for the manufacture of two-dimensional (2D) materials with high-quality single crystals of large size. Usually, epitaxial growth is considered the method of choice [1] in preparing single-crystalline thin films, but it requires single-crystal substrates for deposition. Here we present a different approach and report the synthesis of single-crystal-like monolayer graphene films on polycrystalline substrates. The technological realization of the proposed method resembles the Czochralski process and is based on the evolutionary selection [2] approach, which is now realized in 2D geometry. The method relies on 'self-selection' of the fastest-growing domain orientation, which eventually overwhelms the slower-growing domains and yields a single-crystal continuous 2D film. Here we have used it to synthesize foot-long graphene films at rates up to 2.5 cm h⁻¹ that possess the quality of a single crystal. We anticipate that the proposed approach could be readily adopted for the synthesis of other 2D materials and heterostructures.

  2. Partial homogeneity based high-resolution nuclear magnetic resonance spectra under inhomogeneous magnetic fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wei, Zhiliang; Lin, Liangjie; Lin, Yanqin, E-mail: linyq@xmu.edu.cn, E-mail: chenz@xmu.edu.cn

    2014-09-29

    In nuclear magnetic resonance (NMR), it is of great necessity and importance to obtain high-resolution spectra, especially under inhomogeneous magnetic fields. In this study, a method based on partial homogeneity is proposed for retrieving high-resolution one-dimensional NMR spectra under inhomogeneous fields. Signals from a series of small voxels, which offer high resolution due to their small size, are recorded simultaneously. An inhomogeneity correction algorithm based on pattern recognition is then developed to correct the influence of field inhomogeneity automatically, thus yielding high-resolution information. Experiments on chemical solutions and fish spawn were carried out to demonstrate the performance of the proposed method. The proposed method serves as a single-radiofrequency-pulse high-resolution NMR spectroscopy under inhomogeneous fields and may provide an alternative for obtaining high-resolution spectra of in vivo living systems or chemical-reaction systems, where the performance of conventional techniques is usually degraded by field inhomogeneity.

  3. Correction of data truncation artifacts in differential phase contrast (DPC) tomosynthesis imaging

    NASA Astrophysics Data System (ADS)

    Garrett, John; Ge, Yongshuai; Li, Ke; Chen, Guang-Hong

    2015-10-01

    The use of grating-based Talbot-Lau interferometry permits the acquisition of differential phase contrast (DPC) images with a conventional medical x-ray source and detector. However, due to the limited area of the gratings, the limited area of the detector, or both, data truncation artifacts are often observed in tomographic DPC acquisitions and reconstructions, such as tomosynthesis (limited-angle tomography). When data are truncated in conventional x-ray absorption tomosynthesis imaging, a variety of methods are available to mitigate the truncation artifacts. However, the same strategies used to mitigate absorption truncation artifacts do not yield satisfactory results in DPC tomosynthesis reconstruction. In this work, several new methods are proposed to mitigate data truncation artifacts in a DPC tomosynthesis system. The proposed methods have been validated using experimental data from a mammography accreditation phantom, a bovine udder, and several human cadaver breast specimens acquired on a bench-top DPC imaging system at our facility.

  4. Bayesian source term estimation of atmospheric releases in urban areas using LES approach.

    PubMed

    Xue, Fei; Kikumoto, Hideki; Li, Xiaofeng; Ooka, Ryozo

    2018-05-05

    The estimation of source information from the limited measurements of a sensor network is a challenging inverse problem, which can be viewed as an assimilation process between the observed concentration data and the predicted concentration data. When dealing with releases in built-up areas, the predicted data are generally obtained from the Reynolds-averaged Navier-Stokes (RANS) equations, which yield building-resolving results; however, RANS-based models are outperformed by large-eddy simulation (LES) in the prediction of both airflow and dispersion. It is therefore important to explore the possibility of improving the estimation of the source parameters by using the LES approach. In this paper, a novel source term estimation method is proposed based on the LES approach using Bayesian inference. The source-receptor relationship is obtained by solving the adjoint equations constructed using the time-averaged flow field simulated by the LES approach, based on the gradient diffusion hypothesis. A wind tunnel experiment with a constant point source downwind of a single building model is used to evaluate the performance of the proposed method, which is compared with that of the existing method using a RANS model. The results show that the proposed method reduces the errors in source location and release strength by 77% and 28%, respectively. Copyright © 2018 Elsevier B.V. All rights reserved.
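
    The Bayesian step can be sketched with a linear source-receptor relationship: for each candidate source location, the release strength that best explains the sensor readings is computed, and a Gaussian likelihood scores the fit. The source-receptor matrix below is a random stand-in for the adjoint-LES computation, and the prior, noise level, and grid are assumptions.

```python
# Sketch of Bayesian source-term estimation on a candidate-location grid with
# a linear source-receptor matrix; all data are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(7)
n_sensors, n_candidates = 8, 100
SR = rng.uniform(0.0, 1.0, size=(n_sensors, n_candidates))  # c_i = SR[i, j] * q

true_j, true_q, sigma = 42, 2.0, 0.05
obs = SR[:, true_j] * true_q + rng.normal(0.0, sigma, n_sensors)

log_post = np.empty(n_candidates)
q_hat = np.empty(n_candidates)
for j in range(n_candidates):
    a = SR[:, j]
    q_hat[j] = (a @ obs) / (a @ a)            # best-fit strength per location
    r = obs - a * q_hat[j]
    log_post[j] = -0.5 * (r @ r) / sigma**2   # Gaussian likelihood, flat prior

j_map = int(np.argmax(log_post))
print(f"MAP location: {j_map} (true {true_j}), strength: {q_hat[j_map]:.2f}")
```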

  5. Reproducible segmentation of white matter hyperintensities using a new statistical definition.

    PubMed

    Damangir, Soheil; Westman, Eric; Simmons, Andrew; Vrenken, Hugo; Wahlund, Lars-Olof; Spulber, Gabriela

    2017-06-01

    We present a method based on a proposed statistical definition of white matter hyperintensities (WMH), which can work with any combination of conventional magnetic resonance (MR) sequences without depending on manually delineated samples. T1-weighted, T2-weighted, FLAIR, and PD sequences acquired at 1.5 Tesla from 119 subjects from the Kings Health Partners-Dementia Case Register (healthy controls, mild cognitive impairment, Alzheimer's disease) were used. The segmentation was performed using a proposed definition for WMH based on the one-tailed Kolmogorov-Smirnov test. The presented method was verified against manual segmentations for all possible combinations of input sequences, and a high similarity (Dice 0.85-0.91) was observed. Comparing segmentations with different input sequences to one another also yielded a high similarity (Dice 0.83-0.94) that exceeded the intra-rater similarity (Dice 0.75-0.91). We compared the results with those of other available methods and showed that segmentation based on the proposed definition has better accuracy and reproducibility on the test dataset used. Overall, the presented definition is shown to produce accurate results with higher reproducibility than manual delineation. This approach can be an alternative to other manual or automatic methods not only because of its accuracy, but also due to its good reproducibility.

  6. Harmony Search Algorithm for Word Sense Disambiguation.

    PubMed

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, of which just one is correct. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, whereas the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to compute the HSA fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we apply the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used.
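
    A minimal generic harmony search loop is sketched below to make the algorithm concrete; the continuous toy fitness stands in for the semantic-similarity score of a candidate sense assignment, and the HMCR/PAR settings are illustrative assumptions.

```python
# Minimal generic harmony search (HS) sketch maximizing a toy fitness.
import numpy as np

rng = np.random.default_rng(3)

def fitness(x):
    return -np.sum((x - 0.3) ** 2)     # toy objective, maximum at x = 0.3

dim, hms, hmcr, par, iters = 5, 10, 0.9, 0.3, 2000
memory = rng.uniform(-1, 1, size=(hms, dim))      # harmony memory
scores = np.array([fitness(h) for h in memory])

for _ in range(iters):
    new = np.empty(dim)
    for d in range(dim):
        if rng.random() < hmcr:                   # memory consideration
            new[d] = memory[rng.integers(hms), d]
            if rng.random() < par:                # pitch adjustment
                new[d] += rng.normal(0.0, 0.05)
        else:                                     # random selection
            new[d] = rng.uniform(-1, 1)
    s = fitness(new)
    worst = scores.argmin()
    if s > scores[worst]:                         # replace the worst harmony
        memory[worst], scores[worst] = new, s

print("best fitness found:", scores.max())
```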

  7. Harmony Search Algorithm for Word Sense Disambiguation

    PubMed Central

    Abed, Saad Adnan; Tiun, Sabrina; Omar, Nazlia

    2015-01-01

    Word Sense Disambiguation (WSD) is the task of determining which sense of an ambiguous word (a word with multiple meanings) is intended in a particular use of that word, by considering its context. A sentence is considered ambiguous if it contains ambiguous word(s). Practically, any sentence that has been classified as ambiguous usually has multiple interpretations, of which just one is correct. We propose an unsupervised method that exploits knowledge-based approaches for word sense disambiguation using the Harmony Search Algorithm (HSA) based on a Stanford dependencies generator (HSDG). The role of the dependency generator is to parse sentences to obtain their dependency relations, whereas the goal of using the HSA is to maximize the overall semantic similarity of the set of parsed words. HSA invokes a combination of semantic similarity and relatedness measurements, i.e., Jiang and Conrath (jcn) and an adapted Lesk algorithm, to compute the HSA fitness function. Our proposed method was evaluated on benchmark datasets and yielded results comparable to state-of-the-art WSD methods. In order to evaluate the effectiveness of the dependency generator, we apply the same methodology without the parser, using a window of words instead. The empirical results demonstrate that the proposed method is able to produce effective solutions for most instances of the datasets used. PMID:26422368

  8. Optical Method for Estimating the Chlorophyll Contents in Plant Leaves.

    PubMed

    Pérez-Patricio, Madaín; Camas-Anzueto, Jorge Luis; Sanchez-Alegría, Avisaí; Aguilar-González, Abiel; Gutiérrez-Miceli, Federico; Escobar-Gómez, Elías; Voisin, Yvon; Rios-Rojas, Carlos; Grajales-Coutiño, Ruben

    2018-02-22

    This work introduces a new vision-based approach for estimating chlorophyll contents in a plant leaf using reflectance and transmittance as base parameters. Images of the top and underside of the leaf are captured. To estimate the base parameters (reflectance/transmittance), a novel optical arrangement is proposed. The chlorophyll content is then estimated by using linear regression, where the inputs are the reflectance and transmittance of the leaf. Performance of the proposed method for chlorophyll content estimation was compared with a spectrophotometer and a Soil Plant Analysis Development (SPAD) meter. Chlorophyll content estimation was carried out for Lactuca sativa L., Azadirachta indica, Canavalia ensiforme, and Lycopersicon esculentum. Experimental results showed that, in terms of accuracy and processing speed, the proposed algorithm outperformed many previous vision-based methods that used SPAD as a reference device. Moreover, the accuracy reached 91% for crops such as Azadirachta indica, where the chlorophyll value was obtained using the spectrophotometer. Additionally, it was possible to estimate the chlorophyll content of a leaf every 200 ms with a low-cost camera and a simple optical arrangement. This non-destructive method increases accuracy in chlorophyll content estimation by using an optical arrangement that yields both reflectance and transmittance information, while the required hardware is inexpensive.
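
    The final estimation step, a linear regression from reflectance and transmittance to chlorophyll content, can be sketched as below; the synthetic calibration data and coefficients are placeholders, not the paper's measurements.

```python
# Sketch of the regression step: chlorophyll ~ reflectance + transmittance.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n = 60
reflectance = rng.uniform(0.05, 0.25, n)
transmittance = rng.uniform(0.05, 0.30, n)
# Placeholder "ground truth" with an assumed linear relation plus noise.
chlorophyll = 55 - 80 * reflectance - 60 * transmittance + rng.normal(0, 1.5, n)

X = np.column_stack([reflectance, transmittance])
model = LinearRegression().fit(X, chlorophyll)
print("R^2 on calibration data:", model.score(X, chlorophyll))
print("prediction for (R, T) = (0.12, 0.18):", model.predict([[0.12, 0.18]]))
```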

  9. A kriging metamodel-assisted robust optimization method based on a reverse model

    NASA Astrophysics Data System (ADS)

    Zhou, Hui; Zhou, Qi; Liu, Congwei; Zhou, Taotao

    2018-02-01

    The goal of robust optimization methods is to obtain a solution that is both optimal and relatively insensitive to uncertainty factors. Most existing robust optimization approaches use outer-inner nested optimization structures, in which a large amount of computational effort is required because the robustness of each candidate solution delivered from the outer level must be evaluated in the inner level. In this article, a kriging metamodel-assisted robust optimization method based on a reverse model (K-RMRO) is first proposed, in which the nested optimization structure is reduced to a single-loop optimization structure to ease the computational burden. However, because K-RMRO ignores the interpolation uncertainty of the kriging metamodel, it may yield non-robust optima. Hence, an improved kriging-assisted robust optimization method based on a reverse model (IK-RMRO) is presented to take the interpolation uncertainty of the kriging metamodel into consideration. In IK-RMRO, an objective switching criterion is introduced to determine whether the inner-level robust optimization or the kriging metamodel replacement should be used to evaluate the robustness of design alternatives. The proposed criterion is developed according to whether or not the robust status of the individual can be changed by the interpolation uncertainties of the kriging metamodel. Numerical and engineering cases are used to demonstrate the applicability and efficiency of the proposed approach.

  10. Improved modified energy ratio method using a multi-window approach for accurate arrival picking

    NASA Astrophysics Data System (ADS)

    Lee, Minho; Byun, Joongmoo; Kim, Dowan; Choi, Jihun; Kim, Myungsun

    2017-04-01

    To accurately identify the locations of microseismic events generated during hydraulic fracture stimulation, it is necessary to detect the first break of the P- and S-wave arrival times recorded at multiple receivers. These microseismic data often contain high-amplitude noise, which makes it difficult to identify the P- and S-wave arrival times. The short-term-average to long-term-average (STA/LTA) and modified energy ratio (MER) methods are based on the difference in energy density between noise and signal, and are widely used to identify P-wave arrival times. The MER method yields more consistent results than the STA/LTA method for data with a low signal-to-noise (S/N) ratio. However, although the MER method performs well regardless of the delay of the signal wavelet for signals with a high S/N ratio, it may yield poor results if the signal is contaminated by high-amplitude noise and does not have minimum delay. Here we describe an improved MER (IMER) method, in which we apply a multiple-windowing approach to overcome the limitations of the MER method. The IMER method adds the calculation of an additional MER value using a third window (in addition to the original MER window), as well as the application of a moving average filter to each MER data point to eliminate high-frequency fluctuations in the original MER distributions. The resulting distribution makes it easier to apply thresholding. The proposed IMER method was applied to synthetic and real datasets with various S/N ratios and mixed-delay wavelets. The results show that the IMER method yields a high accuracy rate of around 80% within a five-sample error for the synthetic datasets. Likewise, for the real datasets, 94.56% of the P-wave picks obtained by the IMER method deviated by less than 0.5 ms (corresponding to 2 samples) from the manual picks.
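
    A minimal MER picker on a synthetic trace is sketched below: er(i) is the ratio of post-window to pre-window energy and the MER attribute is (er(i)·|x(i)|)³, whose maximum gives the pick. The window length, noise level, and wavelet are illustrative assumptions, and the extra IMER window and smoothing are omitted.

```python
# Sketch of a modified energy ratio (MER) first-break picker.
import numpy as np

rng = np.random.default_rng(5)
n, onset, win = 1000, 600, 50
x = rng.normal(0.0, 0.1, n)                       # background noise
t = np.arange(n - onset)
x[onset:] += np.sin(0.2 * t) * np.exp(-t / 200)   # signal after the onset

energy = np.concatenate([[0.0], np.cumsum(x ** 2)])   # prefix energy sums
mer = np.zeros(n)
for i in range(win, n - win):
    pre = energy[i] - energy[i - win]             # energy in [i - win, i)
    post = energy[i + win] - energy[i]            # energy in [i, i + win)
    mer[i] = (post / (pre + 1e-12) * abs(x[i])) ** 3

print("picked sample:", int(mer.argmax()), "| true onset:", onset)
```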

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, B; Southern Medical University, Guangzhou, Guangdong; Shen, C

    Purpose: Multi-energy computed tomography (MECT) is an emerging application in medical imaging due to its ability of material differentiation and its potential for molecular imaging. In MECT, image correlations across spatial locations and energy channels exist, and it is desirable to incorporate these correlations in reconstruction to improve image quality. For this purpose, this study proposes a MECT reconstruction technique that employs spatial-spectral non-local means (ssNLM) regularization. Methods: We consider a kVp-switching scanning method in which the source energy is rapidly switched during data acquisition. For each energy channel, this yields projection data acquired at a number of angles, whereas the projection angles differ among channels. We formulate the reconstruction task as an optimization problem. A least-squares term enforces data fidelity, and an ssNLM term is used as regularization to encourage similarities among image patches at different spatial locations and channels. When comparing image patches at different channels, intensity differences were corrected by a transformation estimated via histogram equalization during the reconstruction process. Results: We tested our method in a simulation study with an NCAT phantom and an experimental study with a Gammex phantom. For comparison purposes, we also performed reconstructions using the conjugate-gradient least squares (CGLS) method and the conventional NLM method that considers only spatial correlation in an image. ssNLM is better able to suppress streak artifacts: the streaks lie along different projection directions in images at different channels, and ssNLM discourages this dissimilarity and hence removes them, while true image structures are preserved. Measurements in regions of interest yield 1.1 to 3.2 and 1.5 to 1.8 times higher contrast-to-noise ratio than the NLM approach. The improvement over CGLS is even more pronounced due to the lack of regularization in the CGLS method and the consequently amplified noise. Conclusion: The proposed ssNLM method for kVp-switching MECT reconstruction can achieve high-quality MECT images.

  12. Using Doppler Shifts of GPS Signals To Measure Angular Speed

    NASA Technical Reports Server (NTRS)

    Campbell, Charles E., Jr.

    2006-01-01

    A method has been proposed for extracting information on the rate of rotation of an aircraft, spacecraft, or other body from differential Doppler shifts of Global Positioning System (GPS) signals received by antennas mounted on the body. In principle, the method should be capable of yielding low-noise estimates of rates of rotation, and it could eliminate the need for gyroscopes to measure rates of rotation. The method is based on the fact that, for a given signal of frequency f_t transmitted by a given GPS satellite, the differential Doppler shift is attributable to the difference between those components of the instantaneous translational velocities of the antennas that lie along the line of sight from the antennas to the GPS satellite.
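
    Written out under the rigid-body assumption (a sketch; the symbols are illustrative, with c the speed of light and s-hat the unit line-of-sight vector to the satellite):

```latex
% Differential Doppler between two antennas at r_1, r_2 on a rigid body
% rotating with angular velocity \omega, for transmit frequency f_t:
\Delta f \;=\; \frac{f_t}{c}\,(\mathbf{v}_1-\mathbf{v}_2)\cdot\hat{\mathbf{s}},
\qquad
\mathbf{v}_1-\mathbf{v}_2 \;=\; \boldsymbol{\omega}\times(\mathbf{r}_1-\mathbf{r}_2),
% so each satellite in view supplies one linear constraint on \omega:
\Delta f \;=\; \frac{f_t}{c}\,\bigl[\boldsymbol{\omega}\times(\mathbf{r}_1-\mathbf{r}_2)\bigr]\cdot\hat{\mathbf{s}}.
```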

  13. Holoentropy enabled-decision tree for automatic classification of diabetic retinopathy using retinal fundus images.

    PubMed

    Mane, Vijay Mahadeo; Jadhav, D V

    2017-05-24

    Diabetic retinopathy (DR) is the most common diabetic eye disease. Doctors use various test methods to detect DR; however, the limited availability of these test methods and the need for domain experts pose a challenge for automatic DR detection. To address this, a variety of algorithms have been developed in the literature. In this paper, we propose a system consisting of a novel sparking process and a holoentropy-based decision tree for automatic classification of DR images, to further improve effectiveness. The sparking process algorithm is developed for automatic segmentation of blood vessels through the estimation of an optimal threshold. The holoentropy-enabled decision tree is newly developed for automatic classification of retinal images into normal or abnormal using hybrid features, which preserve disease-level patterns even more than the signal level of the features. The effectiveness of the proposed system is analyzed using the standard fundus image databases DIARETDB0 and DIARETDB1 in terms of sensitivity, specificity, and accuracy. The proposed system yields sensitivity, specificity, and accuracy values of 96.72%, 97.01%, and 96.45%, respectively. The experimental results reveal that the proposed technique outperforms existing algorithms.

  14. Selection of the initial design for the two-stage continual reassessment method.

    PubMed

    Jia, Xiaoyu; Ivanova, Anastasia; Lee, Shing M

    2017-01-01

    In the two-stage continual reassessment method (CRM), model-based dose escalation is preceded by a pre-specified escalating sequence starting from the lowest dose level. This is appealing to clinicians because it allows a sufficient number of patients to be assigned to each of the lower dose levels before escalating to higher dose levels. While a theoretical framework for building the two-stage CRM has been proposed, the selection of the initial dose-escalation sequence, generally referred to as the initial design, remains arbitrary, done either by specifying cohorts of three patients or by trial and error through extensive simulations. Motivated by an ongoing oncology dose-finding study for which clinicians explicitly stated their desire to assign at least one patient to each of the lower dose levels, we propose a systematic approach for selecting the initial design for the two-stage CRM. The initial design obtained using the proposed algorithm yields better operating characteristics than a cohort-of-three initial design with a calibrated CRM. The proposed algorithm simplifies the selection of the initial design for the two-stage CRM. Moreover, initial designs to be used as a reference for planning a two-stage CRM are provided.

  15. Prediction of moment-rotation characteristic of top- and seat-angle bolted connection incorporating prying action

    NASA Astrophysics Data System (ADS)

    Ahmed, Ali

    2017-03-01

    Finite element (FE) analyses were performed in the author's past research studies to explore the influence of prying on moment-rotation behaviour and to locate yielding zones of top- and seat-angle connections. The results of those FE analyses, together with the experimental failure strategies of the connections, were used in the present study to develop failure mechanisms of top- and seat-angle connections. A formulation was then developed, based on three simple failure mechanisms that consider bending and shear deformations, the effects of prying action on the top angle, and the stiffness of the tension bolts, to rationally estimate the ultimate moment M_u of the connection, a vital parameter of the proposed four-parameter power model. The applicability of the proposed formulation is assessed by comparing moment-rotation (M-θ_r) curves and ultimate moment capacities with those measured by experiments and estimated by FE analyses and the three-parameter power model. This study shows that the proposed formulation and Kishi-Chen's method both closely approximate the M-θ_r curves of all given connections, except in a few cases for the Kishi-Chen model, and that M_u estimated by the proposed formulation is more rational than that predicted by Kishi-Chen's method.

  16. Speedup of lexicographic optimization by superiorization and its applications to cancer radiotherapy treatment

    NASA Astrophysics Data System (ADS)

    Bonacker, Esther; Gibali, Aviv; Küfer, Karl-Heinz; Süss, Philipp

    2017-04-01

    Multicriteria optimization problems occur in many real-life applications, for example in cancer radiotherapy treatment and, in particular, in intensity-modulated radiation therapy (IMRT). In this work we focus on optimization problems with multiple objectives that are ranked according to their importance. We solve these problems numerically by combining lexicographic optimization with our recently proposed level set scheme, which yields a sequence of auxiliary convex feasibility problems, solved here via projection methods. The projection enables us to combine the newly introduced superiorization methodology with multicriteria optimization methods to speed up computation while guaranteeing convergence of the optimization. We demonstrate our scheme on a simple 2D academic example (used in the literature) and also present results from calculations on four real head-and-neck cases in IMRT (Radiation Oncology of the Ludwig-Maximilians University, Munich, Germany) for two different choices of superiorization parameter sets, suited to yield fast convergence for each case individually or robust behavior for all four cases.

  17. A new method of two-phase anaerobic digestion for fruit and vegetable waste treatment.

    PubMed

    Wu, Yuanyuan; Wang, Cuiping; Liu, Xiaoji; Ma, Hailing; Wu, Jing; Zuo, Jiane; Wang, Kaijun

    2016-07-01

    A novel method of two-phase anaerobic digestion, in which the acid reactor is operated at a low pH of 4.0, was proposed and investigated. A completely stirred tank acid reactor and an up-flow anaerobic sludge bed methane reactor were operated to examine the possibility of efficient degradation of lactate and to identify their optimal operating conditions. Lactate, with an average concentration of 14.8 g/L, was the dominant fermentation product, and Lactobacillus was the predominant microorganism in the acid reactor. The effluent from the acid reactor was efficiently degraded in the methane reactor, and the average methane yield was 261.4 mL/g COD removed. Methanosaeta organisms were the predominant methanogens in the granular sludge of the methane reactor; however, after acclimation, hydrogenotrophic methanogens became enriched, which benefited the conversion of lactate to acetate. The two-phase AD system exhibited a low hydraulic retention time of 3.56 days and a high methane yield of 348.5 mL/g VS removed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  18. Innovative pretreatment of sugarcane bagasse using supercritical CO2 followed by alkaline hydrogen peroxide.

    PubMed

    Phan, Duy The; Tan, Chung-Sung

    2014-09-01

    An innovative method for the pretreatment of sugarcane bagasse using a sequential combination of supercritical CO2 (scCO2) and alkaline hydrogen peroxide (H2O2) at mild conditions is proposed. This method was found to be superior to individual pretreatment with scCO2, ultrasound, or H2O2 and to the sequential combination of scCO2 and ultrasound with regard to the yield of cellulose and hemicellulose; almost twice the yield was observed. Pretreatment with scCO2 recovered larger amounts of cellulose and hemicellulose, but also of acid-insoluble lignin. Pretreatment with ultrasound or H2O2 could partly depolymerize lignin but could not separate cellulose from lignin. The analysis of liquid products from enzymatic hydrolysis by HPLC and the characterization of the solid residues by SEM revealed strong synergetic effects in the sequential combination of scCO2 and H2O2. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Dynamic strain aging and plastic instabilities

    NASA Astrophysics Data System (ADS)

    Mesarovic, Sinisa Dj.

    1995-05-01

    A constitutive model proposed by McCormick [(1988) Theory of flow localization due to dynamic strain ageing. Acta Metall. 36, 3061-3067], based on dislocation-solute interaction and describing dynamic strain aging behavior, is analyzed for the simple loading case of uniaxial tension. The model is rate dependent and includes a time-varying state variable representing the local concentration of impurity atoms at dislocations. Stability of the system and its post-instability behavior are considered. The methods used include analytical and numerical stability and bifurcation analysis with a numerical continuation technique. Yield point behavior and serrated yielding are found to result for well-defined intervals of temperature and strain rate. Serrated yielding emerges as a branch of periodic solutions of the relaxation-oscillation type, similar to frictional stick-slip. The distinction between temporal and spatial (loss of homogeneity of strain) instability is emphasized. It is found that a critical machine stiffness exists above which a purely temporal instability cannot occur. The results are compared to the available experimental data.

  20. Efficient approach for bioethanol production from red seaweed Gelidium amansii.

    PubMed

    Kim, Ho Myeong; Wi, Seung Gon; Jung, Sera; Song, Younho; Bae, Hyeun-Jong

    2015-01-01

    Gelidium amansii (GA), a red seaweed species, is a popular source of food and chemicals due to its high galactose and glucose content. In this study, we investigated the potential of bioethanol production from autoclave-treated GA (ATGA). The proposed method involved autoclaving GA for 60 min for hydrolysis to glucose. Separate hydrolysis and fermentation (SHF) achieved a maximum ethanol concentration of 3.33 mg/mL, with a conversion yield of 74.7% after 6 h (2% substrate loading, w/v). In contrast, simultaneous saccharification and fermentation (SSF) produced an ethanol concentration of 3.78 mg/mL, with an ethanol conversion yield of 84.9% after 12 h. We also recorded an ethanol concentration of 25.7 mg/mL from SSF processing of 15% (w/v) dry matter from ATGA after 24 h. These results indicate that autoclaving can improve the glucose and ethanol conversion yields of GA, and that SSF is superior to SHF for ethanol production. Copyright © 2014 Elsevier Ltd. All rights reserved.

  1. Predictions of Daily Milk and Fat Yields, Major Groups of Fatty Acids, and C18:1 cis-9 from Single Milking Data without a Milking Interval

    PubMed Central

    Arnould, Valérie M. R.; Reding, Romain; Bormann, Jeanne; Gengler, Nicolas; Soyeurt, Hélène

    2015-01-01

    Simple Summary: Reducing the frequency of milk recording decreases the costs of official milk recording. However, this approach can negatively affect the accuracy of predicting daily yields. Equations to predict daily yield from morning or evening data were developed in this study for fatty milk components from traits easily recorded by milk recording organizations. The correlation values ranged from 96.4% to 97.6% (96.9% to 98.3%) when the daily yields were estimated from the morning (evening) milkings. The simplicity of the proposed models, which do not include the milking interval, should facilitate their use by breeding and milk recording organizations. Abstract: Reducing the frequency of milk recording would help reduce the costs of official milk recording. However, this approach could also negatively affect the accuracy of predicting daily yields. This problem has been investigated in numerous studies. In addition, published equations take into account milking intervals (MI), and these are often not available and/or are unreliable in practice. The first objective of this study was to propose models in which the MI is replaced by a combination of data easily recorded by dairy farmers. The second objective was to further investigate the fatty acids (FA) present in milk. Equations to predict daily yield from AM or PM data were based on a calibration database containing 79,971 records related to 51 traits [milk yield (expected AM, expected PM, and expected daily); fat content (expected AM, expected PM, and expected daily); fat yield (expected AM, expected PM, and expected daily; g/day); levels of seven different FAs or FA groups (expected AM, expected PM, and expected daily; g/dL milk); and the corresponding FA yields for these seven FA types/groups (expected AM, expected PM, and expected daily; g/day)]. These equations were validated using two distinct external datasets. The results obtained from the proposed models were compared to previously published results for models that included an MI effect. The corresponding correlation values ranged from 96.4% to 97.6% when the daily yields were estimated from the AM milkings, and from 96.9% to 98.3% when estimated from the PM milkings. The simplicity of the proposed models should facilitate their use by breeding and milk recording organizations. PMID:26479379

  2. Constructing a Watts-Strogatz network from a small-world network with symmetric degree distribution.

    PubMed

    Menezes, Mozart B C; Kim, Seokjin; Huang, Rongbing

    2017-01-01

    Though the small-world phenomenon is widespread in many real networks, it is still challenging to replicate a large network at full scale for further study of its structure and dynamics when sufficient data are not readily available. We propose a method to construct a Watts-Strogatz network using a sample from a small-world network with a symmetric degree distribution. Our method yields an estimated degree distribution which fits closely with that of a Watts-Strogatz network and leads to accurate estimates of network metrics such as the clustering coefficient and the degree of separation. We observe that the accuracy of our method increases as network size increases.
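
    A rough sketch of the idea with networkx is given below; the moment-matching of mean degree and clustering (using the ring-lattice clustering formula and the approximate decay C(p) ≈ C(0)(1−p)³) is a simplified stand-in for the paper's degree-distribution fit, and the sample statistics are assumed values.

```python
# Sketch: recover Watts-Strogatz (WS) parameters from sample statistics and
# regenerate the network; moment matching stands in for the paper's fit.
import networkx as nx

n = 1000                          # target network size
observed_mean_degree = 6.2        # assumed estimate from the sample
observed_clustering = 0.42        # assumed estimate from the sample

k = int(round(observed_mean_degree / 2)) * 2      # WS requires even k
c0 = 3 * (k - 2) / (4 * (k - 1))                  # ring-lattice clustering C(0)
p = max(0.0, 1.0 - (observed_clustering / c0) ** (1.0 / 3.0))

G = nx.watts_strogatz_graph(n, k, p, seed=0)
print(f"k = {k}, p = {p:.3f}, clustering = {nx.average_clustering(G):.3f}")
```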

  3. Improved Frequency Fluctuation Model for Spectral Line Shape Calculations in Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Ferri, S.; Calisti, A.; Mossé, C.; Talin, B.; Lisitsa, V.

    2010-10-01

    A very fast method to calculate spectral line shapes emitted by plasmas, accounting for charged-particle dynamics and the effects of an external magnetic field, is proposed. This method relies on a new formulation of the Frequency Fluctuation Model (FFM), which yields an expression for the dynamic line profile as a functional of the static distribution function of frequencies. This highly efficient formalism, not limited to hydrogen-like systems, allows the calculation of pure Stark and Stark-Zeeman line shapes for a wide range of density, temperature, and magnetic field values, which is of importance in plasma physics and astrophysics. Various applications of this method are presented for conditions related to fusion plasmas.

  4. A scale space feature based registration technique for fusion of satellite imagery

    NASA Technical Reports Server (NTRS)

    Raghavan, Srini; Cromp, Robert F.; Campbell, William C.

    1997-01-01

    Feature-based registration is one of the most reliable methods to register multi-sensor images (both active and passive imagery), since features are often more reliable than intensity or radiometric values. The only situation where a feature-based approach will fail is when the scene is completely homogeneous or densely textural, in which case a combination of feature- and intensity-based methods may yield better results. In this paper, we present some preliminary results of testing our scale space feature based registration technique, a modified version of the feature-based method developed earlier for classification of multi-sensor imagery. The proposed approach removes the sensitivity to parameter selection experienced in the earlier version, as explained later.

  5. Oblique scattering from radially inhomogeneous dielectric cylinders: An exact Volterra integral equation formulation

    NASA Astrophysics Data System (ADS)

    Tsalamengas, John L.

    2018-07-01

    We study plane-wave electromagnetic scattering by radially and strongly inhomogeneous dielectric cylinders at oblique incidence. The method of analysis relies on an exact reformulation of the underlying field equations as a first-order 4 × 4 system of differential equations and on the ability to restate the associated initial-value problem as a system of coupled linear Volterra integral equations of the second kind. The integral equations so derived are discretized via a sophisticated variant of the Nyström method. The proposed method yields results accurate up to machine precision without relying on approximations. Numerical results and case studies amply demonstrate the efficiency and high accuracy of the algorithms.

  6. Generalization of dielectric-dependent hybrid functionals to finite systems

    DOE PAGES

    Brawand, Nicholas P.; Voros, Marton; Govoni, Marco; ...

    2016-10-04

    The accurate prediction of electronic and optical properties of molecules and solids is a persistent challenge for methods based on density functional theory. We propose a generalization of dielectric-dependent hybrid functionals to finite systems, where the definition of the mixing fraction of exact and semilocal exchange is physically motivated, nonempirical, and system dependent. The proposed functional yields ionization potentials and fundamental and optical gaps of many diverse molecular systems in excellent agreement with experiments, including organic and inorganic molecules and semiconducting nanocrystals. We further demonstrate that this hybrid functional gives the correct alignment between the energy levels of the exemplary TTF-TCNQ donor-acceptor system.

  7. Automatic Parameterization Strategy for Cardiac Electrophysiology Simulations.

    PubMed

    Costa, Caroline Mendonca; Hoetzl, Elena; Rocha, Bernardo Martins; Prassl, Anton J; Plank, Gernot

    2013-10-01

    Driven by recent advances in medical imaging, image segmentation, and numerical techniques, computer models of ventricular electrophysiology account for increasingly finer levels of anatomical and biophysical detail. However, considering the large number of model parameters involved, parameterization poses a major challenge. A minimum requirement in combined experimental and modeling studies is to achieve good agreement in activation and repolarization sequences between the model and experiment or patient data. In this study, we propose basic techniques which aid in determining bidomain parameters to match activation sequences. An iterative parameterization algorithm is implemented which determines appropriate bulk conductivities that yield prescribed velocities. In addition, a method is proposed for splitting the computed bulk conductivities into individual bidomain conductivities by prescribing anisotropy ratios.

  8. Simultaneous Localization and Mapping with Iterative Sparse Extended Information Filter for Autonomous Vehicles.

    PubMed

    He, Bo; Liu, Yang; Dong, Diya; Shen, Yue; Yan, Tianhong; Nian, Rui

    2015-08-13

    In this paper, a novel iterative sparse extended information filter (ISEIF) is proposed to solve the simultaneous localization and mapping (SLAM) problem, which is crucial for autonomous vehicles. The proposed algorithm solves the measurement update equations with iterative methods adaptively to reduce linearization errors. While keeping the scalability advantage, it improves the consistency and accuracy of SEIF. Simulations and practical experiments were carried out with both a land car benchmark and an autonomous underwater vehicle. Comparisons between iterative SEIF (ISEIF), standard EKF, and SEIF are presented. All of the results convincingly show that ISEIF yields more consistent and accurate estimates than SEIF and preserves the scalability advantage over EKF as well.

  9. a Direct Probe for Chemical Potentials Difference Between Neutron and Protons in Heavy-Ion Collisions

    NASA Astrophysics Data System (ADS)

    Ma, Chun-Wang; Zhang, Yan-Li; Wang, Shan-Shan

    We briefly introduce the newly proposed probe of the neutron-proton chemical potential (and density) difference, called the isobaric yield ratio difference (IBD). The IBD probe is related to the difference in the chemical potentials of neutrons and protons between two reactions and, at the same time, to the nuclear density difference between the two reactions. The relationship between the IBD probe and the isoscaling method is also discussed.
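
    The construction can be written out under an assumed grand-canonical yield form (a sketch; see the original IBD papers for the precise derivation and conventions):

```latex
% Assuming Y(N,Z) \propto \exp\{[N\mu_n + Z\mu_p + B(N,Z)]/T\}, the log ratio
% of isobars (mass A) with neutron excess I+2 and I in one reaction is
\ln R(I+2,I,A) \;=\; \ln\frac{Y(I+2,A)}{Y(I,A)}
              \;=\; \frac{\mu_n-\mu_p}{T} + \frac{\Delta B}{T},
% and differencing this ratio between two reactions cancels the binding term:
\mathrm{IBD} \;=\; \ln R_{1} - \ln R_{2}
             \;=\; \frac{(\mu_n-\mu_p)_{1} - (\mu_n-\mu_p)_{2}}{T}.
```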

  10. Generalized type II hybrid ARQ scheme using punctured convolutional coding

    NASA Astrophysics Data System (ADS)

    Kallel, Samir; Haccoun, David

    1990-11-01

    A method is presented to construct rate-compatible convolutional (RCC) codes from known high-rate punctured convolutional codes, obtained from best-rate 1/2 codes. The construction method is rather simple and straightforward, and still yields good codes. Moreover, low-rate codes can be obtained without any limit on the lowest achievable code rate. Based on the RCC codes, a generalized type-II hybrid ARQ scheme, which combines the benefits of the modified type-II hybrid ARQ strategy of Hagenauer (1988) with the code-combining ARQ strategy of Chase (1985), is proposed and analyzed. With the proposed generalized type-II hybrid ARQ strategy, the throughput increases as the starting coding rate increases, and as the channel degrades, it tends to merge with the throughput of rate 1/2 type-II hybrid ARQ schemes with code combining, thus allowing the system to be flexible and adaptive to channel conditions, even under wide noise variations and severe degradations.
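
    The puncturing idea itself is easy to show in code: a rate-1/2 coder emits two bits per information bit, and deleting coded bits according to a periodic pattern raises the rate. The sketch below uses an illustrative keep-3-of-4 pattern (rate 2/3); rate-compatible families nest such patterns so that higher-rate codes are subsets of lower-rate ones.

```python
# Sketch of code puncturing: delete coded bits by a periodic pattern.
import numpy as np

def puncture(coded_bits, pattern):
    """Keep only the positions where the (tiled) pattern is 1."""
    reps = len(coded_bits) // len(pattern) + 1
    mask = np.tile(pattern, reps)[: len(coded_bits)]
    return coded_bits[mask.astype(bool)]

coded = np.arange(12) % 2            # stand-in for rate-1/2 coder output
pattern = np.array([1, 1, 1, 0])     # keep 3 of every 4 bits -> rate 2/3
print(puncture(coded, pattern))
```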

  11. Estimation of Dynamic Systems for Gene Regulatory Networks from Dependent Time-Course Data.

    PubMed

    Kim, Yoonji; Kim, Jaejik

    2018-06-15

    A dynamic system consisting of ordinary differential equations (ODEs) is a well-known tool for describing the dynamic nature of gene regulatory networks (GRNs), and the dynamic features of GRNs are usually captured through time-course gene expression data. Owing to high-throughput technologies, time-course gene expression data have complex structures such as heteroscedasticity, correlations between genes, and time dependence. Since gene experiments typically yield highly noisy data with small sample sizes, for a more accurate prediction of the dynamics, these complex structures should be taken into account in ODE models. Hence, this study proposes an ODE model that accounts for such data structures, together with a fast and stable estimation method for the ODE parameters based on the generalized profiling approach with data smoothing techniques. The proposed method also provides statistical inference for the ODE estimator, and it is applied to a zebrafish retina cell network.

  12. Neural system modeling and simulation using Hybrid Functional Petri Net.

    PubMed

    Tang, Yin; Wang, Fei

    2012-02-01

    The Petri net formalism has proved to be powerful in biological modeling. It not only offers an intuitive graphical representation but also combines the methods of classical systems biology with discrete modeling techniques. The Hybrid Functional Petri Net (HFPN) was proposed specifically for biological system modeling, and an array of well-constructed biological models using HFPN has yielded very interesting results. In this paper, we propose a method to represent neural system behavior, encompassing both biochemistry and electrical chemistry, using the Petri net formalism. We built a model of the adrenergic system using HFPN and employed quantitative analysis. Our simulation results match the biological data well, showing that the model is very effective. Predictions made with our model further demonstrate the modeling power of HFPN and improve the understanding of the adrenergic system. The file of our model and more results with their analysis are available in our supplementary material.

  13. The quasi-optimality criterion in the linear functional strategy

    NASA Astrophysics Data System (ADS)

    Kindermann, Stefan; Pereverzyev, Sergiy, Jr.; Pilipenko, Andrey

    2018-07-01

    The linear functional strategy for the regularization of inverse problems is considered. For selecting the regularization parameter therein, we propose the heuristic quasi-optimality principle and some modifications that take into account the smoothness of the linear functionals. We prove convergence rates for the linear functional strategy with these heuristic rules, taking into account the smoothness of the solution and the functionals and imposing a structural condition on the noise. Furthermore, we study these noise conditions in both a deterministic and a stochastic setup and verify that for mildly ill-posed problems and Gaussian noise these conditions are satisfied almost surely, whereas in the severely ill-posed case, in a similar setup, the corresponding noise condition fails to hold. Moreover, we propose an aggregation method for adaptively optimizing the parameter choice rule by making use of improved rates for linear functionals. Numerical results indicate that this method yields better results than the standard heuristic rule.
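
    For reference, the standard form of the heuristic quasi-optimality principle (the general definition, not this paper's functional-adapted variant): on a geometric grid of regularization parameters, choose the one minimizing the difference between successive regularized solutions,

        \alpha_j = \alpha_0 q^j \ (0 < q < 1), \qquad
        \alpha_* = \operatorname*{arg\,min}_{j} \big\| x_{\alpha_{j+1}} - x_{\alpha_j} \big\|.

    In the linear functional strategy, one would plausibly replace the norm by |l(x_{\alpha_{j+1}}) - l(x_{\alpha_j})| for the linear functional l of interest.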

  14. A sensitive chemiluminescent immunoassay to detect Chromotrope FB (Chr FB) in foods.

    PubMed

    Xu, Kun; Long, Hao; Xing, Rongge; Yin, Yongmei; Eremin, Sergei A; Meng, Meng; Xi, Rimo

    2017-03-01

    Chromotrope FB (Chr FB) is a synthetic azo dye permitted for use in foods and medicines. The acceptable daily intake (ADI) of Chr FB is 0-0.5 mg/kg in China. In this study, we synthesized a Chr FB hapten with an amino group to prepare its artificial immunogen. Polyclonal antibodies obtained from New Zealand rabbits were applied to develop an indirect competitive chemiluminescent immunoassay (icCLIA) to detect Chr FB in foods. A horseradish peroxidase (HRP)-luminol-H2O2 system was used to yield the CL signal, with p-iodophenol as an enhancement reagent. The method showed good specificity towards Chr FB and could detect as little as 0.02 ng mL(-1) Chr FB in buffer, 0.07 ng g(-1) in yoghurt candy, 0.07 ng g(-1) in vitamin drink and 0.13 ng g(-1) in bread. Compared with the HPLC method, the proposed method is two orders of magnitude more sensitive. The accuracy and precision of this method are acceptable and comparable with those of the HPLC method. Therefore, the proposed method could be used for rapid screening of Chr FB in the mentioned foodstuffs. Copyright © 2016. Published by Elsevier B.V.

  15. Characterizing Variability of Modular Brain Connectivity with Constrained Principal Component Analysis

    PubMed Central

    Hirayama, Jun-ichiro; Hyvärinen, Aapo; Kiviniemi, Vesa; Kawanabe, Motoaki; Yamashita, Okito

    2016-01-01

    Characterizing the variability of resting-state functional brain connectivity across subjects and/or over time has recently attracted much attention. Principal component analysis (PCA) serves as a fundamental statistical technique for such analyses. However, performing PCA on high-dimensional connectivity matrices yields complicated “eigenconnectivity” patterns, for which systematic interpretation is a challenging issue. Here, we overcome this issue with a novel constrained PCA method for connectivity matrices by extending the idea of the previously proposed orthogonal connectivity factorization method. Our new method, modular connectivity factorization (MCF), explicitly introduces the modularity of brain networks as a parametric constraint on eigenconnectivity matrices. In particular, MCF analyzes the variability in both intra- and inter-module connectivities, simultaneously finding network modules in a principled, data-driven manner. The parametric constraint provides a compact module-based visualization scheme with which the result can be intuitively interpreted. We develop an optimization algorithm to solve the constrained PCA problem and validate our method in simulation studies and with a resting-state functional connectivity MRI dataset of 986 subjects. The results show that the proposed MCF method successfully reveals the underlying modular eigenconnectivity patterns in more general situations and is a promising alternative to existing methods. PMID:28002474

  16. Linear combination methods to improve diagnostic/prognostic accuracy on future observations

    PubMed Central

    Kang, Le; Liu, Aiyi; Tian, Lili

    2014-01-01

    Multiple diagnostic tests or biomarkers can be combined to improve diagnostic accuracy. The problem of finding the optimal linear combination of biomarkers to maximise the area under the receiver operating characteristic curve has been extensively addressed in the literature. The purpose of this article is threefold: (1) to provide an extensive review of the existing methods for biomarker combination; (2) to propose a new combination method, namely, the nonparametric stepwise approach; (3) to use the leave-one-pair-out cross-validation method, instead of the re-substitution method, which is overoptimistic and hence might lead to wrong conclusions, to empirically evaluate and compare the performance of different linear combination methods in yielding the largest area under the receiver operating characteristic curve. A data set on Duchenne muscular dystrophy was analysed to illustrate the applications of the discussed combination methods. PMID:23592714
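
    A minimal sketch of leave-one-pair-out cross-validation for a linear biomarker combination (a generic illustration, not the authors' exact procedure; fit_weights stands in for any of the reviewed combination methods): each case-control pair is held out in turn, weights are fit on the remaining data, and the cross-validated AUC is the fraction of held-out pairs ranked correctly.

        import numpy as np

        def lopo_cv_auc(cases, controls, fit_weights):
            """cases, controls: (n, p) biomarker arrays.
            fit_weights(cases, controls) -> weight vector w (placeholder).
            Returns the leave-one-pair-out estimate of the AUC of w.x."""
            wins, total = 0.0, 0
            for i in range(len(cases)):
                for j in range(len(controls)):
                    w = fit_weights(np.delete(cases, i, axis=0),
                                    np.delete(controls, j, axis=0))
                    s1, s0 = cases[i] @ w, controls[j] @ w
                    wins += 1.0 if s1 > s0 else 0.5 if s1 == s0 else 0.0
                    total += 1
            return wins / total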

  17. Design and performance analysis of gas and liquid radial turbines

    NASA Astrophysics Data System (ADS)

    Tan, Xu

    In the first part of the research, pumps running in reverse as turbines are studied. This work uses experimental data from a wide range of pumps representing centrifugal pump configurations in terms of specific speed. Based on specific speed and specific diameter, an accurate correlation is developed to predict the performance at the best efficiency point of a centrifugal pump in turbine-mode operation. The proposed prediction method yields very good results compared to previous such attempts. The present method is compared to nine previous methods found in the literature; the comparison shows that the method proposed here is the most accurate. The proposed method can be further complemented and refined by future tests to increase its accuracy, and it is meaningful because it is based on both specific speed and specific diameter. The second part of the research focuses on the design and analysis of a radial gas turbine. The specification of the turbine is obtained from a solar-biogas hybrid system, which is theoretically analyzed and constructed around the purchased compressor. Theoretical analysis results in a specification of 100 lb/min mass flow rate, 900 °C inlet total temperature and 1.575 atm inlet total pressure. The 1-D and 3-D geometry of the rotor is generated based on Aungier's method, and 1-D loss model analysis and 3-D CFD simulations are performed to examine the performance of the rotor. The total-to-total efficiency of the rotor is more than 90%. With the help of CFD analysis, modifications to the preliminary design yielded optimized aerodynamic performance. Finally, a theoretical performance analysis of the hybrid system is performed with the designed turbine.

  18. Spectrofluorimetric determination of fluoroquinolones in pharmaceutical preparations.

    PubMed

    Ulu, Sevgi Tatar

    2009-02-01

    A simple, rapid and highly sensitive spectrofluorimetric method is presented for the determination of four fluoroquinolone (FQ) drugs, ciprofloxacin, enoxacin, norfloxacin and moxifloxacin, in pharmaceutical preparations. The proposed method is based on the derivatization of the FQs with 4-chloro-7-nitrobenzofurazan (NBD-Cl) in borate buffer of pH 9.0 to yield a yellow product. The optimum experimental conditions have been studied carefully. Beer's law is obeyed over the concentration ranges of 23.5-500 ng mL(-1) for ciprofloxacin, 28.5-700 ng mL(-1) for enoxacin, 29.5-800 ng mL(-1) for norfloxacin and 33.5-1000 ng mL(-1) for moxifloxacin using the NBD-Cl reagent. The detection limits were found to be 7.0 ng mL(-1) for ciprofloxacin, 8.5 ng mL(-1) for enoxacin, 9.2 ng mL(-1) for norfloxacin and 9.98 ng mL(-1) for moxifloxacin. Intra-day and inter-day relative standard deviation and relative mean error values at three different concentrations were determined. The low relative standard deviation values indicate good precision, and the high recovery values indicate the accuracy of the proposed method. The method is highly sensitive and specific. The results obtained are in good agreement with those obtained by the official and reference methods. The results presented in this report show that the applied spectrofluorimetric method is acceptable for the determination of the four FQs in pharmaceutical preparations. Common excipients used as additives in pharmaceutical preparations do not interfere with the proposed method.

  19. A Remote Sensing-Derived Corn Yield Assessment Model

    NASA Astrophysics Data System (ADS)

    Shrestha, Ranjay Man

    Agricultural studies and food security have become critical research topics due to continuous growth in the human population and simultaneous shrinkage of agricultural land. In spite of modern technological advancements to improve agricultural productivity, more studies on crop yield assessment and food productivity are still necessary to fulfill constantly increasing food demands. Besides human activities, natural disasters such as floods and droughts, along with rapid climate change, also inflict adverse effects on food productivity. Understanding the impact of these disasters on crop yield and making early impact estimates could help in planning for any national or international food crisis. Similarly, the United States Department of Agriculture (USDA) Risk Management Agency (RMA) uses appropriately estimated crop yield and damage assessment information to sustain farmers' practice through timely and proper compensation. Through the County Agricultural Production Survey (CAPS), the USDA National Agricultural Statistics Service (NASS) uses traditional methods of field interviews and farmer-reported survey data to perform annual crop condition monitoring and production estimation at the regional and state levels. As these manual approaches to yield estimation are highly inefficient and produce very limited samples to represent the entire area, NASS requires supplemental spatial data that provide continuous and timely information on crop production and annual yield. Compared to traditional methods, remote sensing data and products offer wider spatial extent, more accurate location information, higher temporal resolution and data distribution, and lower data cost--thus providing a complementary option for estimating crop yield. Remote sensing derived vegetation indices such as the Normalized Difference Vegetation Index (NDVI) provide measurable statistics of potential crop growth based on spectral reflectance and can be further associated with the actual yield. Utilizing satellite remote sensing products, such as daily NDVI derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) at 250 m pixel size, crop yield estimation can be performed at a fine spatial resolution. Therefore, this study examined the potential of these daily NDVI products for agricultural studies and crop yield assessment. A regression-based approach was proposed to estimate annual corn yield from changes in the MODIS daily NDVI time series. The relationship between daily NDVI and corn yield was well defined and established; as changes in corn phenology and yield are directly reflected by changes in NDVI within the growing season, these two entities were combined to develop a relational model. The model was trained using 15 years (2000-2014) of historical NDVI and county-level corn yield data for four major corn-producing states: Kansas, Nebraska, Iowa, and Indiana, representing four climatic regions (South, West North Central, East North Central, and Central, respectively) within the U.S. Corn Belt. The model's goodness of fit was high (R2 > 0.81). Similarly, using 2015 yield data for validation, an average accuracy of 92% confirmed the performance of the model in estimating corn yield at the county level. Besides county-level corn yield estimation, the derived model was also accurate enough to estimate yield at a finer spatial resolution (field level).
The model's assessment accuracy was evaluated using randomly selected field-level corn yields within the study area for 2014, 2015, and 2016. Over 120 plot-level corn yield records were used for validation, and the overall average accuracy was 87%, which statistically justified the model's capability to estimate plot-level corn yield. Additionally, the proposed model was applied to impact estimation by examining changes in corn yield due to flood events during the growing season. Using the 2011 Missouri River flood event as a case study, a field-level flood impact map on corn yield throughout the flooded regions was produced, and an overall agreement of over 82.2% with the reference impact map was achieved. A future direction of this dissertation research is to examine other major crops outside the Corn Belt region of the U.S.
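
    The abstract does not specify the exact regression form; a minimal sketch of this kind of NDVI-to-yield model, assuming simple summaries of the seasonal NDVI curve as predictors (the feature choice here is illustrative), might look like:

        import numpy as np

        # ndvi: (n_counties, n_days) daily NDVI over the growing season;
        # yields_t_ha: (n_counties,) reported county corn yields.

        def ndvi_features(ndvi):
            """Summaries of the seasonal NDVI curve: mean, peak, integral."""
            return np.column_stack([ndvi.mean(axis=1),
                                    ndvi.max(axis=1),
                                    np.trapz(ndvi, axis=1)])

        def fit_yield_model(ndvi, yields_t_ha):
            X = np.column_stack([np.ones(len(ndvi)), ndvi_features(ndvi)])
            beta, *_ = np.linalg.lstsq(X, yields_t_ha, rcond=None)
            return beta

        def predict_yield(beta, ndvi):
            X = np.column_stack([np.ones(len(ndvi)), ndvi_features(ndvi)])
            return X @ beta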

  20. Reaction of an Iron(IV) Nitrido Complex with Cyclohexadienes: Cycloaddition and Hydrogen-Atom Abstraction

    PubMed Central

    2015-01-01

    The iron(IV) nitrido complex PhB(MesIm)3Fe≡N reacts with 1,3-cyclohexadiene to yield the iron(II) pyrrolide complex PhB(MesIm)3Fe(η5-C4H4N) in high yield. The mechanism of product formation is proposed to involve sequential [4 + 1] cycloaddition and retro Diels–Alder reactions. Surprisingly, reaction with 1,4-cyclohexadiene yields the same iron-containing product, albeit in substantially lower yield. The proposed reaction mechanism, supported by electronic structure calculations, involves hydrogen-atom abstraction from 1,4-cyclohexadiene to provide the cyclohexadienyl radical. This radical is an intermediate in substrate isomerization to 1,3-cyclohexadiene, leading to formation of the pyrrolide product. PMID:25068927

  1. Seismic interferometry by multidimensional deconvolution as a means to compensate for anisotropic illumination

    NASA Astrophysics Data System (ADS)

    Wapenaar, K.; van der Neut, J.; Ruigrok, E.; Draganov, D.; Hunziker, J.; Slob, E.; Thorbecke, J.; Snieder, R.

    2008-12-01

    It is well known that under specific conditions the crosscorrelation of wavefields observed at two receivers yields the impulse response between these receivers. This principle is known as 'Green's function retrieval' or 'seismic interferometry'. Recently it has been recognized that in many situations it can be advantageous to replace the correlation process by deconvolution. One of the advantages is that deconvolution compensates for the waveform emitted by the source; another is that it is not necessary to assume that the medium is lossless. The approaches developed to date employ a 1D deconvolution process. We propose a method for seismic interferometry by multidimensional deconvolution and show that under specific circumstances the method compensates for irregularities in the source distribution. This is an important difference from crosscorrelation methods, which rely on the condition that waves are equipartitioned. This condition is fulfilled, for example, when the sources are regularly distributed along a closed surface and the power spectra of the sources are identical. The proposed multidimensional deconvolution method compensates for anisotropic illumination without requiring knowledge of the positions and spectra of the sources.
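
    For orientation, a standard way of writing the multidimensional deconvolution relation (a general sketch of the published formalism; the notation is mine, not taken from this abstract): the crosscorrelation function C of the wavefields at receivers x_A and x_B equals the sought Green's function blurred by a point-spread function \Gamma that encodes the source distribution,

        C(x_A, x_B, \omega) = \int \hat{G}(x_A, x, \omega)\, \Gamma(x, x_B, \omega)\, \mathrm{d}x,

    so deconvolving C by \Gamma removes the imprint of an irregular source distribution, whereas plain crosscorrelation implicitly treats \Gamma as a band-limited delta function.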

  2. Spatial sound field synthesis and upmixing based on the equivalent source method.

    PubMed

    Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang

    2014-01-01

    Given a scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more channels of loudspeakers than microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness of the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrate that all upmixing methods improved the quality of the reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of reproduction error, timbral quality, and spatial quality.

  3. SU-C-9A-04: Alternative Analytic Solution to the Paralyzable Detector Model to Calculate Deadtime and Deadtime Loss

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siman, W; Kappadath, S

    2014-06-01

    Purpose: Some common methods to solve for deadtime are (1) the dual-source method, which assumes two equal activities; (2) model fitting, which requires multiple acquisitions as the source decays; and (3) the lossless model, which assumes no deadtime loss at low count rates. We propose a new analytic alternative solution to calculate deadtime for a paralyzable gamma camera. Methods: The deadtime T can be calculated analytically from two distinct observed count rates M1 and M2 when the ratio of the true count rates, alpha = N2/N1, is known. Alpha can be measured as a ratio of two measured activities using dose calibrators or via radioactive decay. Knowledge of alpha creates a system with 2 equations and 2 unknowns, i.e., T and N1. To verify the validity of the proposed method, projections of a non-uniform phantom (4 GBq 99mTc) were acquired using a Siemens Symbia S multiple times over 48 hours. Each projection has >100 kcts. The deadtime for each projection was calculated by fitting the data to a paralyzable model and also by using the proposed 2-acquisition method. The two estimates of deadtime were compared using the Bland-Altman method. In addition, the dependency of the uncertainty in T on the uncertainty in alpha was investigated for several imaging conditions. Results: The results strongly suggest that the 2-acquisition method is equivalent to the fitting method. The Bland-Altman analysis yielded a mean difference in deadtime estimate of ∼0.076 us (95% CI: -0.049 us, 0.103 us) between the 2-acquisition and model fitting methods. The 95% limits of agreement were calculated to be -0.104 to 0.256 us. The uncertainty in deadtime calculated using the proposed method is highly dependent on the uncertainty in the ratio alpha. Conclusion: The 2-acquisition method was found to be equivalent to the parameter fitting method. The proposed method offers a simpler and more practical way to analytically solve for a paralyzable detector deadtime, especially during physics testing.
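
    A minimal numerical sketch of the two-acquisition idea, assuming the paralyzable model M = N exp(-N T) (the abstract's closed-form solution is not reproduced here, so a generic root finder stands in; the example count rates are hypothetical):

        import numpy as np
        from scipy.optimize import fsolve

        def solve_deadtime(M1, M2, alpha, guess=(1e5, 1e-6)):
            """Solve M1 = N1*exp(-N1*T) and M2 = alpha*N1*exp(-alpha*N1*T)
            for (N1, T), given the known true-rate ratio alpha = N2/N1."""
            def eqs(x):
                N1, T = x
                return [N1 * np.exp(-N1 * T) - M1,
                        alpha * N1 * np.exp(-alpha * N1 * T) - M2]
            N1, T = fsolve(eqs, guess)
            return N1, T

        # Hypothetical observed rates (counts/s) with alpha = 2:
        N1, T = solve_deadtime(9.05e4, 1.64e5, 2.0)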

  4. Practical no-gold-standard evaluation framework for quantitative imaging methods: application to lesion segmentation in positron emission tomography

    PubMed Central

    Jha, Abhinav K.; Mena, Esther; Caffo, Brian; Ashrafinia, Saeed; Rahmim, Arman; Frey, Eric; Subramaniam, Rathan M.

    2017-01-01

    Recently, a class of no-gold-standard (NGS) techniques has been proposed to evaluate quantitative imaging methods using patient data. These techniques provide figures of merit (FoMs) quantifying the precision of the estimated quantitative value without requiring repeated measurements and without requiring a gold standard. However, applying these techniques to patient data presents several practical difficulties, including assessing the underlying assumptions, accounting for patient-sampling-related uncertainty, and assessing the reliability of the estimated FoMs. To address these issues, we propose statistical tests that provide confidence in the underlying assumptions and in the reliability of the estimated FoMs. Furthermore, the NGS technique is integrated within a bootstrap-based methodology to account for patient-sampling-related uncertainty. The developed NGS framework was applied to evaluate four methods for segmenting lesions from ¹⁸F-fluoro-2-deoxyglucose positron emission tomography images of patients with head-and-neck cancer on the task of precisely measuring the metabolic tumor volume. The NGS technique consistently predicted the same segmentation method as the most precise method. The proposed framework provided confidence in these results, even when gold-standard data were not available. The bootstrap-based methodology indicated improved performance of the NGS technique with larger numbers of patient studies, as was expected, and yielded consistent results as long as data from more than 80 lesions were available for the analysis. PMID:28331883

  5. 1-D DC Resistivity Modeling and Interpretation in Anisotropic Media Using Particle Swarm Optimization

    NASA Astrophysics Data System (ADS)

    Pekşen, Ertan; Yas, Türker; Kıyak, Alper

    2014-09-01

    We examine the one-dimensional direct current method in anisotropic earth formations. We derive an analytic expression for a simple, two-layered anisotropic earth model. Further, we also consider the response of a horizontally layered anisotropic earth computed with the digital filter method, which yields a quasi-analytic solution over anisotropic media. These analytic and quasi-analytic solutions are useful tests for numerical codes. A two-dimensional finite difference earth model in anisotropic media is presented in order to generate a synthetic data set for a simple one-dimensional earth. Further, we propose a particle swarm optimization method for estimating the model parameters of a layered anisotropic earth model, such as horizontal and vertical resistivities and thickness. Particle swarm optimization is a nature-inspired meta-heuristic algorithm. The proposed method finds the model parameters quite successfully based on synthetic and field data. However, adding 5% Gaussian noise to the synthetic data increases the ambiguity of the model parameter values. For this reason, the results should be checked by a number of statistical tests. In this study, we use the probability density function within a 95% confidence interval, the parameter variation at each iteration, and the frequency distribution of the model parameters to reduce the ambiguity. The results are promising, and the proposed method can be used for evaluating one-dimensional direct current data in anisotropic media.
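
    For reference, a minimal particle swarm optimization loop in its generic textbook form (not the authors' implementation; misfit is a hypothetical stand-in for the 1-D DC resistivity forward-model misfit over horizontal resistivity, vertical resistivity and thickness):

        import numpy as np

        def pso(misfit, lo, hi, n_particles=40, n_iter=200,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            """Minimize misfit(x) over the box [lo, hi]."""
            rng = np.random.default_rng(seed)
            lo, hi = np.asarray(lo, float), np.asarray(hi, float)
            x = rng.uniform(lo, hi, (n_particles, lo.size))  # positions
            v = np.zeros_like(x)                             # velocities
            pbest = x.copy()                                 # personal bests
            pcost = np.array([misfit(p) for p in x])
            g = pbest[pcost.argmin()]                        # global best
            for _ in range(n_iter):
                r1, r2 = rng.random(x.shape), rng.random(x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)
                cost = np.array([misfit(p) for p in x])
                better = cost < pcost
                pbest[better], pcost[better] = x[better], cost[better]
                g = pbest[pcost.argmin()]
            return g, pcost.min()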

  6. Indirect spectrophotometric determination of propranolol hydrochloride and piroxicam in pure and pharmaceutical formulations.

    PubMed

    Gowda, Babu G; Seetharamappa, Jaldappa; Melwanki, Mahaveer B

    2002-06-01

    Two simple and sensitive indirect spectrophotometric methods for the assay of propranolol hydrochloride (PPH) and piroxicam (PX) in pure and pharmaceutical formulations have been proposed. The methods are based on the oxidation of PPH by a known excess of standard N-bromosuccinimide (NBS) and PX by ceric ammonium sulfate (CAS) in an acidic medium followed by the reaction of excess oxidant with promethazine hydrochloride (PMH) and methdilazine hydrochloride (MDH) to yield red-colored products. The absorbance values decreased linearly with increasing concentration of the drugs. The systems obeyed Beer's law over the concentration ranges of 0.5 - 12.5 and 0.3 - 16.0 microg/ml for PPH, and 0.4 - 7.5 and 0.2 - 10 microg/ml for PX with PMH and MDH, respectively. Molar absorptivity values, as calculated from Beer's law data, were found to be 1.36 x 10(4) and 2.55 x 10(4) l mol(-1) cm(-1) for PPH, and 2.08 x 10(4) and 2.05 x 10(4) l mol(-1) cm(-1) for PX with PMH and MDH, respectively. The common excipients and additives did not interfere with their determinations. The proposed methods have been successfully applied to the determinations of PPH and PX in various dosage forms. The results obtained by the proposed methods compare favorably with those of official methods.

  7. Projections onto Convex Sets Super-Resolution Reconstruction Based on Point Spread Function Estimation of Low-Resolution Remote Sensing Images

    PubMed Central

    Fan, Chong; Wu, Chaoyun; Li, Grand; Ma, Jun

    2017-01-01

    To solve the problem of inaccuracy when estimating the point spread function (PSF) of the ideal original image in traditional projections onto convex sets (POCS) super-resolution (SR) reconstruction, this paper presents an improved POCS SR algorithm based on PSF estimation of low-resolution (LR) remote sensing images. The proposed algorithm can improve the spatial resolution of the image and benefit visual interpretation of agricultural crops. The PSF of the high-resolution (HR) image is unknown in reality. Therefore, analysis of the relationship between the PSF of the HR image and the PSF of the LR image is important for estimating the PSF of the HR image from multiple LR images. In this study, a linear relationship between the PSFs of the HR and LR images is proven. In addition, a novel slant knife-edge method is employed, which improves the accuracy of the PSF estimation of LR images. Finally, the proposed method is applied to reconstruct airborne digital sensor 40 (ADS40) three-line array images and the overlapped areas of two adjacent GF-2 images by embedding the estimated PSF of the HR image into the original POCS SR algorithm. Experimental results show that the proposed method yields higher quality reconstructed images than the blind SR method and the bicubic interpolation method. PMID:28208837

  8. Direct reconstruction of cardiac PET kinetic parametric images using a preconditioned conjugate gradient approach

    PubMed Central

    Rakvongthai, Yothin; Ouyang, Jinsong; Guerin, Bastien; Li, Quanzheng; Alpert, Nathaniel M.; El Fakhri, Georges

    2013-01-01

    Purpose: Our research goal is to develop an algorithm to reconstruct cardiac positron emission tomography (PET) kinetic parametric images directly from sinograms and compare its performance with the conventional indirect approach. Methods: Time activity curves of a NCAT phantom were computed according to a one-tissue compartmental kinetic model with realistic kinetic parameters. The sinograms at each time frame were simulated using the activity distribution for that time frame. The authors reconstructed the parametric images directly from the sinograms by optimizing a cost function, which included the Poisson log-likelihood and a spatial regularization term, using the preconditioned conjugate gradient (PCG) algorithm with the proposed preconditioner. The proposed preconditioner is a diagonal matrix whose diagonal entries are the ratio of the parameter and the sensitivity of the radioactivity associated with that parameter. The authors compared the reconstructed parametric images using the direct approach with those reconstructed using the conventional indirect approach. Results: At the same bias, the direct approach yielded significant relative reductions in standard deviation of 12%–29% and 32%–70% for 50 × 10⁶ and 10 × 10⁶ detected coincidence counts, respectively. Also, the PCG method effectively reached a constant value after only 10 iterations (with numerical convergence achieved after 40–50 iterations), while more than 500 iterations were needed for CG. Conclusions: The authors have developed a novel approach based on the PCG algorithm to directly reconstruct cardiac PET parametric images from sinograms, which yields better estimation of kinetic parameters than the conventional indirect approach, i.e., curve fitting of reconstructed images. The PCG method increases the convergence rate of reconstruction significantly as compared to the conventional CG method. PMID:24089922
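
    A minimal sketch of conjugate gradient with a diagonal preconditioner (a generic Jacobi form; the paper's preconditioner, built from the parameter-to-sensitivity ratio, cannot be reconstructed from the abstract alone):

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, cg

        def pcg_solve(A, b):
            """Solve the SPD system A x = b by CG, preconditioned with
            M r = r / diag(A) (Jacobi), a stand-in for the paper's
            parameter-to-sensitivity diagonal preconditioner."""
            d = np.asarray(A.diagonal())
            M = LinearOperator(A.shape, matvec=lambda r: r / d)
            x, info = cg(A, b, M=M)
            if info != 0:
                raise RuntimeError(f"CG did not converge (info={info})")
            return x

        # Example: a badly scaled SPD system where preconditioning helps.
        n = 200
        A = np.diag(np.linspace(1.0, 1e4, n))
        A[0, 1] = A[1, 0] = 0.5
        x = pcg_solve(A, np.ones(n))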

  9. Template‐based field map prediction for rapid whole brain B0 shimming

    PubMed Central

    Shi, Yuhang; Vannesjo, S. Johanna; Miller, Karla L.

    2017-01-01

    Purpose In typical MRI protocols, time is spent acquiring a field map to calculate the shim settings for best image quality. We propose a fast template‐based field map prediction method that yields near‐optimal shims without measuring the field. Methods The template‐based prediction method uses prior knowledge of the B0 distribution in the human brain, based on a large database of field maps acquired from different subjects, together with subject‐specific structural information from a quick localizer scan. The shimming performance of using the template‐based prediction is evaluated in comparison to a range of potential fast shimming methods. Results Static B0 shimming based on predicted field maps performed almost as well as shimming based on individually measured field maps. In experimental evaluations at 7 T, the proposed approach yielded a residual field standard deviation in the brain of on average 59 Hz, compared with 50 Hz using measured field maps and 176 Hz using no subject‐specific shim. Conclusions This work demonstrates that shimming based on predicted field maps is feasible. The field map prediction accuracy could potentially be further improved by generating the template from a subset of subjects, based on parameters such as head rotation and body mass index. Magn Reson Med 80:171–180, 2018. © 2017 The Authors Magnetic Resonance in Medicine published by Wiley Periodicals, Inc. on behalf of International Society for Magnetic Resonance in Medicine. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited. PMID:29193340

  10. Evaluation of digestion methods for analysis of trace metals in mammalian tissues and NIST 1577c.

    PubMed

    Binder, Grace A; Metcalf, Rainer; Atlas, Zachary; Daniel, Kenyon G

    2018-02-15

    Digestion techniques for ICP analysis have been poorly studied for biological samples. This report describes an optimized method for the analysis of trace metals that can be used across a variety of sample types. Digestion methods were tested and optimized with the analysis of trace metals in cancerous as compared to normal tissue as the end goal. Anthropological, forensic, oncological and environmental research groups can employ this method reasonably cheaply and safely whilst still being able to compare results between laboratories. We examined combined HNO3 and H2O2 digestion at 170 °C for human, porcine and bovine samples, whether frozen, fresh or lyophilized powder. Little discrepancy is found between microwave digestion and PFA Teflon pressure vessels. The elements of interest (Cu, Zn, Fe and Ni) yielded consistently higher and more accurate values on standard reference material than samples heated to 75 °C or samples that utilized HNO3 alone. Use of H2SO4 does not improve the homogeneity of the sample and lowers precision during ICP analysis. High-temperature digestions (>165 °C) using a combination of HNO3 and H2O2 as outlined are proposed as a standard technique for all mammalian tissues, specifically human tissues, and yield values greater than 300% higher than those of samples digested at 75 °C regardless of the acid or acid combination used. The proposed standardized technique is designed to accurately quantify potential discrepancies in metal loads between cancerous and healthy tissues and applies to numerous tissue studies requiring quick, effective and safe digestion. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Marker-Based Hierarchical Segmentation and Classification Approach for Hyperspectral Imagery

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Tilton, James C.; Benediktsson, Jon Atli; Chanussot, Jocelyn

    2011-01-01

    The Hierarchical SEGmentation (HSEG) algorithm, which is a combination of hierarchical step-wise optimization and spectral clustering, has given good performance for hyperspectral image analysis. This technique produces at its output a hierarchical set of image segmentations. The automated selection of a single segmentation level is often necessary. We propose and investigate the use of automatically selected markers for this purpose. In this paper, a novel Marker-based HSEG (M-HSEG) method for spectral-spatial classification of hyperspectral images is proposed. First, pixelwise classification is performed and the most reliably classified pixels are selected as markers, with the corresponding class labels. Then, a novel constrained marker-based HSEG algorithm is applied, resulting in a spectral-spatial classification map. The experimental results show that the proposed approach yields accurate segmentation and classification maps, and thus is attractive for hyperspectral image analysis.

  12. Adaptive regularization network based neural modeling paradigm for nonlinear adaptive estimation of cerebral evoked potentials.

    PubMed

    Zhang, Jian-Hua; Böhme, Johann F

    2007-11-01

    In this paper we report an adaptive regularization network (ARN) approach to realizing fast blind separation of cerebral evoked potentials (EPs) from background electroencephalogram (EEG) activity with no need to make any explicit assumption on the statistical (or deterministic) signal model. The ARNs are proposed to construct nonlinear EEG and EP signal models. A novel adaptive regularization training (ART) algorithm is proposed to improve the generalization performance of the ARN. Two adaptive neural modeling methods based on the ARN are developed and their implementation and performance analysis are also presented. The computer experiments using simulated and measured visual evoked potential (VEP) data have shown that the proposed ARN modeling paradigm yields computationally efficient and more accurate VEP signal estimation owing to its intrinsic model-free and nonlinear processing characteristics.

  13. A new method for the radiochemical purity measurement of ¹¹¹In-pentetreotide.

    PubMed

    Salgado-Garcia, Carlos; Montoza-Aguado, Manuel; Luna-Alcaide, Ana B; Segovia-Gonzalez, Maria M; de Mora, Elena Sanchez; Lopez-Martin, Juana; Ramos-Font, Carlos; Jimenez-Heffernan, Amelia

    2011-12-01

    The recommended method for the measurement of radiochemical purity (RCP) of ¹¹¹In-labelled pentetreotide is thin-layer chromatography with a silica gel as the stationary phase and a 0.1 N sodium citrate solution (pH 5) as the mobile phase. According to the supplier's instructions, the mobile phase must be prepared before the test is carried out, and the recommended stationary phase is off-market. We propose a new method for RCP measurement in which the mobile phase is acid citrate dextrose, solution A, which does not need to be prepared beforehand, and thin-layer chromatography is performed with a silica gel-impregnated glass fibre sheet as the stationary phase. We used both methods to measure the percentages of radiopharmaceutical and impurities. The range of RCP values obtained was 98.0-99.9% (mean=99.3%) by the standard method and 98.1-99.9% (mean=99.2%) by the new method. We observed no differences between the RCP values of both methods (P=0.070). The proposed method is suitable for RCP testing because it yields results that are in good agreement with those of the standard method and because it is easier to perform as the mobile-phase solution need not be prepared in advance.

  14. An improved facile method for extraction and determination of steroidal saponins in Tribulus terrestris by focused microwave-assisted extraction coupled with GC-MS.

    PubMed

    Li, Tianlin; Zhang, Zhuomin; Zhang, Lan; Huang, Xinjian; Lin, Junwei; Chen, Guonan

    2009-12-01

    An improved fast method for the extraction of steroidal saponins in Tribulus terrestris based on focused microwave-assisted extraction (FMAE) is proposed. Under optimized conditions, four steroidal saponins were extracted from Tribulus terrestris and identified by GC-MS: Tigogenin (TG), Gitogenin (GG), Hecogenin (HG) and Neohecogenin (NG). One of the most important steroidal saponins, TG, was quantified. The recovery of TG was in the range of 86.7-91.9% with RSD < 5.2%. Conventional heating reflux extraction was also conducted in order to validate the reliability of the new FMAE method. The yield of total steroidal saponins was 90.3% in one-step FMAE, while a yield of 65.0% was achieved with heating reflux extraction, and the extraction time was reduced from 3 h to 5 min while using less solvent. The method was successfully applied to analyze the steroidal saponins of Tribulus terrestris from different areas of occurrence. The differences in the chromatographic characteristics of the steroidal saponins were shown to be related to the different areas of occurrence. The results showed that FMAE-GC-MS is a simple, rapid, solvent-saving method for the extraction and determination of steroidal saponins in Tribulus terrestris.

  15. Applications of novel effects derived from Si ingot growth inside Si melt without contact with crucible wall using noncontact crucible method to high-efficiency solar cells

    NASA Astrophysics Data System (ADS)

    Nakajima, Kazuo; Ono, Satoshi; Kaneko, Yuzuru; Murai, Ryota; Shirasawa, Katsuhiko; Fukuda, Tetsuo; Takato, Hidetaka; Jensen, Mallory A.; Youssef, Amanda; Looney, Erin E.; Buonassisi, Tonio; Martel, Benoit; Dubois, Sèbastien; Jouini, Anis

    2017-06-01

    The noncontact crucible (NOC) method was proposed for obtaining Si single bulk crystals with a large diameter and volume using a cast furnace, and for obtaining solar cells with high conversion efficiency and yield. This method has several novel characteristics that originate from its key feature: ingots can be grown inside a Si melt without contact with a crucible wall. Si ingots for solar cells were grown by utilizing the merits resulting from these characteristics. Single ingots with high quality were grown by the NOC method after furnace cleaning, and the minority carrier lifetime was measured to investigate the reduction of the number of impurities. A p-type ingot with a convex growth interface in the growth direction was also grown after furnace cleaning. For p-type solar cells prepared using wafers cut from the ingot, the highest and average conversion efficiencies were 19.14% and 19.0%, respectively, obtained using the same solar cell structure and process as those employed to obtain a conversion efficiency of 19.1% for a p-type Czochralski (CZ) wafer. Using the cast furnace, solar cells with a conversion efficiency and yield as high as those of CZ solar cells were obtained by the NOC method.

  16. Development of an ionic liquid-based microwave-assisted method for simultaneous extraction and distillation for determination of proanthocyanidins and essential oil in Cortex cinnamomi.

    PubMed

    Liu, Ye; Yang, Lei; Zu, Yuangang; Zhao, Chunjian; Zhang, Lin; Zhang, Ying; Zhang, Zhonghua; Wang, Wenjie

    2012-12-15

    Cortex cinnamomi is associated with many health benefits and is used in the food and pharmaceutical industries. In this study, an efficient ionic liquid-based microwave-assisted simultaneous extraction and distillation (ILMSED) technique was used to extract cassia oil and proanthocyanidins from Cortex cinnamomi; these were quantified by gas chromatography/mass spectrometry (GC-MS) and the vanillin-HCl colorimetric method, respectively. A 0.5 M 1-butyl-3-methylimidazolium bromide ionic liquid solution was selected as the solvent. The optimum parameters for processing a 20.0 g sample were 230 W microwave irradiation power, 15 min microwave extraction time and a liquid-solid ratio of 10. The yields of essential oil and proanthocyanidins were 1.24 ± 0.04% and 4.58 ± 0.21% under the optimum conditions. The composition of the essential oil was analysed by GC-MS. Using the ILMSED method, the energy consumption was reduced and the extraction yields were improved. The proposed method was validated using stability, repeatability, and recovery experiments. The results indicated that the developed ILMSED method provides a good alternative for the extraction of both the essential oil and proanthocyanidins from Cortex cinnamomi. Copyright © 2012 Elsevier Ltd. All rights reserved.

  17. Efficient dynamic graph construction for inductive semi-supervised learning.

    PubMed

    Dornaika, F; Dahbi, R; Bosaghzadeh, A; Ruichek, Y

    2017-10-01

    Most graph construction techniques assume a transductive setting in which the whole data collection is available at construction time. Addressing graph construction in the inductive setting, in which data arrive sequentially, has received much less attention. For inductive settings, constructing the graph from scratch can be very time consuming. This paper introduces a generic framework that is able to make any graph construction method incremental. This framework yields an efficient and dynamic graph construction method that adds new samples (labeled or unlabeled) to a previously constructed graph. As a case study, we use the recently proposed Two Phase Weighted Regularized Least Square (TPWRLS) graph construction method. The paper has two main contributions. First, we use the TPWRLS coding scheme to represent new sample(s) with respect to an existing database. The representative coefficients are then used to update the graph affinity matrix. The proposed method not only appends the new samples to the graph but also updates the whole graph structure by discovering which nodes are affected by the introduction of new samples and by updating their edge weights. The second contribution of the article is the application of the proposed framework to the problem of graph-based label propagation using multiple observations for vision-based recognition tasks. Experiments on several image databases show that, without any significant loss in the accuracy of the final classification, the proposed dynamic graph construction is more efficient than the batch graph construction. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. A shape-preserving oriented partial differential equation based on a new fidelity term for electronic speckle pattern interferometry fringe patterns denoising

    NASA Astrophysics Data System (ADS)

    Xu, Wenjun; Tang, Chen; Zheng, Tingyue; Qiu, Yue

    2018-07-01

    Oriented partial differential equations (OPDEs) have been demonstrated to be a powerful tool for preserving the integrity of fringes while filtering electronic speckle pattern interferometry (ESPI) fringe patterns. However, the main drawback of OPDEs-based methods is that many iterations are often needed, which causes a change in the shape of the fringes. Change in the shape of fringes affects the accuracy of subsequent fringe analysis. In this paper, we focus on preserving the shape of fringes while filtering, an issue addressed here for the first time. We propose a shape-preserving OPDE for ESPI fringe pattern denoising by introducing a new fidelity term to the previous second-order single oriented PDE (SOOPDE). In our proposed fidelity term, the evolution image is subtracted from the shrinkage result of the original noisy image obtained by the shearlet transform. Our proposed shape-preserving OPDE is capable of eliminating noise effectively, keeping the integrity of fringes, and, more importantly, preserving the shape of fringes. We test the proposed shape-preserving OPDE on three computer-simulated and three experimentally obtained ESPI fringe patterns with poor quality. Furthermore, we compare our model with three representative filtering methods, including the widely used SOOPDE, the shearlet transform and coherence-enhancing diffusion (CED). We also compare our proposed fidelity term with the traditional fidelity term. Experimental results show that the proposed shape-preserving OPDE not only yields filtered images with visual quality on par with those produced by CED, which is the state-of-the-art method for ESPI fringe pattern denoising, but also preserves the shape of ESPI fringe patterns.

  19. Hot spot formation and stagnation properties in simulations of direct-drive NIF implosions

    NASA Astrophysics Data System (ADS)

    Schmitt, Andrew J.; Obenschain, Stephen P.

    2016-05-01

    We investigate different proposed methods of increasing the hot spot energy and radius in inertial confinement fusion implosions. In particular, shock mistiming (preferentially heating the inner edge of the target's fuel) and increasing the initial vapor gas density are investigated as possible control mechanisms. We find that only the latter is effective in substantially increasing the hot spot energy and dimensions while achieving ignition. In all cases an increase in the hot spot energy is accompanied by a decrease in the hot spot energy density (pressure) and both the yield and the gain of the target drop substantially. 2D simulations of increased vapor density targets predict an increase in the robustness of the target with respect to surface perturbations but are accompanied by significant yield degradation.

  20. Decentralized modal identification using sparse blind source separation

    NASA Astrophysics Data System (ADS)

    Sadhu, A.; Hazra, B.; Narasimhan, S.; Pandey, M. D.

    2011-12-01

    Popular ambient vibration-based system identification methods process information collected from a dense array of sensors centrally to yield the modal properties. In such methods, the need for a centralized processing unit capable of satisfying large memory and processing demands is unavoidable. With the advent of wireless smart sensor networks, it is now possible to process information locally at the sensor level, instead. The information at the individual sensor level can then be concatenated to obtain the global structure characteristics. A novel decentralized algorithm based on wavelet transforms to infer global structure mode information using measurements obtained using a small group of sensors at a time is proposed in this paper. The focus of the paper is on algorithmic development, while the actual hardware and software implementation is not pursued here. The problem of identification is cast within the framework of under-determined blind source separation invoking transformations of measurements to the time-frequency domain resulting in a sparse representation. The partial mode shape coefficients so identified are then combined to yield complete modal information. The transformations are undertaken using stationary wavelet packet transform (SWPT), yielding a sparse representation in the wavelet domain. Principal component analysis (PCA) is then performed on the resulting wavelet coefficients, yielding the partial mixing matrix coefficients from a few measurement channels at a time. This process is repeated using measurements obtained from multiple sensor groups, and the results so obtained from each group are concatenated to obtain the global modal characteristics of the structure.

  1. A review of the application of propensity score methods yielded increasing use, advantages in specific settings, but not substantially different estimates compared with conventional multivariable methods

    PubMed Central

    Stürmer, Til; Joshi, Manisha; Glynn, Robert J.; Avorn, Jerry; Rothman, Kenneth J.; Schneeweiss, Sebastian

    2006-01-01

    Objective Propensity score analyses attempt to control for confounding in non-experimental studies by adjusting for the likelihood that a given patient is exposed. Such analyses have been proposed to address confounding by indication, but there is little empirical evidence that they achieve better control than conventional multivariate outcome modeling. Study design and methods Using PubMed and Science Citation Index, we assessed the use of propensity scores over time and critically evaluated studies published through 2003. Results Use of propensity scores increased from a total of 8 papers before 1998 to 71 in 2003. Most of the 177 published studies abstracted assessed medications (N=60) or surgical interventions (N=51), mainly in cardiology and cardiac surgery (N=90). Whether PS methods or conventional outcome models were used to control for confounding had little effect on results in those studies in which such comparison was possible. Only 9 out of 69 studies (13%) had an effect estimate that differed by more than 20% from that obtained with a conventional outcome model in all PS analyses presented. Conclusions Publication of results based on propensity score methods has increased dramatically, but there is little evidence that these methods yield substantially different estimates compared with conventional multivariable methods. PMID:16632131

  2. Direct simulation of groundwater age

    USGS Publications Warehouse

    Goode, Daniel J.

    1996-01-01

    A new method is proposed to simulate groundwater age directly, by use of an advection-dispersion transport equation with a distributed zero-order source of unit (1) strength, corresponding to the rate of aging. The dependent variable in the governing equation is the mean age, a mass-weighted average age. The governing equation is derived from residence-time-distribution concepts for the case of steady flow. For the more general case of transient flow, a transient governing equation for age is derived from mass-conservation principles applied to conceptual “age mass.” The age mass is the product of the water mass and its age, and age mass is assumed to be conserved during mixing. Boundary conditions include zero age-mass flux across all no-flow and inflow boundaries and no dispersive age-mass flux across outflow boundaries. For transient-flow conditions, the initial distribution of age must be known. The solution of the governing transport equation yields the spatial distribution of the mean groundwater age and includes diffusion, dispersion, mixing, and exchange processes that typically are considered only through tracer-specific solute transport simulation. Traditional methods have relied on advective transport to predict point values of groundwater travel time and age. The proposed method retains the simplicity and tracer-independence of advection-only models, but incorporates the effects of dispersion and mixing on volume-averaged age. Example simulations of age in two idealized regional aquifer systems, one homogeneous and the other layered, demonstrate the agreement between the proposed method and traditional particle-tracking approaches and illustrate use of the proposed method to determine the effects of diffusion, dispersion, and mixing on groundwater age.
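
    In symbols, the steady-flow form described above can be written as follows (a standard rendering; the notation is mine, not quoted from the paper). With mean age A, porosity \theta, Darcy flux \mathbf{q} and dispersion tensor \mathbf{D},

        \nabla \cdot (\mathbf{q}\, A) - \nabla \cdot (\theta \mathbf{D}\, \nabla A) = \theta,

    where the right-hand side is the distributed zero-order source of unit strength: water gains one unit of age per unit of time.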

  3. A marker-based watershed method for X-ray image segmentation.

    PubMed

    Zhang, Xiaodong; Jia, Fucang; Luo, Suhuai; Liu, Guiying; Hu, Qingmao

    2014-03-01

    Digital X-ray images are the most frequent modality for both screening and diagnosis in hospitals. To facilitate subsequent analysis such as quantification and computer aided diagnosis (CAD), it is desirable to exclude the image background. A marker-based watershed segmentation method was proposed to segment the background of X-ray images. The method consists of six modules: image preprocessing, gradient computation, marker extraction, watershed segmentation from markers, region merging and background extraction. One hundred clinical direct radiograph X-ray images were used to validate the method. Manual thresholding and a multiscale gradient based watershed method were implemented for comparison. The proposed method yielded a Dice coefficient of 0.964±0.069, which was better than that of manual thresholding (0.937±0.119) and that of the multiscale gradient based watershed method (0.942±0.098). Special means were adopted to decrease the computational cost, including discarding the few pixels with the highest grayscale values via a percentile cutoff, calculating the gradient magnitude through simple operations, decreasing the number of markers by appropriate thresholding, and merging regions based on simple grayscale statistics. As a result, the processing time was at most 6 s even for a 3072×3072 image on a Pentium 4 PC with 2.4 GHz CPU (4 cores) and 2 GB RAM, more than twice as fast as the multiscale gradient based watershed method. The proposed method could be a potential tool for diagnosis and quantification of X-ray images. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
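
    A minimal sketch of a generic marker-based watershed pipeline using scikit-image (illustrative of the overall flow only; the paper's preprocessing, marker-extraction and region-merging rules are more elaborate, and the percentile thresholds below are assumptions):

        import numpy as np
        from skimage.filters import sobel
        from skimage.segmentation import watershed

        def segment_background(image, low_pct=10, high_pct=90):
            """Marker-based watershed on a grayscale image: seed the darkest
            pixels as one region and the brightest as the other, then flood
            the gradient image from those markers."""
            gradient = sobel(image)                    # gradient magnitude
            markers = np.zeros(image.shape, dtype=np.int32)
            markers[image < np.percentile(image, low_pct)] = 1
            markers[image > np.percentile(image, high_pct)] = 2
            labels = watershed(gradient, markers)      # flood from markers
            return labels == 1                         # mask of region 1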

  4. A new randomized Kaczmarz based kernel canonical correlation analysis algorithm with applications to information retrieval.

    PubMed

    Cai, Jia; Tang, Yi

    2018-02-01

    Canonical correlation analysis (CCA) is a powerful statistical tool for detecting the linear relationship between two sets of multivariate variables. A kernel generalization, kernel CCA, has been proposed to describe nonlinear relationships between the two sets. Although kernel CCA can achieve dimensionality reduction for high-dimensional feature selection problems, it can also suffer from over-fitting. In this paper, we consider a new kernel CCA algorithm via the randomized Kaczmarz method. The main contributions of the paper are: (1) a new kernel CCA algorithm is developed; (2) theoretical convergence of the proposed algorithm is addressed by means of the scaled condition number; (3) a lower bound on the minimum number of iterations is presented. We test on both a synthetic dataset and several real-world datasets in cross-language document retrieval and content-based image retrieval to demonstrate the effectiveness of the proposed algorithm. Numerical results demonstrate the performance and efficiency of the new algorithm, which is competitive with several state-of-the-art kernel CCA methods. Copyright © 2017 Elsevier Ltd. All rights reserved.
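
    For reference, the classical randomized Kaczmarz iteration for a consistent linear system Ax = b, the building block the paper adapts (this generic form is not the paper's kernel CCA algorithm itself):

        import numpy as np

        def randomized_kaczmarz(A, b, n_iter=10000, seed=0):
            """Project the iterate onto the hyperplane {x : a_i.x = b_i} of a
            randomly chosen row i, sampling rows with probability
            ||a_i||^2 / ||A||_F^2 (Strohmer-Vershynin scheme)."""
            rng = np.random.default_rng(seed)
            row_norms2 = (A ** 2).sum(axis=1)
            probs = row_norms2 / row_norms2.sum()
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                i = rng.choice(A.shape[0], p=probs)
                x += (b[i] - A[i] @ x) / row_norms2[i] * A[i]
            return x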

  5. An Information Retrieval Approach for Robust Prediction of Road Surface States.

    PubMed

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-28

    Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods.
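
    The two-step idea (retrieve similar past radar signatures, then smooth the raw estimate over recent outputs) can be sketched as follows. This is an assumption-laden illustration: states are integer-coded, similarity is Euclidean distance, and the paper's moving-average calibration is approximated by a majority vote over a sliding window.

    ```python
    # Illustrative retrieval-then-calibration sketch (window size and k assumed).
    import numpy as np
    from collections import deque

    def retrieve_state(query, past_signals, past_states, k=5):
        d = np.linalg.norm(past_signals - query, axis=1)   # similarity function
        nearest = np.argsort(d)[:k]
        votes = past_states[nearest]                       # integer-coded states
        return np.bincount(votes).argmax()                 # majority label

    class MovingAverageCalibrator:
        """Calibrate the raw estimate with recently estimated states."""
        def __init__(self, window=7):
            self.recent = deque(maxlen=window)

        def update(self, raw_state):
            self.recent.append(raw_state)
            return np.bincount(list(self.recent)).argmax()
    ```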

  6. Non-Mutually Exclusive Deep Neural Network Classifier for Combined Modes of Bearing Fault Diagnosis.

    PubMed

    Duong, Bach Phi; Kim, Jong-Myon

    2018-04-07

    The simultaneous occurrence of various types of defects in bearings makes their diagnosis more challenging owing to the resultant complexity of the constituent parts of the acoustic emission (AE) signals. To address this issue, a new approach is proposed in this paper for the detection of multiple combined faults in bearings. The proposed methodology uses a deep neural network (DNN) architecture to effectively diagnose the combined defects. The DNN structure is based on the stacked denoising autoencoder non-mutually exclusive classifier (NMEC) method for combined modes. The NMEC-DNN is trained using data for a single fault and it classifies both single faults and multiple combined faults. The results of experiments conducted on AE data collected through an experimental test-bed demonstrate that the DNN achieves good classification performance with a maximum accuracy of 95%. The proposed method is compared with a multi-class classifier based on support vector machines (SVMs). The NMEC-DNN yields better diagnostic performance in comparison to the multi-class classifier based on SVM. The NMEC-DNN reduces the number of necessary data collections and improves the bearing fault diagnosis performance.

  7. A Sensor Fusion Method Based on an Integrated Neural Network and Kalman Filter for Vehicle Roll Angle Estimation.

    PubMed

    Vargas-Meléndez, Leandro; Boada, Beatriz L; Boada, María Jesús L; Gauchía, Antonio; Díaz, Vicente

    2016-08-31

    This article presents a novel estimator based on sensor fusion, which combines the Neural Network (NN) with a Kalman filter in order to estimate the vehicle roll angle. The NN estimates a "pseudo-roll angle" through variables that are easily measured from Inertial Measurement Unit (IMU) sensors. An IMU is a device that is commonly used for vehicle motion detection, and its cost has decreased during recent years. The pseudo-roll angle is introduced in the Kalman filter in order to filter noise and minimize the variance of the norm and maximum errors' estimation. The NN has been trained for J-turn maneuvers, double lane change maneuvers and lane change maneuvers at different speeds and road friction coefficients. The proposed method takes into account the vehicle non-linearities, thus yielding good roll angle estimation. Finally, the proposed estimator has been compared with one that uses the suspension deflections to obtain the pseudo-roll angle. Experimental results show the effectiveness of the proposed NN and Kalman filter-based estimator.
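
    The fusion step can be illustrated with a toy one-dimensional Kalman filter: the NN's "pseudo-roll angle" serves as the measurement, while the IMU roll rate drives the prediction. The time step and noise covariances below are illustrative assumptions, and the inputs are expected as NumPy arrays.

    ```python
    # Toy 1-D Kalman filter in the spirit of the estimator (assumed parameters).
    import numpy as np

    def kalman_roll(pseudo_roll, roll_rate, dt=0.01, q=1e-4, r=1e-2):
        phi, P = 0.0, 1.0                 # state (roll angle) and its variance
        out = np.empty_like(pseudo_roll)
        for k, (z, w) in enumerate(zip(pseudo_roll, roll_rate)):
            phi, P = phi + w * dt, P + q  # predict with the measured roll rate
            K = P / (P + r)               # Kalman gain
            phi = phi + K * (z - phi)     # correct with the NN's pseudo-roll angle
            P = (1 - K) * P
            out[k] = phi
        return out
    ```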

  8. A Sensor Fusion Method Based on an Integrated Neural Network and Kalman Filter for Vehicle Roll Angle Estimation

    PubMed Central

    Vargas-Meléndez, Leandro; Boada, Beatriz L.; Boada, María Jesús L.; Gauchía, Antonio; Díaz, Vicente

    2016-01-01

    This article presents a novel estimator based on sensor fusion, which combines the Neural Network (NN) with a Kalman filter in order to estimate the vehicle roll angle. The NN estimates a “pseudo-roll angle” through variables that are easily measured from Inertial Measurement Unit (IMU) sensors. An IMU is a device that is commonly used for vehicle motion detection, and its cost has decreased during recent years. The pseudo-roll angle is introduced in the Kalman filter in order to filter noise and minimize the variance of the norm and maximum errors’ estimation. The NN has been trained for J-turn maneuvers, double lane change maneuvers and lane change maneuvers at different speeds and road friction coefficients. The proposed method takes into account the vehicle non-linearities, thus yielding good roll angle estimation. Finally, the proposed estimator has been compared with one that uses the suspension deflections to obtain the pseudo-roll angle. Experimental results show the effectiveness of the proposed NN and Kalman filter-based estimator. PMID:27589763

  9. An Information Retrieval Approach for Robust Prediction of Road Surface States

    PubMed Central

    Park, Jae-Hyung; Kim, Kwanho

    2017-01-01

    Due to the increasing importance of reducing severe vehicle accidents on roads (especially on highways), the automatic identification of road surface conditions, and the provisioning of such information to drivers in advance, have been gaining significant momentum as a proactive solution to decrease the number of vehicle accidents. In this paper, we propose an information retrieval approach that aims to identify road surface states by combining conventional machine-learning techniques and moving average methods. Specifically, when signal information is received from a radar system, our approach attempts to estimate the current state of the road surface based on similar instances observed previously, using a given similarity function. Next, the estimated state is calibrated by using the recently estimated states to yield both effective and robust prediction results. To validate the performance of the proposed approach, we established a real-world experimental setting on a section of actual highway in South Korea and conducted a comparison with conventional approaches in terms of accuracy. The experimental results show that the proposed approach successfully outperforms the previously developed methods. PMID:28134859

  10. Multilevel segmentation of intracranial aneurysms in CT angiography images

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Yan; Zhang, Yue, E-mail: y.zhang525@gmail.com; Navarro, Laurent

    Purpose: Segmentation of aneurysms plays an important role in interventional planning. Yet, the segmentation of both the lumen and the thrombus of an intracranial aneurysm in computed tomography angiography (CTA) remains a challenge. This paper proposes a multilevel segmentation methodology for efficiently segmenting intracranial aneurysms in CTA images. Methods: The proposed methodology first uses the lattice Boltzmann method (LBM) to extract the lumen part directly from the original image. Then, the LBM is applied again on an intermediate image, whose lumen part is filled by the mean gray-level value outside the lumen, to yield an image region containing part of the aneurysm boundary. After that, an expanding disk is introduced to estimate the complete contour of the aneurysm. Finally, the contour detected is used as the initial contour of the level set with ellipse to refine the aneurysm. Results: The results obtained on 11 patients from different hospitals showed that the proposed segmentation was comparable with manual segmentation, and that quantitatively, the average segmentation matching factor (SMF) reached 86.99%, demonstrating good segmentation accuracy. The Chan–Vese method, Sen's model, and Luca's model were used for comparison with the proposed method; their average SMF values were 39.98%, 40.76%, and 77.11%, respectively. Conclusions: The authors have presented a multilevel segmentation method based on the LBM and level set with ellipse for accurate segmentation of intracranial aneurysms. Compared to the three existing methods, across all eleven patients the proposed method successfully segments the lumen with the highest SMF values for nine patients and the second highest for the other two. It also segments the entire aneurysm with the highest SMF values for ten patients and the second highest for the remaining one. This makes it a potential tool for clinical assessment of the volume and aspect ratio of intracranial aneurysms.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Xiaoming; Nan, Zhaodong, E-mail: zdnan@yzu.edu.cn

    Graphical abstract: Glass slices were used as a template to induce the formation and assembly of aragonite. Different morphologies, such as hemisphere, twinborn-hemisphere and flower-shaped particles, were produced under the direction of the glass slices. Highlights: Glass slices were used as a template to induce the formation and assembly of aragonite. Hemisphere, twinborn-hemisphere and flower-shaped particles were produced under the direction of the glass slices. Planes always appeared in these as-synthesized samples. Thermodynamic theory was applied to explain the production of the aragonite. -- Abstract: A glass slice was used as a template to induce the formation and assembly of aragonite. Thermodynamic theory was applied to explain the production of the aragonite. Transformation of three-dimensional nucleation to template-based two-dimensional surface nucleation caused the production of aragonite. Hemisphere, twinborn-hemisphere and flower-shaped particles were produced under the direction of the glass slices. Planes always appeared in these as-synthesized samples because the nucleation and growth of the samples occurred at the surfaces of the glass slices. The formation mechanism of the as-formed sample was proposed. Compared with organic templates, the present study provides a facile method for applying an inorganic template to prepare functional materials.

  12. Effect of preparation method and CuO promotion in the conversion of ethanol into 1,3-butadiene over SiO₂-MgO catalysts.

    PubMed

    Angelici, Carlo; Velthoen, Marjolein E Z; Weckhuysen, Bert M; Bruijnincx, Pieter C A

    2014-09-01

    Silica-magnesia (Si/Mg=1:1) catalysts were studied in the one-pot conversion of ethanol to butadiene. The catalyst synthesis method was found to greatly influence morphology and performance, with materials prepared through wet-kneading performing best in terms of both ethanol conversion and butadiene yield. Detailed characterization of the catalysts synthesized through co-precipitation or wet-kneading allowed correlation of activity and selectivity with morphology, textural properties, crystallinity, and acidity/basicity. The higher yields achieved with the wet-kneaded catalysts were attributed to a morphology consisting of SiO2 spheres embedded in a thin layer of MgO. The particle size of the SiO2 catalysts also influenced performance, with catalysts with smaller SiO2 spheres showing higher activity. Temperature-programmed desorption (TPD) measurements showed that the best butadiene yields were obtained with SiO2-MgO catalysts characterized by an intermediate amount of acidic and basic sites. A Hammett indicator study showed the catalysts' pKa value to be inversely correlated with the amount of dehydration by-products formed. Butadiene yields could be further improved by the addition of 1 wt% of CuO as promoter, to give butadiene yields and selectivities as high as 40% and 53%, respectively. The copper promoter boosts the production of the acetaldehyde intermediate, changing the rate-determining step of the process. TEM-energy-dispersive X-ray (EDX) analyses showed CuO to be present on both the SiO2 and MgO components. UV/Vis spectra of promoted catalysts in turn pointed at the presence of cluster-like CuO species, which are proposed to be responsible for the increased butadiene production. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. An optimized method for high quality DNA extraction from microalga Prototheca wickerhamii for genome sequencing.

    PubMed

    Jagielski, Tomasz; Gawor, Jan; Bakuła, Zofia; Zuchniewicz, Karolina; Żak, Iwona; Gromadka, Robert

    2017-01-01

    The complex cell wall structure of algae often precludes efficient extraction of their genetic material. The purpose of this study was to design a next-generation sequencing-suitable DNA isolation method for unicellular, achlorophyllous, yeast-like microalgae of the genus Prototheca, the only known plant pathogens of both humans and animals. The effectiveness of the newly proposed scheme was compared with five other, previously described methods commonly used for DNA isolation from plants and/or yeasts, available either as laboratory-developed, in-house assays based on liquid nitrogen grinding or different enzymatic digestions, or as commercially manufactured kits. All five previously described isolation assays yielded DNA concentrations lower than those obtained with the new method (averaging 16.15 ± 25.39 vs 74.2 ± 0.56 ng/µL). The new method was also superior in terms of DNA purity, as measured by the A260/A280 (-0.41 ± 4.26 vs 2.02 ± 0.03) and A260/A230 (1.20 ± 1.12 vs 1.97 ± 0.07) ratios. Only the liquid nitrogen-based method yielded DNA of comparable quantity (60.96 ± 0.16 ng/µL) and quality (A260/A280 = 2.08 ± 0.02; A260/A230 = 2.23 ± 0.26). Still, the new method showed higher integrity, which was best illustrated upon electrophoretic analysis. Genomic DNA of the Prototheca wickerhamii POL-1 strain isolated with the protocol proposed herein was successfully sequenced on the Illumina MiSeq platform. A new method for DNA isolation from Prototheca algae is described. The method, whose protocol involves glass-bead pulverization and cesium chloride (CsCl) density gradient centrifugation, proved superior to the other common assays in terms of DNA quantity and quality. The method is also the first to offer the possibility of preparing DNA template suitable for whole genome sequencing of Prototheca spp.

  14. A method of self-pursued boundary value on a body and the Magnus effect calculated with this method

    NASA Astrophysics Data System (ADS)

    Yoshino, Fumio; Hayashi, Tatsuo; Waka, Ryoji

    1991-03-01

    A computational method, designated 'SPB', is proposed for the automatic determination of the stream function Phi on an arbitrarily profiled body without recourse to empirical factors. The method is applied to the case of a rotating, circular cross-section cylinder in a uniform shear flow, and the results obtained are compared with those of both the method in which the value of Phi is fixed on the body and the conventional empirical method; on this basis it is established that the SPB method is very efficient and applicable to both steady and unsteady flows. The SPB method, in addition to yielding the aerodynamic forces acting on the cylinder, shows that the Magnus effect lift force decreases as the velocity gradient of the shear flow increases while the cylinder's rotational speed is kept constant.

  15. Direct runoff assessment using modified SME method in catchments in the Upper Vistula River Basin

    NASA Astrophysics Data System (ADS)

    Wałęga, A.; Rutkowska, A.; Grzebinoga, M.

    2017-04-01

    Correct determination of direct runoff is crucial for proper and safe dimensioning of hydroengineering structures. It is commonly assessed using the SCS-CN method developed in the United States. However, due to deficiencies of this method, many improvements and modifications have been proposed. In this paper, a modified Sahu-Mishra-Eldho (SME) method was introduced and tested for three catchments located in the upper Vistula basin. The modification of the SME method involved determining the maximum potential retention S from the CN parameter of the SCS-CN method. The modified SME method yielded direct runoff values very similar to those observed in the investigated catchments. Moreover, it generated significantly smaller errors in direct runoff estimation than the SCS-CN and SME methods in the analyzed catchments. This approach may be used for estimating the runoff in uncontrolled catchments.
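
    For context, the standard SCS-CN relations that the modification draws on are easy to state in code: the maximum potential retention S is derived from the dimensionless curve number CN, and direct runoff follows from rainfall depth and initial abstraction. The sketch below shows only these classical relations, not the SME runoff expression itself; the rainfall depth and CN value are illustrative.

    ```python
    # Standard SCS-CN relations (the source of S in the modified SME method).
    def retention_from_cn(cn):
        """Maximum potential retention S [mm] from the curve number CN."""
        return 25400.0 / cn - 254.0

    def scs_cn_runoff(p_mm, cn, ia_ratio=0.2):
        """Classical SCS-CN direct runoff [mm] for rainfall depth p_mm."""
        s = retention_from_cn(cn)
        ia = ia_ratio * s                      # initial abstraction
        return 0.0 if p_mm <= ia else (p_mm - ia) ** 2 / (p_mm - ia + s)

    print(scs_cn_runoff(p_mm=60.0, cn=75))     # example: 60 mm storm, CN = 75
    ```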

  16. Genetic Architecture of Ear Fasciation in Maize (Zea mays) under QTL Scrutiny

    PubMed Central

    Mendes-Moreira, Pedro; Alves, Mara L.; Satovic, Zlatko; dos Santos, João Pacheco; Santos, João Nina; Souza, João Cândido; Pêgo, Silas E.; Hallauer, Arnel R.; Vaz Patto, Maria Carlota

    2015-01-01

    Background: Knowledge of the genes affecting maize ear inflorescence may lead to better grain yield modeling. Maize ear fasciation, defined as abnormal flattened ears with high kernel row number, is a quantitative trait widely present in Portuguese maize landraces. Material and Methods: Using a segregating population derived from an ear fasciation contrasting cross (consisting of 149 F2:3 families), we established a two-location field trial using a complete randomized block design. Correlations and heritabilities for several ear fasciation-related traits and yield were determined. Quantitative Trait Loci (QTL) involved in the inheritance of those traits were identified and candidate genes for these QTL proposed. Results and Discussion: Ear fasciation broad-sense heritability was 0.73. Highly significant correlations were found between ear fasciation and some ear and cob diameter and row number traits. For the 23 yield and ear fasciation-related traits, 65 QTL were identified, out of which 11 were detected in both environments, while for the three principal components, five to six QTL were detected per environment. Detected QTL were distributed across 17 genomic regions and individually explained 8.7% to 22.4% of the phenotypic variance of the individual traits or principal components. Several candidate genes for these QTL regions were proposed, such as bearded-ear1, branched silkless1, compact plant1, ramosa2, ramosa3, tasselseed4 and terminal ear1. However, many QTL mapped to regions without known candidate genes, indicating potential chromosomal regions not yet targeted for maize ear trait selection. Conclusions: Portuguese maize germplasm represents a valuable source of genes or allelic variants for yield improvement and elucidation of the genetic basis of ear fasciation traits. Future studies should focus on fine mapping of the identified genomic regions with the aim of map-based cloning. PMID:25923975

  17. Multiclass cancer classification using a feature subset-based ensemble from microRNA expression profiles.

    PubMed

    Piao, Yongjun; Piao, Minghao; Ryu, Keun Ho

    2017-01-01

    Cancer classification has been a crucial topic of research in cancer treatment. In the last decade, messenger RNA (mRNA) expression profiles have been widely used to classify different types of cancers. With the discovery of a new class of small non-coding RNAs, known as microRNAs (miRNAs), various studies have shown that the expression patterns of miRNAs can also accurately classify human cancers. Therefore, there is a great demand for the development of machine learning approaches to accurately classify various types of cancers using miRNA expression data. In this article, we propose a feature subset-based ensemble method in which each model is learned from a different projection of the original feature space to classify multiple cancers. In our method, feature relevance and redundancy are considered to generate multiple feature subsets, the base classifiers are learned from each independent miRNA subset, and the average posterior probability is used to combine the base classifiers. To test the performance of our method, we used bead-based and sequence-based miRNA expression datasets and conducted 10-fold and leave-one-out cross validations. The experimental results show that the proposed method yields good results and has higher prediction accuracy than popular ensemble methods. The Java program and source code of the proposed method and the datasets in the experiments are freely available at https://sourceforge.net/projects/mirna-ensemble/. Copyright © 2016 Elsevier Ltd. All rights reserved.
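
    The combination rule (train one base classifier per feature subset, then average their posterior probabilities) can be sketched compactly with scikit-learn. In the sketch below, the mRMR-driven subset construction is replaced by random subsets and the data are synthetic, so it illustrates only the ensemble scheme, not the paper's feature selection.

    ```python
    # Feature-subset ensemble with averaged posterior probabilities (illustrative).
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=300, n_features=60, n_informative=20,
                               n_classes=3, random_state=0)
    rng = np.random.default_rng(0)
    subsets = [rng.choice(X.shape[1], size=15, replace=False) for _ in range(10)]

    # one base classifier per feature subset
    models = [LogisticRegression(max_iter=1000).fit(X[:, s], y) for s in subsets]

    # combine by averaging the class posterior probabilities
    avg_posterior = np.mean([m.predict_proba(X[:, s])
                             for m, s in zip(models, subsets)], axis=0)
    y_pred = avg_posterior.argmax(axis=1)
    print("training accuracy of the averaged ensemble:", (y_pred == y).mean())
    ```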

  18. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    PubMed Central

    2012-01-01

    Background: The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), makes detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods: We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results: The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and by resolving transmural strain variations. Conclusions: Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strain values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
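
    The core step (fit a local polynomial model to the measured displacements, then differentiate the fit to obtain strain) is shown below in a reduced 2-D form with a synthetic, purely illustrative displacement field; the paper works with 3-D DENSE data and general polynomial orders.

    ```python
    # 2-D sketch: polynomial fit of a displacement field -> small-strain tensor.
    import numpy as np

    # synthetic displacements u(x, y) on a small neighbourhood (illustrative)
    xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
    ux = 0.05 * xs + 0.01 * ys          # "measured" x-displacement
    uy = -0.02 * xs + 0.03 * ys         # "measured" y-displacement

    # first-order polynomial basis [1, x, y]; least-squares fit per component
    A = np.column_stack([np.ones(xs.size), xs.ravel(), ys.ravel()])
    cx, *_ = np.linalg.lstsq(A, ux.ravel(), rcond=None)
    cy, *_ = np.linalg.lstsq(A, uy.ravel(), rcond=None)

    # displacement gradient from the fitted coefficients, then small strain
    H = np.array([[cx[1], cx[2]],
                  [cy[1], cy[2]]])
    E = 0.5 * (H + H.T)                  # infinitesimal strain tensor
    print(E)                             # recovers [[0.05, -0.005], [-0.005, 0.03]]
    ```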

  19. Improving resolution of MR images with an adversarial network incorporating images with different contrast.

    PubMed

    Kim, Ki Hwan; Do, Won-Joon; Park, Sung-Hong

    2018-05-04

    The routine MRI scan protocol consists of multiple pulse sequences that acquire images of varying contrast. Since high frequency contents such as edges are not significantly affected by image contrast, down-sampled images in one contrast may be improved by high resolution (HR) images acquired in another contrast, reducing the total scan time. In this study, we propose a new deep learning framework that uses HR MR images in one contrast to generate HR MR images from highly down-sampled MR images in another contrast. The proposed convolutional neural network (CNN) framework consists of two CNNs: (a) a reconstruction CNN for generating HR images from the down-sampled images using HR images acquired with a different MRI sequence and (b) a discriminator CNN for improving the perceptual quality of the generated HR images. The proposed method was evaluated using a public brain tumor database and in vivo datasets. The performance of the proposed method was assessed in tumor and no-tumor cases separately, with perceptual image quality being judged by a radiologist. To overcome the challenge of training the network with a small number of available in vivo datasets, the network was pretrained using the public database and then fine-tuned using the small number of in vivo datasets. The performance of the proposed method was also compared to that of several compressed sensing (CS) algorithms. Incorporating HR images of another contrast improved the quantitative assessments of the generated HR image in reference to ground truth. Also, incorporating a discriminator CNN yielded perceptually higher image quality. These results were verified in regions of normal tissue as well as tumors for various MRI sequences from pseudo k-space data generated from the public database. The combination of pretraining with the public database and fine-tuning with the small number of real k-space datasets enhanced the performance of CNNs in in vivo application compared to training CNNs from scratch. The proposed method outperformed the compressed sensing methods. The proposed method can be a good strategy for accelerating routine MRI scanning. © 2018 American Association of Physicists in Medicine.

  20. Deformable segmentation of 3D MR prostate images via distributed discriminative dictionary and ensemble learning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Yanrong; Shao, Yeqin; Gao, Yaozong

    Purpose: Automatic prostate segmentation from MR images is an important task in various clinical applications such as prostate cancer staging and MR-guided radiotherapy planning. However, the large appearance and shape variations of the prostate in MR images make the segmentation problem difficult to solve. The traditional Active Shape/Appearance Model (ASM/AAM) has limited accuracy on this problem, since its basic assumption, i.e., that both shape and appearance of the targeted organ follow Gaussian distributions, is invalid in prostate MR images. To this end, the authors propose a sparse dictionary learning method to model the image appearance in a nonparametric fashion and further integrate the appearance model into a deformable segmentation framework for prostate MR segmentation. Methods: To drive the deformable model for prostate segmentation, the authors propose nonparametric appearance and shape models. The nonparametric appearance model is based on a novel dictionary learning method, namely distributed discriminative dictionary (DDD) learning, which is able to capture fine distinctions in image appearance. To increase the differential power of traditional dictionary-based classification methods, the authors' DDD learning approach adopts three strategies. First, two dictionaries, for prostate and nonprostate tissues, are built using the discriminative features obtained from minimum redundancy maximum relevance feature selection. Second, linear discriminant analysis is employed as a linear classifier to boost the optimal separation between prostate and nonprostate tissues, based on the representation residuals from sparse representation. Third, to enhance the robustness of the authors' classification method, multiple local dictionaries are learned for local regions along the prostate boundary (each with small appearance variations), instead of learning one global classifier for the entire prostate. These discriminative dictionaries are located on different patches of the prostate surface and trained to adaptively capture the appearance in different prostate zones, thus achieving better local tissue differentiation. For each local region, multiple classifiers are trained based on randomly selected samples and finally assembled by a specific fusion method. In addition to this nonparametric appearance model, a prostate shape model is learned from the shape statistics using a novel approach, sparse shape composition, which can model non-Gaussian distributions of shape variation and regularize the 3D mesh deformation by constraining it within the observed shape subspace. Results: The proposed method has been evaluated on two datasets consisting of T2-weighted MR prostate images. For the first (internal) dataset, the classification effectiveness of the authors' improved dictionary learning has been validated by comparing it with three other variants of traditional dictionary learning methods. The experimental results show that the authors' method yields a Dice Ratio of 89.1% compared to the manual segmentation, which is more accurate than the three state-of-the-art MR prostate segmentation methods under comparison. For the second dataset, the MICCAI 2012 challenge dataset, the authors' proposed method yields a Dice Ratio of 87.4%, which also achieves better segmentation accuracy than the other methods under comparison. Conclusions: A new magnetic resonance image prostate segmentation method is proposed based on the combination of deformable model and dictionary learning methods, which achieves more accurate segmentation performance on prostate T2 MR images.

  1. A Feature and Algorithm Selection Method for Improving the Prediction of Protein Structural Class.

    PubMed

    Ni, Qianwu; Chen, Lei

    2017-01-01

    Correct prediction of protein structural class is beneficial to investigation of protein functions, regulations and interactions. In recent years, several computational methods have been proposed in this regard. However, given the variety of available features, it remains a great challenge to select a proper classification algorithm and extract the essential features to participate in classification. In this study, a feature and algorithm selection method is presented for improving the accuracy of protein structural class prediction. Amino acid compositions and physiochemical features were adopted to represent proteins, and thirty-eight machine learning algorithms collected in Weka were employed. All features were first analyzed by a feature selection method, minimum redundancy maximum relevance (mRMR), producing a feature list. Then, several feature sets were constructed by adding features from the list one by one. For each feature set, the thirty-eight algorithms were executed on a dataset in which proteins were represented by the features in the set. The classes predicted by these algorithms, together with the true class of each protein, were collected to construct a new dataset, which was analyzed by the mRMR method, yielding an algorithm list. Algorithms were then taken from this list one by one to build an ensemble prediction model. Finally, we selected the ensemble prediction model with the best performance as the optimal ensemble prediction model. Experimental results indicate that the constructed model is much superior to models using a single algorithm and to models that adopt only the feature selection procedure or only the algorithm selection procedure. The feature selection and algorithm selection procedures are thus genuinely helpful for building an ensemble prediction model that yields better performance. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  2. A variational approach to liver segmentation using statistics from multiple sources

    NASA Astrophysics Data System (ADS)

    Zheng, Shenhai; Fang, Bin; Li, Laquan; Gao, Mingqi; Wang, Yi

    2018-01-01

    Medical image segmentation plays an important role in digital medical research, and in therapy planning and delivery. However, the presence of noise and low contrast renders automatic liver segmentation an extremely challenging task. In this study, we focus on a variational approach to liver segmentation in computed tomography scan volumes in a semiautomatic and slice-by-slice manner. In this method, one slice is selected and its connected-component liver region is determined manually to initialize the subsequent automatic segmentation process. From this guiding slice, we execute the proposed method downward to the last slice and upward to the first one. A segmentation energy function is proposed by combining a statistical shape prior, global Gaussian intensity analysis, and an enforced local statistical feature under the level set framework. During segmentation, the shape of the liver is estimated by minimization of this function. The improved Chan-Vese model is used to refine the shape to capture the long and narrow regions of the liver. The proposed method was verified on two independent public databases, 3D-IRCADb and SLIVER07. Among all tested methods, our method yielded the best volumetric overlap error (VOE) of 6.5 ± 2.8%, the best root mean square symmetric surface distance (RMSD) of 2.1 ± 0.8 mm, and the best maximum symmetric surface distance (MSD) of 18.9 ± 8.3 mm on the 3D-IRCADb dataset, and the best average symmetric surface distance (ASD) of 0.8 ± 0.5 mm and the best RMSD of 1.5 ± 1.1 mm on the SLIVER07 dataset. The results of the quantitative comparison show that the proposed liver segmentation method achieves competitive segmentation performance with state-of-the-art techniques.

  3. Development of a control algorithm for the ultrasound scanning robot (NCCUSR) using ultrasound image and force feedback.

    PubMed

    Kim, Yeoun Jae; Seo, Jong Hyun; Kim, Hong Rae; Kim, Kwang Gi

    2017-06-01

    Clinicians who frequently perform ultrasound scanning procedures often suffer from musculoskeletal disorders, arthritis, and myalgias. To minimize their occurrence and to assist clinicians, ultrasound scanning robots have been developed worldwide. Although, to date, there is still no commercially available ultrasound scanning robot, many control methods have been suggested and researched. These control algorithms are either image based or force based. If an ultrasound scanning robot control algorithm combined the two, it could benefit from the advantages of each. However, there are no existing control methods for ultrasound scanning robots that combine force control and image analysis. Therefore, in this work, a control algorithm is developed for an ultrasound scanning robot using force feedback and ultrasound image analysis. A manipulator-type ultrasound scanning robot named 'NCCUSR' is developed and a control algorithm for this robot is suggested and verified. First, conventional hybrid position-force control is implemented for the robot, and the hybrid position-force control algorithm is then combined with ultrasound image analysis to fully control the robot. The control method is verified using a thyroid phantom. It was found that the proposed algorithm can be applied to control the ultrasound scanning robot, and experimental outcomes suggest that the images acquired using the proposed control method can yield a rating score equivalent to images acquired directly by clinicians. The proposed control method can be applied to control the ultrasound scanning robot. However, more work must be completed to verify the proposed control method before it can become clinically feasible. Copyright © 2016 John Wiley & Sons, Ltd.

  4. Improved Variable Selection Algorithm Using a LASSO-Type Penalty, with an Application to Assessing Hepatitis B Infection Relevant Factors in Community Residents

    PubMed Central

    Guo, Pi; Zeng, Fangfang; Hu, Xiaomin; Zhang, Dingmei; Zhu, Shuming; Deng, Yu; Hao, Yuantao

    2015-01-01

    Objectives: In epidemiological studies, it is important to identify independent associations between collective exposures and a health outcome. The current stepwise selection technique ignores stochastic errors and suffers from a lack of stability. The alternative LASSO-penalized regression model can be applied to detect significant predictors from a pool of candidate variables. However, this technique is prone to false positives and tends to create excessive biases. It remains challenging to develop robust variable selection methods and enhance predictability. Material and Methods: Two improved algorithms, denoted the two-stage hybrid and bootstrap ranking procedures, both using a LASSO-type penalty, were developed for epidemiological association analysis. The performance of the proposed procedures and other methods, including conventional LASSO, Bolasso, stepwise and stability selection models, was evaluated using intensive simulation. In addition, the methods were compared in an empirical analysis based on large-scale survey data of hepatitis B infection-relevant factors among Guangdong residents. Results: The proposed procedures produced comparable or less biased selection results when compared to conventional variable selection models. Overall, the two newly proposed procedures were stable across the various simulation scenarios, demonstrating a higher power and a lower false positive rate during variable selection than the compared methods. In the empirical analysis, the proposed procedures, which yielded a sparse set of hepatitis B infection-relevant factors, gave the best predictive performance and selected a more stringent set of factors. The individual history of hepatitis B vaccination and the family and individual history of hepatitis B infection were associated with hepatitis B infection in the studied residents according to the proposed procedures. Conclusions: The newly proposed procedures improve the identification of significant variables and enable us to derive new insight into epidemiological association analysis. PMID:26214802
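
    The bootstrap-ranking idea can be sketched generically: refit a LASSO on bootstrap resamples and rank variables by how often their coefficients survive the penalty. The sketch below uses synthetic regression data and an arbitrary penalty strength; it illustrates the general scheme, not the paper's exact tuning or its two-stage hybrid variant.

    ```python
    # Bootstrap ranking with a LASSO-type penalty (illustrative settings).
    import numpy as np
    from sklearn.datasets import make_regression
    from sklearn.linear_model import Lasso

    X, y = make_regression(n_samples=200, n_features=30, n_informative=5,
                           noise=5.0, random_state=0)
    rng = np.random.default_rng(0)
    B, counts = 100, np.zeros(X.shape[1])
    for _ in range(B):
        idx = rng.integers(0, len(y), len(y))          # bootstrap resample
        coef = Lasso(alpha=1.0).fit(X[idx], y[idx]).coef_
        counts += coef != 0                            # tally selected variables
    ranking = np.argsort(-counts)                      # most frequently selected first
    print("selection frequency of top 5:", counts[ranking[:5]] / B)
    ```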

  5. 76 FR 12072 - Guidance for Agency Information Collection Activities: Proposed Collection; Comment Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-03-04

    ... not statistical surveys that yield quantitative results that can be generalized to the population of... information will not be used for quantitative information collections that are designed to yield reliably... generic mechanisms that are designed to yield quantitative results. No comments were received in response...

  6. Site selection for managed aquifer recharge using fuzzy rules: integrating geographical information system (GIS) tools and multi-criteria decision making

    NASA Astrophysics Data System (ADS)

    Malekmohammadi, Bahram; Ramezani Mehrian, Majid; Jafari, Hamid Reza

    2012-11-01

    One of the most important water-resources management strategies for arid lands is managed aquifer recharge (MAR). In establishing a MAR scheme, site selection is the prime prerequisite that can be assisted by geographic information system (GIS) tools. One of the most important uncertainties in the site-selection process using GIS is finite ranges or intervals resulting from data classification. In order to reduce these uncertainties, a novel method has been developed involving the integration of multi-criteria decision making (MCDM), GIS, and a fuzzy inference system (FIS). The Shemil-Ashkara plain in the Hormozgan Province of Iran was selected as the case study; slope, geology, groundwater depth, potential for runoff, land use, and groundwater electrical conductivity have been considered as site-selection factors. By defining fuzzy membership functions for the input layers and the output layer, and by constructing fuzzy rules, a FIS has been developed. Comparison of the results produced by the proposed method and the traditional simple additive weighted (SAW) method shows that the proposed method yields more precise results. In conclusion, fuzzy-set theory can be an effective method to overcome associated uncertainties in classification of geographic information data.

  7. Investigation on the reproduction performance versus acoustic contrast control in sound field synthesis.

    PubMed

    Bai, Mingsian R; Wen, Jheng-Ciang; Hsu, Hoshen; Hua, Yi-Hsin; Hsieh, Yu-Hao

    2014-10-01

    A sound reconstruction system is proposed for audio reproduction with an extended sweet spot and reduced reflections. An equivalent source method (ESM)-based sound field synthesis (SFS) approach, with the aid of dark-zone minimization, is adopted in the study. Conventional SFS based on the free-field assumption suffers from synthesis error due to boundary reflections. To tackle the problem, the proposed system utilizes convex optimization in designing array filters with both reproduction performance and acoustic contrast taken into consideration. Control points are deployed in the dark zone to minimize reflections from the walls. Two approaches are employed to constrain the pressure and velocity in the dark zone. Pressure matching error (PME) and acoustic contrast (AC) are used as performance measures in simulations and experiments for a rectangular loudspeaker array. Perceptual Evaluation of Audio Quality (PEAQ) is also used to assess the audio reproduction quality. The results show that the pressure-constrained (PC) method yields better acoustic contrast, but poorer reproduction performance, than the pressure-velocity constrained (PVC) method. A subjective listening test also indicates that the PVC method is preferred in a live room.
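
    A bare-bones relative of the pressure-matching objective with a dark-zone penalty is the Tikhonov-style least-squares problem min_q ||G_b q − p_t||² + β ||G_d q||², which has a closed-form solution. The sketch below uses random transfer matrices and an arbitrary weight β, so it only illustrates the trade-off the paper formulates as a constrained convex program.

    ```python
    # Regularized pressure matching with a dark-zone penalty (illustrative).
    import numpy as np

    rng = np.random.default_rng(0)
    G_bright = rng.standard_normal((40, 8)) + 1j * rng.standard_normal((40, 8))
    G_dark   = rng.standard_normal((40, 8)) + 1j * rng.standard_normal((40, 8))
    p_target = rng.standard_normal(40) + 1j * rng.standard_normal(40)

    beta = 10.0                                    # dark-zone weighting (assumed)
    A = G_bright.conj().T @ G_bright + beta * G_dark.conj().T @ G_dark
    q = np.linalg.solve(A, G_bright.conj().T @ p_target)   # array filter weights

    contrast = np.linalg.norm(G_bright @ q) ** 2 / np.linalg.norm(G_dark @ q) ** 2
    print("acoustic contrast:", 10 * np.log10(contrast), "dB")
    ```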

  8. Heart rate calculation from ensemble brain wave using wavelet and Teager-Kaiser energy operator.

    PubMed

    Srinivasan, Jayaraman; Adithya, V

    2015-01-01

    Electroencephalogram (EEG) signal artifacts are caused by various factors, such as electro-oculogram (EOG), electromyogram (EMG), electrocardiogram (ECG), movement artifacts and line interference. The relatively high electrical energy of cardiac activity causes artifacts in the EEG. In EEG signal processing, the general approach is to remove the ECG signal. In this paper, we introduce an automated method to extract the ECG signal from the EEG using a wavelet and the Teager-Kaiser energy operator for R-peak enhancement and detection. From the detected R-peaks, the heart rate (HR) is calculated for clinical diagnosis. To check the efficiency of our method, we compare the HR calculated from an ECG signal recorded synchronously with the EEG. The proposed method yields a mean error of 1.4% for the heart rate and 1.7% for the mean R-R interval. The results illustrate that the proposed method can be used for ECG extraction from single-channel EEG in clinical applications such as stress analysis, fatigue assessment and sleep stage classification as part of a multi-modal system. In addition, this method eliminates the need for an additional synchronous ECG when extracting the ECG from the EEG signal.
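
    The Teager-Kaiser energy operator itself is a three-sample formula, psi[x](n) = x(n)² − x(n−1)·x(n+1), which sharpens impulsive events like R-peaks. The sketch below shows this enhancement step with a simple threshold-based peak pick; the wavelet band-pass stage is reduced to a mean removal, and the threshold factor is an illustrative assumption.

    ```python
    # TKEO-based R-peak enhancement and a naive peak pick (illustrative).
    import numpy as np

    def tkeo(x):
        """psi[x](n) = x(n)^2 - x(n-1) * x(n+1), on the interior samples."""
        return x[1:-1] ** 2 - x[:-2] * x[2:]

    def r_peaks(signal, k=4.0):
        e = tkeo(signal - signal.mean())          # crude detrend, then TKEO
        thr = e.mean() + k * e.std()              # illustrative threshold
        # local maxima of the energy that exceed the threshold
        idx = np.flatnonzero((e[1:-1] > thr) &
                             (e[1:-1] >= e[:-2]) & (e[1:-1] >= e[2:])) + 2
        return idx                                # sample indices of candidate R-peaks

    # heart rate from R-R intervals: HR = 60 * fs / mean(np.diff(peaks))
    ```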

  9. Empirical methods for deriving different types of porosity from the hydraulic conductivity coefficient and the coefficient of non-uniformity - an overview

    NASA Astrophysics Data System (ADS)

    Fuchs, Sven; Ziesche, Michael; Nillert, Peter

    2017-06-01

    This paper comprises a review of 13 studies that have been proposed for the derivation of porosity, effective porosity and/or specific yield from grain size distributions (Lejbenson 1947; Istomina 1957; Beyer 1964; Hennig 1966; Golf 1966; Marotz 1968; Beyer and Schweiger 1969; Seiler 1973; Bureau of Reclamation 1984; Helmbold 1988; Beims and Luckner 1999; Balke et al. 2000; Helmbold 2002). Experimental designs, limitations and application boundaries are discussed, and the methods are compared against each other. The quality of the predictive methods strongly depends on the experimental design and the sample type.

  10. Deterministic theory of Monte Carlo variance

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ueki, T.; Larsen, E.W.

    1996-12-31

    The theoretical estimation of variance in Monte Carlo transport simulations, particularly those using variance reduction techniques, is a substantially unsolved problem. In this paper, the authors describe a theory that predicts the variance in a variance reduction method proposed by Dwivedi. Dwivedi's method combines the exponential transform with angular biasing. The key element of this theory is a new modified transport problem, containing the Monte Carlo weight w as an extra independent variable, which simulates Dwivedi's Monte Carlo scheme. The (deterministic) solution of this modified transport problem yields an expression for the variance. The authors give computational results that validate this theory.

  11. Optimal control strategy for an impulsive stochastic competition system with time delays and jumps

    NASA Astrophysics Data System (ADS)

    Liu, Lidan; Meng, Xinzhu; Zhang, Tonghua

    2017-07-01

    Driven by both white and jump noises, a stochastic delayed model with two competitive species in a polluted environment is proposed and investigated. By using the comparison theorem of stochastic differential equations and limit superior theory, sufficient conditions for persistence in mean and extinction of the two species are established. In addition, we show that the system is asymptotically stable in distribution by using an ergodic method. Furthermore, the optimal harvesting effort and the maximum expectation of sustainable yield (ESY) are derived using the Hessian matrix method and the optimal harvesting theory of differential equations. Finally, some numerical simulations are provided to illustrate the theoretical results.

  12. On the Heating of Ions in Noncylindrical Z-Pinches

    NASA Astrophysics Data System (ADS)

    Svirsky, E. B.

    2018-01-01

    The method proposed here for analyzing processes in the hot plasma of noncylindrical Z-pinches is based on separating the group of high-energy ions into a special fraction. Such ions constitute an insignificant fraction (about 10%) of the total volume of the Z-pinch plasma, but they contribute the most to the formation of conditions in which the pinch becomes a source of nuclear fusion products and X-ray radiation. The method allows a reasonably rigorous approach to obtaining quantitative estimates of the plasma parameters, the nuclear fusion energy yield, and the features of neutron fluxes in experiments with Z-pinches.

  13. Evaluation of quantification methods for real-time PCR minor groove binding hybridization probe assays.

    PubMed

    Durtschi, Jacob D; Stevenson, Jeffery; Hymas, Weston; Voelkerding, Karl V

    2007-02-01

    Real-time PCR data analysis for quantification has been the subject of many studies aimed at the identification of new and improved quantification methods. Several analysis methods have been proposed as superior alternatives to the common variations of the threshold crossing method. Notably, sigmoidal and exponential curve fit methods have been proposed. However, these studies have primarily analyzed real-time PCR with intercalating dyes such as SYBR Green. Clinical real-time PCR assays, in contrast, often employ fluorescent probes whose real-time amplification fluorescence curves differ from those of intercalating dyes. In the current study, we compared four analysis methods related to recent literature: two versions of the threshold crossing method, a second derivative maximum method, and a sigmoidal curve fit method. These methods were applied to a clinically relevant real-time human herpes virus type 6 (HHV6) PCR assay that used a minor groove binding (MGB) Eclipse hybridization probe, as well as an Epstein-Barr virus (EBV) PCR assay that used an MGB Pleiades hybridization probe. We found that the threshold crossing method yielded more precise results when analyzing the HHV6 assay, which was characterized by lower signal/noise and less developed amplification curve plateaus. In contrast, the EBV assay, characterized by greater signal/noise and amplification curves with plateau regions similar to those observed with intercalating dyes, gave results with statistically similar precision by all four analysis methods.
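
    Two of the compared readouts are easy to demonstrate on a single fluorescence curve: the threshold-crossing cycle (with linear interpolation between cycles) and the second-derivative-maximum cycle. The sigmoid curve, threshold value, and parameters below are synthetic and purely illustrative.

    ```python
    # Threshold-crossing vs. second-derivative-maximum Ct on a synthetic curve.
    import numpy as np

    cycles = np.arange(1, 41, dtype=float)
    f = 1.0 / (1.0 + np.exp(-(cycles - 24.0) / 1.8))   # synthetic amplification curve

    def ct_threshold(cycles, f, thr=0.2):
        i = np.argmax(f >= thr)                         # first cycle above threshold
        # linear interpolation between the bracketing cycles
        return cycles[i - 1] + (thr - f[i - 1]) / (f[i] - f[i - 1])

    def ct_second_derivative(cycles, f):
        return cycles[np.argmax(np.gradient(np.gradient(f, cycles), cycles))]

    print(ct_threshold(cycles, f), ct_second_derivative(cycles, f))
    ```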

  14. Activated N-nitrosocarbamates for regioselective synthesis of N-nitrosoureas.

    PubMed

    Martinez, J; Oiry, J; Imbach, J L; Winternitz, F

    1982-02-01

    A practical and convenient method for synthesizing antitumor compounds, N-alkyl-N-nitrosoureas, regioselectively nitrosated on the nitrogen atom bearing the alkyl group, is proposed. N-Alkyl-N-nitrosocarbamates are interesting intermediates in these syntheses and yield, by reaction with amino compounds, the regioselectively nitrosated N-alkyl-N-nitrosoureas. As an interesting example, N,N'-bis[(2-chloroethyl)nitrosocarbamoyl]cystamine, a new attractive oncostatic derivative, has been prepared. The cytotoxic activity of these various compounds was tested on L1210 leukemia.

  15. Local dynamic range compensation for scanning electron microscope imaging system.

    PubMed

    Sim, K S; Huang, Y H

    2015-01-01

    This paper presents an extension of earlier work, introducing modified dynamic range histogram modification (MDRHM) to enhance the scanning electron microscope (SEM) imaging system. Unlike conventional histogram modification compensators, this technique profiles the histogram of each tile of an image and stretches its dynamic range to the full 0-255 limit while retaining the histogram shape. The proposed technique yields better image compensation compared to conventional methods. © Wiley Periodicals, Inc.
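
    A rough tile-wise sketch of the idea follows: each tile's intensities are linearly rescaled to the full 0-255 range, which stretches the dynamic range without reshaping the histogram. The tile size is an assumption, and the actual MDRHM profiling step is simplified to a pure min-max stretch.

    ```python
    # Tile-wise dynamic range stretch to 0-255 (simplified MDRHM-style sketch).
    import numpy as np

    def tilewise_stretch(img, tile=64):
        out = img.astype(np.float64).copy()
        for r in range(0, img.shape[0], tile):
            for c in range(0, img.shape[1], tile):
                t = out[r:r + tile, c:c + tile]
                lo, hi = t.min(), t.max()
                if hi > lo:                       # skip flat tiles (avoid div by zero)
                    out[r:r + tile, c:c + tile] = (t - lo) * 255.0 / (hi - lo)
        return out.astype(np.uint8)
    ```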

  16. Gastrointestinal bleeding detection in wireless capsule endoscopy images using handcrafted and CNN features.

    PubMed

    Xiao Jia; Meng, Max Q-H

    2017-07-01

    Gastrointestinal (GI) bleeding detection plays an essential role in wireless capsule endoscopy (WCE) examination. In this paper, we present a new approach for WCE bleeding detection that combines handcrafted (HC) features and convolutional neural network (CNN) features. Compared with our previous work, a smaller-scale CNN architecture is constructed to lower the computational cost. In experiments, we show that the proposed strategy is highly capable when training data is limited, and yields comparable or better results than the latest methods.

  17. Use of the Contour Method to Determine Autofrettage Residual Stresses: A Proposed Experimental Procedure

    DTIC Science & Technology

    2013-05-01

    Autofrettage of a long tube: residual hoop, radial and axial stresses, 70% overstrain, numerical, open-end. Autofrettage of A723 steel including non-linear ... concentrate axial stresses, which are expected to range between 18% of yield in compression at the bore and 15% in tension at the OD. So the zone of the ... experiments is that they were conducted on axially thin (quasi plane stress) ring specimens cut from much longer gun tubes. A recent paper [2

  18. Simple and efficient sustainable semi-synthesis of oleacein [2-(3,4-hydroxyphenyl) ethyl (3S,4E)-4-formyl-3-(2-oxoethyl)hex-4-enoate] as potential additive for edible oils.

    PubMed

    Costanzo, Paola; Bonacci, Sonia; Cariati, Luca; Nardi, Monica; Oliverio, Manuela; Procopio, Antonio

    2018-04-15

    A simple and environmentally friendly microwave-assisted method to produce oleacein in good yield from readily available oleuropein is presented here. The methodology is proposed for producing appropriate amounts of hydroxytyrosol derivatives to enrich a commercial oil, yielding an oil with beneficial effects on human health. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Disocclusion: a variational approach using level lines.

    PubMed

    Masnou, Simon

    2002-01-01

    Object recognition, robot vision, image and film restoration may require the ability to perform disocclusion. We call disocclusion the recovery of occluded areas in a digital image by interpolation from their vicinity. It is shown in this paper how disocclusion can be performed by means of the level-lines structure, which offers a reliable, complete and contrast-invariant representation of images. Level-lines based disocclusion yields a solution that may have strong discontinuities. The proposed method is compatible with Kanizsa's amodal completion theory.

  20. Transfer function modeling of damping mechanisms in viscoelastic plates

    NASA Technical Reports Server (NTRS)

    Slater, J. C.; Inman, D. J.

    1991-01-01

    This work formulates a method for modeling material damping characteristics in plates. The Sophie Germain equation of classical plate theory is modified to incorporate hysteresis effects represented by complex stiffness using the transfer function approach proposed by Golla and Hughes (1985); however, the procedure is not limited to this representation. The governing characteristic equation is decoupled through separation of variables, yielding a solution similar to that of undamped classical plate theory and allowing solution of the steady-state as well as the transient response problem.
