Method to improve commercial bonded SOI material
Maris, Humphrey John; Sadana, Devendra Kumar
2000-07-11
A method of improving the bonding characteristics of a previously bonded silicon-on-insulator (SOI) structure is provided. The improvement in the bonding characteristics is achieved in the present invention by, optionally, forming an oxide cap layer on the silicon surface of the bonded SOI structure and then annealing either the uncapped or oxide-capped structure in a slightly oxidizing ambient at temperatures greater than 1200 °C. Also provided herein is a method for detecting the bonding characteristics of previously bonded SOI structures. According to this aspect of the present invention, a picosecond laser pulse technique is employed to determine the bonding imperfections of previously bonded SOI structures.
Spectra library assisted de novo peptide sequencing for HCD and ETD spectra pairs.
Yan, Yan; Zhang, Kaizhong
2016-12-23
De novo peptide sequencing via tandem mass spectrometry (MS/MS) has developed rapidly in recent years. With the use of spectra pairs from the same peptide under different fragmentation modes, the performance of de novo sequencing is greatly improved. Currently, with large amounts of spectra sequenced every day, spectra libraries containing tens of thousands of annotated experimental MS/MS spectra have become available. These libraries provide information on spectral properties and thus have the potential to be used with de novo sequencing to improve its performance. In this study, an improved de novo sequencing method assisted by spectra libraries is proposed. It uses spectra libraries as training datasets and introduces significance scores for the features used in our previous de novo sequencing method for HCD and ETD spectra pairs. Two pairs of HCD and ETD spectral datasets were used to test the performance of the proposed method and our previous method. The results show that the proposed method achieves better sequencing accuracy, with higher-ranked correct sequences and less computational time. In summary, this paper proposes an advanced de novo sequencing method for HCD and ETD spectra pairs that uses information from spectra libraries and significantly improves on our previous method.
Improved Method for Prediction of Attainable Wing Leading-Edge Thrust
NASA Technical Reports Server (NTRS)
Carlson, Harry W.; McElroy, Marcus O.; Lessard, Wendy B.; McCullers, L. Arnold
1996-01-01
Prediction of the loss of wing leading-edge thrust and the accompanying increase in drag due to lift, when flow is not completely attached, presents a difficult but commonly encountered problem. A method (called the previous method) for the prediction of attainable leading-edge thrust and the resultant effect on airplane aerodynamic performance has been in use for more than a decade. Recently, the method has been revised to enhance its applicability to current airplane design and evaluation problems. The improved method (called the present method) provides for a greater range of airfoil shapes from very sharp to very blunt leading edges. It is also based on a wider range of Reynolds numbers than was available for the previous method. The present method, when employed in computer codes for aerodynamic analysis, generally results in improved correlation with experimental wing-body axial-force data and provides reasonable estimates of the measured drag.
An Improved Method for Studying the Enzyme-Catalyzed Oxidation of Glucose Using Luminescent Probes
ERIC Educational Resources Information Center
Bare, William D.; Pham, Chi V.; Cuber, Matthew; Demas, J. N.
2007-01-01
A new method is presented for measuring the rate of the oxidation of glucose in the presence of glucose oxidase. The improved method employs luminescence measurements to directly determine the concentration of oxygen in real time, thus obviating complicated reaction schemes employed in previous methods. Our method has been used to determine…
NASA Astrophysics Data System (ADS)
Jang, T. S.
2018-03-01
A dispersion-relation-preserving (DRP) method, a semi-analytic iterative procedure, was proposed by Jang (2017) for integrating the classical Boussinesq equation. It has been shown to be a powerful numerical procedure for simulating a nonlinear dispersive wave system because it preserves the dispersion relation; however, it has notable flaws, such as a restriction on nonlinear wave amplitude and a small region of convergence (ROC). To remedy these flaws, a new DRP method aimed at improving convergence performance is proposed in this paper. The improved method is proved to have convergence properties and a dispersion-relation-preserving nature for small waves; unique existence of the solutions is also proved. In addition, a numerical experiment confirms that the method captures nonlinear wave phenomena such as moving solitary waves and their binary collisions at different wave amplitudes. In particular, it exhibits a much wider ROC than the previous method of Jang (2017), and it enables the numerical simulation of large-amplitude nonlinear dispersive waves. In fact, it is demonstrated to simulate a large-amplitude solitary wave and the collision of two solitary waves with large amplitudes, which the previous method failed to simulate. These results represent a major improvement in practice over Jang (2017).
Improved Method for Determining the Heat Capacity of Metals
ERIC Educational Resources Information Center
Barth, Roger; Moran, Michael J.
2014-01-01
An improved procedure for laboratory determination of the heat capacities of metals is described. The temperature of cold water is continuously recorded with a computer-interfaced temperature probe and the room temperature metal is added. The method is more accurate and faster than previous methods. It allows students to get accurate measurements…
Illias, Hazlee Azil; Chai, Xin Rui; Abu Bakar, Ab Halim; Mokhlis, Hazlie
2015-01-01
It is important to predict incipient faults in transformer oil accurately so that maintenance of the transformer oil can be performed correctly, reducing the cost of maintenance and minimising errors. Dissolved gas analysis (DGA) has been widely used to predict incipient faults in power transformers. However, the existing DGA methods sometimes yield inaccurate predictions because each method is only suitable for certain conditions. Many previous works have reported on the use of intelligence methods to predict transformer faults, but the accuracy of the previously proposed methods can still be improved. Since the combination of artificial neural network (ANN) and particle swarm optimisation (PSO) techniques has never been used in previously reported work, this work proposes a combination of ANN and various PSO techniques to predict the transformer incipient fault. The advantages of PSO are simplicity and easy implementation. The effectiveness of the various PSO techniques in combination with ANN is validated by comparison with the results from the actual fault diagnosis, an existing diagnosis method, and ANN alone. The results from the proposed methods were also compared with previously reported work to show the improvement achieved. It was found that the proposed ANN-Evolutionary PSO method yields a higher percentage of correct transformer fault-type identification than the existing diagnosis method and previously reported works. PMID:26103634
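To make the training scheme concrete, the sketch below shows one common way to couple PSO with a small feedforward network: each particle is a flattened weight vector, and the swarm searches for weights that maximise classification likelihood on labelled DGA records. The 5-gas/4-fault layout, the swarm settings, and the synthetic data are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_IN, N_HID, N_OUT = 5, 8, 4          # 5 dissolved gases -> 4 fault classes (assumed)
N_W = N_IN * N_HID + N_HID + N_HID * N_OUT + N_OUT  # total weights + biases

def forward(w, X):
    """Run the 5-8-4 network with weights unpacked from the flat vector w."""
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID * N_OUT].reshape(N_HID, N_OUT); i += N_HID * N_OUT
    b2 = w[i:]
    h = np.tanh(X @ W1 + b1)
    z = h @ W2 + b2
    z -= z.max(axis=1, keepdims=True)          # numerically stable softmax
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def fitness(w, X, y):
    """Mean log-likelihood of the true classes: higher is better."""
    p = forward(w, X)
    return np.log(p[np.arange(len(y)), y] + 1e-12).mean()

def pso_train(X, y, n_particles=30, iters=200, w_in=0.7, c1=1.5, c2=1.5):
    pos = rng.normal(0, 0.5, (n_particles, N_W))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y) for p in pos])
    gbest = pbest[pbest_f.argmax()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = np.array([fitness(p, X, y) for p in pos])
        improved = f > pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmax()].copy()
    return gbest

# Toy usage with synthetic gas-ratio data (real work would use labelled DGA records).
X = rng.random((200, N_IN)); y = rng.integers(0, N_OUT, 200)
w = pso_train(X, y)
print("training accuracy:", (forward(w, X).argmax(1) == y).mean())
```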
Kaplan, Samuel; Chertock, Alan J.; Punches, James R.
1977-01-01
A method for spacing fast reactor fuel rods using a wire wrapper improved by orienting the wire-wrapped fuel rods in a unique manner which introduces desirable performance characteristics not attainable by previous wire-wrapped designs. Use of this method in a liquid metal fast breeder reactor results in: (a) improved mechanical performance, (b) improved rod-to-rod contact, (c) reduced steel volume, and (d) improved thermal-hydraulic performance. The method produces a "locked wrap" design which tends to lock the rods together at each of the wire cluster locations.
Image enhancement in positron emission mammography
NASA Astrophysics Data System (ADS)
Slavine, Nikolai V.; Seiler, Stephen; McColl, Roderick W.; Lenkinski, Robert E.
2017-02-01
Purpose: To evaluate an efficient iterative deconvolution method (RSEMD) for improving the quantitative accuracy of breast images previously reconstructed by a commercial positron emission mammography (PEM) scanner. Materials and Methods: The RSEMD method was tested on breast phantom data and clinical PEM imaging data. Data acquisition was performed on a commercial Naviscan Flex Solo II PEM camera. The method was applied to patient breast images previously reconstructed with Naviscan software (MLEM) to determine improvements in resolution, signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR). Results: In all of the patients' breast studies the post-processed images proved to have higher resolution and lower noise as compared with images reconstructed by conventional methods. In general, the values of SNR reached a plateau at around 6 iterations, with an improvement factor of about 2 for post-processed Flex Solo II PEM images. Improvements in image resolution after the application of RSEMD have also been demonstrated. Conclusions: A rapidly converging, iterative deconvolution algorithm with a novel resolution subsets-based approach (RSEMD) that operates on patient DICOM images has been used for quantitative improvement in breast imaging. The RSEMD method can be applied to clinical PEM images to improve image quality to diagnostically acceptable levels and will be crucial in order to facilitate diagnosis of tumor progression at the earliest stages. The RSEMD method can be considered as an extended Richardson-Lucy algorithm with multiple resolution levels (resolution subsets).
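For readers unfamiliar with the underlying algorithm, the sketch below shows the plain Richardson-Lucy update that RSEMD extends; the resolution-subset machinery itself is not reproduced. The Gaussian PSF and the 6-iteration default (chosen to echo the SNR plateau reported above) are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

def richardson_lucy(image, psf, n_iter=6):
    """Classic RL update: estimate <- estimate * ((image / (estimate*psf)) * psf_flipped)."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = convolve(estimate, psf, mode="reflect")
        ratio = image / (blurred + 1e-12)          # avoid division by zero
        estimate *= convolve(ratio, psf_mirror, mode="reflect")
    return estimate

# Toy usage: blur a point source and partially recover it.
x, y = np.mgrid[-3:4, -3:4]
psf = np.exp(-(x**2 + y**2) / 2.0); psf /= psf.sum()
truth = np.zeros((64, 64)); truth[32, 32] = 1.0
observed = convolve(truth, psf, mode="reflect")
restored = richardson_lucy(observed, psf)
print("peak sharpened from %.3f to %.3f" % (observed.max(), restored.max()))
```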
Improved Dot Diffusion For Image Halftoning
1999-01-01
The dot diffusion method for digital halftoning has the advantage of parallelism, unlike the error diffusion method. The method was recently improved by optimization of the so-called class matrix, so that the resulting halftones are comparable to error-diffused halftones. In this paper we first review the dot diffusion method. Previously, 8 × 8 class matrices were used for the dot diffusion method. A problem with this size of class matrix is
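As background for the review mentioned above, a minimal sketch of classic dot diffusion follows: pixels are visited in the order given by a tiled class matrix, and quantisation error is diffused only to neighbours with a higher class number (those not yet processed), with orthogonal neighbours weighted 2 and diagonal neighbours weighted 1. The 4 × 4 class matrix here is an arbitrary example, not one of the optimized class matrices discussed in the paper.

```python
import numpy as np

CLASS = np.array([[ 0,  8,  2, 10],
                  [12,  4, 14,  6],
                  [ 3, 11,  1,  9],
                  [15,  7, 13,  5]])

def dot_diffuse(img):
    """Halftone a float image in [0, 1]; returns a 0/1 array."""
    h, w = img.shape
    work = img.astype(float).copy()
    out = np.zeros_like(work)
    cls = CLASS[np.arange(h)[:, None] % 4, np.arange(w)[None, :] % 4]
    for k in range(CLASS.size):                    # process class 0, 1, 2, ...
        for y, x in zip(*np.nonzero(cls == k)):
            out[y, x] = 1.0 if work[y, x] >= 0.5 else 0.0
            err = work[y, x] - out[y, x]
            # Gather neighbours with a higher class number (not yet processed);
            # orthogonal neighbours get weight 2, diagonal neighbours weight 1.
            nbrs, wts = [], []
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    yy, xx = y + dy, x + dx
                    if (dy or dx) and 0 <= yy < h and 0 <= xx < w \
                            and cls[yy, xx] > k:
                        nbrs.append((yy, xx))
                        wts.append(2 if dy == 0 or dx == 0 else 1)
            for (yy, xx), wt in zip(nbrs, wts):    # diffuse the error forward
                work[yy, xx] += err * wt / sum(wts)
    return out

# Toy usage: halftone a smooth gradient.
gradient = np.tile(np.linspace(0, 1, 64), (64, 1))
halftone = dot_diffuse(gradient)
print("ink coverage: %.2f (target 0.50)" % halftone.mean())
```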
Stoutland, Alicia; Long, Ross E; Mercado, Ana; Daskalogiannakis, John; Hathaway, Ronald R; Russell, Kathleen A; Singer, Emily; Semb, Gunvor; Shaw, William C
2017-11-01
The purpose of this study was to investigate ways to improve rater reliability and satisfaction in nasolabial esthetic evaluations of patients with complete unilateral cleft lip and palate (UCLP) by modifying the Asher-McDade method with Q-sort methodology. Blinded ratings of cropped photographs of one hundred forty-nine 5- to 7-year-old consecutively treated patients with complete UCLP from 4 different centers were used in a rating of frontal and profile nasolabial esthetic outcomes by 6 judges involved in the Americleft Project's intercenter outcome comparisons. Four judges had rated in previous studies using the original Asher-McDade approach. For the Q-sort modification, rather than projection of images, each judge had cards with frontal and profile photographs of each patient and rated them on a scale of 1 to 5 for vermillion border, nasolabial frontal, and profile, using the Q-sort method with placement of cards into categories 1 to 5. Inter- and intrarater reliabilities were calculated using the weighted kappa (95% confidence interval). For the 4 raters, the reliabilities were compared with those in previous studies. There was no significant improvement in inter-rater reliabilities using the new method, but intrarater reliability consistently improved. All raters preferred the Q-sort method with rating cards to a PowerPoint presentation of photos, because of the ability to continuously compare photos and adjust relative ratings between patients; this improved internal consistency in rating compared to previous studies using the original Asher-McDade method.
NASA Technical Reports Server (NTRS)
Sitterley, T. E.
1974-01-01
The effectiveness of an improved static retraining method was evaluated for a simulated space vehicle approach and landing under instrument and visual flight conditions. Experienced pilots were trained and then tested after 4 months without flying, to compare their performance using the improved method with three methods previously evaluated. Use of the improved static retraining method resulted in no practical or significant skill degradation and was found to be even more effective than methods using a dynamic presentation of visual cues. The results suggested that properly structured open-loop methods of flight control task retraining are feasible.
GStream: Improving SNP and CNV Coverage on Genome-Wide Association Studies
Alonso, Arnald; Marsal, Sara; Tortosa, Raül; Canela-Xandri, Oriol; Julià, Antonio
2013-01-01
We present GStream, a method that combines genome-wide SNP and CNV genotyping in the Illumina microarray platform with unprecedented accuracy. This new method outperforms previous well-established SNP genotyping software. More importantly, the CNV calling algorithm of GStream dramatically improves the results obtained by previous state-of-the-art methods and yields an accuracy that is close to that obtained by purely CNV-oriented technologies like Comparative Genomic Hybridization (CGH). We demonstrate the superior performance of GStream using microarray data generated from HapMap samples. Using the reference CNV calls generated by the 1000 Genomes Project (1KGP) and well-known studies on whole genome CNV characterization based either on CGH or genotyping microarray technologies, we show that GStream can increase the number of reliably detected variants up to 25% compared to previously developed methods. Furthermore, the increased genome coverage provided by GStream allows the discovery of CNVs in close linkage disequilibrium with SNPs, previously associated with disease risk in published Genome-Wide Association Studies (GWAS). These results could provide important insights into the biological mechanism underlying the detected disease risk association. With GStream, large-scale GWAS will not only benefit from the combined genotyping of SNPs and CNVs at an unprecedented accuracy, but will also take advantage of the computational efficiency of the method. PMID:23844243
RESEARCH ASSOCIATED WITH THE DEVELOPMENT OF EPA METHOD 552.2
The work presented in this paper entails the development of a method for haloacetic acid (HAA) analysis, Environmental Protection Agency (EPA) method 552.2, that improves the safety and efficiency of previous methods and incorporates three additional trihalogenated acetic acids: b...
Affine Projection Algorithm with Improved Data-Selective Method Using the Condition Number
NASA Astrophysics Data System (ADS)
Ban, Sung Jun; Lee, Chang Woo; Kim, Sang Woo
Recently, a data-selective method has been proposed to achieve low misalignment in affine projection algorithm (APA) by keeping the condition number of an input data matrix small. We present an improved method, and a complexity reduction algorithm for the APA with the data-selective method. Experimental results show that the proposed algorithm has lower misalignment and a lower condition number for an input data matrix than both the conventional APA and the APA with the previous data-selective method.
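A minimal sketch of the data-selection idea follows: the affine projection update is applied only when the newest input vector keeps the input data matrix well conditioned, which is the property exploited above to lower misalignment. The filter length, projection order, step size, and condition-number threshold are illustrative assumptions.

```python
import numpy as np

def apa_identify(x, d, L=8, K=4, mu=0.5, delta=1e-4, cond_max=50.0):
    """Identify an L-tap filter from input x and desired signal d."""
    w = np.zeros(L)
    rows, dvals = [], []                      # most recent accepted data pairs
    for n in range(L - 1, len(x)):
        u = x[n - L + 1:n + 1][::-1]          # regressor [x(n), ..., x(n-L+1)]
        rows.append(u); dvals.append(d[n])
        rows, dvals = rows[-K:], dvals[-K:]   # keep at most K rows
        A = np.array(rows)
        if np.linalg.cond(A) > cond_max:      # data selection: reject an input
            rows.pop(); dvals.pop()           # that worsens the conditioning
            continue
        e = np.array(dvals) - A @ w
        w += mu * A.T @ np.linalg.solve(A @ A.T + delta * np.eye(len(rows)), e)
    return w

# Toy usage: recover a known FIR system driven by coloured (correlated) input,
# the situation where APA outperforms simpler LMS-type updates.
rng = np.random.default_rng(1)
h = rng.normal(size=8)
x = np.convolve(rng.normal(size=4000), [1.0, 0.8, 0.4])[:4000]
d = np.convolve(x, h)[:4000]
print("coefficient error:", np.linalg.norm(apa_identify(x, d) - h))
```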
ERIC Educational Resources Information Center
Lin, P. L.; Tan, W. H.
2003-01-01
Presents a new method to improve the performance of query processing in a spatial database. Experiments demonstrated that performance of database systems can be improved because both the number of objects accessed and number of objects requiring detailed inspection are much less than those in the previous approach. (AEF)
NASA Astrophysics Data System (ADS)
Batool, Fiza; Akram, Ghazala
2018-05-01
An improved (G'/G)-expansion method is proposed for extracting more general solitary wave solutions of the nonlinear fractional Cahn-Allen equation. The temporal fractional derivative is taken in the sense of Jumarie's fractional derivative. The results of this article are generalized and extended versions of previously reported solutions.
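For context, the standard (G'/G)-expansion ansatz that the improved method generalizes can be written as follows (this is the textbook form, not the authors' extended version):

\[
u(\xi) = \sum_{i=0}^{N} a_i \left(\frac{G'}{G}\right)^{i}, \qquad G''(\xi) + \lambda G'(\xi) + \mu G(\xi) = 0,
\]

where the balancing number N is fixed by the highest-order terms of the equation, and the auxiliary solutions are hyperbolic (solitary waves) for \(\lambda^2 - 4\mu > 0\), trigonometric for \(\lambda^2 - 4\mu < 0\), and rational for \(\lambda^2 - 4\mu = 0\).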
puma: a Bioconductor package for propagating uncertainty in microarray analysis.
Pearson, Richard D; Liu, Xuejun; Sanguinetti, Guido; Milo, Marta; Lawrence, Neil D; Rattray, Magnus
2009-07-09
Most analyses of microarray data are based on point estimates of expression levels and ignore the uncertainty of such estimates. By determining uncertainties from Affymetrix GeneChip data and propagating these uncertainties to downstream analyses it has been shown that we can improve results of differential expression detection, principal component analysis and clustering. Previously, implementations of these uncertainty propagation methods have only been available as separate packages, written in different languages. Previous implementations have also suffered from being very costly to compute, and in the case of differential expression detection, have been limited in the experimental designs to which they can be applied. puma is a Bioconductor package incorporating a suite of analysis methods for use on Affymetrix GeneChip data. puma extends the differential expression detection methods of previous work from the 2-class case to the multi-factorial case. puma can be used to automatically create design and contrast matrices for typical experimental designs, which can be used both within the package itself but also in other Bioconductor packages. The implementation of differential expression detection methods has been parallelised leading to significant decreases in processing time on a range of computer architectures. puma incorporates the first R implementation of an uncertainty propagation version of principal component analysis, and an implementation of a clustering method based on uncertainty propagation. All of these techniques are brought together in a single, easy-to-use package with clear, task-based documentation. For the first time, the puma package makes a suite of uncertainty propagation methods available to a general audience. These methods can be used to improve results from more traditional analyses of microarray data. puma also offers improvements in terms of scope and speed of execution over previously available methods. puma is recommended for anyone working with the Affymetrix GeneChip platform for gene expression analysis and can also be applied more generally.
Propagation-based x-ray phase contrast imaging using an iterative phase diversity technique
NASA Astrophysics Data System (ADS)
Carroll, Aidan J.; van Riessen, Grant A.; Balaur, Eugeniu; Dolbnya, Igor P.; Tran, Giang N.; Peele, Andrew G.
2018-03-01
Through the use of a phase diversity technique, we demonstrate a near-field in-line x-ray phase contrast algorithm that provides improved object reconstruction when compared to our previous iterative methods for a homogeneous sample. Like our previous methods, the new technique uses the sample refractive index distribution during the reconstruction process. The technique complements existing monochromatic and polychromatic methods and is useful in situations where experimental phase contrast data is affected by noise.
NASA Astrophysics Data System (ADS)
Li, Tianyi; Schlüter, Steffen; Dragila, Maria Ines; Wildenschild, Dorthe
2018-04-01
We present an improved method for estimating interfacial curvatures from x-ray computed microtomography (CMT) data that significantly advances the potential for this tool to unravel the mechanisms and phenomena associated with multi-phase fluid motion in porous media. CMT data, used to analyze the spatial distribution and capillary pressure-saturation (Pc-S) relationships of liquid phases, requires accurate estimates of interfacial curvature. Our improved method for curvature estimation combines selective interface modification and distance weighting approaches. It was verified against synthetic (analytical computer-generated) and real image data sets, demonstrating a vast improvement over previous methods. Using this new tool on a previously published data set (multiphase flow) yielded important new insights regarding the pressure state of the disconnected nonwetting phase during drainage and imbibition. The trapped and disconnected non-wetting phase delimits its own hysteretic Pc-S curve that inhabits the space within the main hysteretic Pc-S loop of the connected wetting phase. Data suggests that the pressure of the disconnected, non-wetting phase is strongly modified by the pore geometry rather than solely by the bulk liquid phase that surrounds it.
NASA Astrophysics Data System (ADS)
Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.
2015-07-01
In this paper we present improved methods for discriminating and quantifying Primary Biological Aerosol Particles (PBAP) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10^6 points on a desktop computer, allowing for each fluorescent particle in a dataset to be explicitly clustered. This reduces the potential for misattribution found in the subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient dataset. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescent measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best performing methods were applied to the BEACHON-RoMBAS ambient dataset, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution due to poor centroid definition and failure to assign particles to a cluster as a result of the subsampling and comparative attribution method employed by WASP. The methods used here allow for the entire fluorescent population of particles to be analysed, yielding an explicit cluster attribution for each particle, improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
NASA Astrophysics Data System (ADS)
Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi
2013-07-01
For the facilitation of analysis and elimination of the operator dependence in estimating the myocardial function in echocardiography, we have previously developed a method for automated identification of the heart wall. However, there are misclassified regions because the magnitude-squared coherence (MSC) function of echo signals, which is one of the features in the previous method, is sensitively affected by the clutter components such as multiple reflection and off-axis echo from external tissue or the nearby myocardium. The objective of the present study is to improve the performance of automated identification of the heart wall. For this purpose, we proposed a method to suppress the effect of the clutter components on the MSC of echo signals by applying an adaptive moving target indicator (MTI) filter to echo signals. In vivo experimental results showed that the misclassified regions were significantly reduced using our proposed method in the longitudinal axis view of the heart.
Asteroid mass estimation with Markov-chain Monte Carlo
NASA Astrophysics Data System (ADS)
Siltala, Lauri; Granvik, Mikael
2017-10-01
Estimates for asteroid masses are based on their gravitational perturbations on the orbits of other objects such as Mars, spacecraft, or other asteroids and/or their satellites. In the case of asteroid-asteroid perturbations, this leads to a 13-dimensional inverse problem at minimum where the aim is to derive the mass of the perturbing asteroid and six orbital elements for both the perturbing asteroid and the test asteroid by fitting their trajectories to their observed positions. The fitting has typically been carried out with linearized methods such as the least-squares method. These methods need to make certain assumptions regarding the shape of the probability distributions of the model parameters. This is problematic as these assumptions have not been validated. We have developed a new Markov-chain Monte Carlo method for mass estimation which does not require an assumption regarding the shape of the parameter distribution. Recently, we have implemented several upgrades to our MCMC method including improved schemes for handling observational errors and outlier data alongside the option to consider multiple perturbers and/or test asteroids simultaneously. These upgrades promise significantly improved results: based on two separate results for (19) Fortuna with different test asteroids we previously hypothesized that simultaneous use of both test asteroids would lead to an improved result similar to the average literature value for (19) Fortuna with substantially reduced uncertainties. Our upgraded algorithm indeed finds a result essentially equal to the literature value for this asteroid, confirming our previous hypothesis. Here we show these new results for (19) Fortuna and other example cases, and compare our results to previous estimates. Finally, we discuss our plans to improve our algorithm further, particularly in connection with Gaia.
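A minimal sketch of the sampling idea follows: a Metropolis random walk over the perturber mass, replacing the Gaussian-shape assumption of linearized least squares with a directly sampled posterior. The one-parameter toy likelihood stands in for the full 13-dimensional orbit-fitting problem, and all numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

RESPONSE = 1.0e8   # assumed linear sensitivity of the test-asteroid orbit to mass

def log_likelihood(mass, obs, sigma):
    """Toy stand-in: observed perturbation signatures ~ mass * RESPONSE."""
    return -0.5 * np.sum((obs - mass * RESPONSE) ** 2) / sigma ** 2

def metropolis(obs, sigma, n_steps=20000, step=5e-12):
    mass = 1.0e-10                     # initial guess in solar masses (assumed)
    logp = log_likelihood(mass, obs, sigma)
    chain = np.empty(n_steps)
    for i in range(n_steps):
        prop = mass + rng.normal(0.0, step)       # symmetric random-walk move
        lp = log_likelihood(prop, obs, sigma) if prop > 0 else -np.inf
        if np.log(rng.random()) < lp - logp:      # Metropolis accept/reject
            mass, logp = prop, lp
        chain[i] = mass
    return chain

# Toy usage: synthetic "observations" from a known mass, then a full posterior
# (mean and spread) instead of a point estimate with an assumed Gaussian shape.
true_mass = 4.0e-10
obs = true_mass * RESPONSE + rng.normal(0.0, 0.005, size=50)
chain = metropolis(obs, sigma=0.005)[5000:]       # drop burn-in
print("posterior mean %.2e +/- %.2e solar masses" % (chain.mean(), chain.std()))
```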
Training Compliance Control Yields Improvements in Drawing as a Function of Beery Scores
Snapp-Childs, Winona; Flatters, Ian; Fath, Aaron; Mon-Williams, Mark; Bingham, Geoffrey P.
2014-01-01
Many children have difficulty producing movements well enough to improve in sensori-motor learning. Previously, we developed a training method that supports active movement generation to allow improvement at a 3D tracing task requiring good compliance control. Here, we tested 7–8 year old children from several 2nd grade classrooms to determine whether 3D tracing performance could be predicted using the Beery VMI. We also examined whether 3D tracing training led to improvements in drawing. Baseline testing included the Beery, a drawing task on a tablet computer, and 3D tracing. We found that baseline performance in 3D tracing and drawing co-varied with the visual perception (VP) component of the Beery. Differences in 3D tracing between children scoring low versus high on the Beery VP replicated differences previously found between children with and without motor impairments, as did post-training performance that eliminated these differences. Drawing improved as a result of training in the 3D tracing task. The training method improved drawing and reduced differences predicted by Beery scores. PMID:24651280
Artifact interactions retard technological improvement: An empirical study
Magee, Christopher L.
2017-01-01
Empirical research has shown that performance improvement in many different technological domains occurs exponentially but with widely varying improvement rates. What causes some technologies to improve faster than others? Previous quantitative modeling research has identified artifact interactions, where a design change in one component influences others, as an important determinant of improvement rates. The models predict that the improvement rate for a domain is proportional to the inverse of the domain's interaction parameter. However, no empirical research has previously studied and tested the dependence of improvement rates on artifact interactions. A challenge to testing the dependence is that any method for measuring interactions has to be applicable to a wide variety of technologies. Here we propose a novel patent-based method that is both technology domain-agnostic and less costly than alternative methods. We use textual content from patent sets in 27 domains to find the influence of interactions on improvement rates. Qualitative analysis identified six specific keywords that signal artifact interactions. Patent sets from each domain were then examined to determine the total count of these 6 keywords in each domain, giving an estimate of artifact interactions in each domain. It is found that improvement rates are positively correlated with the inverse of the total count of keywords, with a Pearson correlation coefficient of +0.56 and a p-value of 0.002. The results agree with model predictions and provide, for the first time, empirical evidence that artifact interactions have a retarding effect on the improvement rates of technological domains. PMID:28777798
Improved adhesive method for microscopic examination of fungi in culture.
Rodriguez-Tudela, J L; Aviles, P
1991-01-01
A new method for the examination of molds that involves the use of a device that dispenses a thin layer of a transparent adhesive material over the surface of a coverslip is described. The advantages of this method over previous methods used for the microscopic examination of molds are delineated. PMID:1774269
Learning process mapping heuristics under stochastic sampling overheads
NASA Technical Reports Server (NTRS)
Ieumwananonthachai, Arthur; Wah, Benjamin W.
1991-01-01
A statistical method was developed previously for improving process mapping heuristics. The method systematically explores the space of possible heuristics under a specified time constraint. Its goal is to get the best possible heuristics while trading off the solution quality of the process mapping heuristics against their execution time. The statistical selection method is extended to take into consideration the variations in the amount of time used to evaluate heuristics on a problem instance. The improvement in performance obtained under this more realistic assumption is presented, along with some methods that alleviate the additional complexity.
Shintani, H
1985-05-31
Studies were made of the analytical conditions required for indirect photometric ion chromatography using ultraviolet photometric detection (UV method) for the determination of serum cations following a previously developed serum pre-treatment. The sensitivities of the conductivity detection (CD) and UV methods and the amounts of serum cations determined by both methods were compared. Attempts to improve the sensitivity of the conventional UV method are reported. It was found that the mobile phase previously reported by Small and Miller showed no quantitative response when more than 4 mM copper(II) sulphate pentahydrate was used. As a result, there was no significant difference in the amounts of serum cations shown by the CD and UV methods. However, by adding 0.5-5 mM cobalt(II) sulphate heptahydrate, nickel(II) sulphate hexahydrate, zinc(II) sulphate heptahydrate or cobalt(II) diammonium sulphate hexahydrate to 0.5-1.5 mM copper(II) sulphate pentahydrate, higher sensitivity and a quantitative response were attained.
MMASS: an optimized array-based method for assessing CpG island methylation.
Ibrahim, Ashraf E K; Thorne, Natalie P; Baird, Katie; Barbosa-Morais, Nuno L; Tavaré, Simon; Collins, V Peter; Wyllie, Andrew H; Arends, Mark J; Brenton, James D
2006-01-01
We describe an optimized microarray method for identifying genome-wide CpG island methylation called microarray-based methylation assessment of single samples (MMASS), which directly compares methylated to unmethylated sequences within a single sample. To improve on previous methods, we used bioinformatic analysis to predict an optimized combination of methylation-sensitive enzymes that had the highest utility for CpG-island probes, and different methods to produce unmethylated representations of test DNA for more sensitive detection of differential methylation by hybridization. Subtraction or methylation-dependent digestion with McrBC was used with optimized (MMASS-v2) or previously described (MMASS-v1, MMASS-sub) methylation-sensitive enzyme combinations and compared with a published McrBC method. Comparison was performed using DNA from the cell line HCT116. We show that the distribution of methylation microarray data is inherently skewed and requires exogenous spiked controls for normalization, and that analysis of digestion of methylated and unmethylated control sequences, together with linear fit models of replicate data, showed superior statistical power for the MMASS-v2 method. Comparison with previous methylation data for HCT116 and validation of CpG islands from PXMP4, SFRP2, DCC, RARB and TSEN2 confirmed the accuracy of MMASS-v2 results. The MMASS-v2 method offers improved sensitivity and statistical power for high-throughput microarray identification of differential methylation.
Warren, Jamie M; Pawliszyn, Janusz
2011-12-16
For air/headspace analysis, needle trap devices (NTDs) are applicable for sampling a wide range of volatiles such as benzene and alkanes, and semi-volatile particulate-bound compounds such as pyrene. This paper describes a new NTD that is simpler to produce and improves performance relative to previous NTD designs. An NTD utilizing a side-hole needle used a modified tip, which removed the need for epoxy glue to hold sorbent particles inside the NTD. This design also improved the seal between the NTD and the narrow-neck liner of the GC injector, thereby improving the desorption efficiency. A new packing method has been developed and evaluated using solvent to pack the device, and is compared to NTDs prepared using the previous vacuum aspiration method. The slurry packing method reduced preparation time and improved reproducibility between NTDs. To evaluate the NTDs, automated headspace extraction was completed using benzene, toluene, ethylbenzene, p-xylene (BTEX), anthracene, and pyrene (PAH). NTD geometries evaluated include: blunt tip with side-hole needle, tapered tip with side-hole needle, slider tip with side-hole, dome tapered tip with side-hole and blunt with no side-hole needle (expanded desorptive flow). Results demonstrate that the tapered and slider tip NTDs performed with improved desorption efficiency.
Wu, Yicong; Chandris, Panagiotis; Winter, Peter W.; Kim, Edward Y.; Jaumouillé, Valentin; Kumar, Abhishek; Guo, Min; Leung, Jacqueline M.; Smith, Corey; Rey-Suarez, Ivan; Liu, Huafeng; Waterman, Clare M.; Ramamurthi, Kumaran S.; La Riviere, Patrick J.; Shroff, Hari
2016-01-01
Most fluorescence microscopes are inefficient, collecting only a small fraction of the emitted light at any instant. Besides wasting valuable signal, this inefficiency also reduces spatial resolution and causes imaging volumes to exhibit significant resolution anisotropy. We describe microscopic and computational techniques that address these problems by simultaneously capturing and subsequently fusing and deconvolving multiple specimen views. Unlike previous methods that serially capture multiple views, our approach improves spatial resolution without introducing any additional illumination dose or compromising temporal resolution relative to conventional imaging. When applying our methods to single-view wide-field or dual-view light-sheet microscopy, we achieve a twofold improvement in volumetric resolution (~235 nm × 235 nm × 340 nm) as demonstrated on a variety of samples including microtubules in Toxoplasma gondii, SpoVM in sporulating Bacillus subtilis, and multiple protein distributions and organelles in eukaryotic cells. In every case, spatial resolution is improved with no drawback by harnessing previously unused fluorescence. PMID:27761486
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however, they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. As a case study, the optimized approach was applied to correct the LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of magnitude and temporal variance: correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
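A minimal sketch of the bias-correction step follows: estimate the model-minus-observation bias at the stations, interpolate it over the grid, and subtract it from the modelled field. Plain inverse-distance weighting stands in for the paper's bias kriging to keep the sketch self-contained; the grid, stations, and PM2.5 values are invented.

```python
import numpy as np

def fuse(model_grid, grid_xy, stn_xy, stn_obs, power=2.0):
    """Subtract an interpolated model-minus-observation bias from the grid."""
    # Bias at each station = modelled value at nearest grid cell - observation.
    d_stn = np.linalg.norm(grid_xy[:, None, :] - stn_xy[None, :, :], axis=2)
    nearest = d_stn.argmin(axis=0)
    bias_stn = model_grid[nearest] - stn_obs
    # Inverse-distance weights from every grid cell to every station.
    w = 1.0 / (d_stn ** power + 1e-9)
    bias_grid = (w * bias_stn[None, :]).sum(axis=1) / w.sum(axis=1)
    return model_grid - bias_grid

# Toy usage on a 1-D transect of PM2.5 (units ug/m3, values invented):
grid_xy = np.column_stack([np.linspace(0, 100, 51), np.zeros(51)])
model = 40 + 10 * np.sin(grid_xy[:, 0] / 15)           # modelled field, biased high
stn_xy = np.array([[10.0, 0.0], [50.0, 0.0], [90.0, 0.0]])
obs = np.array([35.0, 38.0, 30.0])                     # station measurements
fused = fuse(model, grid_xy, stn_xy, stn_obs=obs)
print("mean correction: %.1f ug/m3" % (model - fused).mean())
```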
Pattarino, Franco; Piepel, Greg; Rinaldi, Maurizio
2018-03-03
A paper by Foglio Bonda et al. published previously in this journal (2016, Vol. 83, pp. 175–183) discussed the use of mixture experiment design and modeling methods to study how the proportions of three components in an extemporaneous oral suspension affected the mean diameter of drug particles (Z_ave). The three components were itraconazole (ITZ), Tween 20 (TW20), and Methocel® E5 (E5). This commentary addresses some errors and other issues in the previous paper, and also discusses an improved model relating proportions of ITZ, TW20, and E5 to Z_ave. The improved model contains six of the 10 terms in the full-cubic mixture model, which were selected using a different cross-validation procedure than used in the previous paper. In conclusion, compared to the four-term model presented in the previous paper, the improved model fit the data better, had excellent cross-validation performance, and the predicted Z_ave of a validation point was within model uncertainty of the measured value.
Pattarino, Franco; Piepel, Greg; Rinaldi, Maurizio
2018-05-30
A paper by Foglio Bonda et al. published previously in this journal (2016, Vol. 83, pp. 175-183) discussed the use of mixture experiment design and modeling methods to study how the proportions of three components in an extemporaneous oral suspension affected the mean diameter of drug particles (Z_ave). The three components were itraconazole (ITZ), Tween 20 (TW20), and Methocel® E5 (E5). This commentary addresses some errors and other issues in the previous paper, and also discusses an improved model relating proportions of ITZ, TW20, and E5 to Z_ave. The improved model contains six of the 10 terms in the full-cubic mixture model, which were selected using a different cross-validation procedure than used in the previous paper. Compared to the four-term model presented in the previous paper, the improved model fit the data better, had excellent cross-validation performance, and the predicted Z_ave of a validation point was within model uncertainty of the measured value.
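For reference, the 10-term count mentioned above corresponds to the Scheffé full-cubic model for three components (a standard form, shown here without the paper's fitted coefficients):

\[
Z_{\mathrm{ave}} = \sum_{i=1}^{3} \beta_i x_i + \sum_{i<j} \beta_{ij} x_i x_j + \sum_{i<j} \delta_{ij} x_i x_j (x_i - x_j) + \beta_{123}\, x_1 x_2 x_3,
\]

where \(x_1, x_2, x_3\) are the proportions of ITZ, TW20, and E5 with \(x_1 + x_2 + x_3 = 1\); the three linear, three quadratic, three cubic-difference, and one ternary term give the 10 terms, of which the improved model retains six.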
Adeola, Abiodun A; Aworh, Ogugua C
2014-01-01
The effect of sodium benzoate on the quality attributes of an improved tamarind beverage during storage was investigated. Tamarind beverages were produced according to a previously reported improved method, with or without chemical preservative (100 mg/100 mL sodium benzoate). A tamarind beverage produced according to the traditional processing method served as the control. The tamarind beverages were stored for 4 months at room (29 ± 2°C) and refrigerated (4-10°C) temperatures. Samples were analyzed at regular intervals for chemical, sensory, and microbiological qualities. The appearance of coliforms or an overall acceptability score of 5.9 was used as the deterioration index. The control beverages deteriorated by the 2nd and 10th days at room and refrigerated temperatures, respectively. The improved tamarind beverage produced without sodium benzoate was stable for 3 and 5 weeks at room and refrigerated temperatures, respectively. Sodium benzoate extended the shelf life of the improved tamarind beverage to 6 and 13 weeks at room and refrigerated temperatures, respectively.
A case report: using SNOMED CT for grouping Adverse Drug Reactions Terms
Alecu, Iulian; Bousquet, Cedric; Jaulent, Marie-Christine
2008-01-01
Background: WHO-ART and MedDRA are medical terminologies used for the coding of adverse drug reactions in pharmacovigilance databases. MedDRA proposes 13 Special Search Categories (SSC) grouping terms associated with specific medical conditions. For instance, the SSC "Haemorrhage" includes 346 MedDRA terms, among which 55 are also WHO-ART terms. WHO-ART itself does not provide such groupings. Our main contention is that WHO-ART terms can be classified into semantic categories by using knowledge extracted from SNOMED CT. A previous paper presents the way WHO-ART term definitions have been automatically generated in a description logics formalism by using their corresponding SNOMED CT synonyms. Based on synonymy and the relative position of WHO-ART terms in SNOMED CT, specialization or generalization relationships could be inferred. This strategy is successful for grouping the WHO-ART terms present in most MedDRA SSCs. However, the strategy failed when SSCs were organized on a basis other than taxonomy. Methods: We propose a new method that improves the previous WHO-ART structure by integrating the associative relationships included in SNOMED CT. Results: The new method improves the groupings. For example, none of the 55 WHO-ART terms in the Haemorrhage SSC were matched using the previous method. With the new method, we improve the groupings and obtain 87% coverage of the Haemorrhage SSC. Conclusion: SNOMED CT's terminological structure can be used to perform automated groupings in WHO-ART. This work proves that groupings already present in the MedDRA SSCs (e.g. the Haemorrhage SSC) may be retrieved using classification in SNOMED CT. PMID:19007441
Aota, Arata; Date, Yasumoto; Terakado, Shingo; Ohmura, Naoya
2013-01-01
Polychlorinated biphenyls (PCBs) are persistent organic pollutants that are present in the insulating oil inside a large number of transformers. To aid in eliminating PCB-contaminated transformers, PCBs in oil need to be measured using a rapid and cost-effective analytical method. We previously reported a pretreatment method for the immunoassay of PCBs in oil using a large-scale multilayer column and a microchip with multiple microrecesses, which permitted concentrated solvent extraction. In this paper, we report on a more rapid and facile pretreatment method, without an evaporation process, achieved by improving the column and the microchip. In the miniaturized column, the decomposition and separation of oil were completed in 2 min. PCBs can be eluted from the capillary column at concentrations seven times higher than those from the previous column. The total volume of the microrecesses was increased by improving the microrecess structure, enabling extraction of four times the amount of PCBs achieved with the previous system. By interfacing the capillary column with the improved microchip, PCBs in the eluate from the column were extracted into dimethyl sulfoxide in the microrecesses with high enrichment and without the need for evaporation. Pretreatment was completed within 20 min. The pretreated oil was analyzed using a flow-based kinetic exclusion immunoassay. The limit of detection of PCBs in oil was 0.15 mg kg(-1), which satisfies the criterion set in Japan of 0.5 mg kg(-1).
ECHO: A reference-free short-read error correction algorithm
Kao, Wei-Chun; Chan, Andrew H.; Song, Yun S.
2011-01-01
Developing accurate, scalable algorithms to improve data quality is an important computational challenge associated with recent advances in high-throughput sequencing technology. In this study, a novel error-correction algorithm, called ECHO, is introduced for correcting base-call errors in short reads, without the need of a reference genome. Unlike most previous methods, ECHO does not require the user to specify parameters whose optimal values are typically unknown a priori. ECHO automatically sets the parameters in the assumed model and estimates error characteristics specific to each sequencing run, while maintaining a running time that is within the range of practical use. ECHO is based on a probabilistic model and is able to assign a quality score to each corrected base. Furthermore, it explicitly models heterozygosity in diploid genomes and provides a reference-free method for detecting bases that originated from heterozygous sites. On both real and simulated data, ECHO is able to improve the accuracy of previous error-correction methods by severalfold to an order of magnitude, depending on the sequence coverage depth and the position in the read. The improvement is most pronounced toward the end of the read, where previous methods become noticeably less effective. Using a whole-genome yeast data set, it is demonstrated here that ECHO is capable of coping with nonuniform coverage. Also, it is shown that using ECHO to perform error correction as a preprocessing step considerably facilitates de novo assembly, particularly in the case of low-to-moderate sequence coverage depth. PMID:21482625
A Comparison of Two Flashcard Drill Methods Targeting Word Recognition
ERIC Educational Resources Information Center
Volpe, Robert J.; Mule, Christina M.; Briesch, Amy M.; Joseph, Laurice M.; Burns, Matthew K.
2011-01-01
Traditional drill and practice (TD) and incremental rehearsal (IR) are two flashcard drill instructional methods previously noted to improve word recognition. The current study sought to compare the effectiveness and efficiency of these two methods, as assessed by next day retention assessments, under 2 conditions (i.e., opportunities to respond…
NASA Astrophysics Data System (ADS)
Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.
2015-11-01
In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 10^6 points on a desktop computer, allowing for each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4), where the optical size, asymmetry factor and fluorescent measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen-Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP), where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of the bacterial aerosol concentration by a factor of 5. We suggest that this is likely due to errors arising from misattribution due to poor centroid definition and failure to assign particles to a cluster as a result of the subsampling and comparative attribution method employed by WASP. The methods used here allow for the entire fluorescent population of particles to be analysed, yielding an explicit cluster attribution for each particle and improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
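A minimal sketch of the core analysis pipeline follows: z-score normalisation of WIBS-style particle parameters, then hierarchical agglomerative clustering with the Ward linkage and an explicit cluster attribution for every particle. The two synthetic particle populations are illustrative stand-ins for real UV-LIF data.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(3)

# Fake "particles": columns = optical size, asymmetry factor and three
# fluorescence channels; two populations standing in for two PBAP meta-classes.
a = rng.normal([3.0, 10.0, 50.0, 5.0, 1.0], 1.0, size=(500, 5))
b = rng.normal([1.0, 20.0, 5.0, 40.0, 2.0], 1.0, size=(500, 5))
X = np.vstack([a, b])

# z-score normalisation so no single parameter dominates the distance metric.
Xz = (X - X.mean(axis=0)) / X.std(axis=0)

Z = linkage(Xz, method="ward")                   # agglomerative cluster tree
labels = fcluster(Z, t=2, criterion="maxclust")  # cut into two meta-classes

# Every particle receives an explicit cluster attribution, as described above.
print("cluster sizes:", np.bincount(labels)[1:])
```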
Detection of no-model input-output pairs in closed-loop systems.
Potts, Alain Segundo; Alvarado, Christiam Segundo Morales; Garcia, Claudio
2017-11-01
The detection of no-model input-output (IO) pairs is important because it can speed up the multivariable system identification process, since all pairs with null transfer functions are discarded beforehand, and it can also improve the quality of the identified model, thus improving the performance of model-based controllers. In the available literature, the methods focus only on the open-loop case, where there is no controller effect forcing the main diagonal of the transfer matrix to one and all the other terms to zero. In this paper, a modification of a previous method able to detect no-model IO pairs in open-loop systems is presented, adapted to perform this duty in closed-loop systems. Tests are performed using both the traditional methods and the proposed one to show its effectiveness.
Borman, Andrew M; Fraser, Mark; Linton, Christopher J; Palmer, Michael D; Johnson, Elizabeth M
2010-06-01
Here, we present a significantly improved version of our previously published method for the extraction of fungal genomic DNA from pure cultures using Whatman FTA filter paper matrix technology. This modified protocol is extremely rapid, significantly more cost effective than our original method, and importantly, substantially reduces the problem of potential cross-contamination between sequential filters when employing FTA technology.
Improving Children's Working Memory and Classroom Performance
ERIC Educational Resources Information Center
St Clair-Thompson, Helen; Stevens, Ruth; Hunt, Alexandra; Bolder, Emma
2010-01-01
Previous research has demonstrated close relationships between working memory and children's scholastic attainment. The aim of the present study was to explore a method of improving working memory, using memory strategy training. Two hundred and fifty-four children aged five to eight years were tested on measures of the phonological loop,…
Performance Evaluation of an Improved GC-MS Method to Quantify Methylmercury in Fish.
Watanabe, Takahiro; Kikuchi, Hiroyuki; Matsuda, Rieko; Hayashi, Tomoko; Akaki, Koichi; Teshima, Reiko
2015-01-01
Here, we set out to improve our previously developed methylmercury analytical method, involving phenyl derivatization and gas chromatography-mass spectrometry (GC-MS). In the improved method, phenylation of methylmercury with sodium tetraphenylborate was carried out in a toluene/water two-phase system, instead of in water alone. The modification enabled derivatization at optimum pH, and the formation of by-products was dramatically reduced. In addition, adsorption of methyl phenyl mercury in the GC system was suppressed by co-injection of PEG200, enabling continuous analysis without loss of sensitivity. The performance of the improved analytical method was independently evaluated by three analysts using certified reference materials and methylmercury-spiked fresh fish samples. The present analytical method was validated as suitable for determining compliance with the provisional regulation value for methylmercury in fish set in the Food Sanitation Law.
Handlogten, Michael W; Stefanick, Jared F; Deak, Peter E; Bilgicer, Basar
2014-09-07
In a previous study, we demonstrated a non-chromatographic affinity-based precipitation method, using trivalent haptens, for the purification of mAbs. In this study, we significantly improved this process by using a simplified bivalent peptidic hapten (BPH) design, which enables facile and rapid purification of mAbs while overcoming the limitations of the previous trivalent design. The improved affinity-based precipitation method (ABP(BPH)) combines the simplicity of salt-induced precipitation with the selectivity of affinity chromatography for the purification of mAbs. The ABP(BPH) method involves 3 steps: (i) precipitation and separation of protein contaminants larger than immunoglobulins with ammonium sulfate; (ii) selective precipitation of the target-antibody via BPH by inducing antibody-complex formation; (iii) solubilization of the antibody pellet and removal of BPH with membrane filtration resulting in the pure antibody. The ABP(BPH) method was evaluated by purifying the pharmaceutical antibody trastuzumab from common contaminants including CHO cell conditioned media, DNA, ascites fluid, other antibodies, and denatured antibody with >85% yield and >97% purity. Importantly, the purified antibody demonstrated native binding activity to cell lines expressing the target protein, HER2. Combined, the ABP(BPH) method is a rapid and scalable process for the purification of antibodies with the potential to improve product quality while decreasing purification costs.
Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F
2012-11-01
An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-28
... these tools, including additional herbicides and application methods to increase treatment effectiveness... organisms than previously approved herbicides and higher effectiveness on particular invasive plants. Thus... examples demonstrate why additional herbicides, methods, and protocols are needed to improve treatment...
A recently published test method for Neocloeon triangulifer assessed the sensitivities of larval mayflies to several reference toxicants (NaCl, KCl, and CuSO4). Subsequent exposures have shown discrepancies from those results previously reported. To identify potential sources of ...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subekti, M. (Center for Development of Reactor Safety Technology, National Nuclear Energy Agency of Indonesia, Puspiptek Complex BO.80, Serpong-Tangerang 15340); Ohno, T.
2006-07-01
Neuro-expert methods were utilized in previous monitoring-system research on a Pressurized Water Reactor (PWR). That research improved the monitoring system by combining the neuro-expert approach with conventional noise analysis and modified neural networks to extend its capabilities. Applying these methods in parallel required a distributed computer-network architecture to perform real-time tasks. The present research aimed to improve the previous monitoring system, which could detect sensor degradation, and to demonstrate the monitoring in the High Temperature Engineering Test Reactor (HTTR). The developing monitoring system, based on methods that have been tested using data from an online PWR simulator as well as RSG-GAS (a 30 MW research reactor in Indonesia), will be applied in the HTTR for more complex monitoring.
Passive wireless strain monitoring of tyres using capacitance and tuning frequency changes
NASA Astrophysics Data System (ADS)
Matsuzaki, Ryosuke; Todoroki, Akira
2005-08-01
In-service strain monitoring of tyres of automobiles is quite effective for improving the reliability of tyres and anti-lock braking systems (ABS). Conventional strain gauges have high stiffness and require lead wires. Therefore, they are cumbersome for tyre strain measurements. In a previous study, the authors proposed a new wireless strain monitoring method that adopts the tyre itself as a sensor, with an oscillating circuit. This method is very simple and useful, but it requires a battery to activate the oscillating circuit. In the present study, the previous method for wireless tyre monitoring is improved to produce a passive wireless sensor. A specimen made from a commercially available tyre is connected to a tuning circuit comprising an inductance and a capacitance as a condenser. The capacitance change of the tyre alters the tuning frequency. This change of the tuned radio wave facilitates wireless measurement of the applied strain of the specimen without any power supply. This passive wireless method is applied to a specimen and the static applied strain is measured. Experiments demonstrate that the method is effective for passive wireless strain monitoring of tyres.
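The tuning-circuit principle above reduces strain readout to tracking a resonance shift. The following minimal Python sketch illustrates the underlying relation f = 1/(2π√(LC)); the component values and the 0.5% frequency shift are assumed for illustration only and are not taken from the study.

```python
import math

def resonant_frequency(L, C):
    """Resonant frequency of an LC tuning circuit: f = 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

# Assumed, illustrative values: a 10 uH coil with the tyre specimen
# acting as a ~100 pF capacitance at zero strain.
L_coil = 10e-6   # H
C0 = 100e-12     # F
f0 = resonant_frequency(L_coil, C0)

# A strain-induced capacitance change shifts the tuning frequency;
# inverting the relation recovers the capacitance from the measured peak.
f_measured = 0.995 * f0  # e.g. a 0.5% downward shift of the tuned frequency
C_measured = 1.0 / (L_coil * (2.0 * math.pi * f_measured) ** 2)
print(f"relative capacitance change: {(C_measured - C0) / C0:.3%}")
```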
An Improved Pansharpening Method for Misaligned Panchromatic and Multispectral Data.
Li, Hui; Jing, Linhai; Tang, Yunwei; Ding, Haifeng
2018-02-11
Numerous pansharpening methods have been proposed in recent decades for fusing low-spatial-resolution multispectral (MS) images with high-spatial-resolution (HSR) panchromatic (PAN) bands to produce fused HSR MS images, which are widely used in various remote sensing tasks. The effect of misregistration between MS and PAN bands on the quality of fused products has gained much attention in recent years. An improved method for misaligned MS and PAN imagery is proposed, through two improvements made on a previously published method named RMI (reduce misalignment impact). The performance of the proposed method was assessed by comparison with some outstanding fusion methods, such as adaptive Gram-Schmidt and the generalized Laplacian pyramid. Experimental results show that the improved version can reduce spectral distortions of fused dark pixels and sharpen boundaries between different image objects, while obtaining quality indexes similar to the original RMI method. In addition, the proposed method was evaluated with respect to its sensitivity to misalignments between MS and PAN bands, and proved more robust to such misalignments than the other methods.
Gray-world-assumption-based illuminant color estimation using color gamuts with high and low chroma
NASA Astrophysics Data System (ADS)
Kawamura, Harumi; Yonemura, Shunichi; Ohya, Jun; Kojima, Akira
2013-02-01
A new approach is proposed for estimating illuminant colors from color images under an unknown scene illuminant. The approach combines a gray-world-assumption-based illuminant color estimation method with a method using color gamuts. The former method, which we previously proposed, improved on the original gray-world method, which hypothesizes that the average of all the object colors in a scene is achromatic. Since the original method estimates scene illuminant colors by calculating the average of all the image pixel values, its estimates are incorrect when certain image colors are dominant. Our previous method improved on it by choosing several colors on the basis of an opponent-color property, namely that the average of opponent colors is achromatic, instead of using all colors. However, it cannot estimate illuminant colors when there are only a few image colors or when the image colors are unevenly distributed in local areas of the color space. The approach we propose in this paper combines our previous method with one using high-chroma and low-chroma gamuts, which makes it possible to find colors that satisfy the gray-world assumption. High-chroma gamuts are used for adding appropriate colors to the original image, and low-chroma gamuts are used for narrowing down illuminant color possibilities. Experimental results obtained using actual images show that even if the image colors are localized in a certain area of the color space, the illuminant colors are accurately estimated, with a smaller average estimation error than the conventional method.
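For readers unfamiliar with the baseline being improved here, the following Python sketch shows the plain gray-world estimator (mean pixel value taken as the illuminant) with a von Kries-style diagonal correction. It is a minimal illustration of the assumption only; the paper's refinement, which averages selected opponent-color pairs and gamut-derived colors, is not reproduced.

```python
import numpy as np

def gray_world_illuminant(img):
    """Plain gray-world estimate: the mean of all pixel values, normalized."""
    illum = img.reshape(-1, 3).mean(axis=0)
    return illum / illum.max()

def correct(img, illum):
    """Von Kries-style diagonal correction toward an achromatic illuminant."""
    return np.clip(img / illum, 0.0, 255.0)

rng = np.random.default_rng(0)
scene = rng.uniform(0.0, 255.0, (64, 64, 3))
scene[..., 2] *= 0.7                  # simulate a yellowish illuminant
balanced = correct(scene, gray_world_illuminant(scene))
```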
Improving the Bandwidth Selection in Kernel Equating
ERIC Educational Resources Information Center
Andersson, Björn; von Davier, Alina A.
2014-01-01
We investigate the current bandwidth selection methods in kernel equating and propose a method based on Silverman's rule of thumb for selecting the bandwidth parameters. In kernel equating, the bandwidth parameters have previously been obtained by minimizing a penalty function. This minimization process has been criticized by practitioners…
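Silverman's rule of thumb itself is standard; a minimal Python version is below. How the proposed method adapts it to the discrete score distributions used in kernel equating is not shown here, and the mock data are an assumption.

```python
import numpy as np

def silverman_bandwidth(x):
    """Silverman's rule of thumb: h = 0.9 * min(sd, IQR/1.34) * n**(-1/5)."""
    x = np.asarray(x, dtype=float)
    iqr = np.subtract(*np.percentile(x, [75, 25]))
    sigma = min(x.std(ddof=1), iqr / 1.34)
    return 0.9 * sigma * x.size ** (-0.2)

scores = np.random.default_rng(1).normal(50, 10, size=2000)  # mock test scores
h = silverman_bandwidth(scores)
```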
A case report: using SNOMED CT for grouping Adverse Drug Reactions Terms.
Alecu, Iulian; Bousquet, Cedric; Jaulent, Marie-Christine
2008-10-27
WHO-ART and MedDRA are medical terminologies used for the coding of adverse drug reactions in pharmacovigilance databases. MedDRA proposes 13 Special Search Categories (SSC) grouping terms associated to specific medical conditions. For instance, the SSC "Haemorrhage" includes 346 MedDRA terms among which 55 are also WHO-ART terms. WHO-ART itself does not provide such groupings. Our main contention is the possibility of classifying WHO-ART terms in semantic categories by using knowledge extracted from SNOMED CT. A previous paper presents the way WHO-ART term definitions have been automatically generated in a description logics formalism by using their corresponding SNOMED CT synonyms. Based on synonymy and relative position of WHO-ART terms in SNOMED CT, specialization or generalization relationships could be inferred. This strategy is successful for grouping the WHO-ART terms present in most MedDRA SSCs. However the strategy failed when SSC were organized on other basis than taxonomy. We propose a new method that improves the previous WHO-ART structure by integrating the associative relationships included in SNOMED CT. The new method improves the groupings. For example, none of the 55 WHO-ART terms in the Haemorrhage SSC were matched using the previous method. With the new method, we improve the groupings and obtain 87% coverage of the Haemorrhage SSC. SNOMED CT's terminological structure can be used to perform automated groupings in WHO-ART. This work proves that groupings already present in the MedDRA SSCs (e.g. the haemorrhage SSC) may be retrieved using classification in SNOMED CT.
Snapp-Childs, Winona; Fath, Aaron J; Watson, Carol A; Flatters, Ian; Mon-Williams, Mark; Bingham, Geoffrey P
2015-10-01
Many children have difficulty producing movements well enough to improve in perceptuo-motor learning. We have developed a training method that supports active movement generation to allow improvement in a 3D tracing task requiring good compliance control. We previously tested 7-8 year old children who exhibited poor performance and performance differences before training. After training, performance was significantly improved and performance differences were eliminated. According to the Dynamic Systems Theory of development, appropriate support can enable younger children to acquire the ability to perform like older children. In the present study, we compared 7-8 and 10-12 year old school children and predicted that younger children would show reduced performance that was nonetheless amenable to training. Indeed, the pre-training performance of the 7-8 year olds was worse than that of the 10-12 year olds, but post-training performance was equally good for both groups. This was similar to previous results found using this training method for children with DCD (developmental coordination disorder) and age-matched typically developing children. We also found in a previous study of 7-8 year old school children that training in the 3D tracing task transferred to a 2D drawing task. We now found similar transfer for the 10-12 year olds.
ERIC Educational Resources Information Center
Losinski, Mickey; Cuenca-Carlino, Yojanna; Zablocki, Mark; Teagarden, James
2014-01-01
Two previous reviews have indicated that self-regulated strategy instruction (SRSD) is an evidence-based practice that can improve the writing skills of students with emotional and behavioral disorders. The purpose of this meta-analysis is to extend the findings and analytic methods of previous reviews by examining published studies regarding…
Improved COD Measurements for Organic Content in Flowback Water with High Chloride Concentrations.
Cardona, Isabel; Park, Ho Il; Lin, Lian-Shin
2016-03-01
An improved method was used to determine chemical oxygen demand (COD) as a measure of organic content in water samples with high chloride content. A contour plot of COD percent error in the Cl⁻ versus Cl⁻:COD domain showed that COD errors increased with the Cl⁻:COD ratio. Substantial errors (>10%) could occur even at low Cl⁻:COD ratios (<300) for samples with low (<10 g/L) and high (>25 g/L) chloride concentrations. Applying the method to flowback water samples resulted in COD concentrations ranging from 130 to 1060 mg/L, substantially lower than values previously reported for flowback water samples from the Marcellus Shale (228 to 21,900 mg/L). It is likely that the previous studies overestimated COD as a result of chloride interference. Pretreatment with mercuric sulfate, use of a low-strength digestion solution, and correction of COD measurements with the contour plot are feasible steps to significantly improve the accuracy of COD measurements.
NASA Technical Reports Server (NTRS)
Tan, P. W.; Raju, I. S.; Shivakumar, K. N.; Newman, J. C., Jr.
1990-01-01
A re-evaluation of the 3-D finite-element models and methods used to analyze surface cracks at stress concentrations is presented. Previous finite-element models used by Raju and Newman for surface and corner cracks at holes were shown to have ill-shaped elements at the intersection of the hole and crack boundaries. Improved models, without these ill-shaped elements, were developed for a surface crack at a circular hole and at a semi-circular edge notch. Stress-intensity factors were calculated by both the nodal-force and virtual-crack-closure methods. Comparisons made between the previously developed stress-intensity factor equations and the results from the improved models agreed well except for configurations with large notch-radii-to-plate-thickness ratios. Stress-intensity factors for a semi-elliptical surface crack located at the center of a semi-circular edge notch in a plate subjected to remote tensile loadings were calculated using the improved models.
Diagnostic accuracy of different caries risk assessment methods. A systematic review.
Senneby, Anna; Mejàre, Ingegerd; Sahlin, Nils-Eric; Svensäter, Gunnel; Rohlin, Madeleine
2015-12-01
To evaluate the accuracy of different methods used to identify individuals with increased risk of developing dental coronal caries. Studies on the following methods were included: previous caries experience, tests using microbiota, buffering capacity, salivary flow rate, oral hygiene, dietary habits and sociodemographic variables. QUADAS-2 was used to assess risk of bias. Sensitivity, specificity, predictive values, and likelihood ratios (LR) were calculated. Quality of evidence based on ≥3 studies of a method was rated according to GRADE. PubMed, Cochrane Library, Web of Science and reference lists of included publications were searched up to January 2015. From 5776 identified articles, 18 were included. Assessment of study quality identified methodological limitations concerning study design, test technology and reporting. No study presented low risk of bias in all domains. Three or more studies were found only for previous caries experience and salivary mutans streptococci, and the quality of evidence for these methods was low. Evidence regarding other methods was lacking. For previous caries experience, sensitivity ranged between 0.21 and 0.94 and specificity between 0.20 and 1. Tests using salivary mutans streptococci resulted in low sensitivity and high specificity. For children with primary teeth at baseline, the pooled LR for a positive test was 3 for previous caries experience and 4 for salivary mutans streptococci, given a threshold of ≥10⁵ CFU/ml. Evidence on the validity of the analysed methods used for caries risk assessment is limited. As methodological quality was low, there is a need to improve study design. Low validity of the analysed methods may lead to patients at increased risk not being identified, whereas some are falsely identified as being at risk. As caries risk assessment guides individualized decisions on interventions and intervals for patient recall, improved performance based on best evidence is greatly needed.
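The accuracy measures reported above follow directly from a 2x2 table. A minimal Python sketch, with hypothetical counts (not data from the review):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    lr_pos = sens / (1 - spec)   # how much a positive test raises the odds
    lr_neg = (1 - sens) / spec   # how much a negative test lowers the odds
    return sens, spec, lr_pos, lr_neg

# Hypothetical counts for a caries-risk test
print(diagnostic_accuracy(tp=40, fp=20, fn=10, tn=130))
```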
Sun, Ye; Tao, Jing; Zhang, Geoff G Z; Yu, Lian
2010-09-01
A previous method for measuring solubilities of crystalline drugs in polymers has been improved to enable longer equilibration and used to survey the solubilities of indomethacin (IMC) and nifedipine (NIF) in two homo-polymers [polyvinyl pyrrolidone (PVP) and polyvinyl acetate (PVAc)] and their co-polymer (PVP/VA). These data are important for understanding the stability of amorphous drug-polymer dispersions, a strategy actively explored for delivering poorly soluble drugs. Measuring solubilities in polymers is difficult because their high viscosities impede the attainment of solubility equilibrium. In this method, a drug-polymer mixture prepared by cryo-milling is annealed at different temperatures and analyzed by differential scanning calorimetry to determine whether undissolved crystals remain and thus the upper and lower bounds of the equilibrium solution temperature. The new annealing method yielded results consistent with those obtained with the previous scanning method at relatively high temperatures, but revised slightly the previous results at lower temperatures. It also lowered the temperature of measurement closer to the glass transition temperature. For D-mannitol and IMC dissolving in PVP, the polymer's molecular weight has little effect on the weight-based solubility. For IMC and NIF, the dissolving powers of the polymers follow the order PVP > PVP/VA > PVAc. In each polymer studied, NIF is less soluble than IMC. The activities of IMC and NIF dissolved in various polymers are reasonably well fitted to the Flory-Huggins model, yielding the relevant drug-polymer interaction parameters. The new annealing method yields more accurate data than the previous scanning method when solubility equilibrium is slow to achieve. In practice, these two methods can be combined for efficiency. The measured solubilities are not readily anticipated, which underscores the importance of accurate experimental data for developing predictive models.
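The Flory-Huggins fit mentioned above is, in its standard form, the activity expression below (a sketch in the usual notation, not necessarily the authors' exact parameterization), where the φ are volume fractions, m is the ratio of polymer to drug molar volume, and χ is the fitted drug-polymer interaction parameter:

```latex
\ln a_{\mathrm{drug}} \;=\; \ln \phi_{\mathrm{drug}}
  \;+\; \Bigl(1 - \tfrac{1}{m}\Bigr)\,\phi_{\mathrm{poly}}
  \;+\; \chi\,\phi_{\mathrm{poly}}^{2}
```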
Formulation for Tin-117m/diethylenetriaminepentaacetic acids
Srivastava, Suresh C.; Meinken, George E.
1999-01-01
The invention provides improved formulations of ¹¹⁷ᵐSn (Sn⁴⁺) DTPA which allow higher doses of ¹¹⁷ᵐSn (Sn⁴⁺) to be administered than were previously possible. Methods for making pharmaceutical compositions comprising ¹¹⁷ᵐSn (Sn⁴⁺) DTPA in which the amount of unchelated DTPA is minimized are disclosed, along with methods of using the improved formulations, both for palliation of bone pain associated with cancer and for treatment of osseous tumors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-García, Eric E.; González-Lópezlira, Rosa A.; Bruzual A, Gustavo
2017-01-20
Stellar masses of galaxies are frequently obtained by fitting stellar population synthesis models to galaxy photometry or spectra. The state-of-the-art method resolves spatial structures within a galaxy to assess the total stellar mass content. In comparison to unresolved studies, resolved methods yield, on average, higher fractions of stellar mass for galaxies. In this work we improve the current method in order to mitigate a bias related to the resolved spatial distribution derived for the mass. The bias consists of an apparent filamentary mass distribution and a spatial coincidence between mass structures and dust lanes near spiral arms. The improved method is based on iterative Bayesian marginalization, through a new algorithm we have named Bayesian Successive Priors (BSP). We have applied BSP to M51 and to a pilot sample of 90 spiral galaxies from the Ohio State University Bright Spiral Galaxy Survey. By quantitatively comparing both methods, we find that the average fraction of stellar mass missed by unresolved studies is only half of what was previously thought. In contrast with the previous method, the output BSP mass maps bear a better resemblance to near-infrared images.
Contourlet domain multiband deblurring based on color correlation for fluid lens cameras.
Tzeng, Jack; Liu, Chun-Chen; Nguyen, Truong Q
2010-10-01
The fluidic lens camera system, developed for surgical applications, presents unique image processing challenges due to its novel fluid optics. The fluid lens offers advantages over traditional glass optics, such as zooming with no moving parts and better miniaturization. Despite these abilities, the liquid lens reacts nonuniformly to different color wavelengths, producing sharp color planes alongside blurred ones and causing severe axial color aberrations. To deblur color images without estimating a point spread function, a contourlet filter bank system is proposed. This multiband deblurring method uses information from sharp color planes to improve blurred color planes. A previous wavelet-based method produced significantly improved sharpness and reduced ghosting artifacts compared to traditional Lucy-Richardson and Wiener deconvolution algorithms. The proposed contourlet-based system uses directional filtering to adjust to the contours of the image, producing an image with a similar level of sharpness to the previous wavelet-based method and fewer ghosting artifacts. Conditions under which this algorithm reduces the mean squared error are analyzed. While the primary focus of this paper is improving the blue color plane using information from the green color plane, the methods could be adjusted to improve the red color plane. Many multiband systems, such as global mapping, infrared imaging, and computer-assisted surgery, are natural extensions of this work. This information-sharing algorithm benefits any image set with high edge correlation. The proposed algorithm can produce improved results in deblurring, noise reduction, and resolution enhancement.
NASA Astrophysics Data System (ADS)
Kattman, Braden R.
National culture and organizational culture impact how continuous improvement methods are received, implemented and deployed by suppliers. Previous research emphasized the dominance of national culture over organizational culture. The countries studied included Poland, Mexico, China, Taiwan, South Korea, Estonia, India, Canada, the United States, the United Kingdom, and Japan. The research found that Canada was most receptive to continuous improvement, with China being the least receptive. The study found that organizational culture was more influential than national culture. Isomorphism and benchmarking are driving continuous-improvement language and methods to become more universally known within business. Business and management practices are taking precedence in driving change within organizations.
2012-01-01
Background Detecting the borders between coding and non-coding regions is an essential step in genome annotation, and information entropy measures are useful for describing the signals in genome sequence. However, the accuracy of previous methods for finding borders based on entropy segmentation still needs to be improved. Methods In this study, we first applied a new recursive entropic segmentation method on DNA sequences to get preliminary significant cuts. A 22-symbol alphabet is used to capture the differential composition of nucleotide doublets and stop codon patterns along three phases in both DNA strands. This process requires no prior training datasets. Results Compared with previous segmentation methods, the experimental results on three bacterial genomes, Rickettsia prowazekii, Borrelia burgdorferi and E. coli, show that our approach improves the accuracy of finding the borders between coding and non-coding regions in DNA sequences. Conclusions This paper presents a new segmentation method for prokaryotes based on Jensen-Rényi divergence with a 22-symbol alphabet. For three bacterial genomes, compared to the A12_JR method, our method raised the accuracy of finding the borders between protein coding and non-coding regions in DNA sequences. PMID:23282225
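The segmentation criterion named above can be illustrated compactly: choose the cut that maximizes the Jensen-Rényi divergence between the symbol distributions on either side. The Python sketch below uses a toy 4-letter alphabet rather than the paper's 22-symbol alphabet, and omits the recursion and significance testing.

```python
import numpy as np

def renyi_entropy(p, alpha=2.0):
    p = p[p > 0]
    return np.log((p ** alpha).sum()) / (1.0 - alpha)

def jensen_renyi(seq, cut, alphabet, alpha=2.0):
    """Jensen-Renyi divergence between the two sides of a candidate cut."""
    def dist(s):
        counts = np.array([s.count(a) for a in alphabet], dtype=float)
        return counts / counts.sum()
    left, right = seq[:cut], seq[cut:]
    w1, w2 = len(left) / len(seq), len(right) / len(seq)
    p, q = dist(left), dist(right)
    mix = w1 * p + w2 * q
    return renyi_entropy(mix, alpha) - (w1 * renyi_entropy(p, alpha)
                                        + w2 * renyi_entropy(q, alpha))

def best_cut(seq, alphabet="ACGT", alpha=2.0):
    return max(range(1, len(seq)),
               key=lambda c: jensen_renyi(seq, c, alphabet, alpha))

print(best_cut("ATATATATATAT" + "GCGCGCGCGCGC"))  # cut near position 12
```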
Ihmsen, Markus; Cornelis, Jens; Solenthaler, Barbara; Horvath, Christopher; Teschner, Matthias
2013-07-25
We propose a novel formulation of the projection method for Smoothed Particle Hydrodynamics (SPH). We combine a symmetric SPH pressure force and an SPH discretization of the continuity equation to obtain a discretized form of the pressure Poisson equation (PPE). In contrast to previous projection schemes, our system does consider the actual computation of the pressure force. This incorporation improves the convergence rate of the solver. Furthermore, we propose to compute the density deviation based on velocities instead of positions as this formulation improves the robustness of the time-integration scheme. We show that our novel formulation outperforms previous projection schemes and state-of-the-art SPH methods. Large time steps and small density deviations of down to 0.01% can be handled in typical scenarios. The practical relevance of the approach is illustrated by scenarios with up to 40 million SPH particles.
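As a sketch of the formulation described above (our reading of the scheme, in standard SPH notation with particle masses m, densities ρ, pressures p, and kernel gradients ∇W_ij), the discretized PPE equates the density deviation predicted from advected velocities with the divergence of the symmetric pressure acceleration, and is solved per time step with a relaxed Jacobi iteration:

```latex
\rho_0 - \rho_i^{\mathrm{adv}}
  \;=\; \Delta t^{2}\,\sum_j m_j\,
        \bigl(\mathbf{a}_i^{p} - \mathbf{a}_j^{p}\bigr)\cdot\nabla W_{ij},
\qquad
\mathbf{a}_i^{p} \;=\; -\sum_j m_j
  \left(\frac{p_i}{\rho_i^{2}} + \frac{p_j}{\rho_j^{2}}\right)\nabla W_{ij}
```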
Carbohydrate-Loading: A Safe and Effective Method of Improving Endurance Performance.
ERIC Educational Resources Information Center
Beeker, Richard T.; Israel, Richard G.
Carbohydrate-loading prior to distance events is a common practice among endurance athletes. The purposes of this paper are to review previous research and to clarify misconceptions which may exist concerning carbohydrate-loading. The most effective method of carbohydrate-loading involves a training run of sufficient intensity and duration to…
An improved K-means clustering method for cDNA microarray image segmentation.
Wang, T N; Li, T J; Shao, G F; Wu, S X
2015-07-14
Microarray technology is a powerful tool for human genetic research and other biomedical applications. Numerous improvements to the standard K-means algorithm have been carried out to complete the image segmentation step. However, most previous studies classify the image into only two clusters. In this paper, we propose a novel K-means algorithm, which first classifies the image into three clusters and then designates one of the three clusters as the background region and the other two as the foreground region. The proposed method was evaluated on six different data sets. The analyses of accuracy, efficiency, expression values, special gene spots, and noise images demonstrate the effectiveness of our method in improving the segmentation quality.
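A minimal Python sketch of the three-cluster strategy described above, using scikit-learn's KMeans. The assumption that the lowest-intensity cluster is background is ours, as is the toy data; the authors' exact assignment rule may differ.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_spot(patch):
    """Cluster pixel intensities into three groups and treat the
    lowest-intensity cluster as background, the other two as foreground."""
    X = patch.reshape(-1, 1).astype(float)
    km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
    background = np.argmin(km.cluster_centers_.ravel())
    return (km.labels_ != background).reshape(patch.shape)  # True = spot

patch = np.random.default_rng(2).integers(0, 255, (32, 32))
foreground_mask = segment_spot(patch)
```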
Scott, Brandon L; Hoppe, Adam D
2016-01-01
Fluorescence resonance energy transfer (FRET) microscopy is a powerful tool for imaging the interactions between fluorescently tagged proteins in two dimensions. For FRET microscopy to reach its full potential, it must be able to image more than one pair of interacting molecules, and image degradation from out-of-focus light must be reduced. Here we extend our previous work on the application of maximum likelihood methods to the 3-dimensional reconstruction of 3-way FRET interactions within cells. We validated the new method (3D-3Way FRET) by simulation and by fluorescent protein test constructs expressed in cells. In addition, we improved the computational methods to achieve a 2-log reduction in computation time over our previous method (3DFSR). We applied 3D-3Way FRET to image the 3D subcellular distributions of HIV Gag assembly. Gag fused to three different FPs (CFP, YFP, and RFP) assembled into viral-like particles and created punctate FRET signals that become visible on the cell surface when 3D-3Way FRET is applied to the data. Control experiments in which YFP-Gag, RFP-Gag and free CFP were expressed demonstrated localized FRET between YFP and RFP at sites of viral assembly that were not associated with CFP. 3D-3Way FRET provides the first approach for quantifying multiple FRET interactions while improving the 3D resolution of FRET microscopy data without introducing bias into the reconstructed estimates. This method should allow improvement of widefield, confocal and superresolution FRET microscopy data.
2012-01-01
Background While progress has been made to develop automatic segmentation techniques for mitochondria, there remains a need for more accurate and robust techniques to delineate mitochondria in serial block-face scanning electron microscopic data. Previously developed texture-based methods are limited for solving this problem because texture alone is often not sufficient to identify mitochondria. This paper presents a new three-step method, the Cytoseg process, for automated segmentation of mitochondria contained in 3D electron microscopic volumes generated through serial block-face scanning electron microscopic imaging. The method consists of three steps. The first is a random forest patch classification step operating directly on 2D image patches. The second step consists of contour-pair classification. At the final step, we introduce a method to automatically seed a level set operation with output from previous steps. Results We report the accuracy of the Cytoseg process on three types of tissue and compare it to a previous method based on Radon-Like Features. At step 1, we show that the patch classifier identifies mitochondria texture but creates many false positive pixels. At step 2, our contour processing step produces contours and then filters them with a second classification step, helping to improve overall accuracy. We show that our final level set operation, which is automatically seeded with output from previous steps, helps to smooth the results. Overall, our results show that use of contour-pair classification and level set operations improves segmentation accuracy beyond patch classification alone. We show that the Cytoseg process performs well compared to another modern technique based on Radon-Like Features. Conclusions We demonstrated that texture-based methods for mitochondria segmentation can be enhanced with multiple steps that form an image processing pipeline. While we used a random-forest-based patch classifier to recognize texture, it would be possible to replace this with other texture identifiers, and we plan to explore this in future work. PMID:22321695
2017-01-01
To improve point-of-care quantification using microchip capillary electrophoresis (MCE), the chip-to-chip variabilities inherent in disposable, single-use devices must be addressed. This work proposes to integrate an internal standard (ISTD) into the microchip by adding it to the background electrolyte (BGE) instead of the sample, thus eliminating the need for the additional sample manipulation, microchip redesigns, and/or system expansions required for traditional ISTD usage. Cs and Li ions were added as integrated ISTDs to the BGE, and their effects on the reproducibility of Na quantification were explored. Results were then compared to the conclusions of our previous publication, which used Cs and Li as traditional ISTDs. The in-house fabricated microchips, electrophoretic protocols, and solution matrixes were kept constant, allowing the proposed method to be reliably compared to the traditional method. Using the integrated ISTDs, both Cs and Li improved the Na peak area reproducibility approximately 2-fold, to final RSD values of 2.2-4.7% (n = 900). In contrast (to previous work), Cs as a traditional ISTD resulted in final RSDs of 2.5-8.8%, while the traditional Li ISTD performed poorly with RSDs of 6.3-14.2%. These findings suggest integrated ISTDs are a viable method to improve the precision of disposable MCE devices, giving matched or superior results to the traditional method in this study while increasing neither system cost nor complexity. PMID:28192985
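The precision gain from an internal standard comes from ratioing out variability common to both peaks. The Python sketch below illustrates this with synthetic peak areas (all numbers are assumed, not the study's data): a shared per-chip scale factor dominates the raw RSD but cancels in the analyte/ISTD ratio.

```python
import numpy as np

def rsd(x):
    """Relative standard deviation in percent."""
    return 100.0 * np.std(x, ddof=1) / np.mean(x)

rng = np.random.default_rng(3)
chip_scale = rng.normal(1.0, 0.05, 900)                 # chip-to-chip variability
na_area = 1.00 * chip_scale * rng.normal(1, 0.01, 900)  # Na peak areas
cs_area = 0.80 * chip_scale * rng.normal(1, 0.01, 900)  # integrated-ISTD areas

print(f"raw Na areas: RSD = {rsd(na_area):.1f}%")
print(f"Na/Cs ratio:  RSD = {rsd(na_area / cs_area):.1f}%")
```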
High-resolution method for evolving complex interface networks
NASA Astrophysics Data System (ADS)
Pan, Shucheng; Hu, Xiangyu Y.; Adams, Nikolaus A.
2018-04-01
In this paper we describe a high-resolution transport formulation of the regional level-set approach for an improved prediction of the evolution of complex interface networks. The novelty of this method is twofold: (i) construction of local level sets and reconstruction of a global level set, (ii) local transport of the interface network by employing high-order spatial discretization schemes for improved representation of complex topologies. Various numerical test cases of multi-region flow problems, including triple-point advection, single vortex flow, mean curvature flow, normal driven flow, dry foam dynamics and shock-bubble interaction, show that the method is accurate and suitable for a wide range of complex interface-network evolutions. Its overall computational cost is comparable to the Semi-Lagrangian regional level-set method while the prediction accuracy is significantly improved. The approach thus offers a viable alternative to previous interface-network level-set methods.
Automated cloud screening of AVHRR imagery using split-and-merge clustering
NASA Technical Reports Server (NTRS)
Gallaudet, Timothy C.; Simpson, James J.
1991-01-01
Previous methods to segment clouds from ocean in AVHRR imagery have shown varying degrees of success, with nighttime approaches being the most limited. An improved method of automatic image segmentation, the principal component transformation split-and-merge clustering (PCTSMC) algorithm, is presented and applied to cloud screening of both nighttime and daytime AVHRR data. The method combines spectral differencing, the principal component transformation, and split-and-merge clustering to sample objectively the natural classes in the data. This segmentation method is then augmented by supervised classification techniques to screen clouds from the imagery. Comparisons with other nighttime methods demonstrate its improved capability in this application. The sensitivity of the method to clustering parameters is presented; the results show that the method is insensitive to the split-and-merge thresholds.
NASA Technical Reports Server (NTRS)
Jefferys, W. H.
1981-01-01
A least squares method proposed previously for solving a general class of problems is expanded in two ways. First, covariance matrices related to the solution are calculated and their interpretation is given. Second, improved methods of solving the normal equations related to those of Marquardt (1963) and Fletcher and Powell (1963) are developed for this approach. These methods may converge in cases where Newton's method diverges or converges slowly.
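The Marquardt-style stabilization referred to above damps the normal equations before solving them. A minimal numpy sketch of one damped step (the generic technique, not Jefferys' specific formulation):

```python
import numpy as np

def damped_step(J, r, lam):
    """One Levenberg-Marquardt step for residuals r with Jacobian J:
    solve (J^T J + lam * diag(J^T J)) delta = -J^T r."""
    A = J.T @ J
    return np.linalg.solve(A + lam * np.diag(np.diag(A)), -(J.T @ r))
```

Increasing lam shortens the step toward gradient descent, which is what restores convergence in cases where the undamped Newton step diverges.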
REMARK checklist elaborated to improve tumor prognostic marker studies
Experts have elaborated on a previously published checklist of 20 items -- including descriptions of design, methods, and analysis -- that researchers should address when publishing studies of prognostic markers. These markers are indicators that enable d
Improved Absolute Radiometric Calibration of a UHF Airborne Radar
NASA Technical Reports Server (NTRS)
Chapin, Elaine; Hawkins, Brian P.; Harcke, Leif; Hensley, Scott; Lou, Yunling; Michel, Thierry R.; Moreira, Laila; Muellerschoen, Ronald J.; Shimada, Joanne G.; Tham, Kean W.;
2015-01-01
The AirMOSS airborne SAR operates at UHF and produces fully polarimetric imagery. The AirMOSS radar data are used to produce Root Zone Soil Moisture (RZSM) depth profiles. The absolute radiometric accuracy of the imagery, ideally better than 0.5 dB, is key to retrieving RZSM, especially in wet soils where the backscatter-versus-soil-moisture curve tends to flatten out. In this paper we assess the absolute radiometric uncertainty in previously delivered data, describe a method to utilize Built In Test (BIT) data to improve the radiometric calibration, and evaluate the improvement from applying the method.
Passive wireless strain monitoring of tire using capacitance change
NASA Astrophysics Data System (ADS)
Matsuzaki, Ryosuke; Todoroki, Akira
2004-07-01
In-service strain monitoring of automobile tires is quite effective for improving the reliability of tires and anti-lock braking systems (ABS). Since conventional strain gages have high stiffness and require lead wires, they are cumbersome for strain measurements of tires. In a previous study, the authors proposed a new wireless strain monitoring method that adopts the tire itself as a sensor, with an oscillating circuit. This method is very simple and useful, but it requires a battery to activate the oscillating circuit. In the present study, the previous method for wireless tire monitoring is improved to produce a passive wireless sensor. A specimen made from a commercially available tire is connected to a tuning circuit comprising an inductance and a capacitance as a condenser. A capacitance change in the tire shifts the tuning frequency, and this change of the tuned radio wave enables the applied strain of the specimen to be measured wirelessly, without any external power supply. The new passive wireless method is applied to a specimen and the static applied strain is measured. The method is experimentally shown to be effective for passive wireless strain monitoring of tires.
DeFelice, Nicholas B; Johnston, Jill E; Gibson, Jacqueline MacDonald
2015-08-18
The magnitude and spatial variability of acute gastrointestinal illness (AGI) cases attributable to microbial contamination of U.S. community drinking water systems are not well characterized. We compared three approaches (drinking water attributable risk, quantitative microbial risk assessment, and population intervention model) to estimate the annual number of emergency department visits for AGI attributable to microorganisms in North Carolina community water systems. All three methods used 2007-2013 water monitoring and emergency department data obtained from state agencies. The drinking water attributable risk method, which was the basis for previous U.S. Environmental Protection Agency national risk assessments, estimated that 7.9% of annual emergency department visits for AGI are attributable to microbial contamination of community water systems. However, the other methods' estimates were more than 2 orders of magnitude lower, each attributing 0.047% of annual emergency department visits for AGI to community water system contamination. The differences in results between the drinking water attributable risk method, which has been the main basis for previous national risk estimates, and the other two approaches highlight the need to improve methods for estimating endemic waterborne disease risks, in order to prioritize investments to improve community drinking water systems.
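The drinking water attributable risk approach rests on a population attributable fraction. A minimal Python sketch of that calculation with hypothetical inputs (neither the exposure prevalence nor the relative risk below is from the study):

```python
def attributable_fraction(p_exposed, rr):
    """Levin's population attributable fraction: p(RR-1) / (1 + p(RR-1))."""
    excess = p_exposed * (rr - 1.0)
    return excess / (1.0 + excess)

# Hypothetical: 30% exposure prevalence, relative risk 1.2
paf = attributable_fraction(0.30, 1.2)
print(f"{paf:.2%} of AGI visits attributable; "
      f"{paf * 50_000:.0f} of a hypothetical 50,000 annual visits")
```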
Optimization of controllability and robustness of complex networks by edge directionality
NASA Astrophysics Data System (ADS)
Liang, Man; Jin, Suoqin; Wang, Dingjie; Zou, Xiufen
2016-09-01
Recently, controllability of complex networks has attracted enormous attention in various fields of science and engineering. How to optimize structural controllability has also become a significant issue. Previous studies have shown that an appropriate directional assignment can improve structural controllability; however, the evolution of the structural controllability of complex networks under attacks and cascading has always been ignored. To address this problem, this study proposes a new edge orientation method (NEOM) based on residual degree that changes the link direction while conserving topology and directionality. By comparing the results with those of previous methods in two random graph models and several realistic networks, our proposed approach is demonstrated to be an effective and competitive method for improving the structural controllability of complex networks. Moreover, numerical simulations show that our method is near-optimal in optimizing structural controllability. Strikingly, compared to the original network, our method maintains the structural controllability of the network under attacks and cascading, indicating that the NEOM can also enhance the robustness of controllability of networks. These results alter the view of the nature of controllability in complex networks, change the understanding of structural controllability and affect the design of network models to control such networks.
A simplified analytic form for generation of axisymmetric plasma boundaries
Luce, Timothy C.
2017-02-23
An improved method has been formulated for generating analytic boundary shapes as input for axisymmetric MHD equilibria. This method uses the family of superellipses as the basis function, as previously introduced. The improvements are a simplified notation, a reduction of the number of simultaneous nonlinear equations to be solved, and the realization that not all combinations of input parameters admit a solution to the nonlinear constraint equations. The method tests for the existence of a self-consistent solution and, when no solution exists, uses a deterministic method to find a nearby solution. Examples of the generation of boundaries, including tests with an equilibrium solver, are given.
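A superellipse basis is simple to generate; the Python sketch below produces one closed curve |x/a|^n + |y/b|^n = 1. The full method pieces together such segments quadrant by quadrant and solves the nonlinear constraint equations, which is not reproduced here; all numeric values are illustrative.

```python
import numpy as np

def superellipse(a, b, n, num=200):
    """Points on |x/a|^n + |y/b|^n = 1, parameterized by angle t."""
    t = np.linspace(0.0, 2.0 * np.pi, num)
    x = a * np.sign(np.cos(t)) * np.abs(np.cos(t)) ** (2.0 / n)
    y = b * np.sign(np.sin(t)) * np.abs(np.sin(t)) ** (2.0 / n)
    return x, y

# n = 2 recovers an ellipse; larger n squares the boundary off.
x, y = superellipse(a=1.7, b=3.2, n=2.5)
```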
Huh, Yong; Yu, Kiyun; Park, Woojin
2016-01-01
This paper proposes a method to detect corresponding vertex pairs between planar tessellation datasets. Applying agglomerative hierarchical co-clustering, the method finds geometrically corresponding cell-set pairs, from which corresponding vertex pairs are detected. The map transformation is then performed with the vertex pairs. Since these pairs are detected independently for each corresponding cell-set pair, the method delivers improved matching performance regardless of locally uneven positional discrepancies between datasets. The proposed method was applied to complicated synthetic cell datasets representing a cadastral map and a topographical map, and showed improved results, with an F-measure of 0.84 compared to 0.48 for a previous matching method.
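The F-measure quoted above is the harmonic mean of precision and recall over detected vertex pairs. A one-function Python illustration (the counts are hypothetical, chosen only to reproduce an F-measure of 0.84):

```python
def f_measure(n_correct, n_detected, n_reference):
    """Harmonic mean of precision and recall for detected vertex pairs."""
    precision = n_correct / n_detected
    recall = n_correct / n_reference
    return 2.0 * precision * recall / (precision + recall)

print(f_measure(n_correct=420, n_detected=500, n_reference=500))  # 0.84
```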
An improved semi-implicit method for structural dynamics analysis
NASA Technical Reports Server (NTRS)
Park, K. C.
1982-01-01
A semi-implicit algorithm is presented for direct time integration of the structural dynamics equations. The algorithm avoids the factoring of the implicit difference solution matrix and mitigates the unacceptable accuracy losses which plagued previous semi-implicit algorithms. This substantial accuracy improvement is achieved by augmenting the solution matrix with two simple diagonal matrices of the order of the integration truncation error.
Experimental aeroelasticity in wind tunnels - History, status, and future in brief
NASA Technical Reports Server (NTRS)
Ricketts, Rodney H.
1993-01-01
The state of the art of experimental aeroelasticity in the United States is assessed. A brief history of the development of ground test facilities, apparatus, and testing methods is presented. Several experimental programs are described that were previously conducted and helped to improve the state of the art. Some specific future directions for improving and enhancing experimental aeroelasticity are suggested.
Use of a New Set of Linguistic Features to Improve Automatic Assessment of Text Readability
ERIC Educational Resources Information Center
Yoshimi, Takehiko; Kotani, Katsunori; Isahara, Hitoshi
2012-01-01
The present paper proposes and evaluates a readability assessment method designed for Japanese learners of EFL (English as a foreign language). The proposed readability assessment method is constructed by a regression algorithm using a new set of linguistic features that were employed separately in previous studies. The results showed that the…
ERIC Educational Resources Information Center
Kilburn, Daniel; Nind, Melanie; Wiles, Rose
2014-01-01
In light of calls to improve the capacity for social science research within UK higher education, this article explores the possibilities for an emerging pedagogy for research methods. A lack of pedagogical culture in this field has been identified by previous studies. In response, we examine pedagogical literature surrounding approaches for…
Sparse Matrix for ECG Identification with Two-Lead Features.
Tseng, Kuo-Kun; Luo, Jiao; Hegarty, Robert; Wang, Wenmin; Haiting, Dong
2015-01-01
Electrocardiogram (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix; this is the first application of sparse matrix techniques to ECG identification. The results of our experiments demonstrate the benefits of our approach over existing methods.
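Our reading of the mapping step, as a hedged Python sketch: quantize the two simultaneous lead samples and accumulate them into a 2D occupancy matrix, which is mostly empty and therefore stored sparsely. The bin count and normalization are assumptions, not the paper's parameters.

```python
import numpy as np
from scipy import sparse

def two_lead_matrix(lead1, lead2, bins=256):
    """Accumulate sample pairs (lead1[t], lead2[t]) into a sparse 2D matrix."""
    def quantize(x):
        x = np.asarray(x, dtype=float)
        x = (x - x.min()) / (x.max() - x.min() + 1e-12)
        return np.minimum((x * bins).astype(int), bins - 1)
    i, j = quantize(lead1), quantize(lead2)
    m = sparse.coo_matrix((np.ones_like(i), (i, j)), shape=(bins, bins))
    return m.tocsr()
```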
Single-shot speckle reduction in numerical reconstruction of digitally recorded holograms: comment.
Maycock, Jonathan; Hennelly, Bryan; McDonald, John
2015-09-01
We comment on a recent Letter by Hincapie et al. [Opt. Lett. 40, 1623 (2015)], in which the authors proposed a method to reduce the speckle noise in digital holograms. This method was previously published by us in Maycock ["Improving reconstructions of digital holograms," Ph.D. thesis (National University of Ireland, 2012)] and Maycock and Hennelly [Improving Reconstructions of Digital Holograms: Speckle Reduction and Occlusions in Digital Holography (Lambert Academic, 2014)]. We also wish to highlight an important limitation of the method resulting from the superposition of different perspectives of the object/scene, which was not addressed in their Letter.
An Improved Flow Cytometry Method For Precise Quantitation Of Natural-Killer Cell Activity
NASA Technical Reports Server (NTRS)
Crucian, Brian; Nehlsen-Cannarella, Sandra; Sams, Clarence
2006-01-01
The ability to assess NK cell cytotoxicity using flow cytometry has been previously described and can serve as a powerful tool to evaluate effector immune function in the clinical setting. Previous methods used membrane-permeable dyes to identify target cells. The use of these dyes requires great care to achieve optimal staining and results in a broad spectral emission that can make multicolor cytometry difficult. Previous methods also used negative staining (the elimination of target cells) to identify effector cells. This makes precise quantitation of effector NK cells impossible due to the interfering presence of T and B lymphocytes, and renders the data highly subject to the variable levels of NK cells normally found in human peripheral blood. In this study an improved version of the standard flow cytometry assay for NK activity is described that has several advantages over previous methods. Fluorescent antibody staining (CD45-FITC) is used to positively identify target cells in place of membrane-permeable dyes; it is less labor intensive and more easily reproducible than membrane dyes. NK cells (the true effector lymphocytes) are also positively identified by fluorescent antibody staining (CD56-PE), allowing simultaneous absolute-count assessment of both NK cells and target cells. Dead cells are identified by membrane disruption using the DNA intercalating dye PI. Using this method, an exact NK:target ratio may be determined for each assessment, including quantitation of NK-target complexes. Back-immunoscatter gating may be used to track live versus dead target cells via scatter properties. If desired, NK activity may then be normalized to standardized ratios for clinical comparisons between patients, making the determination of PBMC counts or NK cell percentages prior to testing unnecessary. This method provides an exact cytometric determination of NK activity that is highly reproducible and may be suitable for routine use in the clinical setting.
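Because both targets and effectors are counted positively, the readout reduces to simple arithmetic per acquisition. A minimal Python sketch with hypothetical event counts:

```python
def nk_assay_readout(dead_targets, total_targets, nk_effectors):
    """Percent target lysis plus the exact effector:target (E:T) ratio."""
    pct_lysis = 100.0 * dead_targets / total_targets
    et_ratio = nk_effectors / total_targets
    return pct_lysis, et_ratio

# Hypothetical absolute counts from one acquisition
print(nk_assay_readout(dead_targets=1200, total_targets=5000,
                       nk_effectors=25000))
```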
Boundary conditions for simulating large SAW devices using ANSYS.
Peng, Dasong; Yu, Fengqi; Hu, Jian; Li, Peng
2010-08-01
In this report, we propose improved substrate left and right boundary conditions for simulating SAW devices using ANSYS. Compared with previous methods, the proposed method can greatly reduce computation time; furthermore, the longer the distance from the first reflector to the last, the more computation time is saved. To verify the proposed method, a design example is presented with a device center frequency of 971.14 MHz.
Pan, Qing; Yao, Jialiang; Wang, Ruofan; Cao, Ping; Ning, Gangmin; Fang, Luping
2017-08-01
The vessels in the microcirculation keep adjusting their structure to meet the functional requirements of the different tissues. A previously developed theoretical model can reproduce the process of vascular structural adaptation to aid the study of microcirculatory physiology. Until now, however, the model has lacked appropriate methods for setting its parameter values, which has limited further applications. This study proposed an improved quantum-behaved particle swarm optimization (QPSO) algorithm for setting the parameter values in this model. The optimization was performed on a real mesenteric microvascular network of the rat. The results showed that the improved QPSO was superior to the standard particle swarm optimization, the standard QPSO and the previously reported Downhill algorithm. We conclude that the improved QPSO leads to better agreement between mathematical simulation and animal experiment, rendering the model more reliable for future physiological studies.
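For reference, the standard QPSO position update that the improved variant builds on looks like the following Python sketch (our rendering of the usual scheme; the paper's specific improvements are not reproduced):

```python
import numpy as np

def qpso_step(x, pbest, gbest, beta, rng):
    """Standard QPSO update: move each particle around a random local
    attractor p, scaled by its distance to the mean best position."""
    phi = rng.random(x.shape)
    p = phi * pbest + (1.0 - phi) * gbest    # per-particle local attractor
    mbest = pbest.mean(axis=0)               # mean of all personal bests
    u = rng.random(x.shape)
    sign = np.where(rng.random(x.shape) < 0.5, 1.0, -1.0)
    return p + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
```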
Mostafaei, F; McNeill, F E; Chettle, D R; Prestwich, W V
2013-10-01
We previously published a method for the in vivo measurement of bone fluoride using neutron activation analysis (NAA) and demonstrated the utility of the technique in a pilot study of environmentally exposed people. The method involved activation of the hand in an irradiation cavity at the McMaster University Accelerator Laboratory and acquisition of the resultant γ-ray signals in a '4π' NaI(Tl) detector array of nine detectors. In this paper we describe a series of improvements to the method, investigated via measurement of hand-simulating phantoms doped with varying levels of fluorine and fixed amounts of sodium, chlorine and calcium. Four improvements to the technique were tested since our first publication. The previously published detection limit for phantom measurements using this system was 0.66 mg F/g Ca. The accelerator irradiation and detection facilities were relocated to a new section of the laboratory and one more detector was added to the detection system. This was found to reduce the detection limit (possibly because of better detector shielding and the additional detector) to 0.59 mg F/g Ca, a factor of 1.12. A new set of phantoms was developed, and in this work we show that they improved the minimum detectable limit for fluoride in phantoms irradiated using neutrons produced by 2.15 MeV protons on lithium by a factor of 1.55. We compared the detection limits previously obtained using a summed signal from the nine detectors with the detection limit obtained by acquiring the spectra in anticoincidence mode for reduction of the disturbing signal from chlorine in bone. This was found to improve the ratio of the detection of fluorine to chlorine (an interfering signal) by a factor of 2.8, and the resultant minimum detection limit was found to be reduced by a factor of 1.2. We studied the effects of changing the timing of γ-ray acquisition. Our previously published data used a series of three 10 s acquisitions followed by a 300 s count. Changing the acquisition to a series of six 5 s acquisitions was found to further improve the detection limit by a factor of 1.4. We also present data showing that if the neutron dose is delivered to the phantom in a shorter time period, i.e. the dose rate is increased and the irradiation shortened, then the detection limit can be reduced by a further factor of 1.35. The overall improvement in detection limit from employing all of these changes was found to be a factor of 3.9. The technique now has an in-phantom detection limit of 0.17 mg F/g Ca compared to a previous detection limit of 0.66 mg F/g Ca. The system can now be tested on human volunteers to see if individuals with diagnosed fluorosis can be distinguished from the general Canadian population using this technique.
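The individual gains reported above combine multiplicatively; a short Python check confirms the quoted overall factor and final detection limit:

```python
factors = {
    "relocation + tenth detector": 1.12,
    "improved phantoms":           1.55,
    "anticoincidence acquisition": 1.20,
    "six 5 s acquisitions":        1.40,
    "higher dose rate":            1.35,
}
overall = 1.0
for f in factors.values():
    overall *= f
print(round(overall, 1))          # 3.9
print(round(0.66 / overall, 2))   # 0.17 mg F/g Ca
```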
Quantitative evaluation of pairs and RS steganalysis
NASA Astrophysics Data System (ADS)
Ker, Andrew D.
2004-06-01
We give initial results from a new project which performs statistically accurate evaluation of the reliability of image steganalysis algorithms. The focus here is on the Pairs and RS methods, for detection of simple LSB steganography in grayscale bitmaps, due to Fridrich et al. Using libraries totalling around 30,000 images we have measured the performance of these methods and suggest changes which lead to significant improvements. Particular results from the project presented here include notes on the distribution of the RS statistic, the relative merits of different "masks" used in the RS algorithm, the effect on reliability when previously compressed cover images are used, and the effect of repeating steganalysis on the transposed image. We also discuss improvements to the Pairs algorithm, restricting it to spatially close pairs of pixels, which leads to a substantial performance improvement, even to the extent of surpassing the RS statistic which was previously thought superior for grayscale images. We also describe some of the questions for a general methodology of evaluation of steganalysis, and potential pitfalls caused by the differences between uncompressed, compressed, and resampled cover images.
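A compact Python sketch of the RS machinery discussed above (the group size, mask, and toy image are our assumptions; Fridrich et al.'s full embedding-rate estimator is not reproduced):

```python
import numpy as np

def smoothness(g):
    """Discrimination function: total variation within a pixel group."""
    return np.abs(np.diff(g)).sum()

def rs_fractions(pixels, mask, flip):
    """Fractions of groups made rougher (regular) or smoother (singular)
    when the masked pixels are flipped."""
    groups = pixels[: pixels.size // 4 * 4].astype(int).reshape(-1, 4)
    r = s = 0
    for g in groups:
        flipped = g.copy()
        flipped[mask] = flip(flipped[mask])
        d = smoothness(flipped) - smoothness(g)
        r += d > 0
        s += d < 0
    return r / len(groups), s / len(groups)

lsb_flip = lambda v: v ^ 1                # F1:  0<->1, 2<->3, ...
shift_flip = lambda v: ((v + 1) ^ 1) - 1  # F-1: -1<->0, 1<->2, ...
mask = np.array([False, True, True, False])

img = np.random.default_rng(4).integers(0, 256, 4096)
R, S = rs_fractions(img, mask, lsb_flip)
Rm, Sm = rs_fractions(img, mask, shift_flip)
# In a clean image R ~ Rm and S ~ Sm; LSB embedding pushes R toward S
# under F1 while leaving the shifted (F-1) statistics nearly unchanged.
```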
NASA Astrophysics Data System (ADS)
Xie, Shi-Peng; Luo, Li-Min
2012-06-01
The authors propose a combined scatter reduction and correction method to improve image quality in cone beam computed tomography (CBCT). The scatter kernel superposition (SKS) method has been used occasionally in previous studies. Our method differs in that a scatter-detecting blocker (SDB) is placed between the X-ray source and the tested object to model a self-adaptive scatter kernel. This study first evaluates the scatter kernel parameters using the SDB, and then isolates the scatter distribution based on the SKS. Image quality can be improved by removing the scatter distribution. The results show that the method can effectively reduce scatter artifacts and increase image quality. Our approach increases image contrast and reduces the magnitude of cupping. The accuracy of the SKS technique is significantly improved in our method by using a self-adaptive scatter kernel. This method is computationally efficient, easy to implement, and provides scatter correction using a single scan acquisition.
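A minimal Python sketch of a scatter-kernel-superposition correction loop (the kernel here is taken as given; the paper's contribution, fitting a self-adaptive kernel from the blocker measurement, is not reproduced):

```python
import numpy as np
from scipy.signal import fftconvolve

def sks_correct(measured, kernel, iterations=3):
    """Estimate scatter as (primary estimate) convolved with the scatter
    kernel, subtract it, and iterate to refine the primary image."""
    primary = measured.copy()
    for _ in range(iterations):
        scatter = fftconvolve(primary, kernel, mode="same")
        primary = np.clip(measured - scatter, 0.0, None)
    return primary
```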
An Improved Experimental Method for Simulating Erosion Processes by Concentrated Channel Flow
Chen, Xiao-Yan; Zhao, Yu; Mo, Bin; Mi, Hong-Xing
2014-01-01
Rill erosion is an important process that occurs on hill slopes, including sloped farmland. Laboratory simulations have been vital to understanding rill erosion. Previous experiments obtained sediment yields using rills of various lengths to capture the sedimentation process, which disrupted the continuity of the rill erosion process and was time-consuming. In this study, an improved experimental method was used to measure rill erosion processes by concentrated channel flow. With this method, a laboratory platform, 12 m long and 3 m wide, was used to construct rills 0.1 m wide and 12 m long for experiments under five slope gradients (5, 10, 15, 20, and 25 degrees) and three flow rates (2, 4, and 8 L min−1). Sediment-laden water was simultaneously sampled along the rill at locations 0.5 m, 1 m, 2 m, 3 m, 4 m, 5 m, 6 m, 7 m, 8 m, 10 m, and 12 m from the water inlet to determine the sediment concentration distribution. The rill erosion processes measured by the method used in this study and by previous experimental methods are approximately the same. The experimental data indicated that sediment concentrations increase with slope gradient and flow rate, which highlights the hydraulic impact on rill erosion. Sediment concentration increased rapidly at the initial section of the rill, and the rate of increase in sediment concentration reduced with rill length. Overall, both experimental methods are feasible and applicable. However, the method proposed in this study is more efficient and easier to operate. This improved method will be useful in related research. PMID:24949621
A comprehensive study on pavement edge line implementation.
DOT National Transportation Integrated Search
2014-04-01
The previous 2011 study Safety Improvement from Edge Lines on Rural Two-Lane Highways analyzed the crash data of three years before and one year after edge line implementation by using the latest safety analysis statistical method. It concl...
Methods Used in EnviroAtlas to Assess Urban Natural Infrastructure
Previous studies have positively correlated human exposures to natural features with health promoting outcomes such as increased physical activity, improved cognitive function, increased social engagement, and reduced ambient air pollution. When using remotely-sensed data to inve...
Armstrong, M Stuart; Finn, Paul W; Morris, Garrett M; Richards, W Graham
2011-08-01
In a previous paper, we presented the ElectroShape method, which we used to achieve successful ligand-based virtual screening. It extended classical shape-based methods by applying them to the four-dimensional shape of the molecule, where partial charge was used as the fourth dimension to capture electrostatic information. This paper extends the approach by using atomic lipophilicity (alogP) as an additional molecular property and validates it using the improved release 2 of the Directory of Useful Decoys (DUD). When alogP replaced partial charge, the enrichment results were slightly below those of ElectroShape, though still far better than purely shape-based methods. However, when alogP was added as a complement to partial charge, the resulting five-dimensional enrichments show a clear improvement in performance. This demonstrates the utility of extending the ElectroShape virtual screening method by adding other atom-based descriptors.
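ElectroShape-style descriptors summarize the distribution of inter-point distances in an augmented coordinate space. The sketch below is a simplification (centroid-based moments only, rather than the full set of reference points of the published method), with hypothetical scale factors w_q and w_l that put charge and lipophilicity on a length-like footing:

```python
import numpy as np

def electroshape5d(coords, charges, alogp, w_q=25.0, w_l=10.0):
    """First three moments of distances to the centroid in 5-D space.

    coords: (N, 3) atomic positions; charges, alogp: (N,) arrays.
    w_q and w_l are hypothetical scale factors (the published method
    calibrates such weights); this is a sketch, not the exact descriptor.
    """
    pts = np.column_stack([coords, w_q * charges, w_l * alogp])
    centroid = pts.mean(axis=0)
    d = np.linalg.norm(pts - centroid, axis=1)
    mean = d.mean()
    std = d.std()
    skew = np.cbrt(((d - mean) ** 3).mean())  # signed cube root of 3rd moment
    return np.array([mean, std, skew])

def similarity(desc_a, desc_b):
    """Inverse-Manhattan similarity between two descriptor vectors."""
    return 1.0 / (1.0 + np.abs(desc_a - desc_b).sum())
```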
NASA Astrophysics Data System (ADS)
Ho, Tung-Cheng; Satake, Kenji; Watada, Shingo
2017-12-01
Systematic travel time delays of up to 15 min relative to linear long waves have been reported for transoceanic tsunamis. A phase correction method, which converts linear long waves into dispersive waves, was previously proposed to account for seawater compressibility, the elasticity of the Earth, and the gravitational potential change associated with tsunami motion. In the present study, we improved this method by incorporating the effects of ocean density stratification, the actual tsunami raypath, and the actual bathymetry. The previously considered effects accounted for approximately 74% of the travel time delay correction, while ocean density stratification, the actual raypath, and the actual bathymetry contributed approximately 13%, 4%, and 9% on average, respectively. The improved phase correction method accounted for almost all of the travel time delay at far-field stations. We performed single and multiple time window inversions for the 2011 Tohoku tsunami using the far-field data (>3 h travel time) to investigate the initial sea surface displacement. The inversion result from only far-field data was similar to but smoother than that from near-field data and all stations, including a large sea surface rise increasing toward the trench followed by a migration northward along the trench. For the forward simulation, our results showed good agreement between the observed and computed waveforms at both near-field and far-field tsunami gauges, as well as with satellite altimeter data. The present study demonstrates that the improved method provides a more accurate estimate for the waveform inversion and forward prediction of far-field data.
Improved Technique for Finding Vibration Parameters
NASA Technical Reports Server (NTRS)
Andrew, L. V.; Park, C. C.
1986-01-01
Filtering and sample manipulation reduce noise effects. Analysis technique improves extraction of vibrational frequencies and damping rates from measurements of vibrations of complicated structure. Structural vibrations measured by accelerometers. Outputs digitized at frequency high enough to cover all modes of interest. Use of method on set of vibrational measurements from Space Shuttle raised level of coherence from previous values below 50 percent to values between 90 and 99 percent.
Reynolds, H. G.; Schoff, M. E.; Farrell, M. P.; ...
2017-03-23
The magnetic recoil spectrometer uses a deuterated polyethylene polymer (CD2) foil to measure neutron yield in inertial confinement fusion experiments. Higher neutron yields in recent experiments have resulted in primary signal saturation in the detector CR-39 foils, necessitating the fabrication of thinner CD2 foils than established methods could provide. A novel method of fabricating deuterated polymer foils is described. The resulting foils are thinner, smoother, and more uniform in thickness than the foils produced by previous methods. Here, these new foils have successfully been deployed at the National Ignition Facility, enabling higher neutron yield measurements than previous foils, with no primary signal saturation.
NASA Astrophysics Data System (ADS)
Harabuchi, Yu; Taketsugu, Tetsuya; Maeda, Satoshi
2017-04-01
We report a new approach to search automatically for structures of minimum energy conical intersections (MECIs). The gradient projection (GP) method and the single-component artificial force induced reaction (SC-AFIR) method were combined in the present approach. As case studies, MECIs of benzene and naphthalene between their ground and first excited singlet electronic states (S0/S1-MECIs) were explored. All S0/S1-MECIs reported previously were obtained automatically. Furthermore, the number of force calculations was reduced compared to that required in the previous search. Improved convergence in the step in which various geometrical displacements are induced by SC-AFIR would contribute to the cost reduction.
Gold-standard evaluation of a folksonomy-based ontology learning model
NASA Astrophysics Data System (ADS)
Djuana, E.
2018-03-01
Folksonomy, as one result of a collaborative tagging process, has been acknowledged for its potential in improving the categorization and searching of web resources. However, folksonomy contains ambiguities such as synonymy and polysemy, as well as differing abstractions, or the generality problem. To maximize its potential, some methods for associating the tags of a folksonomy with semantics and structural relationships have been proposed, such as ontology learning. This paper evaluates our previous work in ontology learning according to a gold-standard evaluation approach, in comparison to a notable state-of-the-art work and several baselines. The results show that our method is comparable to the state-of-the-art work, which further validates our approach, previously validated using a task-based evaluation approach.
Improved determination of particulate absorption from combined filter pad and PSICAM measurements.
Lefering, Ina; Röttgers, Rüdiger; Weeks, Rebecca; Connor, Derek; Utschig, Christian; Heymann, Kerstin; McKee, David
2016-10-31
Filter pad light absorption measurements are subject to two major sources of experimental uncertainty: the so-called pathlength amplification factor, β, and scattering offsets, o, for which previous null-correction approaches are limited by recent observations of non-zero absorption in the near infrared (NIR). A new filter pad absorption correction method is presented here which uses linear regression against point-source integrating cavity absorption meter (PSICAM) absorption data to simultaneously resolve both β and the scattering offset. The PSICAM has previously been shown to provide accurate absorption data, even in highly scattering waters. Comparisons of PSICAM and filter pad particulate absorption data reveal linear relationships that vary on a sample by sample basis. This regression approach provides significantly improved agreement with PSICAM data (3.2% RMS%E) than previously published filter pad absorption corrections. Results show that direct transmittance (T-method) filter pad absorption measurements perform effectively at the same level as more complex geometrical configurations based on integrating cavity measurements (IS-method and QFT-ICAM) because the linear regression correction compensates for the sensitivity to scattering errors in the T-method. This approach produces accurate filter pad particulate absorption data for wavelengths in the blue/UV and in the NIR where sensitivity issues with PSICAM measurements limit performance. The combination of the filter pad absorption and PSICAM is therefore recommended for generating full spectral, best quality particulate absorption data as it enables correction of multiple errors sources across both measurements.
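The correction amounts to an ordinary least-squares fit of filter pad absorption against PSICAM absorption at common wavelengths: the slope estimates the pathlength amplification factor β and the intercept the scattering offset o. A minimal sketch on synthetic spectra (the β and o values are hypothetical):

```python
import numpy as np

# Synthetic "true" absorption spectrum (PSICAM) and a filter pad measurement
# distorted by a hypothetical amplification factor beta and scatter offset o.
rng = np.random.default_rng(1)
a_psicam = np.linspace(0.02, 0.8, 200)             # m^-1, across wavelengths
beta_true, offset_true = 4.1, 0.05                 # illustrative values
a_filterpad = beta_true * a_psicam + offset_true + rng.normal(0, 0.01, 200)

# One linear regression simultaneously resolves beta (slope) and o (intercept).
beta_fit, offset_fit = np.polyfit(a_psicam, a_filterpad, deg=1)
a_corrected = (a_filterpad - offset_fit) / beta_fit
print(f"beta = {beta_fit:.2f}, offset = {offset_fit:.3f}")
```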
Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia
2014-03-01
Previous research [Appl. Opt. 52, A290 (2013)] has revealed that Fourier analysis of three-dimensional affine transformation theory can be used to improve the computation speed of the traditional polygon-based method. In this paper, we continue our research and propose an improved full analytical polygon-based method developed upon this theory. Vertex vectors of primitive and arbitrary triangles and the pseudo-inverse matrix were used to obtain an affine transformation matrix representing the spatial relationship between the two triangles. With this relationship and the primitive spectrum, we analytically obtained the spectrum of the arbitrary triangle. This algorithm discards low-level angle-dependent computations. In order to add diffusive reflection to each arbitrary surface, we also propose a whole-matrix computation approach that takes advantage of the affine transformation matrix and uses matrix multiplication to calculate the shifting parameters of similar sub-polygons. The proposed method improves hologram computation speed over the conventional full analytical approach. Optical experimental results demonstrate that the proposed method can effectively reconstruct three-dimensional scenes.
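The central step, recovering the affine transformation matrix from the vertex vectors of the primitive and arbitrary triangles via the pseudo-inverse, can be sketched in a few lines of numpy (homogeneous coordinates; variable names are illustrative):

```python
import numpy as np

def affine_from_triangles(primitive, arbitrary):
    """Affine matrix A (3x4, homogeneous) mapping a primitive triangle's
    vertices onto an arbitrary triangle's vertices: arbitrary = A @ [v; 1].

    primitive, arbitrary: (3, 3) arrays, one vertex per row.
    """
    # Homogeneous coordinates of the primitive vertices, one column per vertex.
    P = np.vstack([primitive.T, np.ones((1, 3))])   # (4, 3)
    Q = arbitrary.T                                 # (3, 3)
    # The pseudo-inverse gives the (minimum-norm) affine transformation.
    return Q @ np.linalg.pinv(P)                    # (3, 4)

prim = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0]], dtype=float)
arb  = np.array([[0.2, 0.1, 0.0], [1.3, 0.4, 0.2], [0.1, 1.1, 0.5]], dtype=float)
A = affine_from_triangles(prim, arb)
# Check: A maps each primitive vertex onto the corresponding arbitrary vertex.
assert np.allclose(A @ np.vstack([prim.T, np.ones((1, 3))]), arb.T)
```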
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display based on a stereo camera, which presents parallax content from different angles through a lenticular lens array, is proposed. Compared with previous implementations of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method realizes glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on the stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene have realistic and obvious stereo performance.
A Novel Method for Rearing Zebrafish by Using Freshwater Rotifers (Brachionus calyciflorus)
Aoyama, Yuta; Moriya, Natsumi; Tanaka, Shingo; Taniguchi, Tomoko; Hosokawa, Hiroshi
2015-01-01
The zebrafish (Danio rerio) has become a powerful model organism for studying developmental processes and genetic diseases. However, there remain several problems in previous rearing methods. In this study, we demonstrate a novel method for rearing zebrafish larvae by using a new first food, freshwater rotifers (Brachionus calyciflorus). Feeding experiments indicated that freshwater rotifers are suitable as the first food for newly hatched larval fish. In addition, we revisited and improved a feeding schedule from 5 to 40 days postfertilization (dpf). Our feeding method using freshwater rotifers accelerated larval growth. At 49 dpf, one pair out of 10 pairs successfully produced six fertilized eggs. At 56, 63, and 71 dpf, 6 out of the 10 pairs constantly produced normal embryos. Our method will improve the husbandry of the zebrafish. PMID:25938499
ERIC Educational Resources Information Center
Prouty, Kenneth E.
2004-01-01
This essay examines how jazz educators construct methods for teaching the art of improvisation in institutionalized jazz studies programs. Unlike previous studies of the processes and philosophies of jazz instruction, I examine such processes from a cultural standpoint, to identify why certain methods might be favored over others. Specifically,…
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck for the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS, building on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement in both measurement precision and accuracy over the generally applied normalization as well as over our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average standard error (error bar), coefficient of determination (R2), root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
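The mechanics of such a normalization can be sketched with scikit-learn: regress the analyte line intensity on the rest of the spectrum (which carries the shared plasma-state fluctuation) and divide the measured line by the prediction. The data below are synthetic, and the number of latent components is an arbitrary choice, not the paper's:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_shots, n_channels = 200, 256
# Synthetic LIBS data: a shared shot-to-shot plasma fluctuation multiplies
# every line, plus independent channel noise.
shot_factor = rng.lognormal(0.0, 0.15, n_shots)
base = rng.uniform(0.5, 2.0, n_channels)
spectra = shot_factor[:, None] * base[None, :] \
          * rng.normal(1.0, 0.03, (n_shots, n_channels))

analyte = spectra[:, 100]                      # hypothetical Cu analyte line
predictors = np.delete(spectra, 100, axis=1)   # all other spectral channels

# PLS regression of the analyte line on the rest of the spectrum captures
# the fluctuation driven by the shared plasma parameters.
pls = PLSRegression(n_components=3).fit(predictors, analyte)
predicted = pls.predict(predictors).ravel()
normalized = analyte / predicted

rsd = lambda x: x.std() / x.mean()
print(f"RSD before: {rsd(analyte):.2%}, after: {rsd(normalized):.2%}")
```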
Morison, Gordon; Boreham, Philip
2018-01-01
Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved EMI event classification based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids in the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show an improved classification accuracy compared to previously proposed methods, supporting the development of an expert-knowledge-based intelligent system. Since the method is demonstrated to be successful with real field data, it brings the benefit of possible real-world application for EMI condition monitoring. PMID:29385030
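Dispersion Entropy itself is compact to state: map the signal into c classes through the normal CDF, form length-m embedding patterns, and take the normalized Shannon entropy of the pattern distribution. A minimal sketch following the commonly used definition (the choices c = 6, m = 3 are typical defaults, not values from the paper):

```python
import numpy as np
from scipy.stats import norm

def dispersion_entropy(x, c=6, m=3, delay=1):
    """Normalized dispersion entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    # 1) Map samples to (0, 1) with the normal CDF, then to classes 1..c.
    y = norm.cdf(x, loc=x.mean(), scale=x.std())
    z = np.clip(np.round(c * y + 0.5).astype(int), 1, c)
    # 2) Build embedding vectors and encode each as one base-c pattern index.
    n = len(z) - (m - 1) * delay
    patterns = np.stack([z[i * delay : i * delay + n] for i in range(m)], axis=1)
    codes = (patterns - 1) @ (c ** np.arange(m))
    # 3) Shannon entropy of the pattern distribution, normalized by ln(c^m).
    _, counts = np.unique(codes, return_counts=True)
    p = counts / counts.sum()
    return -(p * np.log(p)).sum() / np.log(c ** m)

sig = np.sin(np.linspace(0, 20 * np.pi, 2000)) \
      + 0.1 * np.random.default_rng(0).normal(size=2000)
print(f"DE = {dispersion_entropy(sig):.3f}")
```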
Jurrus, Elizabeth; Paiva, Antonio R C; Watanabe, Shigeki; Anderson, James R; Jones, Bryan W; Whitaker, Ross T; Jorgensen, Erik M; Marc, Robert E; Tasdizen, Tolga
2010-12-01
Study of nervous systems via the connectome, the map of connectivities of all neurons in that system, is a challenging problem in neuroscience. Towards this goal, neurobiologists are acquiring large electron microscopy datasets. However, the sheer volume of these datasets renders manual analysis infeasible. Hence, automated image analysis methods are required for reconstructing the connectome from these very large image collections. Segmentation of neurons in these images, an essential step of the reconstruction pipeline, is challenging because of noise, anisotropic shapes and brightness, and the presence of confounding structures. The method described in this paper uses a series of artificial neural networks (ANNs) in a framework combined with a feature vector that is composed of image intensities sampled over a stencil neighborhood. Several ANNs are applied in series, allowing each ANN to use the classification context provided by the previous network to improve detection accuracy. We develop the method of serial ANNs and show that the learned context does improve detection over traditional ANNs. We also demonstrate advantages over previous membrane detection methods. The results are a significant step towards an automated system for the reconstruction of the connectome. Copyright 2010 Elsevier B.V. All rights reserved.
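The serial-context idea can be sketched with scikit-learn: each stage is trained on the stencil features plus the previous stage's membrane probability. This is a simplified stand-in (synthetic features, a single context value per pixel instead of a context stencil, and MLPClassifier in place of the paper's ANNs):

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_pixels, n_stencil = 5000, 25
X = rng.normal(size=(n_pixels, n_stencil))        # intensities over a stencil
y = (X[:, 12] + 0.5 * X[:, 7] > 0.3).astype(int)  # synthetic membrane labels

features = X
for stage in range(3):
    ann = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500,
                        random_state=stage)
    ann.fit(features, y)
    print(f"stage {stage} training accuracy: {ann.score(features, y):.3f}")
    # Context for the next stage: the previous stage's membrane probability
    # appended to the original stencil features (the paper samples the
    # context over a stencil; one value per pixel is used here for brevity).
    prob = ann.predict_proba(features)[:, [1]]
    features = np.hstack([X, prob])
```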
Samak, M. Mosleh E. Abu; Bakar, A. Ashrif A.; Kashif, Muhammad; Zan, Mohd Saiful Dzulkifly
2016-01-01
This paper discusses numerical analysis methods for different geometrical features that have limited interval values for typically used sensor wavelengths. Compared with existing Finite Difference Time Domain (FDTD) methods, the alternating direction implicit (ADI)-FDTD method reduces the number of sub-steps by a factor of two to three, which represents a 33% time savings in each single run. The local one-dimensional (LOD)-FDTD method has similar numerical equation properties, which should be calculated as in the previous method. Generally, a small number of arithmetic processes, which result in a shorter simulation time, are desired. The alternating direction implicit technique can be considered a significant step forward for improving the efficiency of unconditionally stable FDTD schemes. This comparative study shows that the local one-dimensional method had minimum relative error ranges of less than 40% for analytical frequencies above 42.85 GHz, and the same accuracy was generated by both methods.
1999-01-01
contaminating the surface. Research efforts to develop an improved sampling method have previously been limited to deposits made from solutions of explosives...explosive per fingerprint calculated in this way has too much variation to allow determination of sampling efficiency or to use this method to prepare...crystals is put into suspension, the actual amount is determined by usual methods including high-performance liquid chromatography (HPLC), gas
Limited-memory trust-region methods for sparse relaxation
NASA Astrophysics Data System (ADS)
Adhikari, Lasith; DeGuchy, Omar; Erway, Jennifer B.; Lockhart, Shelby; Marcia, Roummel F.
2017-08-01
In this paper, we solve the l2-l1 sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which use only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while improving computational time.
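One standard way to remove the nondifferentiable l1 term is the nonnegative split x = u - v, which turns it into a linear term under bound constraints. As a stand-in for the authors' limited-memory trust-region solver, the sketch below uses SciPy's L-BFGS-B, which likewise builds a limited-memory Hessian approximation from gradients of previous iterations:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
m, n, lam = 60, 200, 0.1
A = rng.normal(size=(m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)  # sparse signal
b = A @ x_true

def f_and_grad(z):
    # z = [u; v] with x = u - v and u, v >= 0; the l1 norm becomes sum(u + v).
    u, v = z[:n], z[n:]
    r = A @ (u - v) - b
    f = 0.5 * r @ r + lam * z.sum()
    g = A.T @ r
    return f, np.concatenate([g, -g]) + lam

res = minimize(f_and_grad, np.zeros(2 * n), jac=True, method="L-BFGS-B",
               bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]
print(f"recovery error: {np.linalg.norm(x_hat - x_true):.3e}")
```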
A Method that Will Captivate U.
Martin, Sophie; Coller, Jeff
2015-09-03
In an age of next-generation sequencing, the ability to purify RNA transcripts has become a critical issue. In this issue, Duffy et al. (2015) improve on a pre-existing technique of RNA labeling and purification by 4-thiouridine tagging. By increasing the efficiency of RNA capture, this method will enhance the ability to study RNA dynamics, especially for transcripts normally inefficiently captured by previous methods. Copyright © 2015 Elsevier Inc. All rights reserved.
The red supergiant population in the Perseus arm
NASA Astrophysics Data System (ADS)
Dorda, R.; Negueruela, I.; González-Fernández, C.
2018-04-01
We present a new catalogue of cool supergiants in a section of the Perseus arm, most of which had not been previously identified. To generate it, we have used a set of well-defined photometric criteria to select a large number of candidates (637) that were later observed at intermediate resolution in the infrared calcium triplet spectral range, using a long-slit spectrograph. To separate red supergiants from luminous red giants, we used a statistical method, developed in previous works and improved in the present paper. We present a method to assign probabilities of being a red supergiant to a given spectrum and use the properties of a population to generate clean samples, without contamination from lower luminosity stars. We compare our identification with a classification done using classical criteria and discuss their respective efficiencies and contaminations as identification methods. We confirm that our method is as efficient at finding supergiants as the best classical methods, but with a far lower contamination by red giants than any other method. The result is a catalogue with 197 cool supergiants, 191 of which did not appear in previous lists of red supergiants. This is the largest coherent catalogue of cool supergiants in the Galaxy.
Low cost fabrication of ablative heat shields
NASA Technical Reports Server (NTRS)
Cecka, A. M.; Schofield, W. C.
1972-01-01
A material and process study was performed using subscale panels in an attempt to reduce the cost of fabricating ablative heat shield panels. Although no improvements were made in the material formulation, a significant improvement was obtained in the processing methods compared to those employed in the previous work. The principal feature of the new method is the press filling and curing of the ablation material in a single step with the bonding and curing of the face sheet. This method was chosen to replace the hand troweling and autoclave curing procedure used previously. Double-curvature panels of the same size as the flat panels were fabricated to investigate fabrication problems. It was determined that the same materials and processes used for flat panels can be used to produce the curved panels. A design with severe curvatures consisting of radii of 24 x 48 inches was employed for evaluation. Ten low-density and ten high-density panels were fabricated. With the exception of difficulties related to short run non-optimum tooling, excellent panel filling and density uniformity were obtained.
Shem-Tov, Doron; Halperin, Eran
2014-06-01
Recent technological improvements in the field of genetic data extraction give rise to the possibility of reconstructing the historical pedigrees of entire populations from the genotypes of individuals living today. Current methods are still not practical for real data scenarios, as they have limited accuracy and rely on unrealistic assumptions of monogamy and synchronized generations. In order to address these issues, we develop a new method for pedigree reconstruction, [Formula: see text], which is based on formulations of the pedigree reconstruction problem as variants of graph coloring. The new formulation allows us to consider features that were overlooked by previous methods, resulting in a reconstruction of up to 5 generations back in time, with an order of magnitude improvement in false-negative rates over the state of the art, while keeping a lower level of false-positive rates. We demonstrate the accuracy of [Formula: see text] compared to previous approaches using simulation studies over a range of population sizes, including inbred and outbred populations, monogamous and polygamous mating patterns, as well as synchronous and asynchronous mating.
Assessing Stream Channel Stability at Bridges in Physiographic Regions
DOT National Transportation Integrated Search
2006-07-01
The objective of this study was to expand and improve a rapid channel stability assessment method developed previously by Johnson et al. to include additional factors, such as major physiographic units across the United States, a greater range of ban...
Why is it important to improve dietary assessment methods?
Food frequency questionnaires, which measure a person's usual intake over a defined period of time, and 24-hour recalls, in which a person records everything eaten or drunk during the previous 24 hours, are commonly used to collect dietary information.
NASA Technical Reports Server (NTRS)
Henderson, R. G.; Thomas, G. S.; Nalepka, R. F.
1975-01-01
Methods of performing signature extension, using LANDSAT-1 data, are explored. The emphasis is on improving the performance and cost-effectiveness of large-area wheat surveys. Two methods were developed: ASC and MASC. Two further methods, Ratio and RADIFF, previously used with aircraft data, were adapted to and tested on LANDSAT-1 data. An investigation into the sources and nature of between-scene data variations was included. Initial investigations into the selection of training fields without in situ ground truth were undertaken.
Takahashi, Daisuke; Inomata, Tatsuji; Fukui, Tatsuya
2017-06-26
We previously reported an efficient peptide synthesis method, AJIPHASE®, that comprises repeated reactions and isolations by precipitation. This method utilizes an anchor molecule with long-chain alkyl groups as a protecting group for the C-terminus. To further improve this method, we developed a one-pot synthesis of a peptide sequence wherein the synthetic intermediates were isolated by solvent extraction instead of precipitation. A branched-chain anchor molecule was used in the new process, significantly enhancing the solubility of long peptides and the operational efficiency compared with the previous method, which employed precipitation for isolation and a straight-chain aliphatic group. Another prerequisite for this solvent-extraction-based strategy was the use of thiomalic acid and DBU for Fmoc deprotection, which facilitates the removal of byproducts, such as the fulvene adduct. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.
The Work Compatibility Improvement Framework: an integrated perspective of the human-at-work system.
Genaidy, Ash; Salem, Sam; Karwowski, Waldemar; Paez, Omar; Tuncel, Setenay
2007-01-15
The industrial revolution demonstrated the limitations of a pure mechanistic approach towards work design. Human work is now seen as a complex entity that involves different scientific branches and blurs the line between mental and physical activities. Job design has been a traditional concern of applied psychology, which has provided insight into the interaction between the individual and the work environment. The goal of this paper is to introduce the human-at-work system as a holistic approach to organizational design. It postulates that the well-being of workers and work outcomes are issues that need to be addressed jointly, moving beyond traditional concepts of job satisfaction and work stress. The work compatibility model (WCM) is introduced as an engineering approach that seeks to integrate previous constructs of job and organizational design. The WCM seeks a balance between energy expenditure and replenishment. The implementation of the WCM in industrial settings is described within the context of the Work Compatibility Improvement Framework. A sample review of six models (motivation-hygiene theory; job characteristics theory; person-environment fit; demand-control model; and balance theory) provides the foundation for the interaction between the individual and the work environment. A review of three workload assessment methods (position analysis questionnaire, job task analysis and NASA task load index) gives an example of the foundation for the taxonomy of work environment domains. Previous models have sought to identify a balance state for the human-at-work system. They differentiated between the objective and subjective effects of the environment and the worker. An imbalance between the person and the environment has been proven to increase health risks. The WCM works with a taxonomy of 12 work domains classified in terms of the direct (acting) or indirect (experienced) effect on the worker. In terms of measurement, two quantitative methods are proposed to measure the state of the system. The first method introduced by Abdallah et al. (2004) identifies operating zones. The second method introduced by Salem et al. (2006) identifies the distribution of the work elements on the x/y coordinate plane. While previous efforts have identified some relevant elements of the systems, they failed to provide a holistic, quantitative approach combining organizational and human factors into a common framework. It is postulated that improving the well-being of workers will simultaneously improve organizational outcomes. The WCM moves beyond previous models by providing a hierarchical structure of work domains and a combination of methods to diagnose any organizational setting. The WCM is an attempt to achieve organizational excellence in human resource management, moving beyond job design to an integrated improvement strategy. A joint approach to organizational and job design will not only result in decreased prevalence of health risks, but in enhanced organizational effectiveness as well. The implementation of the WCM, that is, the Work Compatibility Improvement Framework, provides the basis for integrating different elements of the work environment into a single reliable construct. An improvement framework is essential to ensure that the measures of the WCM result in a system that is adaptive and self-regulated.
DAMBE7: New and Improved Tools for Data Analysis in Molecular Biology and Evolution.
Xia, Xuhua
2018-06-01
DAMBE is a comprehensive software package for genomic and phylogenetic data analysis on Windows, Linux, and Macintosh computers. New functions include imputing missing distances and phylogeny simultaneously (paving the way to build large phage and transposon trees), new bootstrapping/jackknifing methods for PhyPA (phylogenetics from pairwise alignments), and an improved function for fast and accurate estimation of the shape parameter of the gamma distribution for fitting rate heterogeneity over sites. The previous method corrects multiple hits for each site independently; DAMBE's new method uses all sites simultaneously for correction. DAMBE, featuring a user-friendly graphic interface, is freely available from http://dambe.bio.uottawa.ca (last accessed, April 17, 2018).
LETTER TO THE EDITOR: Two-centre exchange integrals for complex exponent Slater orbitals
NASA Astrophysics Data System (ADS)
Kuang, Jiyun; Lin, C. D.
1996-12-01
The one-dimensional integral representation for the Fourier transform of a two-centre product of B functions (finite linear combinations of Slater orbitals) with real parameters is generalized to include B functions with complex parameters. This one-dimensional integral representation allows for an efficient method of calculating two-centre exchange integrals with plane-wave electronic translational factors (ETF) over Slater orbitals of real/complex exponents. This method is a significant improvement on the previous two-dimensional quadrature method for these integrals. A new basis set (given in the original as an inline equation, not reproduced here) is proposed to improve the description of pseudo-continuum states in the close-coupling treatment of ion-atom collisions.
Improving Control System Cyber-State Awareness using Known Secure Sensor Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ondrej Linda; Milos Manic; Miles McQueen
This paper presents the design and simulation of a low cost and low false alarm rate method for improved cyber-state awareness of critical control systems - the Known Secure Sensor Measurements (KSSM) method. The KSSM concept relies on physical measurements to detect malicious falsification of the control system's state. The KSSM method can be incrementally integrated with already installed control systems for enhanced resilience. This paper reviews the previously developed theoretical KSSM concept and then describes a simulation of the KSSM system. A simulated control system network is integrated with the KSSM components. The effectiveness of detection of various intrusion scenarios is demonstrated on several control system network topologies.
NASA Astrophysics Data System (ADS)
Kim, D.; Lee, H.; Yu, H.; Beighley, E.; Durand, M. T.; Alsdorf, D. E.; Hwang, E.
2017-12-01
River discharge is a prerequisite for understanding flood hazard and water resource management, yet we have poor knowledge of it, especially over remote basins. Previous studies have successfully used classic hydraulic geometry, at-many-stations hydraulic geometry (AMHG), and Manning's equation to estimate river discharge. The theoretical bases of these empirical methods were introduced by Leopold and Maddock (1953) and Manning (1889), and they have long been used in the fields of hydrology, water resources, and geomorphology. However, methods to estimate river discharge from remotely sensed data essentially require bathymetric information of the river or are not applicable to braided rivers. Furthermore, the methods used in the previous studies assumed steady and uniform flow. Consequently, those methods have limitations in estimating river discharge in complex and unsteady flows in nature. In this study, we developed a novel approach to estimating river discharge by applying the weak learner method (here termed WLQ), one of the ensemble methods using multiple classifiers, to remotely sensed measurements of water levels from Envisat altimetry, effective river widths from PALSAR images, and multi-temporal surface water slopes over a part of the mainstem Congo. Compared with the methods used in the previous studies, the root mean square error (RMSE) decreased from 5,089 m3s-1 to 3,701 m3s-1, and the relative RMSE (RRMSE) improved from 12% to 8%. It is expected that our method can provide improved estimates of river discharge in complex and unsteady flow conditions based on the data-driven prediction model by machine learning (i.e. WLQ), even when bathymetric data are not available or in the case of braided rivers. Moreover, it is expected that the WLQ can be applied to the measurements of river levels, slopes and widths from the future Surface Water Ocean Topography (SWOT) mission to be launched in 2021.
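The weak-learner idea, many simple regressors combined by boosting, can be sketched with scikit-learn; the level/width/slope features mirror the measurements listed above, but the synthetic rating relation and all parameter values are illustrative, not the Congo data:

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 600
level = rng.uniform(0.0, 6.0, n)        # altimetry water level (m)
width = rng.uniform(800.0, 3000.0, n)   # effective river width (m)
slope = rng.uniform(2e-5, 8e-5, n)      # water surface slope (-)
# Illustrative Manning-like rating producing synthetic "observed" discharge.
q = width * level ** (5 / 3) * np.sqrt(slope) / 0.03 * rng.lognormal(0, 0.05, n)

X = np.column_stack([level, width, slope])
X_tr, X_te, q_tr, q_te = train_test_split(X, q, random_state=0)

# Boosted ensemble of weak learners (shallow regression trees).
model = AdaBoostRegressor(DecisionTreeRegressor(max_depth=3),
                          n_estimators=200, random_state=0)
model.fit(X_tr, q_tr)
pred = model.predict(X_te)
rrmse = np.sqrt(np.mean((pred - q_te) ** 2)) / q_te.mean()
print(f"relative RMSE: {rrmse:.1%}")
```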
Improved heat switch for gas sorption compressor
NASA Technical Reports Server (NTRS)
Chan, C. K.
1985-01-01
Thermal conductivities of the charcoal bed and the copper matrix for the gas adsorption compressor were measured by the concentric-cylinder method. The presence of the copper matrix in the charcoal bed enhanced the bed conductance by at least an order of magnitude. Thermal capacities of the adsorbent cell and the heat leaks to two compressor designs were measured by the transient method. The new gas adsorption compressor had a heat switch that could transfer eight times more heat than the previous one. The cycle time for the new prototype compressor is also improved by a factor of eight to within the minute range.
Improvement of the Owner Distinction Method for Healing-Type Pet Robots
NASA Astrophysics Data System (ADS)
Nambo, Hidetaka; Kimura, Haruhiko; Hara, Mirai; Abe, Koji; Tajima, Takuya
In order to decrease human stress, Animal Assisted Therapy, which uses pets to heal humans, has attracted attention. However, since animals are unsanitary and can be unsafe, it is difficult to apply animal pets in hospitals. For this reason, pet robots have attracted attention as a substitute for animal pets. Since pet robots pose no problems in sanitation and safety, they can be applied as a substitute for animal pets in the therapy. In our previous study, where pet robots distinguish their owners like an animal pet does, we used a puppet-type pet robot with pressure-type touch sensors. However, the accuracy of our method was not sufficient for practical use. In this paper, we propose a method to improve the accuracy of the distinction. The proposed method can be applied to capacitive touch sensors, such as those installed in AIBO, in addition to pressure-type touch sensors. This paper also reports the performance of the proposed method from experimental results and confirms that it improves distinction performance over the conventional method.
Gai, Jiading; Obeid, Nady; Holtrop, Joseph L.; Wu, Xiao-Long; Lam, Fan; Fu, Maojing; Haldar, Justin P.; Hwu, Wen-mei W.; Liang, Zhi-Pei; Sutton, Bradley P.
2013-01-01
Several recent methods have been proposed to obtain significant speed-ups in MRI image reconstruction by leveraging the computational power of GPUs. Previously, we implemented a GPU-based image reconstruction technique called the Illinois Massively Parallel Acquisition Toolkit for Image reconstruction with ENhanced Throughput in MRI (IMPATIENT MRI) for reconstructing data collected along arbitrary 3D trajectories. In this paper, we improve IMPATIENT by removing computational bottlenecks, using a gridding approach to accelerate the computation of various data structures needed by the previous routine. Further, we enhance the routine with capabilities for off-resonance correction and multi-sensor parallel imaging reconstruction. Through the implementation of optimized gridding into our iterative reconstruction scheme, speed-ups of more than a factor of 200 are achieved in the improved GPU implementation compared to the previous accelerated GPU code. PMID:23682203
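Gridding resamples non-Cartesian k-space data onto a regular grid by convolving each sample with a compact kernel, after which an ordinary FFT applies. A minimal 1-D sketch (a Gaussian kernel as a simplification of the Kaiser-Bessel kernels typically used, with density compensation and deapodization omitted for brevity):

```python
import numpy as np

def grid_1d(k_coords, samples, grid_size, kernel_width=4, sigma=0.8):
    """Convolve non-uniform k-space samples onto a uniform grid."""
    grid = np.zeros(grid_size, dtype=complex)
    centers = (k_coords + 0.5) * grid_size   # map k in [-0.5, 0.5) to indices
    for kc, s in zip(centers, samples):
        lo = int(np.floor(kc)) - kernel_width // 2
        for i in range(lo, lo + kernel_width + 1):
            w = np.exp(-0.5 * ((i - kc) / sigma) ** 2)  # Gaussian kernel weight
            grid[i % grid_size] += w * s                # wrap-around for brevity
    return grid

rng = np.random.default_rng(0)
k = rng.uniform(-0.5, 0.5, 2000)            # arbitrary 1-D trajectory
data = np.exp(-2j * np.pi * k * 20.0)       # signal: point source at x = 20
image = np.fft.fftshift(np.fft.ifft(np.fft.ifftshift(grid_1d(k, data, 256))))
print(int(np.argmax(np.abs(image))) - 128)  # peak near +20
```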
Improved Estimates of Thermodynamic Parameters
NASA Technical Reports Server (NTRS)
Lawson, D. D.
1982-01-01
Techniques refined for estimating heat of vaporization and other parameters from molecular structure. Using a parabolic equation with three adjustable parameters, heat of vaporization can be used to estimate boiling point, and vice versa. Boiling points and vapor pressures for some nonpolar liquids were estimated by the improved method and compared with previously reported values. The technique for estimating thermodynamic parameters should make it easier for engineers to choose among candidate heat-exchange fluids for thermochemical cycles.
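The parabolic relation can be written as dHvap = a + b*Tb + c*Tb^2; fitting reference fluids fixes the three parameters, and inverting the parabola estimates a boiling point from a heat of vaporization. A sketch with illustrative numbers (the fluid data and fitted coefficients are hypothetical, not the report's):

```python
import numpy as np

# Illustrative (T_boil [K], dH_vap [kJ/mol]) pairs for nonpolar reference fluids.
Tb = np.array([77.0, 111.7, 184.6, 231.1, 272.7])
dH = np.array([5.6, 8.2, 14.7, 19.0, 22.4])

# Fit the three-parameter parabola dH = a + b*Tb + c*Tb**2.
c, b, a = np.polyfit(Tb, dH, deg=2)   # polyfit returns highest degree first

def boiling_point_from_dH(dh):
    """Invert the parabola; keep the physically meaningful root."""
    roots = np.roots([c, b, a - dh])
    real = roots[np.isreal(roots)].real
    return real[real > 0].min()

print(f"estimated T_b for dH = 16 kJ/mol: {boiling_point_from_dH(16.0):.1f} K")
```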
Matsumoto, Hirotaka; Kiryu, Hisanori
2016-06-08
Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method, SCOUP, to elucidate the differentiation process. Unlike previous dimension-reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also estimates cell lineage more accurately than previous methods, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to single-cell RNA-seq data and detect a candidate key regulator for differentiation, as well as clusters in a correlation network that are not detected with conventional correlation analysis. We developed a stochastic process-based method, SCOUP, to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and is available at https://github.com/hmatsu1226/SCOUP.
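SCOUP's generative model is built on an Ornstein-Uhlenbeck process in which expression relaxes toward a lineage-specific attractor; a minimal Euler-Maruyama simulation (arbitrary parameter values, for illustration only) makes the idea concrete:

```python
import numpy as np

def simulate_ou(x0, attractor, theta=1.0, sigma=0.3, dt=0.01, steps=500, seed=0):
    """Euler-Maruyama simulation of dX = theta*(attractor - X) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    x = np.empty(steps + 1)
    x[0] = x0
    for t in range(steps):
        x[t + 1] = x[t] + theta * (attractor - x[t]) * dt \
                   + sigma * np.sqrt(dt) * rng.normal()
    return x

# A gene relaxing from a progenitor expression level toward a
# differentiated-state attractor; pseudo-time indexes the trajectory.
traj = simulate_ou(x0=0.0, attractor=2.0)
print(f"start {traj[0]:.2f} -> end {traj[-1]:.2f} (attractor 2.0)")
```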
Search Radar Track-Before-Detect Using the Hough Transform.
1995-03-01
…before-detect processing method which allows previous data to help in target detection. The technique provides many advantages compared to … improved target detection scheme, applicable to search radars, using the Hough transform image processing technique. The system concept involves a track …
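The essence of Hough-based track-before-detect is to integrate energy along every candidate straight-line track in a range-time data map and threshold the accumulator rather than individual detections. A minimal sketch over synthetic data (the accumulator discretization choices are arbitrary):

```python
import numpy as np

def hough_lines(image, n_theta=180, n_rho=200):
    """Accumulate evidence for straight lines in an intensity image."""
    h, w = image.shape
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    rho_max = np.hypot(h, w)
    acc = np.zeros((n_rho, n_theta))
    ys, xs = np.nonzero(image > 0)
    for y, x in zip(ys, xs):
        rho = x * np.cos(thetas) + y * np.sin(thetas)   # all angles at once
        idx = ((rho + rho_max) / (2 * rho_max) * (n_rho - 1)).astype(int)
        acc[idx, np.arange(n_theta)] += image[y, x]     # integrate intensity
    return acc, thetas

# Synthetic range-time map: a weak target at constant range rate plus noise.
rng = np.random.default_rng(0)
img = rng.random((100, 100)) < 0.02                     # false alarms
t = np.arange(100)
img[np.clip(20 + t // 2, 0, 99), t] = True              # target track
acc, thetas = hough_lines(img.astype(float))
rho_i, th_i = np.unravel_index(acc.argmax(), acc.shape)
print(f"strongest track: theta = {np.degrees(thetas[th_i]):.1f} deg "
      f"(rho bin {rho_i})")
```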
A pseudospectral Legendre method for hyperbolic equations with an improved stability condition
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1986-01-01
A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact, the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points, 1/N, rather than to 1/N^2 (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.
Development efforts to improve curved-channel microchannel plates
NASA Technical Reports Server (NTRS)
Corbett, M. B.; Feller, W. B.; Laprade, B. N.; Cochran, R.; Bybee, R.; Danks, A.; Joseph, C.
1993-01-01
Curved-channel microchannel plate (C-plate) improvements resulting from an ongoing NASA STIS microchannel plate (MCP) development program are described. Performance limitations of previous C-plates led to a development program in support of the STIS MAMA UV photon counter, a second generation instrument on the Hubble Space Telescope. C-plate gain, quantum detection efficiency, dark noise, and imaging distortion, which are influenced by channel curvature non-uniformities, have all been improved through use of a new centrifuge fabrication technique. This technique will be described, along with efforts to improve older, more conventional shearing methods. Process optimization methods used to attain targeted C-plate performance goals will be briefly characterized. Newly developed diagnostic measurement techniques to study image distortion, gain uniformity, input bias angle, channel curvature, and ion feedback, will be described. Performance characteristics and initial test results of the improved C-plates will be reported. Future work and applications will also be discussed.
Oku, H; Yamashita, M; Iwasaki, H; Chinen, I
1999-02-01
The present study further improved the serum-free method of culturing rat keratinocytes. To obtain the best growth of rat keratinocytes, we modified our previous serum-free medium (MCDB153-based medium), particularly the amounts of glucose and sodium chloride (NaCl). Titration experiments showed the optimal concentration to be 0.8 mM for glucose and 100 mM for NaCl. This modification eliminated the requirement for albumin, which had been essential for colony formation when our previous medium was used. Titration of glucose and NaCl, followed by adjustment of essential amino acids and growth factors, produced a new formulation. More satisfactory and better growth was achieved with the new medium than with the previous medium. Accumulation of monoalkyldiacylglycerol (MADAG) was consistently noted in this study, reflecting an unusual lipid profile. A tendency toward normalization was, however, noted in the neutral lipid profile of keratinocytes cultivated in the new medium: lower production of MADAG was obtained with the new formulation than with the previous one.
Improved method for HPLC analysis of polyamines, agmatine and aromatic monoamines in plant tissue
NASA Technical Reports Server (NTRS)
Slocum, R. D.; Flores, H. E.; Galston, A. W.; Weinstein, L. H.
1989-01-01
The high performance liquid chromatographic (HPLC) method of Flores and Galston (1982 Plant Physiol 69: 701) for the separation and quantitation of benzoylated polyamines in plant tissues has been widely adopted by other workers. However, due to previously unrecognized problems associated with the derivatization of agmatine, this important intermediate in plant polyamine metabolism cannot be quantitated using this method. Also, two polyamines, putrescine and diaminopropane, are not well resolved using this method. A simple modification of the original HPLC procedure greatly improves the separation and quantitation of these amines, and further allows the simultaneous analysis of phenethylamine and tyramine, which are major monoamine constituents of tobacco and other plant tissues. We have used this modified HPLC method to characterize amine titers in suspension-cultured carrot (Daucus carota L.) cells and tobacco (Nicotiana tabacum L.) leaf tissues.
Chemically etched fiber tips for near-field optical microscopy: a process for smoother tips.
Lambelet, P; Sayah, A; Pfeffer, M; Philipona, C; Marquis-Weible, F
1998-11-01
An improved method for producing fiber tips for scanning near-field optical microscopy is presented. The improvement consists of chemically etching quartz optical fibers through their acrylate jacket. This new method is compared with the previous one in which bare fibers were etched. With the new process the meniscus formed by the acid along the fiber does not move during etching, leading to a much smoother surface of the tip cone. Subsequent metallization is thus improved, resulting in better coverage of the tip with an aluminum opaque layer. Our results show that leakage can be avoided along the cone, and light transmission through the tip is spatially limited to an optical aperture of a 100-nm dimension.
Polynomial mixture method of solving ordinary differential equations
NASA Astrophysics Data System (ADS)
Shahrir, Mohammad Shazri; Nallasamy, Kumaresan; Ratnavelu, Kuru; Kamali, M. Z. M.
2017-11-01
In this paper, a numerical solution of the fuzzy quadratic Riccati differential equation is estimated using a proposed new approach that provides a mixture of polynomials in which the right mixture is generated iteratively. This mixture provides a generalized formalism of traditional Neural Networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th-order (RK4) method. This can be achieved by solving the 1st-order non-linear ordinary differential equation (ODE) that is found commonly in Riccati differential equations. Research has shown improved results relative to the RK4 method. It can be said that the Polynomial Mixture Method (PMM) shows promising results, with the advantage of continuous estimation and improved accuracy over Mabood et al., RK-4, Multi-Agent NN and the Neuro Method (NM).
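The RK4 baseline referred to above is straightforward to reproduce. A sketch on the standard quadratic Riccati test problem y' = 1 + 2y - y^2, y(0) = 0 (a common benchmark in this literature, used here for illustration, with its closed-form solution as a check):

```python
import numpy as np

def rk4(f, y0, t0, t1, n_steps):
    """Classical 4th-order Runge-Kutta integration of y' = f(t, y)."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += (h / 6) * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return y

riccati = lambda t, y: 1.0 + 2.0 * y - y * y
y_num = rk4(riccati, 0.0, 0.0, 1.0, 100)
# Closed-form solution: y = 1 + sqrt(2)*tanh(sqrt(2)*t + c), y(0) = 0.
s = np.sqrt(2.0)
c = 0.5 * np.log((s - 1.0) / (s + 1.0))
y_exact = 1.0 + s * np.tanh(s * 1.0 + c)
print(f"RK4: {y_num:.8f}, exact: {y_exact:.8f}")
```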
An improved 2D MoF method by using high order derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2017-11-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Like other second-order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worth studying. In 2D problems, the first-order derivative has been deduced and applied in previous research. In this paper, the high-order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and the number of discontinuous points is twice the number of polygon edges. A rotation algorithm is proposed to successively calculate these discontinuous points, so that the target interval where the optimal solution is located can be determined. Since the high-order derivatives of the objective function are continuous in the target interval, iteration schemes based on high-order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of the objective function and its derivatives can be directly updated without explicitly solving the volume conservation equation. The direct update further improves the efficiency, especially as the number of edges of the polygon increases. Halley's method, which is based on the first three order derivatives, is applied as the iteration scheme in this paper, and the numerical results indicate that the CPU time is about half that of the previous method on the quadrilateral cell and about one sixth on the decagon cell.
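Within the target interval, where the first three derivatives are continuous, Halley's iteration can be applied to the stationarity condition. A generic sketch of the iteration on a simple illustrative objective (not the MoF objective itself):

```python
def halley_minimize(df, d2f, d3f, x0, tol=1e-12, max_iter=50):
    """Find a root of df (a stationary point of the objective) with
    Halley's method, which uses the first three derivatives."""
    x = x0
    for _ in range(max_iter):
        f1, f2, f3 = df(x), d2f(x), d3f(x)
        step = 2 * f1 * f2 / (2 * f2 * f2 - f1 * f3)  # cubically convergent
        x -= step
        if abs(step) < tol:
            break
    return x

# Illustrative objective g(x) = x^4/4 - x, with a stationary point at x = 1.
df  = lambda x: x ** 3 - 1
d2f = lambda x: 3 * x ** 2
d3f = lambda x: 6 * x
print(f"minimizer ~= {halley_minimize(df, d2f, d3f, x0=2.0):.10f}")
```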
Advanced Feedback Methods in Information Retrieval.
ERIC Educational Resources Information Center
Salton, G.; And Others
1985-01-01
In this study, automatic feedback techniques are applied to Boolean query statements in online information retrieval to generate improved query statements based on information contained in previously retrieved documents. Feedback operations are carried out using conventional Boolean logic and extended logic. Experimental output is included to…
Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant sub-steps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM only need to be formed on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. Lastly, with substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to an extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
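The saving comes from the substeps being identical: the linear systems solved at each substep share the same matrices, so their LU factorizations are formed once and reused. The sketch below illustrates that structure with a simple implicit (backward Euler) substep as a stand-in for the CRAM pole solves, whose actual poles and residues are not reproduced here:

```python
import numpy as np
from scipy.sparse import csc_matrix, identity
from scipy.sparse.linalg import splu

# Toy depletion-like system dn/dt = A n (parent -> daughter decay chain).
A = csc_matrix(np.array([[-1.0, 0.0],
                         [1.0, -0.1]]))
n = np.array([1.0, 0.0])
step, substeps = 10.0, 16
h = step / substeps

# Factor (I - h*A) once; every identical substep reuses the factorization,
# analogous to reusing the LU decompositions of the CRAM pole matrices.
M = csc_matrix(identity(2) - h * A)
lu = splu(M)
for _ in range(substeps):
    n = lu.solve(n)              # one implicit (backward Euler) substep
print(f"end-of-step concentrations: {n}")
```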
NASA Astrophysics Data System (ADS)
Marques, Manuel J.; Rivet, Sylvain; Bradu, Adrian; Podoleanu, Adrian
2018-02-01
In this communication, we present a proof-of-concept polarization-sensitive Optical Coherence Tomography (PS-OCT) module which can be used to characterize the retardance and the axis orientation of a linear birefringent sample. This module configuration is an improvement on our previous work [1, 2], since it encodes the two polarization channels on the optical path difference, effectively carrying out the polarization measurements simultaneously (snapshot measurement), whilst retaining all the advantages (namely the insensitivity to environmental parameters when using SM fibers) of those two previous configurations. Further progress consists in employing Master Slave OCT technology [3], which is used to automatically compensate for the dispersion mismatch introduced by the elements in the module. This is essential given the encoding of the polarization states on two different optical path lengths, each having dissimilar dispersive properties. Utilizing this method instead of the commonly used re-linearization and numerical dispersion compensation methods reduces the required calculation time.
Tissue classification using depth-dependent ultrasound time series analysis: in-vitro animal study
NASA Astrophysics Data System (ADS)
Imani, Farhad; Daoud, Mohammad; Moradi, Mehdi; Abolmaesumi, Purang; Mousavi, Parvin
2011-03-01
Time series analysis of ultrasound radio-frequency (RF) signals has been shown to be an effective tissue classification method. Previous studies of this method for tissue differentiation at high and clinical frequencies have been reported. In this paper, analysis of RF time series is extended to improve tissue classification at clinical frequencies by including novel features extracted from the time series spectrum. The primary feature examined is the Mean Central Frequency (MCF), computed for regions of interest (ROIs) in the tissue extending along the axial axis of the transducer. In addition, the intercept and slope of a line fitted to the MCF values of the RF time series as a function of depth have been included. To evaluate the accuracy of the new features, an in vitro animal study was performed using three tissue types: bovine muscle, bovine liver, and chicken breast, where perfect two-way classification is achieved. The results show statistically significant improvements over the classification accuracies with previously reported features.
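The mean central frequency of an RF time series is its power-weighted spectral centroid; together with the intercept and slope of a straight-line fit of MCF against depth, this yields the features described above. A minimal numpy sketch on synthetic data (sampling parameters are illustrative):

```python
import numpy as np

def mean_central_frequency(ts, fs):
    """Spectral centroid (power-weighted mean frequency) of one time series."""
    spectrum = np.abs(np.fft.rfft(ts)) ** 2
    freqs = np.fft.rfftfreq(len(ts), d=1.0 / fs)
    return (freqs * spectrum).sum() / spectrum.sum()

# Synthetic RF time series for ROIs along the axial (depth) axis:
# 64 depths x 128 temporal samples; fs is an illustrative sampling rate.
rng = np.random.default_rng(0)
fs, n_depth, n_frames = 20e6, 64, 128
series = rng.normal(size=(n_depth, n_frames))

mcf = np.array([mean_central_frequency(series[d], fs) for d in range(n_depth)])
depth = np.arange(n_depth)
slope, intercept = np.polyfit(depth, mcf, deg=1)   # depth-dependent features
features = np.array([mcf.mean(), intercept, slope])
print(features)
```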
Automated selection of stabilizing mutations in designed and natural proteins.
Borgo, Benjamin; Havranek, James J
2012-01-31
The ability to engineer novel protein folds, conformations, and enzymatic activities offers enormous potential for the development of new protein therapeutics and biocatalysts. However, many de novo and redesigned proteins exhibit poor hydrophobic packing in their predicted structures, leading to instability or insolubility. The general utility of rational, structure-based design would greatly benefit from an improved ability to generate well-packed conformations. Here we present an automated protocol within the RosettaDesign framework that can identify and improve poorly packed protein cores by selecting a series of stabilizing point mutations. We apply our method to previously characterized designed proteins that exhibited a decrease in stability after a full computational redesign. We further demonstrate the ability of our method to improve the thermostability of a well-behaved native protein. In each instance, biophysical characterization reveals that we were able to stabilize the original proteins against chemical and thermal denaturation. We believe our method will be a valuable tool for both improving upon designed proteins and conferring increased stability upon native proteins.
Guo, Yang; Riplinger, Christoph; Becker, Ute; Liakos, Dimitrios G; Minenkov, Yury; Cavallo, Luigi; Neese, Frank
2018-01-07
In this communication, an improved perturbative triples correction (T) algorithm for domain-based local pair-natural orbital singles and doubles coupled cluster (DLPNO-CCSD) theory is reported. In our previous implementation, the semi-canonical approximation was used and linear scaling was achieved for both the DLPNO-CCSD and (T) parts of the calculation. In this work, we refer to this previous method as DLPNO-CCSD(T0) to emphasize the semi-canonical approximation. It is well established that the DLPNO-CCSD method can predict very accurate absolute and relative energies with respect to the parent canonical CCSD method. However, the (T0) approximation may introduce significant errors in absolute energies as the triples correction grows in magnitude. In the majority of cases, the relative energies from (T0) are as accurate as the canonical (T) results themselves. Unfortunately, in rare cases, and in particular for small-gap systems, the (T0) approximation breaks down and relative energies show large deviations from the parent canonical CCSD(T) results. To address this problem, an iterative (T) algorithm based on the previous DLPNO-CCSD(T0) algorithm has been implemented [abbreviated here as DLPNO-CCSD(T)]. Using triples natural orbitals to represent the virtual spaces for triples amplitudes, storage bottlenecks are avoided. Various carefully designed approximations ease the computational burden such that, overall, the increase in the DLPNO-(T) calculation time over DLPNO-(T0) only amounts to a factor of about two (depending on the basis set). Benchmark calculations for the GMTKN30 database show that, compared to DLPNO-CCSD(T0), the errors in absolute energies are greatly reduced and relative energies are moderately improved. The particularly problematic case of cumulene chains of increasing lengths is also successfully addressed by DLPNO-CCSD(T).
Laser-Induced-Fluorescence Photogrammetry and Videogrammetry
NASA Technical Reports Server (NTRS)
Danehy, Paul; Jones, Tom; Connell, John; Belvin, Keith; Watson, Kent
2004-01-01
An improved method of dot-projection photogrammetry and an extension of the method to encompass dot-projection videogrammetry overcome some deficiencies of dot-projection photogrammetry as previously practiced. The improved method makes it possible to perform dot-projection photogrammetry or videogrammetry on targets that have previously not been amenable to dot-projection photogrammetry because they do not scatter enough light. Such targets include ones that are transparent, specularly reflective, or dark. In standard dot-projection photogrammetry, multiple beams of white light are projected onto the surface of an object of interest (denoted the target) to form a known pattern of bright dots. The illuminated surface is imaged in one or more cameras oriented at a nonzero angle or angles with respect to a central axis of the illuminating beams. The locations of the dots in the image(s) contain stereoscopic information on the locations of the dots, and, hence, on the location, shape, and orientation of the illuminated surface of the target. The images are digitized and processed to extract this information. Hardware and software to implement standard dot-projection photogrammetry are commercially available. Success in dot-projection photogrammetry depends on achieving sufficient signal-to-noise ratios: that is, it depends on scattering of enough light by the target so that the dots as imaged in the camera(s) stand out clearly against the ambient-illumination component of the image of the target. In one technique used previously to increase the signal-to-noise ratio, the target is illuminated by intense, pulsed laser light and the light entering the camera(s) is band-pass filtered at the laser wavelength. Unfortunately, speckle caused by the coherence of the laser light engenders apparent movement in the projected dots, thereby giving rise to errors in the measurement of the centroids of the dots and corresponding errors in the computed shape and location of the surface of the target. The improved method is denoted laser-induced-fluorescence photogrammetry.
Orthodontic informed consent considering information load and serial position effect.
Pawlak, Caroline E; Fields, Henry W; Beck, F Michael; Firestone, Allen R
2015-03-01
Previous research has demonstrated that current methods of informed consent are relatively ineffective, as shown by poor recall and comprehension by adolescent patients and their parents. The purpose of this study was to determine whether adding a short videotape presentation reiterating the issues related to informed consent to a modified informed consent document that emphasizes a limited number of core and patient-specific custom "chunks" at the beginning of an informed consent presentation improved the recall and comprehension of the risks, benefits, and alternatives of orthodontic treatment. A second objective was to evaluate the current related data for recommendable practices. Seventy patient-parent pairs were randomly divided into 2 groups. The intervention group (group A) patients and parents together reviewed a customized slide show and a short videotape presentation describing the key risks of orthodontic treatment. Group B followed the same protocol without viewing the videotape. All patients and parents were interviewed independently by research assistants using an established measurement tool with open-ended questions. Interviews were transcribed and scored for the appropriateness of responses using a previously established codebook. Lastly, the patients and parents were given 2 reading literacy tests, 1 related to health and 1 with general content, followed by self-administered demographic and psychological state questionnaires. There were no significant differences between the groups for sociodemographic variables. There were no significant differences between the groups for overall recall and comprehension; recall and comprehension for the domains of treatment, risk, and responsibility; and recall and comprehension for core, general, and custom items. The positional effects were limited in impact. When compared with previous studies, these data further demonstrate the benefit of improved readability and audiovisual supplementation with the addition of "chunking." There is no benefit to adding a short video to the previously established improved readability and audiovisual supplementation. There is a significant benefit of improved readability and audiovisual slide supplementation with the addition of "chunking" over traditional informed consent methods in terms of patient improvement in overall comprehension, treatment recall, and treatment comprehension. The treatment domain is the most affected.
Chou, Bin-Kuan; Gu, Haihui; Gao, Yongxing; Dowey, Sarah N.; Wang, Ying; Shi, Jun; Li, Yanxin; Ye, Zhaohui; Cheng, Tao
2015-01-01
Reprogramming human adult blood mononuclear cells (MNCs) by transient plasmid expression is becoming increasingly popular as an attractive method for generating induced pluripotent stem (iPS) cells without the genomic alteration caused by genome-inserting vectors. However, its efficiency is relatively low with adult MNCs compared with cord blood MNCs and other fetal cells, and is highly variable among different adult individuals. We report highly efficient iPS cell derivation under clinically compliant conditions via three major improvements. First, we revised a combination of three EBNA1/OriP episomal vectors expressing five transgenes, which increased reprogramming efficiency by ≥10–50-fold over our previous vectors. Second, human recombinant vitronectin proteins were used as cell culture substrates, alleviating the need for feeder cells or animal-sourced proteins. Finally, we eliminated the previously critical step of manually picking individual iPS cell clones by pooling newly emerged iPS cell colonies. Pooled cultures were then purified based on the presence of the TRA-1-60 pluripotency surface antigen, resulting in the ability to rapidly expand iPS cells for subsequent applications. These new improvements permit a consistent and reliable method to generate human iPS cells with minimal clonal variations from blood MNCs, including previously difficult samples such as those from patients with paroxysmal nocturnal hemoglobinuria. In addition, this method of efficiently generating iPS cells under feeder-free and xeno-free conditions allows for the establishment of clinically compliant iPS cell lines for future therapeutic applications. PMID:25742692
Sliding mode control method having terminal convergence in finite time
NASA Technical Reports Server (NTRS)
Venkataraman, Subramanian T. (Inventor); Gulati, Sandeep (Inventor)
1994-01-01
An object of this invention is to provide robust nonlinear controllers for robotic operations in unstructured environments, based upon a new class of closed-loop sliding control methods, sometimes denoted terminal sliders, which enforce closed-loop control convergence to equilibrium in finite time. Improved performance results from the elimination of the high-frequency control switching previously employed for robustness to parametric uncertainties. Improved performance also results from the dependence of terminal slider stability upon the rate of change of uncertainties over the sliding surface, rather than upon the magnitude of the uncertainty itself, for robust control. Terminal sliding mode control also yields improved convergence, in that the convergence time is finite and can be controlled. A further object is to apply terminal sliders to robot manipulator control, to benchmark performance against the traditional computed-torque control method, and to provide for the design of control parameters.
NASA Astrophysics Data System (ADS)
Vidanović, Ivana; Bogojević, Aleksandar; Balaž, Antun; Belić, Aleksandar
2009-12-01
In this paper, building on a previous analysis [I. Vidanović, A. Bogojević, and A. Belić, preceding paper, Phys. Rev. E 80, 066705 (2009)] of exact diagonalization of the space-discretized evolution operator for the study of properties of nonrelativistic quantum systems, we present a substantial improvement to this method. We apply the recently introduced effective-action approach for obtaining the short-time expansion of the propagator up to very high orders to calculate matrix elements of the space-discretized evolution operator. This improves the previously used approximations for the discretized matrix elements by many orders of magnitude and allows us to numerically obtain large numbers of accurate energy eigenvalues and eigenstates using numerical diagonalization. We illustrate this approach on several one- and two-dimensional models. The quality of the numerically calculated higher-order eigenstates is assessed by comparison with the semiclassical cumulative density of states.
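To illustrate only the core idea, here is a hedged toy using the lowest-order short-time propagator for a 1D harmonic oscillator (not the high-order effective actions the authors actually employ); eigenvalues of the discretized evolution operator yield the low-lying spectrum:

```python
import numpy as np

# Space-discretized evolution operator exp(-t H) for V(x) = x^2 / 2
# (hbar = m = 1), using the naive lowest-order short-time propagator.
L, N, t = 10.0, 200, 0.1                   # box half-width, grid points, time step
x = np.linspace(-L, L, N)
dx = x[1] - x[0]
V = 0.5 * x**2
rho = (np.exp(-(x[:, None] - x[None, :])**2 / (2.0 * t)
              - 0.5 * t * (V[:, None] + V[None, :]))
       / np.sqrt(2.0 * np.pi * t))
A = dx * rho                               # matrix elements of exp(-t H)
lam = np.linalg.eigvalsh(A)[::-1]          # largest eigenvalues <-> lowest energies
energies = -np.log(lam[:5]) / t
print(energies)                            # approaches 0.5, 1.5, 2.5, ... as t, dx -> 0
```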
Transformation of the Fungal Soybean Pathogen Cercospora kikuchii with the Selectable Marker bar
Upchurch, Robert G.; Meade, Maura J.; Hightower, Robin C.; Thomas, Robert S.; Callahan, Terrence M.
1994-01-01
An improved transformation protocol, utilizing selection for resistance to the herbicide bialaphos, has been developed for the plant pathogenic fungus Cercospora kikuchii. Stable, bialaphos-resistant transformants are recovered at frequencies eight times higher than those achieved with the previous system that was based on selection for benomyl resistance. In addition to C. kikuchii, this improved method can also be used to transform other species of Cercospora. PMID:16349469
Linear modeling of steady-state behavioral dynamics.
Palya, William L; Walter, Donald; Kessel, Robert; Lucke, Robert
2002-01-01
The observed steady-state behavioral dynamics supported by unsignaled periods of reinforcement within repeating 2,000-s trials were modeled with a linear transfer function. These experiments employed refined schedule forms and analytical methods to improve the precision of the measured transfer function compared to previous work. The refinements include both the use of multiple reinforcement periods, which improves spectral coverage, and the averaging of independently determined transfer functions. A linear analysis was then used to predict the behavior observed for three different test schedules, and the fidelity of these predictions was determined. PMID:11831782
Kodamatani, Hitoshi; Yamasaki, Hitomi; Sakaguchi, Takeru; Itoh, Shinya; Iwaya, Yoshimi; Saga, Makoto; Saito, Keiitsu; Kanzaki, Ryo; Tomiyasu, Takashi
2016-08-19
As a contaminant in drinking water, N-nitrosodimethylamine (NDMA) is of great concern because of its carcinogenicity; it is limited to ng/L levels by regulatory bodies worldwide. Consequently, a rapid and sensitive method for monitoring NDMA in drinking water is urgently required. In this study, we report an improvement of our previously proposed HPLC-based system for NDMA determination. The approach consists of the HPLC separation of NDMA, followed by NDMA photolysis to form peroxynitrite and detection with a luminol chemiluminescence reaction. The detection limit of the improved HPLC method was 0.2 ng/L, which is 10 times more sensitive than our previously reported system. For tap water measurements, only the addition of an ascorbic acid solution to eliminate residual chlorine and passage through an Oasis MAX solid-phase extraction cartridge are needed. The proposed NDMA determination method requires a sample volume of less than 2 mL and a complete analysis time of less than 15 min per sample. The method was utilized for the long-term monitoring of NDMA in tap water. The NDMA level measured in the municipal water survey was 4.9 ng/L, and a seasonal change in the NDMA concentration in tap water was confirmed. The proposed method should constitute a useful NDMA monitoring tool for protecting drinking water quality.
Estimation of glycaemic control in the past month using ratio of glycated albumin to HbA1c.
Musha, I; Mochizuki, M; Kikuchi, T; Akatsuka, J; Ohtake, A; Kobayashi, K; Kikuchi, N; Kawamura, T; Yokota, I; Urakami, T; Sugihara, S; Amemiya, S
2018-04-13
To evaluate comprehensively the use of the glycated albumin to HbA1c ratio for estimation of glycaemic control in the previous month. A total of 306 children with Type 1 diabetes mellitus underwent ≥10 simultaneous measurements of glycated albumin and HbA1c. Correlation and concordance rates were examined between HbA1c measurements taken 1 month apart (ΔHbA1c) and glycated albumin/HbA1c ratio fluctuations, calculated either as Z-scores relative to the cohort value at enrolment (method A) or as the percent difference from the individual mean over time (method B). Fluctuations in the glycated albumin/HbA1c ratio (using both methods) were weakly but significantly correlated with ΔHbA1c, whereas concordance rates were significant for glycaemic deterioration but not for glycaemic improvement. Concordance rates were higher using method B than method A. The glycated albumin/HbA1c ratio was able to estimate glycaemic deterioration in the previous month, while estimation of glycaemic improvement in the preceding month was limited. Because method B provided a better estimate of recent glycaemic control than method A, the individual mean of several measurements of the glycated albumin/HbA1c ratio over time may also identify individuals with high or low haemoglobin glycation phenotypes in a given population, such as Japanese children with Type 1 diabetes, thereby allowing more effective diabetes management.
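A minimal sketch of the two normalisation schemes, assuming a matrix of repeated ratio measurements (all names and numerical values below are illustrative):

```python
import numpy as np

def ratio_fluctuations(ratios, cohort_mean, cohort_sd):
    """ratios: (n_patients, n_visits) glycated albumin / HbA1c ratios."""
    # Method A: Z-score of each measurement against the cohort value at enrolment.
    z_a = (ratios - cohort_mean) / cohort_sd
    # Method B: percent difference from each individual's own mean over time.
    indiv_mean = ratios.mean(axis=1, keepdims=True)
    pct_b = 100.0 * (ratios - indiv_mean) / indiv_mean
    return z_a, pct_b

rng = np.random.default_rng(1)
ratios = rng.normal(2.8, 0.3, size=(306, 10))   # 306 children, >= 10 visits
z_a, pct_b = ratio_fluctuations(ratios, cohort_mean=2.8, cohort_sd=0.3)
```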
An improved method for LCD displays colorimetric characterization
NASA Astrophysics Data System (ADS)
Li, Tong; Xie, Kai; Wang, Qiaojie; He, Nannan; Ye, Yushan
2018-03-01
The colorimetric characterization of a display makes it possible to precisely control the color of the monitor. This paper describes an improvement of the method of Xiao et al. for estimating the gamma value of liquid-crystal displays (LCDs) without using a measurement device. The method relies on the observer's luminance matching, presenting eight half-tone patterns with luminance from 1/9 to 8/9 of the maximum value of each color channel. Since the previous method lacked partial low-frequency information, we partially replaced the half-tone patterns. A large number of experiments show that the color difference is reduced from 3.726 to 2.835, and that our half-tone patterns better estimate the visual gamma value of LCDs.
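The underlying gamma estimate can be sketched as follows, assuming the matching model (d/d_max)^gamma = f for a half-tone pattern of fractional luminance f; the synthetic observer below is illustrative, not from the paper:

```python
import numpy as np

def estimate_gamma(matched_levels, fractions, d_max=255.0):
    """Visual gamma from luminance matching: a uniform patch at digital level d
    matches a half-tone pattern of fractional luminance f when (d/d_max)^gamma = f.
    Least-squares slope through the origin in log-log space over all matches."""
    x = np.log(np.asarray(matched_levels) / d_max)
    y = np.log(np.asarray(fractions))
    return np.sum(x * y) / np.sum(x * x)

fractions = np.arange(1, 9) / 9.0             # the eight 1/9 ... 8/9 patterns
matched = 255.0 * fractions ** (1 / 2.2)      # synthetic observer with gamma = 2.2
print(estimate_gamma(matched, fractions))     # ~2.2
```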
Genetic engineering of Ganoderma lucidum for the efficient production of ganoderic acids.
Xu, Jun-Wei; Zhong, Jian-Jiang
2015-01-01
Ganoderma lucidum is a well-known traditional medicinal mushroom that produces ganoderic acids with numerous interesting bioactivities. Genetic engineering is an efficient approach to improve ganoderic acid biosynthesis. However, reliable genetic transformation methods and appropriate genetic manipulation strategies remain underdeveloped and thus should be enhanced. We previously established a homologous genetic transformation method for G. lucidum; we also applied the established method to perform the deregulated overexpression of a homologous 3-hydroxy-3-methyl-glutaryl coenzyme A reductase gene in G. lucidum. Engineered strains accumulated more ganoderic acids than wild-type strains. In this report, the genetic transformation systems of G. lucidum are described; current trends are also presented to improve ganoderic acid production through the genetic manipulation of G. lucidum.
Optimization of the design of Gas Cherenkov Detectors for ICF diagnosis
NASA Astrophysics Data System (ADS)
Liu, Bin; Hu, Huasi; Han, Hetong; Lv, Huanwen; Li, Lan
2018-07-01
A design method, which combines a genetic algorithm (GA) with Monte-Carlo simulation, is established and applied to two different types of Cherenkov detectors, namely, the Gas Cherenkov Detector (GCD) and Gamma Reaction History (GRH) diagnostics. To accelerate the optimization program, Open MPI (Message Passing Interface) is used in the Geant4 simulation. Compared with the traditional optical ray-tracing method, the performances of these detectors have been improved with the optimization method. The efficiency of the GCD system, with a threshold of 6.3 MeV, is enhanced by ∼20% and the time response improved by ∼7.2%. For the GRH system, with a threshold of 10 MeV, the efficiency is enhanced by ∼76% in comparison with previously published results.
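A hedged sketch of such an optimization loop follows; the fitness function is only a placeholder for the Geant4/Monte-Carlo figure of merit, the parameters and bounds are illustrative, and the MPI parallelization is omitted:

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(geom):
    """Placeholder for the Monte-Carlo figure of merit (e.g., Cherenkov light
    collection efficiency for a candidate geometry); here a toy quadratic."""
    return -np.sum((geom - np.array([3.0, 0.5, 12.0]))**2)

def ga(pop_size=40, n_gen=60, n_params=3, bounds=(0.0, 20.0)):
    pop = rng.uniform(*bounds, size=(pop_size, n_params))
    for _ in range(n_gen):
        f = np.array([fitness(g) for g in pop])        # one MC run per individual
        parents = pop[np.argsort(f)[-pop_size // 2:]]  # truncation selection
        kids = []
        while len(kids) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mix = rng.random(n_params)                 # uniform crossover
            child = mix * a + (1 - mix) * b
            child += rng.normal(0, 0.3, n_params)      # Gaussian mutation
            kids.append(np.clip(child, *bounds))
        pop = np.vstack([parents, kids])
    return pop[np.argmax([fitness(g) for g in pop])]

print(ga())   # converges near the placeholder optimum [3, 0.5, 12]
```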
NASA Technical Reports Server (NTRS)
Ryan, Robert S.; Townsend, John S.
1993-01-01
The prospective improvement of probabilistic methods for space program analysis/design entails the further development of theories, codes, and tools which match specific areas of application, the drawing of lessons from previous uses of probability and statistics data bases, the enlargement of data bases (especially in the field of structural failures), and the education of engineers and managers on the advantages of these methods. The current limitations of probabilistic engineering methods are evaluated, and recommendations are made for specific applications.
NASA Astrophysics Data System (ADS)
Wu, Zhihao; Lin, Youfang; Zhao, Yiji; Yan, Hongyan
2018-02-01
Networks can represent a wide range of complex systems, such as social, biological and technological systems. Link prediction is one of the most important problems in network analysis and has attracted much research interest recently. Many link prediction methods have been proposed to solve this problem with various techniques, and clustering information plays an important role in them. In the previous literature, the node clustering coefficient appears frequently in link prediction methods. However, the node clustering coefficient is limited in describing the role of a common neighbor in different local networks, because it cannot distinguish the different clustering abilities of a node with respect to different node pairs. In this paper, we shift our focus from nodes to links and propose the concept of the asymmetric link clustering (ALC) coefficient. Further, we improve three node-clustering-based link prediction methods via the concept of ALC. The experimental results demonstrate that ALC-based methods outperform node-clustering-based methods, achieving especially remarkable improvements on food web, hamster friendship and Internet networks. Besides, compared with other methods, the performance of ALC-based methods is very stable in both globalized and personalized top-L link prediction tasks.
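For orientation, the node-clustering baseline that ALC refines can be sketched as below; this shows only a common-neighbor score weighted by node clustering coefficients (a CCLP-style score), not the paper's ALC definition itself:

```python
import networkx as nx

G = nx.karate_club_graph()
cc = nx.clustering(G)          # node clustering coefficients, computed once

def cclp_score(x, y):
    """Sum of the common neighbors' node clustering coefficients."""
    return sum(cc[z] for z in nx.common_neighbors(G, x, y))

# rank all non-edges by the score; top entries are the predicted links
candidates = [(u, v) for u in G for v in G if u < v and not G.has_edge(u, v)]
top = sorted(candidates, key=lambda e: cclp_score(*e), reverse=True)[:5]
print(top)
```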
Integrated control/structure optimization by multilevel decomposition
NASA Technical Reports Server (NTRS)
Zeiler, Thomas A.; Gilbert, Michael G.
1990-01-01
A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.
Kong, Jianlei; Ding, Xiaokang; Liu, Jinhao; Yan, Lei; Wang, Jianli
2015-01-01
In this paper, a new algorithm to improve the accuracy of estimating diameter at breast height (DBH) for tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic means method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, this proposed method improves the accuracy of tree diameter estimation significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that this method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents. PMID:26147726
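The two-stage fit lends itself to a compact sketch: an algebraic (Kasa-style) fit seeds a Levenberg-Marquardt refinement of the geometric residuals. Cartesian rather than polar form is used here for brevity, and the toy arc data are illustrative:

```python
import numpy as np
from scipy.optimize import least_squares

def algebraic_circle_fit(x, y):
    """Kasa algebraic fit: linear least squares for center (cx, cy) and radius r."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

def refined_circle_fit(x, y):
    """Levenberg-Marquardt refinement of the geometric (radial) residuals."""
    p0 = algebraic_circle_fit(x, y)
    res = lambda p: np.hypot(x - p[0], y - p[1]) - p[2]
    return least_squares(res, p0, method="lm").x

# toy arc of a 0.2 m diameter trunk seen from one side, with laser noise
rng = np.random.default_rng(2)
theta = np.linspace(-0.6 * np.pi, -0.4 * np.pi, 80)
x = 0.1 * np.cos(theta) + rng.normal(0, 1e-3, 80)
y = 2.0 + 0.1 * np.sin(theta) + rng.normal(0, 1e-3, 80)
cx, cy, r = refined_circle_fit(x, y)
print(2 * r)   # estimated DBH, ~0.2 m
```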
Al-Amin, Mohammad; Arai, Satoshi; Hoshiya, Naoyoki; Honma, Tetsuo; Tamenori, Yusuke; Sato, Takatoshi; Yokoyama, Mami; Ishii, Akira; Takeuchi, Masashi; Maruko, Tomohiro; Shuto, Satoshi; Arisawa, Mitsuhiro
2013-08-02
An improved process for the preparation of a sulfur-modified gold-supported palladium material [SAPd, second generation] is presented. The developed preparation method is safer and less heat-generating, using aqueous Na2S2O8 and H2SO4 for sulfur fixation on the gold surface, and is superior to the previous method of preparing SAPd (first generation), which requires the more exothermic and dangerous piranha solution (concentrated H2SO4 and 35% H2O2) in the sulfur fixation step. This safer and improved preparation method is particularly important for the mass production of SAPd (second generation), whose catalytic activity was examined in ligand-free Buchwald-Hartwig cross-coupling reactions. The catalytic activities of the first- and second-generation SAPds were the same in aromatic aminations, but the lower palladium leaching and safer preparation of the second-generation SAPd are a significant improvement over the first generation.
Solid state synthesis of poly(dichlorophosphazene)
Allen, Christopher W.; Hneihen, Azzam S.; Peterson, Eric S.
2001-01-01
A method for making poly(dichlorophosphazene) using solid state reactants is disclosed and described. The present invention improves upon previous methods by removing the need for chlorinated hydrocarbon solvents, eliminating complicated equipment, and simplifying the overall process by providing a "single-pot", two-step reaction sequence. This may be accomplished by the condensation reaction of raw materials in the melt phase of the reactants and in the absence of an environmentally damaging solvent.
ERIC Educational Resources Information Center
Hazell, Philip L.; Kohn, Michael R.; Dickson, Ruth; Walton, Richard J.; Granger, Renee E.; van Wyk, Gregory W.
2011-01-01
Objective: Previous studies comparing atomoxetine and methylphenidate to treat ADHD symptoms have been equivocal. This noninferiority meta-analysis compared core ADHD symptom response between atomoxetine and methylphenidate in children and adolescents. Method: Selection criteria included randomized, controlled design; duration 6 weeks; and…
Assessment of Agricultural Drainage Pipe Conditions Using Ground Penetrating Radar
USDA-ARS?s Scientific Manuscript database
Farmers and land improvement contractors, especially in the Midwest U.S., need methods to not only locate buried agricultural drainage pipe, but also to determine if the pipes are functioning properly with respect to water delivery. Previous investigations have already demonstrated the feasibility o...
AptRank: an adaptive PageRank model for protein function prediction on bi-relational graphs.
Jiang, Biaobin; Kloster, Kyle; Gleich, David F; Gribskov, Michael
2017-06-15
Diffusion-based network models are widely used for protein function prediction using protein network data and have been shown to outperform neighborhood-based and module-based methods. Recent studies have shown that integrating the hierarchical structure of the Gene Ontology (GO) data dramatically improves prediction accuracy. However, previous methods usually either used the GO hierarchy to refine the prediction results of multiple classifiers, or flattened the hierarchy into a function-function similarity kernel. No study has taken the GO hierarchy into account together with the protein network as a two-layer network model. We first construct a Bi-relational graph (Birg) model comprised of both protein-protein association and function-function hierarchical networks. We then propose two diffusion-based methods, BirgRank and AptRank, both of which use PageRank to diffuse information on this two-layer graph model. BirgRank is a direct application of traditional PageRank with fixed decay parameters. In contrast, AptRank utilizes an adaptive diffusion mechanism to improve the performance of BirgRank. We evaluate the ability of both methods to predict protein function on yeast, fly and human protein datasets, and compare with four previous methods: GeneMANIA, TMC, ProteinRank and clusDCA. We design four different validation strategies: missing function prediction, de novo function prediction, guided function prediction and newly discovered function prediction to comprehensively evaluate predictability of all six methods. We find that both BirgRank and AptRank outperform the previous methods, especially in missing function prediction when using only 10% of the data for training. The MATLAB code is available at https://github.rcac.purdue.edu/mgribsko/aptrank.
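A minimal sketch of the fixed-parameter (BirgRank-style) diffusion on a two-layer block graph follows; the matrices, seeding, and normalization are illustrative assumptions rather than the authors' exact construction:

```python
import numpy as np

def pagerank(M, seed, alpha=0.85, iters=200):
    """Power iteration for personalized PageRank on a column-stochastic M."""
    r = seed / seed.sum()
    for _ in range(iters):
        r = alpha * M @ r + (1 - alpha) * seed / seed.sum()
    return r

# Block two-layer graph: protein-protein network (P), GO hierarchy (H),
# and known protein-function annotations (A) coupling the two layers.
nP, nF = 5, 4
rng = np.random.default_rng(3)
P = (rng.random((nP, nP)) < 0.4).astype(float)
P = np.triu(P, 1); P += P.T
H = np.zeros((nF, nF)); H[0, 1] = H[1, 2] = H[0, 3] = 1; H += H.T
A = (rng.random((nF, nP)) < 0.3).astype(float)
G = np.block([[P, A.T], [A, H]])
M = G / np.maximum(G.sum(axis=0), 1)        # column-normalize, guard empty columns

seed = np.zeros(nP + nF); seed[0] = 1.0     # diffuse from one query protein
scores = pagerank(M, seed)[nP:]             # function-node scores for that protein
print(scores)
```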
NASA Astrophysics Data System (ADS)
Zboray, Robert; Dangendorf, Volker; Mor, Ilan; Bromberger, Benjamin; Tittelmeier, Kai
2015-07-01
In a previous work, we demonstrated the feasibility of high-frame-rate, fast-neutron radiography of generic air-water two-phase flows in a 1.5 cm thick, rectangular flow channel. The experiments were carried out at the high-intensity, white-beam facility of the Physikalisch-Technische Bundesanstalt, Germany, using a multi-frame, time-resolved detector developed for fast-neutron resonance radiography. The results, however, were not fully optimal, and we therefore modified the detector and optimized it for the given application, as described in the present work. Furthermore, we improved the image post-processing methodology and the noise suppression. Using the tailored detector and the improved post-processing, a significant increase in image quality and an order-of-magnitude reduction in exposure time, down to 3.33 ms, have been achieved with minimized motion artifacts. As in the previous study, different two-phase flow regimes such as bubbly, slug and churn flows have been examined. The enhanced imaging quality enables an improved prediction of two-phase flow parameters such as the instantaneous volumetric gas fraction, bubble size, and bubble velocities. Instantaneous velocity fields around the gas enclosures can also be predicted more robustly than before using optical flow methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Carletta, Nicholas D.; Mullendore, Gretchen L.; Starzec, Mariusz
Convective mass transport is the transport of mass from near the surface up to the upper troposphere and lower stratosphere (UTLS) by a deep convective updraft. This transport can alter the chemical makeup and water vapor balance of the UTLS, which affects cloud formation and the radiative properties of the atmosphere. It is therefore important to understand the exact altitudes at which mass is detrained from convection. The purpose of this study was to improve upon previously published methodologies for estimating the level of maximum detrainment (LMD) within convection using data from a single ground-based radar. Four methods were used to identify the LMD and validated against dual-Doppler derived vertical mass divergence fields for six cases with a variety of storm types. The best method for locating the LMD was determined to be the method that used a reflectivity texture technique to determine convective cores and a multi-layer echo identification to determine anvil locations. Although an improvement over previously published methods, the new methodology still produced unreliable results in certain regimes. The methodology worked best when applied to mature updrafts, as the anvil needs time to grow to a detectable size. Thus, radar reflectivity is found to be valuable in estimating the LMD, but storm maturity must also be considered for best results.
Improving Estimation of Ground Casualty Risk From Reentering Space Objects
NASA Technical Reports Server (NTRS)
Ostrom, Chris L.
2017-01-01
A recent improvement to the long-term estimation of ground casualties from reentering space debris is the further refinement and update to the human population distribution. Previous human population distributions were based on global totals with simple scaling factors for future years, or a coarse grid of population counts in a subset of the world's countries, each cell having its own projected growth rate. The newest population model includes a 5-fold refinement in both latitude and longitude resolution. All areas along a single latitude are combined to form a global population distribution as a function of latitude, creating a more accurate population estimation based on non-uniform growth at the country and area levels. Previous risk probability calculations used simplifying assumptions that did not account for the ellipsoidal nature of the Earth. The new method uses first, a simple analytical method to estimate the amount of time spent above each latitude band for a debris object with a given orbit inclination and second, a more complex numerical method that incorporates the effects of a non-spherical Earth. These new results are compared with the prior models to assess the magnitude of the effects on reentry casualty risk.
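For a circular orbit, the "simple analytical method" admits a closed form: with inclination i and argument of latitude u, sin(lat) = sin(i) sin(u), and since u advances uniformly in time, latitude-band dwell fractions follow directly. A hedged sketch under those assumptions (band edges and inclination below are illustrative):

```python
import numpy as np

def band_dwell_fraction(inc_deg, lat1_deg, lat2_deg):
    """Fraction of a circular orbit spent with |latitude| in [lat1, lat2],
    counting both hemispheres and both ascending/descending passes.
    Uses sin(lat) = sin(inc) * sin(u), u = argument of latitude."""
    i = np.radians(inc_deg)
    lo, hi = np.radians(lat1_deg), np.radians(lat2_deg)
    hi = min(hi, i)                        # latitude cannot exceed the inclination
    if lo >= hi:
        return 0.0
    u1 = np.arcsin(np.sin(lo) / np.sin(i))
    u2 = np.arcsin(np.sin(hi) / np.sin(i))
    return 2.0 * (u2 - u1) / np.pi

# dwell fractions over 5-degree bands for a 51.6-degree inclination orbit;
# the fractions over all bands up to the inclination sum to 1
bands = [(k, k + 5) for k in range(0, 55, 5)]
print([round(band_dwell_fraction(51.6, a, b), 3) for a, b in bands])
```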
De Nicola, F; Concha Graña, E; Aboal, J R; Carballeira, A; Fernández, J Á; López Mahía, P; Prada Rodríguez, D; Muniategui Lorenzo, S
2016-06-01
Due to the complexity and heterogeneity of plant matrices, a new procedure should be standardized for each individual biomonitor. Here we describe a matrix solid-phase dispersion extraction method, previously used for moss samples, improved and modified for the analysis of PAHs in Quercus robur leaves and Pinus pinaster needles, species widely used in biomonitoring studies across Europe. The improvements over the previous procedure are the use of Florisil together with further clean-up sorbents, 10% deactivated silica for pine needles and PSA for oak leaves, these matrices being rich in interfering compounds, as shown by gas chromatography-mass spectrometry analyses acquired in full-scan mode. Good trueness, with values in the range 90-120% for most compounds, high precision (intermediate precision between 2% and 12%) and good sensitivity using only 250 mg of sample (limits of quantification lower than 3 and 1.5 ng g(-1) for pine and oak, respectively) were achieved by the selected procedures. These methods proved to be reliable for PAH analysis and, with the advantage of speed, can be used in biomonitoring studies of PAH air contamination.
Enhanced capacity and stability for the separation of cesium in electrically switched ion exchange
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tawfic, A.F.; Dickson, S.E.; Kim, Y.
2015-03-15
Electrically switched ion exchange (ESIX) can be used to separate ionic contaminants from industrial wastewater, including that generated by the nuclear industry. The ESIX method involves the sequential application of reduction and oxidation potentials to an ion exchange film to induce the respective loading and unloading of cesium. This technology is superior to conventional methods (e.g., electrodialysis reversal or reverse osmosis) as it requires very little energy for ionic separation. In previous studies, ESIX films have demonstrated relatively low ion exchange capacities and limited film stabilities over repeated potential applications. In this study, the methodology for the deposition of electro-active films (nickel hexacyanoferrate) on nickel electrodes was modified to improve the ion exchange capacity for cesium removal using ESIX. Cyclic voltammetry was used to investigate the ion exchange capacity and stability. Scanning electron microscopy (SEM) was used to characterize the modified film surfaces. Additionally, the films were examined for the separation of cesium ions. The modified film preparation technique enhances the ion exchange capacity and improves the film stability compared to previous methods for the deposition of ESIX films.
NASA Astrophysics Data System (ADS)
Shin, Hyeonwoo; Kang, Chan-mo; Baek, Kyu-Ha; Kim, Jun Young; Do, Lee-Mi; Lee, Changhee
2018-05-01
We present a novel method of fabricating low-temperature (180 °C), solution-processed zinc oxide (ZnO) transistors using a ZnO precursor that is blended with zinc hydroxide [Zn(OH)2] and zinc oxide hydrate (ZnO • H2O) in an ammonium solution. Using the proposed method, we successfully improved the electrical performance of the transistor in terms of the mobility (μ), on/off current ratio (Ion/Ioff), sub-threshold swing (SS), and operational stability. Our new approach to forming a ZnO film was systematically compared with previously proposed methods. An atomic force microscopy (AFM) image and an X-ray photoelectron spectroscopy (XPS) analysis showed that our method increases the ZnO crystallite size with fewer OH− impurities. Thus, we attribute the improved electrical performance to the better ZnO film formation achieved by the blending method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca; Haider, Masoom A.; Jaffray, David A.
Purpose: A previously proposed method to reduce radiation dose to the patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering, which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method in maintaining the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. An arterial input function (AIF) with high temporal resolution can be generated from a coarsely sampled AIF through a previously published method of AIF estimation. To increase the SNR of the time-concentration curves (tissue curves), a region of interest is first segmented into squares of 3 × 3 pixels. Subsequently, PCA filtering combined with a fraction-of-residual-information criterion is applied to all the segmented squares to further improve their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. Kinetic analyses using the modified Tofts' model and the singular value decomposition method were then carried out for each of the down-sampling schemes at intervals from 2 to 15 s. The results were compared with analyses done with the measured data at high temporal resolution (i.e., the original scanning frequency) as the reference. Results: The patients' AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy of the quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low-frequency scanning in DCE-CT studies to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.
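The PCA filtering step in isolation can be sketched as follows (the five-component count and the 3 × 3 squares follow the abstract; the fraction-of-residual-information criterion is omitted, and the toy curves are illustrative):

```python
import numpy as np

def pca_filter_curves(curves, n_components=5):
    """Denoise time-concentration curves (rows = pixel curves of one 3x3
    square) by keeping only the leading principal components."""
    mean = curves.mean(axis=0, keepdims=True)
    U, s, Vt = np.linalg.svd(curves - mean, full_matrices=False)
    s[n_components:] = 0.0                 # discard trailing components
    return mean + U @ np.diag(s) @ Vt

# toy DCE-CT data: 9 pixel curves in a 3x3 square, 40 time points,
# sharing the same underlying kinetics plus independent noise
rng = np.random.default_rng(4)
t = np.linspace(0, 5, 40)
clean = 1 - np.exp(-1.5 * t)
curves = clean + rng.normal(0, 0.15, size=(9, 40))
denoised = pca_filter_curves(curves, n_components=5)
```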
Chang, Hing-Chiu; Guhaniyogi, Shayan; Chen, Nan-kuei
2014-01-01
Purpose: We report a series of techniques to reliably eliminate artifacts in interleaved echo-planar imaging (EPI) based diffusion weighted imaging (DWI). Methods: First, we integrate the previously reported multiplexed sensitivity encoding (MUSE) algorithm with a new adaptive Homodyne partial-Fourier reconstruction algorithm, so that images reconstructed from interleaved partial-Fourier DWI data are free from artifacts even in the presence of either a) motion-induced k-space energy peak displacement, or b) susceptibility field gradient induced fast phase changes. Second, we generalize the previously reported single-band MUSE framework to multi-band MUSE, so that both through-plane and in-plane aliasing artifacts in multi-band multi-shot interleaved DWI data can be effectively eliminated. Results: The new adaptive Homodyne-MUSE reconstruction algorithm reliably produces high-quality and high-resolution DWI, eliminating residual artifacts in images reconstructed with previously reported methods. Furthermore, the generalized MUSE algorithm is compatible with multi-band and high-throughput DWI. Conclusion: The integration of the multi-band and adaptive Homodyne-MUSE algorithms significantly improves the spatial-resolution, image quality, and scan throughput of interleaved DWI. We expect that the reported reconstruction framework will play an important role in enabling high-resolution DWI for both neuroscience research and clinical uses. PMID:24925000
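While the adaptive Homodyne-MUSE combination itself is involved, the classical homodyne partial-Fourier step it builds on fits in a few lines. This 1D sketch uses a fixed weighting and a low-resolution phase reference; the adaptive part and the MUSE phase handling are not shown, and the edge handling is simplified:

```python
import numpy as np

def homodyne_1d(k_partial, n, n_acq):
    """Classical 1D homodyne: reconstruct an n-point signal from the first
    n_acq (> n/2) samples of centered k-space. The asymmetrically sampled
    half is pre-weighted by 2; a low-resolution image supplies the phase."""
    k = np.zeros(n, dtype=complex)
    k[:n_acq] = k_partial
    w = np.zeros(n)
    w[:n - n_acq] = 2.0                     # samples without acquired conjugates
    w[n - n_acq:n_acq] = 1.0                # symmetrically sampled center
    k_low = np.zeros(n, dtype=complex)
    k_low[n - n_acq:n_acq] = k[n - n_acq:n_acq]
    img_w = np.fft.ifft(np.fft.ifftshift(w * k))
    phase = np.exp(1j * np.angle(np.fft.ifft(np.fft.ifftshift(k_low))))
    return np.real(img_w * np.conj(phase))  # demodulate, keep real part

# toy object with smooth phase, 5/8 partial Fourier
n, n_acq = 128, 80
x = np.zeros(n); x[48:80] = 1.0
obj = x * np.exp(1j * 0.3 * np.sin(2 * np.pi * np.arange(n) / n))
k_full = np.fft.fftshift(np.fft.fft(obj))
recon = homodyne_1d(k_full[:n_acq], n, n_acq)
```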
Segmentized Clear Channel Assessment for IEEE 802.15.4 Networks.
Son, Kyou Jung; Hong, Sung Hyeuck; Moon, Seong-Pil; Chang, Tae Gyu; Cho, Hanjin
2016-06-03
This paper proposes segmentized clear channel assessment (CCA), which increases the performance of IEEE 802.15.4 networks by improving carrier sense multiple access with collision avoidance (CSMA/CA). Improving CSMA/CA is important because the low-power consumption feature and throughput performance of IEEE 802.15.4 are greatly affected by CSMA/CA behavior. To improve the performance of CSMA/CA, this paper focuses on increasing the chance to transmit a packet by assessing the channel status precisely. The previous CCA method employed by CSMA/CA assesses the channel by measuring its energy level. However, this method shows limited channel-assessment behavior, which stems from its simple threshold-dependent channel-busy evaluation. The proposed method solves this limited channel decision problem by dividing the CCA into two segments whose energy levels are compared to obtain a precise channel status. To evaluate the performance of the segmentized CCA method, a Markov chain model has been developed. The analytic results are validated by comparison with simulation results. Additionally, simulation results show the proposed method improves throughput by up to 8.76% and decreases the average number of CCAs per packet transmission by up to 3.9% compared with the IEEE 802.15.4 CCA method.
NASA Technical Reports Server (NTRS)
Greenberg, Paul S.; Ku, Jerry C.
1994-01-01
A new technique is described for the full-field determination of soot volume fractions via laser extinction measurements. This technique differs from previously reported point-wise methods in that a two-dimensional array (i.e., image) of data is acquired simultaneously. In this fashion, the net data rate is increased, allowing the study of time-dependent phenomena and the investigation of spatial and temporal correlations. A telecentric imaging configuration is employed to provide depth-invariant magnification and to permit the specification of the collection angle for scattered light. To improve the threshold measurement sensitivity, a method is employed to suppress undesirable coherent imaging effects. A discussion of the tomographic inversion process is provided, including the results obtained from numerical simulation. Results obtained with this method from an ethylene diffusion flame are shown to be in close agreement with those previously obtained by sequential point-wise interrogation.
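A sketch of the inversion chain for an axisymmetric flame: per-pixel optical depths tau = -ln(I/I0) are inverted by onion peeling, and the soot volume fraction follows from the extinction coefficient via f_v = kappa * lambda / (6 * pi * E(m)). The shell discretization, wavelength, and E(m) value below are assumptions, not values from the article:

```python
import numpy as np

def onion_peel_matrix(n, dr):
    """Chord-length matrix for an axisymmetric field split into n uniform
    annular shells: tau = A @ kappa for parallel rays at heights y_i = i*dr."""
    r = np.arange(n + 1) * dr              # shell boundaries
    y = np.arange(n) * dr                  # ray heights
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - y[i]**2)
                             - np.sqrt(max(r[j]**2 - y[i]**2, 0.0)))
    return A

n, dr = 40, 1e-4                           # 40 shells, 0.1 mm wide (assumed)
A = onion_peel_matrix(n, dr)
kappa_true = 500.0 * np.exp(-np.linspace(0.0, 3.0, n))   # toy extinction, 1/m
tau = A @ kappa_true                       # what -ln(I/I0) would measure per ray
kappa = np.linalg.solve(A, tau)            # onion-peeling inversion
wav, E_m = 632.8e-9, 0.26                  # assumed wavelength and E(m) value
f_v = kappa * wav / (6.0 * np.pi * E_m)    # soot volume fraction per shell
```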
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they take the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of the continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect, the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibits similar power, but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replacing them.
Harmonic source wavefront aberration correction for ultrasound imaging
Dianis, Scott W.; von Ramm, Olaf T.
2011-01-01
A method is proposed which uses a lower-frequency transmit to create a known harmonic acoustical source in tissue suitable for wavefront correction without a priori assumptions of the target or requiring a transponder. The measurement and imaging steps of this method were implemented on the Duke phased array system with a two-dimensional (2-D) array. The method was tested with multiple electronic aberrators [0.39π to 1.16π radians root-mean-square (rms) at 4.17 MHz] and with a physical aberrator 0.17π radians rms at 4.17 MHz) in a variety of imaging situations. Corrections were quantified in terms of peak beam amplitude compared to the unaberrated case, with restoration between 0.6 and 36.6 dB of peak amplitude with a single correction. Standard phantom images before and after correction were obtained and showed both visible improvement and 14 dB contrast improvement after correction. This method, when combined with previous phase correction methods, may be an important step that leads to improved clinical images. PMID:21303031
Fusing Symbolic and Numerical Diagnostic Computations
NASA Technical Reports Server (NTRS)
James, Mark
2007-01-01
X-2000 Anomaly Detection Language denotes a developmental computing language, and the software that establishes and utilizes the language, for fusing two diagnostic computer programs, one implementing a numerical analysis method and the other a symbolic analysis method, into a unified event-based decision analysis software system for real-time detection of events (e.g., failures) in a spacecraft, aircraft, or other complex engineering system. The numerical analysis method is performed by beacon-based exception analysis for multi-missions (BEAM), which has been discussed in several previous NASA Tech Briefs articles. The symbolic analysis method is, more specifically, an artificial-intelligence method of the knowledge-based, inference-engine type, and its implementation is exemplified by the Spacecraft Health Inference Engine (SHINE) software. The goal in developing the capability to fuse numerical and symbolic diagnostic components is to increase the depth of analysis beyond that previously attainable, thereby increasing the degree of confidence in the computed results. In practical terms, the sought improvement is to enable detection of all or most events, with no or few false alarms.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mostafaei, F; Nie, L
Purpose: Improvement of an in vivo K x-ray fluorescence (KXRF) system, based on a 109Cd source, for the detection of gadolinium (Gd) in bone has been investigated, and a series of improvements to the method is described. Gd is of interest because of the extensive use of Gd-based contrast agents in MR imaging and the potential toxicity of Gd exposure. Methods: A set of seven bone-equivalent phantoms with different Gd concentrations (0-100 ppm) was developed. Soft-tissue-equivalent plastic plates were used to simulate the soft tissue overlying the tibia in an in vivo measurement. A new 5 GBq 109Cd source was used, improving on the source activity of the previous study (0.17 GBq). An improved spectral fitting program was utilized for data analysis. Results: The previously published minimum detection limit (MDL) for Gd-doped phantom measurements using the KXRF system was 3.3 ppm. In this study, the MDL for bare bone phantoms was found to be 0.8 ppm. Our previous study used only three layers of plastic (0.32, 0.64 and 0.96 mm) as soft-tissue-equivalent materials and obtained an MDL of 4-4.8 ppm. In this study, plastic plates with more realistic thicknesses covering the tibia (nine thicknesses ranging from 0.61-6.13 mm) were used, and the MDLs for phantoms were determined to be 1.8-3.5 ppm. Conclusion: With the improvements made to the technology (stronger source, improved data analysis algorithm, realistic soft tissue thicknesses), the MDL of the KXRF system for measuring Gd in bare bone was improved by a factor of 4.1. The MDL is at the level of the bone Gd concentrations reported in the literature. Hence, the system is ready to be tested on human subjects to investigate the use of bone Gd as a biomarker for Gd toxicity.
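The MDL arithmetic can be sketched under a common convention, MDL = 2 x sigma(blank net peak area) / calibration slope; the authors' exact convention may differ, and the numbers below are synthetic:

```python
import numpy as np

def detection_limit(conc, net_counts, blank_sigma):
    """MDL = 2 * sigma of the blank's net peak area / calibration slope,
    with the slope from a linear fit of net Gd x-ray counts vs. concentration."""
    slope, _ = np.polyfit(conc, net_counts, 1)
    return 2.0 * blank_sigma / slope

conc = np.array([0, 5, 10, 20, 50, 75, 100.0])          # ppm Gd in phantom
counts = 120.0 * conc + np.random.default_rng(5).normal(0, 60, conc.size)
print(detection_limit(conc, counts, blank_sigma=48.0))  # ~0.8 ppm
```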
Smeraglia, John; Silva, John-Paul; Jones, Kieran
2017-08-01
In order to evaluate placental transfer of certolizumab pegol (CZP), a more sensitive and selective bioanalytical assay was required to accurately measure low CZP concentrations in infant and umbilical cord blood. Results & methodology: A new electrochemiluminescence immunoassay was developed to measure CZP levels in human plasma. Validation experiments demonstrated improved selectivity (no matrix interference observed) and a detection range of 0.032-5.0 μg/ml. Accuracy and precision met acceptance criteria (mean total error ≤20.8%). Dilution linearity and sample stability were acceptable and sufficient to support the method. The electrochemiluminescence immunoassay was validated for measuring low CZP concentrations in human plasma. The method demonstrated a more than tenfold increase in sensitivity compared with previous assays, and improved selectivity for intact CZP.
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1982-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lesser memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
An improved algorithm for evaluating trellis phase codes
NASA Technical Reports Server (NTRS)
Mulligan, M. G.; Wilson, S. G.
1984-01-01
A method is described for evaluating the minimum distance parameters of trellis phase codes, including CPFSK, partial response FM, and more importantly, coded CPM (continuous phase modulation) schemes. The algorithm provides dramatically faster execution times and lower memory requirements than previous algorithms. Results of sample calculations and timing comparisons are included.
A procedure for estimating the frequency distribution of CO levels in the micro-region of a highway.
DOT National Transportation Integrated Search
1979-01-01
This report demonstrates that the probability of violating a "not to be exceeded more than once per year", one-hour air quality standard can be bounded from above. This result represents a significant improvement over previous methods of ascertaining...
Engaging Practical Students through Audio Feedback
ERIC Educational Resources Information Center
Pearson, John
2018-01-01
This paper uses an action research intervention in an attempt to improve student engagement with summative feedback. The intervention delivered summative module feedback to the students as audio recordings, replacing the written method employed in previous years. The project found that students are keen on audio as an alternative to written…
20171130_Ind Ergo Report_631 DI Water Movement Process.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rivera, Cynthia R.
Perform an industrial ergonomic assessment to evaluate the new procedures for filling, lifting, and delivering high purity de-ionized water to building 9925. The goal was to improve on the previous method by minimizing/eliminating as much lifting and bending as possible to reduce the potential for overexertion-related injuries.
Innovation 101: Promoting Undergraduate Innovation through a Two-Day Boot Camp
ERIC Educational Resources Information Center
West, Richard E.; Tateishi, Isaku; Wright, Geoffrey A.; Fonoimoana, Melia
2012-01-01
Over the years, many training methods for creativity and innovation have been developed. Despite these programs and research, further improvement is necessary, particularly in schools of technology and engineering education, where previous efforts have focused on developing solutions to defined problems, not in identifying and defining the…
Improving consensus contact prediction via server correlation reduction.
Gao, Xin; Bu, Dongbo; Xu, Jinbo; Li, Ming
2009-05-06
Protein inter-residue contacts play a crucial role in the determination and prediction of protein structures. Previous studies on contact prediction indicate that although template-based consensus methods outperform sequence-based methods on targets with typical templates, such consensus methods perform poorly on new fold targets. However, we find that even for new fold targets, the models generated by threading programs can contain many true contacts. The challenge is how to identify them. In this paper, we develop an integer linear programming model for consensus contact prediction. In contrast to the simple majority voting method, which assumes that all the individual servers are equally important and independent, the newly developed method evaluates their correlation by using maximum likelihood estimation and extracts independent latent servers from them by using principal component analysis. An integer linear programming method is then applied to assign a weight to each latent server to maximize the difference between true contacts and false ones. The proposed method is tested on the CASP7 data set. If the top L/5 predicted contacts are evaluated, where L is the protein size, the average accuracy is 73%, which is much higher than that of any previously reported study. Moreover, if only the 15 new fold CASP7 targets are considered, our method achieves an average accuracy of 37%, which is much better than that of the majority voting method, SVM-LOMETS, SVM-SEQ, and SAM-T06. These methods demonstrate an average accuracy of 13.0%, 10.8%, 25.8% and 21.2%, respectively. Reducing server correlation and optimally combining independent latent servers show a significant improvement over the traditional consensus methods. This approach can hopefully provide a powerful tool for protein structure refinement and prediction use.
A Bayesian model averaging method for improving SMT phrase table
NASA Astrophysics Data System (ADS)
Duan, Nan
2013-03-01
Previous methods for improving translation quality by employing multiple SMT models are usually carried out as a second-pass decision procedure on hypotheses from multiple systems, using extra features instead of exploiting the features of existing models in more depth. In this paper, we propose translation model generalization (TMG), an approach that updates probability feature values for the translation model in use based on the model itself and a set of auxiliary models, aiming to alleviate the over-estimation problem and enhance translation quality in the first-pass decoding phase. We validate our approach for translation models based on auxiliary models built in two different ways. We also introduce novel probability variance features into the log-linear models for further improvements. Our approach can be developed independently and integrated into the current SMT pipeline directly. We demonstrate BLEU improvements on the NIST Chinese-to-English MT tasks for single-system decoding.
Kobayashi, Masanori; Oka, Masanori
2004-01-01
We have developed a hip hemi-arthroplasty using polyvinyl alcohol-hydrogel (PVA-H) as a treatment for hip joint disorders in which the lesion is limited to the joint surface. In previous studies, we characterized the biocompatibility and the mechanical properties of PVA-H as an arthroplasty material. To fix PVA-H firmly to the bone, we devised an implant composed of PVA-H and porous titanium fiber mesh (TFM). However, because of poor infiltration of the PVA solution into the pores of the TFM when using the low-temperature crystallization method, the strength of the PVA-H-TFM interface was insufficient. Consequently, the infiltration method was improved by adopting high-pressure injection molding. With this improved method, the bonding strength of the interface increased remarkably. However, as this injection molding requires high temperature, various mechanical properties of the PVA-H might change with this treatment in comparison with the previous method. To investigate the effect of the high-temperature treatment on the mechanical properties of PVA-H as artificial articular cartilage, tensile and friction tests were performed on the new PVA-H. The results showed no significant mechanical deterioration of the PVA-H, certifying that the injection-molding method did not change the mechanical properties of PVA-H and indicating the potential of hemi-arthroplasty using PVA-H prepared by this method in the future.
De Spiegelaere, Ward; Malatinkova, Eva; Lynch, Lindsay; Van Nieuwerburgh, Filip; Messiaen, Peter; O'Doherty, Una; Vandekerckhove, Linos
2014-06-01
Quantification of integrated proviral HIV DNA by repetitive-sampling Alu-HIV PCR is a candidate virological tool to monitor the HIV reservoir in patients. However, the experimental procedures and data analysis of the assay are complex and hinder its widespread use. Here, we provide an improved and simplified data analysis method by adopting binomial and Poisson statistics. A modified analysis method on the basis of Poisson statistics was used to analyze the binomial data of positive and negative reactions from a 42-replicate Alu-HIV PCR by use of dilutions of an integration standard and on samples of 57 HIV-infected patients. Results were compared with the quantitative output of the previously described Alu-HIV PCR method. Poisson-based quantification of the Alu-HIV PCR was linearly correlated with the standard dilution series, indicating that absolute quantification with the Poisson method is a valid alternative for data analysis of repetitive-sampling Alu-HIV PCR data. Quantitative outputs of patient samples assessed by the Poisson method correlated with the previously described Alu-HIV PCR analysis, indicating that this method is a valid alternative for quantifying integrated HIV DNA. Poisson-based analysis of the Alu-HIV PCR data enables absolute quantification without the need of a standard dilution curve. Implementation of the CI estimation permits improved qualitative analysis of the data and provides a statistical basis for the required minimal number of technical replicates. © 2014 The American Association for Clinical Chemistry.
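The abstract does not reproduce the underlying formulas; for a Poisson-distributed number of amplifiable targets per reaction, the mean can be recovered from the fraction of negative replicates as λ = −ln(p_neg), which is presumably the core of the described absolute quantification. A rough sketch (the confidence interval here is a simple binomial error propagation, not necessarily the construction used in the paper):

```python
import math

def poisson_estimate(n_total, n_negative):
    """Estimate mean targets per reaction from negative-fraction data.

    Under Poisson statistics P(negative) = exp(-lam), so
    lam = -ln(n_negative / n_total). A sketch, not the published pipeline.
    """
    p_neg = n_negative / n_total
    if p_neg == 0 or p_neg == 1:
        raise ValueError("all-positive or all-negative data cannot be quantified")
    lam = -math.log(p_neg)
    # Approximate 95% CI by propagating the binomial error of p_neg.
    se_p = math.sqrt(p_neg * (1 - p_neg) / n_total)
    lo = -math.log(min(p_neg + 1.96 * se_p, 1 - 1e-12))
    hi = -math.log(max(p_neg - 1.96 * se_p, 1e-12))
    return lam, (lo, hi)

print(poisson_estimate(42, 12))  # e.g. 12 negatives out of 42 replicates
```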
NASA Technical Reports Server (NTRS)
Tan, P. W.; Raju, I. S.; Shivakumar, K. N.; Newman, J. C., Jr.
1988-01-01
A re-evaluation of the 3-D finite-element models and methods used to analyze surface cracks at stress concentrations is presented. Previous finite-element models used by Raju and Newman for surface and corner cracks at holes were shown to have ill-shaped elements at the intersection of the hole and crack boundaries. These ill-shaped elements tended to make the model too stiff and, hence, gave lower stress-intensity factors near the hole-crack intersection than models without these elements. Improved models, without these ill-shaped elements, were developed for a surface crack at a circular hole and at a semi-circular edge notch. Stress-intensity factors were calculated by both the nodal-force and virtual-crack-closure methods. Both methods and different models gave essentially the same results. Comparisons made between the previously developed stress-intensity factor equations and the results from the improved models agreed well except for configurations with large notch-radii-to-plate-thickness ratios. Stress-intensity factors for a semi-elliptical surface crack located at the center of a semi-circular edge notch in a plate subjected to remote tensile loadings were calculated using the improved models. The ratio of crack depth to crack length ranged from 0.4 to 2; the ratio of crack depth to plate thickness ranged from 0.2 to 0.8; and the ratio of notch radius to the plate thickness ranged from 1 to 3. The models had about 15,000 degrees-of-freedom. Stress-intensity factors were calculated by using the nodal-force method.
NASA Astrophysics Data System (ADS)
Zhang, Yan; Li, Hailong; Xiao, Kai; Wang, Xuejing; Lu, Xiaoting; Zhang, Meng; An, An; Qu, Wenjing; Wan, Li; Zheng, Chunmiao; Wang, Xusheng; Jiang, Xiaowei
2017-10-01
Radium and radon mass balance models have been widely used to quantify submarine groundwater discharge (SGD) in coastal areas. However, the losses of radium or radon in seawater caused by recirculated saline groundwater discharge (RSGD) are ignored in most previous tracer-based studies, and this can lead to an underestimation of SGD. Here we present an improved method that accounts for the losses of tracers caused by RSGD to enhance accuracy in estimating SGD and SGD-associated material loadings. Theoretical analysis indicates that neglecting the tracer losses induced by RSGD underestimates the SGD by a percentage approximately equal to the tracer activity ratio of nearshore seawater to groundwater. Data analysis of previous typical case studies shows that the existing models underestimated the SGD by 1.9-93%, with an average of 32.2%. The method is applied in Jiaozhou Bay (JZB), North China, which is experiencing significant environmental pollution. The SGD flux into JZB estimated by the improved method is ~1.44 and 1.34 times that estimated by the old method for the 226Ra and 228Ra mass balance models, respectively. Both SGD and RSGD fluxes are significantly higher than the discharge rate of the Dagu River (the largest river running into JZB). The fluxes of nutrients and metals through SGD are comparable to or even higher than those from local rivers, which indicates that SGD is an important source of chemicals into JZB and has an important impact on the marine ecological system.
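Taking the abstract's statement at face value (the old models underestimate SGD by a fraction roughly equal to the seawater-to-groundwater tracer activity ratio), a corrected estimate can be sketched as follows; all numeric values are hypothetical:

```python
def corrected_sgd(sgd_old, ra_seawater, ra_groundwater):
    """Correct an SGD estimate for tracer losses via RSGD.

    The abstract states the old models underestimate SGD by a percentage
    approximately equal to the nearshore-seawater/groundwater activity
    ratio; the activities and flux below are illustrative only.
    """
    under_fraction = ra_seawater / ra_groundwater
    return sgd_old / (1.0 - under_fraction)

print(corrected_sgd(sgd_old=100.0, ra_seawater=25.0, ra_groundwater=100.0))
# -> ~133.3, i.e. the uncorrected estimate was ~25% low
```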
Optoelectronic scanning system upgrade by energy center localization methods
NASA Astrophysics Data System (ADS)
Flores-Fuentes, W.; Sergiyenko, O.; Rodriguez-Quiñonez, J. C.; Rivas-López, M.; Hernández-Balbuena, D.; Básaca-Preciado, L. C.; Lindner, L.; González-Navarro, F. F.
2016-11-01
A problem of upgrading an optoelectronic scanning system with digital post-processing of the signal based on adequate methods of energy center localization is considered. An improved dynamic triangulation analysis technique is proposed by an example of industrial infrastructure damage detection. A modification of our previously published method aimed at searching for the energy center of an optoelectronic signal is described. Application of the artificial intelligence algorithm of compensation for the error of determining the angular coordinate in calculating the spatial coordinate through dynamic triangulation is demonstrated. Five energy center localization methods are developed and tested to select the best method. After implementation of these methods, digital compensation for the measurement error, and statistical data analysis, a non-parametric behavior of the data is identified. The Wilcoxon signed rank test is applied to improve the result further. For optical scanning systems, it is necessary to detect a light emitter mounted on the infrastructure being investigated to calculate its spatial coordinate by the energy center localization method.
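The five localization methods are not specified in the abstract; one classical energy-center estimator that such comparisons typically include is the intensity-weighted centroid, sketched below (synthetic data, illustrative only):

```python
import numpy as np

def energy_centroid(signal, positions=None):
    """Intensity-weighted centroid of an optoelectronic scan signal.

    One classical energy-center estimator; the paper compares five such
    methods, and this sketch is not claimed to be the one it selected.
    """
    signal = np.asarray(signal, dtype=float)
    if positions is None:
        positions = np.arange(signal.size)
    return np.sum(positions * signal) / np.sum(signal)

pulse = np.exp(-0.5 * ((np.arange(100) - 42.3) / 6.0) ** 2)  # synthetic peak
print(energy_centroid(pulse))  # ~42.3
```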
Winters, C.E.
1957-11-12
A method for the preparation of a diethyl ether solution of uranyl nitrate is described. Previously the preparation of such ether solutions has been difficult and expensive, since crystalline uranyl nitrate hexahydrate dissolves very slowly in ether. An improved method for effecting such dissolution has been found, and it comprises adding molten uranyl nitrate hexahydrate at a temperature of 65 to 105 deg C to the ether while maintaining the temperature of the ether solvent below its boiling point.
Functional Independent Scaling Relation for ORR/OER Catalysts
Christensen, Rune; Hansen, Heine A.; Dickens, Colin F.; ...
2016-10-11
A widely used adsorption energy scaling relation between OH* and OOH* intermediates in the oxygen reduction reaction (ORR) and oxygen evolution reaction (OER) has previously been determined using density functional theory and shown to dictate a minimum thermodynamic overpotential for both reactions. Here, we show that the oxygen–oxygen bond in the OOH* intermediate is, however, not well described with the previously used class of exchange-correlation functionals. By quantifying and correcting the systematic error, an improved description of gaseous peroxide species versus experimental data and a reduction in calculational uncertainty is obtained. For adsorbates, we find that the systematic error largely cancels the vdW interaction missing in the original determination of the scaling relation. An improved scaling relation, which is fully independent of the applied exchange-correlation functional, is obtained and found to differ by 0.1 eV from the original. Lastly, this largely confirms that, although obtained with a method suffering from systematic errors, the previously obtained scaling relation is applicable for predictions of catalytic activity.
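For context, the scaling relation at issue is commonly written in the literature with an offset near 3.2 eV, which together with the 1.23 V equilibrium potential fixes the minimum thermodynamic overpotential; the improved relation reported here shifts the offset by 0.1 eV. A hedged restatement of the textbook form (the constants are from the broader literature, not from this paper):

```latex
% Conventional OH*/OOH* scaling relation (literature value of the offset):
\Delta E_{\mathrm{OOH}^*} \approx \Delta E_{\mathrm{OH}^*} + 3.2\,\mathrm{eV}
% A fixed 3.2 eV gap split over two proton-electron transfer steps implies
\eta_{\min} \approx \frac{3.2\,\mathrm{eV}}{2e} - 1.23\,\mathrm{V} \approx 0.37\,\mathrm{V}
```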
Andrasiak, Iga; Rybka, Justyna; Knopinska-Posluszny, Wanda; Wrobel, Tomasz
2017-05-01
Bendamustine and ibrutinib are commonly used in the treatment of patients suffering from chronic lymphocytic leukemia (CLL). In this study we compare the efficacy and safety of bendamustine versus ibrutinib therapy in previously untreated patients with CLL. Because there are no head-to-head comparisons between bendamustine and ibrutinib, we performed an indirect comparison using the Bucher method. A systematic literature review was performed and 2 studies published before June 2016 were included in the analysis. Treatment with ibrutinib significantly improves investigator-assessed PFS (HR of 0.3; P = .01) and OS (HR of 0.21; P < .001). Our study indicates that ibrutinib therapy improves PFS and OS and is superior in terms of safety compared with bendamustine therapy in CLL patients. Copyright © 2017 Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Guo, Yanhui; Zhou, Chuan; Chan, Heang-Ping; Wei, Jun; Chughtai, Aamer; Sundaram, Baskaran; Hadjiiski, Lubomir M.; Patel, Smita; Kazerooni, Ella A.
2013-04-01
A 3D multiscale intensity homogeneity transformation (MIHT) method was developed to reduce false positives (FPs) in our previously developed CAD system for pulmonary embolism (PE) detection. In MIHT, the voxel intensity of a PE candidate region was transformed to an intensity homogeneity value (IHV) with respect to the local median intensity. The IHVs were calculated in multiscales (MIHVs) to measure the intensity homogeneity, taking into account vessels of different sizes and different degrees of occlusion. Seven new features including the entropy, gradient, and moments that characterized the intensity distributions of the candidate regions were derived from the MIHVs and combined with the previously designed features that described the shape and intensity of PE candidates for the training of a linear classifier to reduce the FPs. 59 CTPA PE cases were collected from our patient files (UM set) with IRB approval and 69 cases from the PIOPED II data set with access permission. 595 and 800 PEs were identified as reference standard by experienced thoracic radiologists in the UM and PIOPED set, respectively. FROC analysis was used for performance evaluation. Compared with our previous CAD system, at a test sensitivity of 80%, the new method reduced the FP rate from 18.9 to 14.1/scan for the PIOPED set when the classifier was trained with the UM set and from 22.6 to 16.0/scan vice versa. The improvement was statistically significant (p<0.05) by JAFROC analysis. This study demonstrated that the MIHT method is effective in reducing FPs and improving the performance of the CAD system.
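The abstract defines the IHV only as a transformation of voxel intensity relative to the local median over several scales; a minimal sketch of that idea follows (the exact IHV formula, window sizes, and feature choices are assumptions, not the authors' definitions):

```python
import numpy as np
from scipy.ndimage import median_filter

def multiscale_ihv(volume, scales=(3, 5, 9)):
    """Sketch of a multiscale intensity homogeneity transform.

    The paper defines IHVs relative to the local median intensity; the exact
    formula is not given in the abstract, so a plain difference from the
    local median at each window size is assumed here.
    """
    return [volume - median_filter(volume, size=s) for s in scales]

vol = np.random.rand(32, 32, 32)             # stand-in for a candidate region
ihvs = multiscale_ihv(vol)
features = [np.abs(m).mean() for m in ihvs]  # crude homogeneity descriptors
print(features)
```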
Genetic engineering of Ganoderma lucidum for the efficient production of ganoderic acids
Xu, Jun-Wei; Zhong, Jian-Jiang
2015-01-01
Ganoderma lucidum is a well-known traditional medicinal mushroom that produces ganoderic acids with numerous interesting bioactivities. Genetic engineering is an efficient approach to improve ganoderic acid biosynthesis. However, reliable genetic transformation methods and appropriate genetic manipulation strategies remain underdeveloped and thus should be enhanced. We previously established a homologous genetic transformation method for G. lucidum; we also applied the established method to perform the deregulated overexpression of a homologous 3-hydroxy-3-methyl-glutaryl coenzyme A reductase gene in G. lucidum. Engineered strains accumulated more ganoderic acids than wild-type strains. In this report, the genetic transformation systems of G. lucidum are described; current trends are also presented to improve ganoderic acid production through the genetic manipulation of G. lucidum. PMID:26588475
Cheating prevention in visual cryptography.
Hu, Chih-Ming; Tzeng, Wen-Guey
2007-01-01
Visual cryptography (VC) is a method of encrypting a secret image into shares such that stacking a sufficient number of shares reveals the secret image. Shares are usually presented in transparencies; each participant holds a transparency. Most of the previous research on VC focuses on improving two parameters: pixel expansion and contrast. In this paper, we studied the cheating problem in VC and extended VC. We considered the attacks of malicious adversaries who may deviate from the scheme in any way. We presented three cheating methods and applied them to attack existing VC or extended VC schemes. We improved one cheat-preventing scheme. We proposed a generic method that converts a VCS to another VCS that has the property of cheating prevention. The overhead of the conversion is near optimal in both contrast degradation and pixel expansion.
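For readers unfamiliar with the VC model the paper builds on, the classic 2-out-of-2 scheme with pixel expansion 2 can be sketched in a few lines; this illustrates share stacking only, not the paper's cheat-prevention conversion:

```python
import random

# Classic 2-out-of-2 visual cryptography (Naor-Shamir), pixel expansion 2.
PATTERNS = [(0, 1), (1, 0)]  # the two complementary subpixel patterns

def make_shares(secret_row):
    s1, s2 = [], []
    for bit in secret_row:            # bit = 1 for a black secret pixel
        p = random.choice(PATTERNS)
        s1.extend(p)
        # white pixel: identical patterns; black pixel: complementary ones
        s2.extend(p if bit == 0 else (1 - p[0], 1 - p[1]))
    return s1, s2

def stack(a, b):                      # stacking transparencies = OR of black
    return [x | y for x, y in zip(a, b)]

row = [0, 1, 1, 0]
sh1, sh2 = make_shares(row)
print(stack(sh1, sh2))  # black pixels -> (1,1); white -> half-black pattern
```

Stacking recovers the secret because black pixels become fully black subpixel pairs while white pixels stay half black, which is exactly the contrast parameter the abstract mentions.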
Tabor, P S; Neihof, R A
1982-10-01
We report a method which combines epifluorescence microscopy and microautoradiography to determine both the total number of microorganisms in natural water populations and those individual organisms active in the uptake of specific substrates. After incubation with 3H-labeled substrate, the sample is filtered and, while still on the filter, mounted directly in a film of autoradiographic emulsion on a microscope slide. The microautoradiogram is processed and stained with acridine orange, and, subsequently, the filter is removed before microscopic observation. This novel preparation resulted in increased accuracy in direct counts made from the autoradiogram, improved sensitivity in the recognition of uptake-active (3H-labeled) organisms, and enumeration of a significantly greater number of labeled organisms compared with corresponding samples prepared by a previously reported method.
Tabor, Paul S.; Neihof, Rex A.
1982-01-01
We report a method which combines epifluorescence microscopy and microautoradiography to determine both the total number of microorganisms in natural water populations and those individual organisms active in the uptake of specific substrates. After incubation with 3H-labeled substrate, the sample is filtered and, while still on the filter, mounted directly in a film of autoradiographic emulsion on a microscope slide. The microautoradiogram is processed and stained with acridine orange, and, subsequently, the filter is removed before microscopic observation. This novel preparation resulted in increased accuracy in direct counts made from the autoradiogram, improved sensitivity in the recognition of uptake-active (3H-labeled) organisms, and enumeration of a significantly greater number of labeled organisms compared with corresponding samples prepared by a previously reported method. PMID:16346120
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Hongxing; Fang, Hengrui; Miller, Mitchell D.
2016-07-15
An iterative transform algorithm is proposed to improve the conventional molecular-replacement method for solving the phase problem in X-ray crystallography. Several examples of successful trial calculations carried out with real diffraction data are presented. An iterative transform method proposed previously for direct phasing of high-solvent-content protein crystals is employed for enhancing the molecular-replacement (MR) algorithm in protein crystallography. Target structures that are resistant to conventional MR due to insufficient similarity between the template and target structures might be tractable with this modified phasing method. Trial calculations involving three different structures are described to test and illustrate the methodology. The relationship of the approach to PHENIX Phaser-MR and MR-Rosetta is discussed.
Churchwell, Mona I; Twaddle, Nathan C; Meeker, Larry R; Doerge, Daniel R
2005-10-25
Recent technological advances have made available reverse phase chromatographic media with a 1.7 μm particle size along with a liquid handling system that can operate such columns at much higher pressures. This technology, termed ultra performance liquid chromatography (UPLC), offers significant theoretical advantages in resolution, speed, and sensitivity for analytical determinations, particularly when coupled with mass spectrometers capable of high-speed acquisitions. This paper explores the differences in LC-MS performance by conducting a side-by-side comparison of UPLC for several methods previously optimized for HPLC-based separation and quantification of multiple analytes with maximum throughput. In general, UPLC produced significant improvements in method sensitivity, speed, and resolution. Sensitivity increases with UPLC, which were found to be analyte-dependent, were as large as 10-fold and improvements in method speed were as large as 5-fold under conditions of comparable peak separations. Improvements in chromatographic resolution with UPLC were apparent from generally narrower peak widths and from a separation of diastereomers not possible using HPLC. Overall, the improvements in LC-MS method sensitivity, speed, and resolution provided by UPLC show that further advances can be made in analytical methodology to add significant value to hypothesis-driven research.
Improvement of an algorithm for recognition of liveness using perspiration in fingerprint devices
NASA Astrophysics Data System (ADS)
Parthasaradhi, Sujan T.; Derakhshani, Reza; Hornak, Lawrence A.; Schuckers, Stephanie C.
2004-08-01
Previous work in our laboratory and others has demonstrated that spoof fingers made of a variety of materials including silicone, Play-Doh, clay, and gelatin (gummy fingers) can be scanned and verified when compared to a live enrolled finger. Liveness detection, i.e., determining whether the introduced biometric comes from a live source, has been suggested as a means to circumvent attacks using spoof fingers. We developed a new liveness method based on perspiration changes in the fingerprint image. Recent results showed approximately 90% classification rates using different classification methods for various technologies including optical, electro-optical, and capacitive DC, a shorter time window, and a diverse dataset. This paper focuses on improving the live classification rate by using a weight decay method during the training phase in order to improve generalization and reduce the variance of the neural-network-based classifier. The dataset included fingerprint images from 33 live subjects, 33 spoofs created with dental impression material and Play-Doh, and 14 cadaver fingers. 100% live classification was achieved with 81.8 to 100% spoof classification, depending on the device technology. The weight-decay method improves upon past reports by increasing the live and spoof classification rates.
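Weight decay itself is standard L2 regularization applied during gradient training; a framework-free sketch of one update step (the learning rate and decay constant are arbitrary illustrations, not the paper's settings):

```python
import numpy as np

def sgd_step_with_weight_decay(w, grad, lr=0.01, decay=1e-4):
    """One gradient step with L2 weight decay.

    Weight decay shrinks weights toward zero at each update, which is the
    regularization idea the paper uses to reduce classifier variance; the
    constants here are arbitrary.
    """
    return w - lr * (grad + decay * w)

w = np.random.randn(64)
grad = np.random.randn(64)            # stand-in for a backprop gradient
w = sgd_step_with_weight_decay(w, grad)
print(np.linalg.norm(w))
```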
Vital sign sensing method based on EMD in terahertz band
NASA Astrophysics Data System (ADS)
Xu, Zhengwu; Liu, Tong
2014-12-01
Non-contact respiration and heartbeat rates detection could be applied to find survivors trapped in the disaster or the remote monitoring of the respiration and heartbeat of a patient. This study presents an improved algorithm that extracts the respiration and heartbeat rates of humans by utilizing the terahertz radar, which further lessens the effects of noise, suppresses the cross-term, and enhances the detection accuracy. A human target echo model for the terahertz radar is first presented. Combining the over-sampling method, low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. The down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that compared with the previously presented vital sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for the detection of respiration and heartbeat signal in a complicated environment.
Full velocity difference model for a car-following theory.
Jiang, R; Wu, Q; Zhu, Z
2001-07-01
In this paper, we present a full velocity difference model for a car-following theory based on the previous models in the literature. To our knowledge, the model is a theoretical improvement over the previous ones, because it considers more aspects of the car-following process than the others. This point is verified by numerical simulation. We then investigate the properties of the model using both analytic and numerical methods, and find that the model can describe the phase transition of traffic flow and estimate the evolution of traffic congestion.
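The full velocity difference acceleration law combines an optimal-velocity relaxation term with a velocity-difference term; the sketch below uses a common optimal-velocity calibration from the car-following literature, which may differ from the parameters used in the paper:

```python
import numpy as np

# Full velocity difference (FVD) acceleration law:
#   dv_n/dt = kappa * (V(dx_n) - v_n) + lam * dv_rel,
# where dx_n is the headway and dv_rel the velocity difference to the leader.
# The optimal-velocity function and constants follow a common calibration
# from the car-following literature, not values quoted in this paper.

def optimal_velocity(dx, v1=6.75, v2=7.91, c1=0.13, c2=1.57, lc=5.0):
    return v1 + v2 * np.tanh(c1 * (dx - lc) - c2)

def fvd_accel(dx, v, dv_rel, kappa=0.41, lam=0.5):
    return kappa * (optimal_velocity(dx) - v) + lam * dv_rel

# One Euler step for a follower 20 m behind a leader going 2 m/s faster:
v, dt = 10.0, 0.1
v += dt * fvd_accel(dx=20.0, v=v, dv_rel=2.0)
print(v)
```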
Trace of totally positive algebraic integers and integer transfinite diameter
NASA Astrophysics Data System (ADS)
Flammang, V.
2009-06-01
Explicit auxiliary functions can be used in the "Schur-Siegel-Smyth trace problem". In previous works, these functions were constructed only with polynomials having all their roots positive. Here, we use several polynomials with complex roots, which are found with Wu's algorithm, and we improve the known lower bounds for the absolute trace of totally positive algebraic integers. This improvement has a consequence for the search for Salem numbers that have a negative trace. The same method also gives a small improvement of the upper bound for the integer transfinite diameter of [0,1].
Overview of MPLNET Version 3 Cloud Detection
NASA Technical Reports Server (NTRS)
Lewis, Jasper R.; Campbell, James; Welton, Ellsworth J.; Stewart, Sebastian A.; Haftings, Phillip
2016-01-01
The National Aeronautics and Space Administration Micro Pulse Lidar Network, version 3, cloud detection algorithm is described and differences relative to the previous version are highlighted. Clouds are identified from normalized level 1 signal profiles using two complementary methods. The first method considers vertical signal derivatives for detecting low-level clouds. The second method, which detects high-level clouds like cirrus, is based on signal uncertainties necessitated by the relatively low signal-to-noise ratio exhibited in the upper troposphere by eye-safe network instruments, especially during daytime. Furthermore, a multitemporal averaging scheme is used to improve cloud detection under conditions of a weak signal-to-noise ratio. Diurnal and seasonal cycles of cloud occurrence frequency based on one year of measurements at the Goddard Space Flight Center (Greenbelt, Maryland) site are compared for the new and previous versions. The largest differences, and perceived improvement, in detection occurs for high clouds (above 5 km, above MSL), which increase in occurrence by over 5%. There is also an increase in the detection of multilayered cloud profiles from 9% to 19%. Macrophysical properties and estimates of cloud optical depth are presented for a transparent cirrus dataset. However, the limit to which the cirrus cloud optical depth could be reliably estimated occurs between 0.5 and 0.8. A comparison using collocated CALIPSO measurements at the Goddard Space Flight Center and Singapore Micro Pulse Lidar Network (MPLNET) sites indicates improvements in cloud occurrence frequencies and layer heights.
Automated three-component synthesis of a library of γ-lactams
Fenster, Erik; Hill, David; Reiser, Oliver
2012-01-01
A three-component method for the synthesis of γ-lactams from commercially available maleimides, aldehydes, and amines was adapted to parallel library synthesis. Improvements to the chemistry over previous efforts include the optimization of the method to a one-pot process, the management of by-products and excess reagents, the development of an automated parallel sequence, and the adaptation of the method to permit the preparation of enantiomerically enriched products. These efforts culminated in the preparation of a library of 169 γ-lactams. PMID:23209515
Degree of Approximation by a General Cλ -Summability Method
NASA Astrophysics Data System (ADS)
Sonker, S.; Munjal, A.
2018-03-01
In the present study, two theorems explaining the degree of approximation of signals belonging to the class Lip(α, p, w) by a more general Cλ-method (summability method) have been formulated. Improved estimations have been obtained in terms of λ(n), where (λ(n))^(-α) ≤ n^(-α) for 0 < α ≤ 1, as compared to previous studies presented in terms of n. These estimations of infinite matrices are very much applicable in solid state physics, which further motivates an investigation of perturbations of matrix-valued functions.
[The possibility for using the phenomenon of polarized light interference in treating amblyopia].
Abramov, V G; Vakurina, A E; Kashchenko, T P; Pargina, N M
1996-01-01
A new method for treating amblyopia is proposed, making use of the phenomenon of polarized light interference. It helps act simultaneously on the brightness, contrast frequency, and color sensitivity in response to patterns. The method was used in the treatment of 36 children. In group 1 (n = 20) it was combined with the traditional methods. Such treatment was more effective than in controls treated routinely. Group 2 consisted of 16 children in whom previous therapy was of no avail. Visual function was improved in 7 of them.
NASA Astrophysics Data System (ADS)
Buchari; Tarigan, U.; Ambarita, M. B.
2018-02-01
PT. XYZ is a wood processing company that produces semi-finished wood using a make-to-order production system. In the production process, the production line is visibly unbalanced. The imbalance of the production line is caused by differences in cycle time between work stations. In addition, the material flow pattern is irregular, resulting in backtracking and long displacement distances. This study aimed to obtain the allocation of work elements to specific work stations and to propose an improved production layout based on the results of the line balancing. The method used for balancing is Ranked Positional Weight (RPW), also known as the Helgeson-Birnie method, while the method used for the layout improvement is Systematic Layout Planning (SLP). Using Ranked Positional Weight (RPW), line efficiency increased to 84.86% and balance delay decreased to 15.14%. Repairing the layout using Systematic Layout Planning (SLP) also gave good results, reducing the path length from 213.09 meters to 133.82 meters, a decrease of 37.2%.
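As a concrete illustration of the RPW (Helgeson-Birnie) procedure described above, the sketch below ranks tasks by positional weight (own time plus all successor times) and fills stations under a cycle-time limit; the task network and times are invented, not PT. XYZ data:

```python
# Ranked Positional Weight (Helgeson-Birnie) sketch on a toy task network.
times = {"A": 4, "B": 3, "C": 5, "D": 2, "E": 6}
succ = {"A": {"B", "C"}, "B": {"D"}, "C": {"D"}, "D": {"E"}, "E": set()}
preds = {t: {p for p in times if t in succ[p]} for t in times}

def positional_weight(task):
    # task time plus the times of every (transitive) successor
    seen, stack = set(), [task]
    while stack:
        t = stack.pop()
        if t not in seen:
            seen.add(t)
            stack.extend(succ[t])
    return sum(times[t] for t in seen)

ranked = sorted(times, key=positional_weight, reverse=True)
cycle_time, stations, done = 8, [[]], set()
while len(done) < len(times):
    # highest-ranked unassigned task whose predecessors are all assigned
    ready = [t for t in ranked if t not in done and preds[t] <= done]
    t = next((t for t in ready
              if sum(times[x] for x in stations[-1]) + times[t] <= cycle_time),
             None)
    if t is None:            # nothing fits: open a new station
        stations.append([])
        continue
    stations[-1].append(t)
    done.add(t)
print(stations)  # -> [['A', 'B'], ['C', 'D'], ['E']]
```

Line efficiency then follows as total task time divided by (stations × cycle time), the figure the abstract reports as 84.86%.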
A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy
NASA Astrophysics Data System (ADS)
Bennun, Leonardo
2017-07-01
A new smoothing method for improving the identification and quantification of spectral functions, based on prior knowledge of the signals expected to be quantified, is presented. These signals are used as weighting coefficients in the smoothing algorithm. The method was conceived for atomic and nuclear spectroscopies, preferably techniques in which net counts are proportional to acquisition time, such as particle-induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods. When properly applied, the algorithm distorts neither the shape nor the intensity of the signal, so it is well suited to all kinds of spectroscopic techniques. The method is extremely effective at reducing high-frequency noise in the signal, much more so than a single rectangular smooth of the same width. Like all smoothing techniques, the proposed method improves the precision of the results, but in this case we also found a systematic improvement in their accuracy. The improvement in the quality of the results when the method is applied to real experimental data remains to be evaluated; we expect better characterization of net peak areas and smaller detection and quantification limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria it could be applied to time series. In general, when the algorithm is applied to experimental results, the sought characteristic functions required for this weighted smoothing method should be obtained from a system with strong stability; if the sought signals are not perfectly clean, the method should be applied with care.
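The abstract describes using the expected signals as weighting coefficients; one plausible reading is convolution with the normalized expected line shape, sketched below (the Gaussian detector response and all parameters are assumptions, not the author's algorithm):

```python
import numpy as np

def signal_weighted_smooth(spectrum, expected_peak):
    """Smooth a spectrum using the expected line shape as the weight kernel.

    The expected signal serves as the weighting coefficients; here that is
    realized as convolution with the normalized expected peak shape (a
    Gaussian detector response is assumed for illustration).
    """
    kernel = np.asarray(expected_peak, dtype=float)
    kernel /= kernel.sum()                     # preserve total counts
    return np.convolve(spectrum, kernel, mode="same")

x = np.arange(-10, 11)
response = np.exp(-0.5 * (x / 2.5) ** 2)       # assumed detector line shape
counts = np.random.poisson(50, size=512)       # synthetic Poisson spectrum
print(signal_weighted_smooth(counts, response)[:5])
```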
Application of Raman spectroscopy for cervical dysplasia diagnosis
Kanter, Elizabeth M.; Vargis, Elizabeth; Majumder, Shovan; Keller, Matthew D.; Woeste, Emily; Rao, Gautam G.; Mahadevan-Jansen, Anita
2014-01-01
Cervical cancer is the second most common malignancy among women worldwide, with over 490,000 cases diagnosed and 274,000 deaths each year. Although current screening methods have dramatically reduced cervical cancer incidence and mortality in developed countries, a “See and Treat” method would be preferred, especially in developing countries. Results from our previous work suggested that Raman spectroscopy can be used to detect cervical precancers; however, with a classification accuracy of 88%, it was not clinically applicable. In this paper, we describe how incorporating a woman's hormonal status, particularly the point in the menstrual cycle and menopausal state, into our previously developed classification algorithm improves the accuracy of our method to 94%. The results of this paper bring Raman spectroscopy one step closer to being utilized in a clinical setting to diagnose cervical dysplasia. (Figure: posterior probabilities of class membership, as determined by MRDF-SMLR, for patients regardless of menopausal status and for pre-menopausal patients only.) PMID:19343687
Amount of Postcue Encoding Predicts Amount of Directed Forgetting
ERIC Educational Resources Information Center
Pastotter, Bernhard; Bauml, Karl-Heinz
2010-01-01
In list-method directed forgetting, participants are cued to intentionally forget a previously studied list (List 1) before encoding a subsequently presented list (List 2). Compared with remember-cued participants, forget-cued participants typically show impaired recall of List 1 and improved recall of List 2, referred to as List 1 forgetting and…
Cognitive Support in Teaching Football Techniques
ERIC Educational Resources Information Center
Duda, Henryk
2009-01-01
Study aim: To improve the teaching of football techniques by applying cognitive and imagery techniques. Material and methods: Four groups of subjects, n = 32 each, were studied: male and female physical education students aged 20-21 years, not engaged previously in football training; male juniors and minors, aged 16 and 13 years, respectively,…
Combining RTI and Psychoeducational Assessment: What We Must Assume to Do Otherwise
ERIC Educational Resources Information Center
Wodrich, David L.; Spencer, Marsha L. S.; Daley, Kelly B.
2006-01-01
The Individuals With Disabilities Education Improvement Act of 2004 (IDEA; 2004) permitted lack of students' response to intervention (RTI) to be considered as a basis for documenting specific learning disabilities (SLD). The previous method of detecting SLD, which relied on IQ and achievement testing, consequently is no longer mandatory.…
Talker Differences in Clear and Conversational Speech: Acoustic Characteristics of Vowels
ERIC Educational Resources Information Center
Ferguson, Sarah Hargus; Kewley-Port, Diane
2007-01-01
Purpose: To determine the specific acoustic changes that underlie improved vowel intelligibility in clear speech. Method: Seven acoustic metrics were measured for conversational and clear vowels produced by 12 talkers--6 who previously were found (S. H. Ferguson, 2004) to produce a large clear speech vowel intelligibility effect for listeners with…
Application of an auditory model to speech recognition.
Cohen, J R
1989-06-01
Some aspects of auditory processing are incorporated in a front end for the IBM speech-recognition system [F. Jelinek, "Continuous speech recognition by statistical methods," Proc. IEEE 64 (4), 532-556 (1976)]. This new process includes adaptation, loudness scaling, and mel warping. Tests show that the design is an improvement over previous algorithms.
ERIC Educational Resources Information Center
Sencibaugh, Joseph M.
2005-01-01
This paper examines research studies, which focus on interventions commonly used with students who are learning disabled and identify effective methods that produce substantial benefits concerning reading comprehension. This paper synthesizes previous observation studies by conducting a meta-analysis of strategies used to improve the reading…
ERIC Educational Resources Information Center
Sencibaugh, Joseph M.
2007-01-01
This paper examines research studies, which focus on interventions commonly used with students who are learning disabled and identifies effective methods that produce substantial benefits concerning reading comprehension. This paper synthesizes previous observation studies by conducting a meta-analysis of strategies used to improve the reading…
Haemophilus haemolyticus Isolates Causing Clinical Disease
Wang, Xin; Briere, Elizabeth C.; Katz, Lee S.; Cohn, Amanda C.; Clark, Thomas A.; Messonnier, Nancy E.; Mayer, Leonard W.
2012-01-01
We report seven cases of Haemophilus haemolyticus invasive disease detected in the United States, which were previously misidentified as nontypeable Haemophilus influenzae. All cases had different symptoms and presentations. Our study suggests that a testing scheme that includes reliable PCR assays and standard microbiological methods should be used in order to improve H. haemolyticus identification. PMID:22573587
Haemophilus haemolyticus isolates causing clinical disease.
Anderson, Raydel; Wang, Xin; Briere, Elizabeth C; Katz, Lee S; Cohn, Amanda C; Clark, Thomas A; Messonnier, Nancy E; Mayer, Leonard W
2012-07-01
We report seven cases of Haemophilus haemolyticus invasive disease detected in the United States, which were previously misidentified as nontypeable Haemophilus influenzae. All cases had different symptoms and presentations. Our study suggests that a testing scheme that includes reliable PCR assays and standard microbiological methods should be used in order to improve H. haemolyticus identification.
NASA Astrophysics Data System (ADS)
Hon, Marc; Stello, Dennis; Yu, Jie
2018-05-01
Deep learning in the form of 1D convolutional neural networks has previously been shown to be capable of efficiently classifying the evolutionary state of oscillating red giants into red giant branch stars and helium-core burning stars by recognizing visual features in their asteroseismic frequency spectra. We elaborate further on the deep learning method by developing an improved convolutional neural network classifier. To make our method useful for current and future space missions such as K2, TESS, and PLATO, we train classifiers that are able to classify the evolutionary states of the lower-frequency-resolution spectra expected from these missions. Additionally, we provide new classifications for 8633 Kepler red giants, of which 426 have not previously been classified using asteroseismology. This brings the total to 14983 Kepler red giants classified with our new neural network. We also verify that our classifiers are remarkably robust to suboptimal data, including low signal-to-noise ratios and incorrect training truth labels.
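A minimal 1D convolutional binary classifier of the general kind described can be sketched as follows; the layer sizes and input length are illustrative guesses, not the authors' architecture:

```python
import tensorflow as tf

n_bins = 1000  # assumed length of the input frequency spectrum
model = tf.keras.Sequential([
    tf.keras.layers.Conv1D(16, 9, activation="relu", input_shape=(n_bins, 1)),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Conv1D(32, 9, activation="relu"),
    tf.keras.layers.MaxPooling1D(4),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # P(helium-core burning)
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.summary()
```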
Improved regulatory element prediction based on tissue-specific local epigenomic signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Yupeng; Gorkin, David U.; Dickel, Diane E.
Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types.
Information Filtering via Heterogeneous Diffusion in Online Bipartite Networks
Zhang, Fu-Guo; Zeng, An
2015-01-01
The rapid expansion of the Internet brings us overwhelming online information, far more than any individual can go through. Recommender systems were therefore created to help people dig through this abundance of information. In networks composed of users and objects, recommender algorithms based on diffusion have been proven to be among the best performing methods. Previous works considered the diffusion processes from user to object and from object to user to be equivalent. We show in this work that this is not the case, and we improve the quality of the recommendation by taking into account the asymmetrical nature of this process. We apply this idea to modify state-of-the-art recommendation methods. The simulation results show that the new methods can outperform the existing methods in both recommendation accuracy and diversity. Finally, this modification is shown to improve the recommendation in a realistic case. PMID:26125631
Cooper, A D; Stubbings, G W; Kelly, M; Tarbin, J A; Farrington, W H; Shearer, G
1998-07-03
An improved on-line metal chelate affinity chromatography-high-performance liquid chromatography (MCAC-HPLC) method for the determination of tetracycline antibiotics in animal tissues and egg has been developed. Extraction was carried out with ethyl acetate. The extract was then evaporated to dryness and reconstituted in methanol prior to on-line MCAC clean-up and HPLC-UV determination. Recoveries of tetracycline, oxytetracycline, demeclocycline and chlortetracycline in the range 42% to 101% were obtained from egg, poultry, fish and venison tissues spiked at 25 μg kg-1. Limits of detection less than 10 μg kg-1 were estimated for all four analytes. This method has higher throughput, higher recovery and lower limits of detection than a previously reported on-line MCAC-HPLC method which involved aqueous extraction and solid-phase extraction clean-up.
Improved regulatory element prediction based on tissue-specific local epigenomic signatures
He, Yupeng; Gorkin, David U.; Dickel, Diane E.; ...
2017-02-13
Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types.
Information Filtering via Heterogeneous Diffusion in Online Bipartite Networks.
Zhang, Fu-Guo; Zeng, An
2015-01-01
The rapid expansion of the Internet brings us overwhelming online information, far more than any individual can go through. Recommender systems were therefore created to help people dig through this abundance of information. In networks composed of users and objects, recommender algorithms based on diffusion have been proven to be among the best performing methods. Previous works considered the diffusion processes from user to object and from object to user to be equivalent. We show in this work that this is not the case, and we improve the quality of the recommendation by taking into account the asymmetrical nature of this process. We apply this idea to modify state-of-the-art recommendation methods. The simulation results show that the new methods can outperform the existing methods in both recommendation accuracy and diversity. Finally, this modification is shown to improve the recommendation in a realistic case.
NASA Astrophysics Data System (ADS)
Huismann, Tyler D.
Due to the rapidly expanding role of electric propulsion (EP) devices, it is important to evaluate their integration with other spacecraft systems. Specifically, EP device plumes can play a major role in spacecraft integration, and as such, accurate characterization of plume structure bears on mission success. This dissertation addresses issues related to accurate prediction of plume structure in a particular type of EP device, a Hall thruster. This is done in two ways: first, by coupling current plume simulation models with current models that simulate a Hall thruster's internal plasma behavior; second, by improving plume simulation models and thereby increasing physical fidelity. These methods are assessed by comparing simulated results to experimental measurements. Assessment indicates the two methods improve plume modeling capabilities significantly: using far-field ion current density as a metric, these approaches used in conjunction improve agreement with measurements by a factor of 2.5, as compared to previous methods. Based on comparison to experimental measurements, recent computational work on discharge chamber modeling has been largely successful in predicting properties of internal thruster plasmas. This model can provide detailed information on plasma properties at a variety of locations. Frequently, experimental data are not available at many locations of interest for computational models. In the absence of experimental data, there are limited alternatives for scientifically determining the plasma properties that are necessary as inputs into plume simulations. Therefore, this dissertation focuses on coupling current models that simulate internal thruster plasma behavior with plume simulation models. Further, recent experimental work on atom-ion interactions has provided a better understanding of particle collisions within plasmas. This experimental work is used to update collision models in a current plume simulation code. Previous versions of the code assume an unknown dependence between particles' pre-collision velocities and post-collision scattering angles. This dissertation focuses on updating several of these types of collisions by assuming a curve fit based on the measurements of atom-ion interactions, such that previously unknown angular dependences are well-characterized.
Akbar, M Ali; Ali, Norhashidah Hj Mohd; Mohyud-Din, Syed Tauseef
2013-01-01
The (G'/G)-expansion method is one of the most direct and effective methods for obtaining exact solutions of nonlinear partial differential equations (PDEs). In the present article, we construct the exact traveling wave solutions of nonlinear evolution equations in mathematical physics via the (2 + 1)-dimensional breaking soliton equation by using two methods: namely, a further improved (G'/G)-expansion method, where G(ξ) satisfies the auxiliary ordinary differential equation (ODE) [G'(ξ)]^2 = pG^2(ξ) + qG^4(ξ) + rG^6(ξ), with p, q and r constants, and the well-known extended tanh-function method. We demonstrate that although some of the exact solutions produced by these two methods are analogous, they are not identical. It is worth mentioning that the first method has not been applied previously and yields more exact solutions than the second one. PACS numbers: 02.30.Jr, 05.45.Yv, 02.30.Ik.
Johnson, Lucas B; Gintner, Lucas P; Park, Sehoo; Snow, Christopher D
2015-08-01
Accuracy of current computational protein design (CPD) methods is limited by inherent approximations in energy potentials and sampling. These limitations are often used to qualitatively explain design failures; however, relatively few studies provide specific examples or quantitative details that can be used to improve future CPD methods. Expanding the design method to include a library of sequences provides data that is well suited for discriminating between stabilizing and destabilizing design elements. Using thermophilic endoglucanase E1 from Acidothermus cellulolyticus as a model enzyme, we computationally designed a sequence with 60 mutations. The design sequence was rationally divided into structural blocks and recombined with the wild-type sequence. Resulting chimeras were assessed for activity and thermostability. Surprisingly, unlike previous chimera libraries, regression analysis based on one- and two-body effects was not sufficient for predicting chimera stability. Analysis of molecular dynamics simulations proved helpful in distinguishing stabilizing and destabilizing mutations. Reverting to the wild-type amino acid at destabilized sites partially regained design stability, and introducing predicted stabilizing mutations in wild-type E1 significantly enhanced thermostability. The ability to isolate stabilizing and destabilizing elements in computational design offers an opportunity to interpret previous design failures and improve future CPD methods. © The Author 2015. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
Zhang, Mingjing; Wen, Ming; Zhang, Zhi-Min; Lu, Hongmei; Liang, Yizeng; Zhan, Dejian
2015-03-01
Retention time shift is one of the most challenging problems in the preprocessing of massive chromatographic datasets. Here, an improved version of the moving window fast Fourier transform cross-correlation algorithm is presented to perform nonlinear and robust alignment of chromatograms by analyzing the shifts matrix generated by the moving window procedure. The shifts matrix in retention time can be estimated by fast Fourier transform cross-correlation with a moving window procedure. The refined shift of each scan point can be obtained by calculating the mode of the corresponding column of the shifts matrix. This version is simple, but more effective and robust than the previously published moving window fast Fourier transform cross-correlation method. It can handle nonlinear retention time shifts robustly if a proper window size is selected. The window size is the only parameter that needs to be adjusted and optimized. The properties of the proposed method are investigated by comparison with the previous moving window fast Fourier transform cross-correlation and recursive alignment by fast Fourier transform using chromatographic datasets. The pattern recognition results of a gas chromatography mass spectrometry dataset of metabolic syndrome can be improved significantly after preprocessing by this method. Furthermore, the proposed method is available as an open source package at https://github.com/zmzhang/MWFFT2. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
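The core operation, estimating the shift between corresponding windows of two chromatograms by FFT cross-correlation, can be sketched as below; the window placement is arbitrary, and the per-scan-point mode refinement over the shifts matrix described in the abstract is omitted:

```python
import numpy as np

def fft_xcorr_shift(reference, segment):
    """Integer shift of `segment` relative to `reference` (positive = delayed),
    estimated from the peak of their FFT-based cross-correlation."""
    n = len(reference) + len(segment) - 1
    xcorr = np.fft.irfft(np.fft.rfft(segment, n) *
                         np.conj(np.fft.rfft(reference, n)), n)
    lag = int(np.argmax(xcorr))
    return lag if lag < len(segment) else lag - n

t = np.linspace(0, 10, 2000)
ref = np.exp(-0.5 * ((t - 5) / 0.1) ** 2)   # synthetic chromatographic peak
shifted = np.roll(ref, 7)                   # simulate a retention time shift
print(fft_xcorr_shift(ref[900:1100], shifted[900:1100]))  # -> 7
```

In the full method, this estimator slides over overlapping windows to build the shifts matrix, whose column-wise modes give the robust per-scan-point shifts.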
Recent Improvements in Aerodynamic Design Optimization on Unstructured Meshes
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Anderson, W. Kyle
2000-01-01
Recent improvements in an unstructured-grid method for large-scale aerodynamic design are presented. Previous work had shown such computations to be prohibitively long in a sequential processing environment. Also, robust adjoint solutions and mesh movement procedures were difficult to realize, particularly for viscous flows. To overcome these limiting factors, a set of design codes based on a discrete adjoint method is extended to a multiprocessor environment using a shared memory approach. A nearly linear speedup is demonstrated, and the consistency of the linearizations is shown to remain valid. The full linearization of the residual is used to precondition the adjoint system, and a significantly improved convergence rate is obtained. A new mesh movement algorithm is implemented and several advantages over an existing technique are presented. Several design cases are shown for turbulent flows in two and three dimensions.
Improving informed consent: Stakeholder views
Anderson, Emily E.; Newman, Susan B.; Matthews, Alicia K.
2017-01-01
Purpose: Innovation will be required to improve the informed consent process in research. We aimed to obtain input from key stakeholders—research participants and those responsible for obtaining informed consent—to inform potential development of a multimedia informed consent “app.” Methods: This descriptive study used a mixed-methods approach. Five 90-minute focus groups were conducted with volunteer samples of former research participants and researchers/research staff responsible for obtaining informed consent. Participants also completed a brief survey that measured background information and knowledge and attitudes regarding research and the use of technology. Established qualitative methods were used to conduct the focus groups and data analysis. Results: We conducted five focus groups with 41 total participants: three groups with former research participants (total n = 22), and two groups with researchers and research coordinators (total n = 19). Overall, individuals who had previously participated in research had positive views regarding their experiences. However, further discussion elicited that the informed consent process often did not meet its intended objectives. Findings from both groups are presented according to three primary themes: content of consent forms, experience of the informed consent process, and the potential of technology to improve the informed consent process. A fourth theme, need for lay input on informed consent, emerged from the researcher groups. Conclusions: Our findings add to previous research that suggests that the use of interactive technology has the potential to improve the process of informed consent. However, our focus-group findings provide additional insight that technology cannot replace the human connection that is central to the informed consent process. More research that incorporates the views of key stakeholders is needed to ensure that multimedia consent processes do not repeat the mistakes of paper-based consent forms. PMID:28949896
An improved method of measuring heart rate using a webcam
NASA Astrophysics Data System (ADS)
Liu, Yi; Ouyang, Jianfei; Yan, Yonggang
2014-09-01
Measuring heart rate traditionally requires special equipment and physical contact with the subject. Reliable non-contact and low-cost measurements are highly desirable for convenient and comfortable physiological self-assessment. Previous work has shown that consumer-grade cameras can provide useful signals for remote heart rate measurement. In this paper a simple and robust method of measuring heart rate using a low-cost webcam is proposed. The blood volume pulse is extracted through proper Region of Interest (ROI) and color channel selection from image sequences of human faces, without complex computation. Heart rate is then quantified by spectral analysis. The method is successfully applied under natural lighting conditions. Experimental results show that it takes less time, is much simpler, and has accuracy similar to the previously published and widely used method of Independent Component Analysis (ICA). Being non-contact, convenient, and low-cost, it holds great promise for the popularization of home healthcare and can further be applied in biomedical research.
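As a rough illustration of the spectral step, the sketch below converts a per-frame trace of ROI-averaged green-channel values into a heart rate estimate. The band limits, Hann window, and frame rate are assumptions, not details taken from the paper.

```python
import numpy as np

def heart_rate_from_roi_means(green_means, fps):
    # Estimate heart rate (bpm) from a 1-D trace of per-frame ROI green means.
    x = np.asarray(green_means, dtype=float)
    x = x - x.mean()                              # remove the DC component
    spectrum = np.abs(np.fft.rfft(x * np.hanning(x.size)))
    freqs = np.fft.rfftfreq(x.size, d=1.0 / fps)
    band = (freqs >= 0.75) & (freqs <= 4.0)       # 45-240 bpm plausible range
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# Synthetic demo: a 72 bpm (1.2 Hz) pulse sampled at 30 fps with noise
t = np.arange(0, 20, 1 / 30.0)
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t)
trace += 0.1 * np.random.default_rng(0).normal(size=t.size)
print(heart_rate_from_roi_means(trace, fps=30.0))  # ~72 bpm
```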
Improvement of Hand Movement on Visual Target Tracking by Assistant Force of Model-Based Compensator
NASA Astrophysics Data System (ADS)
Ide, Junko; Sugi, Takenao; Nakamura, Masatoshi; Shibasaki, Hiroshi
Human motor control is achieved by appropriate motor commands generated by the central nervous system. A visual target tracking test is one effective method for analyzing human motor function. We previously examined, in a simulation study, the possibility of improving hand movement during visual target tracking by means of an additional assistant force. In this study, a method for compensating human hand movement during visual target tracking by adding an assistant force is proposed. The effectiveness of the compensation method was investigated in experiments with four healthy adults. The proposed compensator improved the reaction time, the position error, and the variability of the velocity of the human hand. The model-based compensator proposed in this study is constructed from measurement data on visual target tracking for each subject, so the properties of the hand movement of different subjects can be reflected in the structure of the compensator. The proposed method therefore has the potential to accommodate the individual properties of patients with various movement disorders caused by brain dysfunction.
[Detection of lung nodules. New opportunities in chest radiography].
Pötter-Lang, S; Schalekamp, S; Schaefer-Prokop, C; Uffmann, M
2014-05-01
Chest radiography still represents the most commonly performed X-ray examination because it is readily available, requires low radiation doses, and is relatively inexpensive. However, as previously published, many initially undetected lung nodules are retrospectively visible in chest radiographs. Great improvements in detector technology, with increasing dose efficiency and improved contrast resolution, provide better image quality and reduced dose requirements. The dual-energy acquisition technique and advanced image processing methods (e.g., digital bone subtraction and temporal subtraction) reduce the anatomical background noise in chest radiography by suppressing overlapping structures. Computer-aided detection (CAD) schemes increase the awareness of radiologists for suspicious areas. These advanced image processing methods show clear improvements in the detection of pulmonary nodules in chest radiography and strengthen the role of this method relative to 3D acquisition techniques such as computed tomography (CT). Many of these methods will probably be integrated into routine clinical practice in the near future. Digital software solutions offer advantages because they can be easily incorporated into radiology departments and are often more affordable than hardware solutions.
NASA Astrophysics Data System (ADS)
Cheng, Yuan; Zheng, Mei; He, Ke-bin; Chen, Yingjun; Yan, Bo; Russell, Armistead G.; Shi, Wenyan; Jiao, Zheng; Sheng, Guoying; Fu, Jiamo; Edgerton, Eric S.
2011-02-01
A total of 333 PM2.5 samples were collected at four sites in the Southeastern Aerosol Research and Characterization Study (SEARCH) network during four seasons from 2003 to 2005 and were simultaneously analyzed by two common thermal-optical methods, the National Institute for Occupational Safety and Health (NIOSH) method and the Interagency Monitoring of Protected Visual Environments (IMPROVE) method. The concentrations of total carbon measured by the two methods were comparable, whereas the split between organic carbon (OC) and elemental carbon (EC) was significantly different. The NIOSH-defined EC was lower (by up to 80%) than that defined by IMPROVE, since the NIOSH method applies a transmittance charring correction and a much higher peak inert-mode temperature. The discrepancy between NIOSH- and IMPROVE-defined EC showed distinct seasonal and spatial variations. Potential factors contributing to this discrepancy, besides the analytical method, were investigated. The discrepancy was larger in spring than in winter because of the influence of biomass burning, which is known to emit significant amounts of brown carbon that complicate the split between OC and EC. The ratio of NIOSH-defined to IMPROVE-defined EC reached its minimum (0.2-0.5) in summer, when the largest discrepancy was observed; this was most likely attributable to the influence of secondary organic aerosol (SOA). Moreover, the discrepancy was larger at the coastal and rural sites, where previous studies in this region found abundant SOA, providing supporting evidence that SOA could contribute to the observed summer discrepancy.
QCD with two light dynamical chirally improved quarks: Mesons
NASA Astrophysics Data System (ADS)
Engel, Georg P.; Lang, C. B.; Limmer, Markus; Mohler, Daniel; Schäfer, Andreas
2012-02-01
We present results for the spectrum of light and strange mesons on configurations with two flavors of mass-degenerate Chirally Improved sea quarks. The calculations are performed on seven ensembles of lattice size 16³×32 at three different gauge couplings and with pion masses ranging from 250 to 600 MeV. To reliably extract excited states, we use the variational method with an interpolator basis containing both Gaussian and derivative quark sources. Both conventional and exotic channels up to spin 2 are considered. Strange quarks are treated within the partially quenched approximation. For kaons we investigate the mixing of interpolating fields corresponding to definite C-parity in the SU(3) limit. This enlarged basis allows for an improved determination of the low-lying kaon spectrum. In addition to masses we also extract the ratio of the pseudoscalar decay constants of the kaon and pion and obtain FK/Fπ=1.215(41). The results presented here include some ensembles from previous publications and the corresponding results supersede the previously published values.
Accurate template-based modeling in CASP12 using the IntFOLD4-TS, ModFOLD6, and ReFOLD methods.
McGuffin, Liam J; Shuid, Ahmad N; Kempster, Robert; Maghrabi, Ali H A; Nealon, John O; Salehe, Bajuna R; Atkins, Jennifer D; Roche, Daniel B
2018-03-01
Our aim in CASP12 was to improve our Template-Based Modeling (TBM) methods through better model selection, accuracy self-estimate (ASE) scores and refinement. To meet this aim, we developed two new automated methods, which we used to score, rank, and improve upon the provided server models. Firstly, the ModFOLD6_rank method, for improved global Quality Assessment (QA), model ranking and the detection of local errors. Secondly, the ReFOLD method for fixing errors through iterative QA guided refinement. For our automated predictions we developed the IntFOLD4-TS protocol, which integrates the ModFOLD6_rank method for scoring the multiple-template models that were generated using a number of alternative sequence-structure alignments. Overall, our selection of top models and ASE scores using ModFOLD6_rank was an improvement on our previous approaches. In addition, it was worthwhile attempting to repair the detected errors in the top selected models using ReFOLD, which gave us an overall gain in performance. According to the assessors' formula, the IntFOLD4 server ranked 3rd/5th (average Z-score > 0.0/-2.0) on the server only targets, and our manual predictions (McGuffin group) ranked 1st/2nd (average Z-score > -2.0/0.0) compared to all other groups. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Li, T.; Griffiths, W. D.; Chen, J.
2017-11-01
The Maximum Likelihood method and the Linear Least Squares (LLS) method have been widely used to estimate Weibull parameters for the reliability of brittle and metallic materials. In the last 30 years, many researchers have focused on the bias of the Weibull modulus estimate, and some improvements have been achieved, especially for the LLS method. However, these methods share a shortcoming for a specific type of data in which the lower tail deviates dramatically from the well-known linear fit of a classic LLS Weibull analysis. This deviation is commonly found in measured material properties, and previous applications of the LLS method to such datasets yield an unreliable linear regression. The deviation was previously attributed to physical flaws (i.e., defects) in the materials. However, this paper demonstrates that it can also be caused by the linear transformation of the Weibull function used in the traditional LLS method. Accordingly, it may not be appropriate to carry out a Weibull analysis on the linearized Weibull function, and the Non-linear Least Squares (Non-LS) method is instead recommended for estimating the Weibull modulus of casting properties.
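To make the contrast concrete, here is a minimal sketch of the two fits on synthetic strength data; the median-rank plotting positions and all parameter values are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
x = np.sort(rng.weibull(3.0, 30) * 200.0)              # synthetic strengths (m=3, eta=200)
F = (np.arange(1, x.size + 1) - 0.3) / (x.size + 0.4)  # median-rank plotting positions

# Traditional LLS: fit the linearized form ln(-ln(1-F)) = m*ln(x) - m*ln(eta)
m_lls, c = np.polyfit(np.log(x), np.log(-np.log(1.0 - F)), 1)
eta_lls = np.exp(-c / m_lls)

# Non-LS: fit the Weibull CDF directly, avoiding the tail distortion
# introduced by the double-log transformation
weibull_cdf = lambda s, m, eta: 1.0 - np.exp(-(s / eta) ** m)
(m_nls, eta_nls), _ = curve_fit(weibull_cdf, x, F, p0=(m_lls, eta_lls))

print(f"LLS:    m = {m_lls:.2f}, eta = {eta_lls:.1f}")
print(f"Non-LS: m = {m_nls:.2f}, eta = {eta_nls:.1f}")
```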
Metal artifact reduction for CT-based luggage screening.
Karimi, Seemeen; Martz, Harry; Cosman, Pamela
2015-01-01
In aviation security, checked luggage is screened by computed tomography scanning. Metal objects in the bags create artifacts that degrade image quality. Metal artifact reduction (MAR) methods exist, mainly in the medical imaging literature, but they either require knowledge of the materials in the scan or are outlier rejection methods. Our aim was to improve and evaluate a MAR method we previously introduced that does not require knowledge of the materials in the scan and gives good results on data with large quantities and different kinds of metal. We describe in detail an optimization that de-emphasizes metal projections and includes a constraint for beam hardening and scatter. This method isolates and reduces artifacts in an intermediate image, which is then fed to a previously published sinogram replacement method. We evaluate the algorithm on luggage data containing multiple and large metal objects, define measures of artifact reduction, and compare this method against others in the MAR literature. Metal artifacts were reduced in our test images, even for multiple and large metal objects, without much loss of structure or resolution. Our MAR method outperforms the methods with which we compared it. Our approach makes no assumptions about image content, nor does it discard metal projections.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wu, X; Wang, Y
Purpose: Due to limited commissioning time, we previously released our TrueBeam non-FFF mode only for prostate treatment. Clinical demand now pushes us to release the non-FFF mode for SRT/SBRT treatment. When re-planning, on the TrueBeam, SRT/SBRT cases previously treated on the iX machine, we found that the patient-specific QA pass rate was worse than the iX's, although the 2 Gy/fx prostate results had been as good. We hypothesized that the TrueBeam dosimetric leaf gap (DLG) and MLC transmission values measured during commissioning and entered in the TPS could not yet provide accurate SRS/SBRT dosimetry. Hence, this work investigates how the TPS DLG and transmission values affect the dosimetric accuracy of RapidArc plans. Methods: We increased the DLG and transmission values of the TrueBeam in the TPS such that their percentage differences against the measured values matched those of the iX. We re-calculated two SRT, one SBRT, and two prostate plans, performed patient-specific QA on these new plans, and compared the results to the previous ones. Results: With the DLG and transmission values set 40% and 8% higher than the measured values, respectively, the patient-specific QA pass rate (at 3%/3mm) improved from 95.0% to 97.6%, versus the previous iX's 97.8%, in the SRT case. In the SBRT case, the pass rate improved from 75.2% to 93.9%, versus the previous iX's 92.5%. In the prostate case, the pass rate improved from 99.3% to 100%. The maximum dose difference between plans before and after adjusting the DLG and transmission was approximately 1% of the prescription dose among all plans. Conclusion: The impact of adjusting the DLG and transmission values on dosimetry might be the same among all RapidArc plans, hypofractionated or not. The large variation observed in the patient-specific QA pass rate might be due to the data analysis method in the QA software being more sensitive to hypofractionated plans.
Online learning control using adaptive critic designs with sparse kernel machines.
Xu, Xin; Hou, Zhongsheng; Lian, Chuanqiang; He, Haibo
2013-05-01
In the past decade, adaptive critic designs (ACDs), including heuristic dynamic programming (HDP), dual heuristic programming (DHP), and their action-dependent ones, have been widely studied to realize online learning control of dynamical systems. However, because neural networks with manually designed features are commonly used to deal with continuous state and action spaces, the generalization capability and learning efficiency of previous ACDs still need to be improved. In this paper, a novel framework of ACDs with sparse kernel machines is presented by integrating kernel methods into the critic of ACDs. To improve the generalization capability as well as the computational efficiency of kernel machines, a sparsification method based on the approximately linear dependence analysis is used. Using the sparse kernel machines, two kernel-based ACD algorithms, that is, kernel HDP (KHDP) and kernel DHP (KDHP), are proposed and their performance is analyzed both theoretically and empirically. Because of the representation learning and generalization capability of sparse kernel machines, KHDP and KDHP can obtain much better performance than previous HDP and DHP with manually designed neural networks. Simulation and experimental results of two nonlinear control problems, that is, a continuous-action inverted pendulum problem and a ball and plate control problem, demonstrate the effectiveness of the proposed kernel ACD methods.
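As a rough sketch of the sparsification idea, the approximately-linear-dependence (ALD) test below adds a sample to the kernel dictionary only when the current dictionary cannot approximate it well. The RBF kernel, the threshold nu, and the small regularization term are illustrative assumptions.

```python
import numpy as np

def rbf(a, b, gamma=1.0):
    return np.exp(-gamma * np.sum((a - b) ** 2))

def ald_sparsify(samples, nu=0.3, gamma=1.0):
    # Keep a sample only if its ALD residual against the dictionary exceeds nu.
    dictionary = [samples[0]]
    for x in samples[1:]:
        K = np.array([[rbf(a, b, gamma) for b in dictionary] for a in dictionary])
        k = np.array([rbf(a, x, gamma) for a in dictionary])
        coeffs = np.linalg.solve(K + 1e-10 * np.eye(len(K)), k)
        delta = rbf(x, x, gamma) - k @ coeffs   # ALD residual
        if delta > nu:
            dictionary.append(x)
    return np.array(dictionary)

samples = np.random.default_rng(0).normal(size=(200, 3))
print(ald_sparsify(samples).shape)   # dictionary is much smaller than 200
```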
Network meta-analyses could be improved by searching more sources and by involving a librarian.
Li, Lun; Tian, Jinhui; Tian, Hongliang; Moher, David; Liang, Fuxiang; Jiang, Tongxiao; Yao, Liang; Yang, Kehu
2014-09-01
Network meta-analyses (NMAs) aim to rank the benefits (or harms) of interventions, based on all available randomized controlled trials. Thus, the identification of relevant data is critical. We assessed the conduct of the literature searches in NMAs. Published NMAs were retrieved by searching electronic bibliographic databases and other sources. Two independent reviewers selected studies and five trained reviewers abstracted data regarding literature searches, in duplicate. Search method details were examined using descriptive statistics. Two hundred forty-nine NMAs were included. Eight used previous systematic reviews to identify primary studies without further searching, and five did not report any literature searches. In the 236 studies that used electronic databases to identify primary studies, the median number of databases was 3 (interquartile range: 3-5). MEDLINE, EMBASE, and Cochrane Central Register of Controlled Trials were the most commonly used databases. The most common supplemental search methods included reference lists of included studies (48%), reference lists of previous systematic reviews (40%), and clinical trial registries (32%). None of these supplemental methods was conducted in more than 50% of the NMAs. Literature searches in NMAs could be improved by searching more sources, and by involving a librarian or information specialist. Copyright © 2014 Elsevier Inc. All rights reserved.
Development of the Ion Exchange-Gravimetric Method for Sodium in Serum as a Definitive Method
Moody, John R.; Vetter, Thomas W.
1996-01-01
An ion exchange-gravimetric method, previously developed as a National Committee for Clinical Laboratory Standards (NCCLS) reference method for the determination of sodium in human serum, has been re-evaluated and improved. Sources of analytical error in this method have been examined more critically and the overall uncertainties decreased. Additionally, greater accuracy and repeatability have been achieved by the application of this definitive method to a sodium chloride reference material. In this method sodium in serum is ion-exchanged, selectively eluted and converted to a weighable precipitate as Na2SO4. Traces of sodium eluting before or after the main fraction, and precipitate contaminants are determined instrumentally. Co-precipitating contaminants contribute less than 0.1 % while the analyte lost to other eluted ion-exchange fractions contributes less than 0.02 % to the total precipitate mass. With improvements, the relative expanded uncertainty (k = 2) of the method, as applied to serum, is 0.3 % to 0.4 % and is less than 0.1 % when applied to a sodium chloride reference material. PMID:27805122
Localization of diffusion sources in complex networks with sparse observations
NASA Astrophysics Data System (ADS)
Hu, Zhao-Long; Shen, Zhesi; Tang, Chang-Bing; Xie, Bin-Bin; Lu, Jian-Feng
2018-04-01
Locating sources in a large network is of paramount importance for reducing the spread of disruptive behavior. Based on a backward diffusion-based method and integer programming, we propose an efficient approach to locate sources in complex networks with a limited number of observers. Results on model networks and empirical networks demonstrate that, for a given fraction of observers, the accuracy of our source localization method improves as the network size increases. Moreover, compared with the previous method (the maximum-minimum method), our method performs much better with a small fraction of observers, especially in heterogeneous networks. Furthermore, our method is more robust in noisy environments and against different strategies for choosing observers.
AMS 4.0: consensus prediction of post-translational modifications in protein sequences.
Plewczynski, Dariusz; Basu, Subhadip; Saha, Indrajit
2012-08-01
We present here the 2011 update of the AutoMotif Service (AMS 4.0), which predicts a wide selection of 88 different types of single-amino-acid post-translational modifications (PTM) in protein sequences. The selection of experimentally confirmed modifications is acquired from the latest UniProt and Phospho.ELM databases for training. The sequence vicinity of each modified residue is represented using amino acid physico-chemical features encoded by high-quality indices (HQI) obtained by automatic clustering of known indices extracted from the AAindex database. For each type of numerical representation, the method builds an ensemble of Multi-Layer Perceptron (MLP) pattern classifiers, each optimizing a different objective during training (for example, recall, precision, or area under the ROC curve (AUC)). The consensus is built using brainstorming technology, which combines multi-objective instances of the machine learning algorithm with data fusion of the different training-object representations, in order to boost the overall prediction accuracy for conserved short sequence motifs. The performance of AMS 4.0 is compared with the accuracy of previous versions, which were constructed using single machine learning methods (artificial neural networks, support vector machines). Our software improves the average AUC score of the earlier version by close to 7% as calculated on the test datasets of all 88 PTM types. Moreover, for the most difficult sequence motif types it improves the prediction performance by almost 32% compared with the previously used single machine learning methods. In summary, the brainstorming consensus meta-learning methodology boosts the AUC score to around 89% on average over all 88 PTM types. Detailed results for the single machine learning methods and the consensus methodology are also provided, together with a comparison to previously published methods and state-of-the-art software tools. The source code and precompiled binaries of the brainstorming tool are available at http://code.google.com/p/automotifserver/ under Apache 2.0 licensing.
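A toy version of the consensus idea (several MLPs whose class probabilities are averaged) is sketched below; the sklearn models, hyper-parameters, and synthetic data are assumptions, and the actual AMS 4.0 brainstorming scheme, with its multiple objectives and representations, is considerably more elaborate.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for the encoded sequence-vicinity feature vectors
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

# Ensemble members differ in capacity (a stand-in for differing objectives)
ensemble = [
    MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=i).fit(X, y)
    for i, h in enumerate((16, 32, 64))
]

# Consensus: average the class-probability outputs, then take the argmax
consensus = np.mean([clf.predict_proba(X) for clf in ensemble], axis=0)
print("consensus training accuracy:", (consensus.argmax(axis=1) == y).mean())
```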
Sinkewicz, Marilyn; Garfinkel, Irwin
2009-05-01
We present new estimates of unwed fathers' ability to pay child support. Prior research relied on surveys that drastically undercounted nonresident unwed fathers and provided no link to their children who lived in separate households. To overcome these limitations, previous research assumed assortative mating and that each mother partnered with one father who was actually eligible to pay support and had no other child support obligations. Because the Fragile Families and Child Wellbeing Study contains data on couples, multiple-partner fertility, and a rich array of other previously unmeasured characteristics of fathers, it is uniquely suited to address the limitations of previous research. We also use an improved method of dealing with missing data. Our findings suggest that previous research overestimated the aggregate ability of unwed nonresident fathers to pay child support by 33% to 60%.
An improved multimodal method for sound propagation in nonuniform lined ducts.
Bi, WenPing; Pagneux, Vincent; Lafarge, Denis; Aurégan, Yves
2007-07-01
An efficient method is proposed for modeling time-harmonic acoustic propagation in a nonuniform lined duct without flow. The lining impedance is uniform within each axial segment but varies circumferentially. The sound pressure is expanded in terms of rigid-duct modes plus an additional function that carries the information about the impedance boundary. The rigid-duct modes and the additional function are known a priori, so calculation of the true liner modes, which is difficult, is avoided. By matching the pressure and axial velocity at the interface between different uniform segments, scattering matrices are obtained for each individual segment; these are then combined to construct a global scattering matrix for multiple segments. The present method is an improvement of the multimodal propagation method developed in a previous paper [Bi et al., J. Sound Vib. 289, 1091-1111 (2006)]. The radial rate of convergence is improved from O(n⁻²), where n is the radial mode index, to O(n⁻⁴). It is shown numerically that, using the present method, acoustic propagation in the nonuniform lined intake of an aeroengine can be calculated on a personal computer for dimensionless frequencies K up to 80, approaching the third blade passing frequency of turbofan noise.
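The segment-combination step can be read as a standard scattering-matrix cascade (a Redheffer star product). A minimal sketch under that assumption follows; the block layout ('11' = left-side reflection, '21' = left-to-right transmission, and so on) is chosen for illustration, not taken from the paper.

```python
import numpy as np

def cascade(SA, SB):
    # Combine the scattering matrices of two adjacent duct segments.
    I = np.eye(SA['22'].shape[0])
    M = np.linalg.inv(I - SA['22'] @ SB['11'])  # multiple reflections at the joint
    N = np.linalg.inv(I - SB['11'] @ SA['22'])
    return {
        '11': SA['11'] + SA['12'] @ SB['11'] @ M @ SA['21'],
        '21': SB['21'] @ M @ SA['21'],
        '12': SA['12'] @ N @ SB['12'],
        '22': SB['22'] + SB['21'] @ SA['22'] @ N @ SB['12'],
    }

# Sanity check: cascading with a perfectly transparent segment changes nothing
n = 4
rng = np.random.default_rng(0)
S = {key: 0.1 * rng.normal(size=(n, n)) for key in ('11', '12', '21', '22')}
T = {'11': np.zeros((n, n)), '22': np.zeros((n, n)),
     '12': np.eye(n), '21': np.eye(n)}
assert all(np.allclose(cascade(S, T)[key], S[key]) for key in S)
```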
NASA Astrophysics Data System (ADS)
Starkey, Andrew; Usman Ahmad, Aliyu; Hamdoun, Hassan
2017-10-01
This paper investigates a novel classification method called the Feature Weighted Self-Organizing Map (FWSOM), which analyses the topology information of a converged standard Self-Organizing Map (SOM) to automatically guide the selection of important inputs during training, for improved classification of data with redundant inputs. It is examined against two traditional approaches, namely neural networks and Support Vector Machines (SVM), for the classification of EEG data as presented in previous work. In particular, the novel method seeks to identify automatically the features that are important for classification, so that these features can be used to improve the diagnostic ability of any of the above methods. The results show that the automated identification successfully found the important features in the dataset, and that this improves the classification results for all methods apart from linear discriminatory methods, which cannot separate the underlying nonlinear relationship in the data. In addition to achieving higher classification accuracy, the FWSOM gives insights into which features are important in the classification of each class (left- and right-hand movements), and these are corroborated by already published work in this area.
A 3D inversion for all-space magnetotelluric data with static shift correction
NASA Astrophysics Data System (ADS)
Zhang, Kun
2017-04-01
Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent MT parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed within the inversion. The method is an automatic, computer-based processing technique with no added cost; it avoids additional field work and indoor processing and yields good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We parallelized the algorithm, improved its computational efficiency, reduced its memory requirements, and added topographic and marine factors, so the 3D inversion runs on an ordinary PC with high efficiency and accuracy. MT data from surface stations, seabed stations, and underground stations can all be used in the inversion algorithm.
NASA Astrophysics Data System (ADS)
Baron, J.; Campbell, W. C.; DeMille, D.; Doyle, J. M.; Gabrielse, G.; Gurevich, Y. V.; Hess, P. W.; Hutzler, N. R.; Kirilov, E.; Kozyryev, I.; O'Leary, B. R.; Panda, C. D.; Parsons, M. F.; Spaun, B.; Vutha, A. C.; West, A. D.; West, E. P.; ACME Collaboration
2017-07-01
We recently set a new limit on the electric dipole moment of the electron (eEDM) (J Baron et al and ACME collaboration 2014 Science 343 269-272), which represented an order-of-magnitude improvement on the previous limit and placed more stringent constraints on many charge-parity-violating extensions to the standard model. In this paper we discuss the measurement in detail. The experimental method and associated apparatus are described, together with the techniques used to isolate the eEDM signal. In particular, we detail the way experimental switches were used to suppress effects that can mimic the signal of interest. The methods used to search for systematic errors, and models explaining observed systematic errors, are also described. We briefly discuss possible improvements to the experiment.
Properties of Ni^+ from microwave spectroscopy of n=9 Rydberg levels of Nickel
NASA Astrophysics Data System (ADS)
Woods, Shannon; Keele, Julie; Smith, Chris; Lundeen, Stephen
2012-06-01
The microwave/RESIS method was used to determine the relative positions of 15 of the n=9 Rydberg levels of nickel with L >= 6. Because the ground state of the Ni^+ ion is a ^2D5/2 level, each Rydberg level (n,L) splits into six eigenstates whose relative positions are determined by long-range e-Ni^+ interactions present in addition to the dominant Coulomb interaction. A previous study with the optical RESIS method determined these positions with a precision of +/- 30 MHz [1]. Using the microwave/RESIS method improves that precision by a factor of 300 and leads to much improved determinations of the Ni^+ properties that control the long-range interactions. [1] Julie A. Keele, Shannon L. Woods, M. E. Hanni, and S. R. Lundeen, Phys. Rev. A 81, 022506 (2010)
Image Reconstruction for a Partially Collimated Whole Body PET Scanner
Alessio, Adam M.; Schmitz, Ruth E.; MacDonald, Lawrence R.; Wollenweber, Scott D.; Stearns, Charles W.; Ross, Steven G.; Ganin, Alex; Lewellen, Thomas K.; Kinahan, Paul E.
2008-01-01
Partially collimated PET systems have less collimation than conventional 2-D systems and have been shown to offer count rate improvements over 2-D and 3-D systems. Despite this potential, previous efforts have not established image-based improvements with partial collimation and have not customized the reconstruction method for partially collimated data. This work presents an image reconstruction method tailored for partially collimated data. Simulated and measured sensitivity patterns are presented and provide a basis for modification of a fully 3-D reconstruction technique. The proposed method uses a measured normalization correction term to account for the unique sensitivity to true events. This work also proposes a modified scatter correction based on simulated data. Measured image quality data supports the use of the normalization correction term for true events, and suggests that the modified scatter correction is unnecessary. PMID:19096731
Nursing students' mathematic calculation skills.
Rainboth, Lynde; DeMasi, Chris
2006-12-01
This mixed-methods study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing students' learning outcomes. During a 4-week teaching period, a convenience sample of 54 sophomore-level nursing students was required to complete calculation assignments, was taught one calculation method, and was mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematics exam. Scores from the intervention group were compared with those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test, and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than did the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and the outcome.
Compact illumination optic with three freeform surfaces for improved beam control.
Sorgato, Simone; Mohedano, Rubén; Chaves, Julio; Hernández, Maikel; Blen, José; Grabovičkić, Dejan; Benítez, Pablo; Miñano, Juan Carlos; Thienpont, Hugo; Duerr, Fabian
2017-11-27
Multi-chip and large-size LEDs dominate the lighting market in developed countries today. Nevertheless, a general optical design method to create prescribed intensity patterns for this type of extended source does not exist. We present a design strategy in which the source and the target pattern are described by means of "edge wavefronts" of the system. The goal is then to find an optic coupling these wavefronts, which in the current work is a monolithic part comprising up to three freeform surfaces calculated with the simultaneous multiple surface (SMS) method. The resulting optic fully controls, for the first time, three freeform wavefronts, one more than previous SMS designs. Simulations with extended LEDs demonstrate improved intensity-tailoring capabilities, confirming the effectiveness of our method and suggesting that enhanced performance features can be achieved by controlling additional wavefronts.
Improving ECG Classification Accuracy Using an Ensemble of Neural Network Modules
Javadi, Mehrdad; Ebrahimpour, Reza; Sajedin, Atena; Faridi, Soheil; Zakernejad, Shokoufeh
2011-01-01
This paper illustrates the use of a combined neural network model based on the Stacked Generalization method for the classification of electrocardiogram (ECG) beats. In the conventional Stacked Generalization method, the combiner learns to map the base classifiers' outputs to the target data. We claim that adding the input pattern to the base classifiers' outputs helps the combiner to obtain knowledge about the input space and, as a result, to perform better on the same task. Experimental results support our claim that this additional knowledge of the input space improves the performance of the proposed method, which is called Modified Stacked Generalization. In particular, for the classification of 14966 ECG beats that were not previously seen during the training phase, the Modified Stacked Generalization method reduced the error rate by 12.41% in comparison with the best of ten popular classifier fusion methods, including Max, Min, Average, Product, Majority Voting, Borda Count, Decision Templates, Weighted Averaging based on Particle Swarm Optimization, and Stacked Generalization. PMID:22046232
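A minimal sketch of the modification, assuming sklearn models and synthetic two-class data rather than the paper's ECG features and base networks: the combiner's input is the base classifiers' outputs concatenated with the raw input pattern.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=600, n_features=24, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

bases = [
    MLPClassifier(hidden_layer_sizes=(h,), max_iter=2000, random_state=h).fit(X_tr, y_tr)
    for h in (8, 16, 32)
]

def meta_features(X):
    # Base outputs plus the input pattern itself -- the "modified" part.
    # (A rigorous setup would use out-of-fold base predictions for training.)
    return np.hstack([clf.predict_proba(X) for clf in bases] + [X])

combiner = LogisticRegression(max_iter=2000).fit(meta_features(X_tr), y_tr)
print("test accuracy:", combiner.score(meta_features(X_te), y_te))
```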
NASA Astrophysics Data System (ADS)
Itoh, Masato; Hagimori, Yuki; Nonaka, Kenichiro; Sekiguchi, Kazuma
2016-09-01
In this study, we apply hierarchical model predictive control to an omni-directional mobile vehicle and improve its tracking performance. We consider an independent four-wheel driving/steering vehicle (IFWDS) equipped with four coaxial steering mechanisms (CSM), each a special mechanism composed of two steering joints on the same axis. In a previous study of an IFWDS with ideal steering, we proposed a model predictive tracking control, but that method did not consider the constraints of the coaxial steering mechanism, which cause steering delay. We also previously proposed a model predictive steering control that accounts for the constraints of this mechanism. In this study, we propose a hierarchical system combining these two control methods for the IFWDS. An upper controller, which deals with the vehicle kinematics, runs the model predictive tracking control, while a lower controller, which accounts for the constraints of the coaxial steering mechanism, runs the model predictive steering control, tracking the predicted steering angle optimized by the upper controller. We verify the superiority of this method by comparing it with the previous method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moran, James; Alexander, Thomas; Aalseth, Craig
2017-08-01
Previous measurements have demonstrated the wealth of information that tritium (T) can provide on environmentally relevant processes. We present modifications to sample preparation approaches that enable T measurement by proportional counting on small sample sizes equivalent to 120 mg of water and demonstrate the accuracy of these methods on a suite of standardized water samples. This enhanced method should provide the analytical flexibility needed to address persistent knowledge gaps in our understanding of T behavior in the environment.
Fast data transmission from serial data acquisition for the GEM detector system
NASA Astrophysics Data System (ADS)
Kolasinski, Piotr; Pozniak, Krzysztof T.; Czarski, Tomasz; Byszuk, Adrian; Chernyshova, Maryna; Kasprowicz, Grzegorz; Krawczyk, Rafal D.; Wojenski, Andrzej; Zabolotny, Wojciech
2015-09-01
This article proposes a new method of storing data and transferring it to a PC in the X-ray GEM detector system. The whole process is performed by FPGA chips (Spartan-6 series from Xilinx). Compared with previous methods, the new approach allows much more data to be stored in the system. A new, improved implementation of the communication algorithm significantly increases the transfer rate between the system and the PC, where the data are merged and processed in MATLAB. The structure of the firmware implemented in the FPGAs is described.
Computation of transonic viscous-inviscid interacting flow
NASA Technical Reports Server (NTRS)
Whitfield, D. L.; Thomas, J. L.; Jameson, A.; Schmidt, W.
1983-01-01
Transonic viscous-inviscid interaction is considered using the Euler and inverse compressible turbulent boundary-layer equations. Certain improvements in the inverse boundary-layer method are mentioned, along with experiences in using various Runge-Kutta schemes to solve the Euler equations. Numerical conditions imposed on the Euler equations at a surface for viscous-inviscid interaction using the method of equivalent sources are developed, and numerical solutions are presented and compared with experimental data to illustrate essential points. Previously announced in STAR N83-17829
Advanced Hybrid Modeling of Hall Thruster Plumes
2010-06-16
Hall thruster operated in the Large Vacuum Test Facility at the University of Michigan. The approach utilizes the direct simulation Monte Carlo method and the Particle-in-Cell method to simulate the collision and plasma dynamics of xenon neutrals and ions. The electrons are modeled as a fluid using conservation equations. A second code is employed to model discharge chamber behavior to provide improved input conditions at the thruster exit for the plume simulation. Simulation accuracy is assessed using experimental data previously
NASA Astrophysics Data System (ADS)
Jiang, Xiangqian; Wang, Kaiwei; Martin, Haydn
2006-12-01
We introduce a new surface measurement method for potential online application. Compared with our previous research, the new design is a significant improvement. It also features high stability because it uses a near common-path configuration. The method should be of great benefit to advanced manufacturing, especially for quality and process control in ultraprecision manufacturing and on the production line. Proof-of-concept experiments have been successfully conducted by measuring the system repeatability and the displacements of a mirror surface.
ERIC Educational Resources Information Center
Su, Addison Y. S.; Huang, Chester S. J.; Yang, Stephen J. H.; Ding, T. J.; Hsieh, Y. Z.
2015-01-01
In Taiwan's elementary schools, Scratch programming has been taught for more than four years. Previous studies have shown that personal annotation is a useful learning method that improves learning performance. An annotation-based Scratch programming (ASP) system provides for the creation, sharing, and review of annotations and homework solutions in…
NASA Astrophysics Data System (ADS)
Zhou, Bing-Lu; Zhu, Jiong-Ming; Yan, Zong-Chao
2006-06-01
The nonrelativistic ground-state energy of ⁴HeH⁺ is calculated using a variational method in Hylleraas coordinates. Convergence to a few parts in 10¹⁰ is achieved, which improves the best previous result of Pavanello [J. Chem. Phys. 123, 104306 (2005)]. Expectation values of the interparticle distances are evaluated. Similar results for ³HeH⁺ are also presented.
The Jigsaw Technique and Self-Efficacy of Vocational Training Students: A Practice Report
ERIC Educational Resources Information Center
Darnon, Celine; Buchs, Celine; Desbar, Delphine
2012-01-01
Can teenagers' self-efficacy be improved in a short time? Previous research has shown the positive effect of cooperative learning methods, including "jigsaw classrooms" (Aronson and Patnoe, 1997), on various outcomes (e.g., the liking of school, self-esteem, and reduction of prejudices). The present practice report investigated the effects of…
New technology in postfire rehab
Joe Sabel
2007-01-01
PAM-12™ is a recycled office-paper byproduct made into a spreadable mulch with added Water Soluble Polyacrylamide (WSPAM), a polymer that was previously difficult to apply. PAM-12 is extremely versatile and can be applied through several methods. In a field test, PAM-12 outperformed straw in every targeted performance area: erosion control, improving soil hydrophobicity, and...
ERIC Educational Resources Information Center
Lin, Chen-Ju
2012-01-01
Instructional conversations are a teaching method in which the teacher and students discuss academic topics, drawing on students' previous experience or knowledge (Tharp & Gallimore, 1988). To improve student learning, providing students more opportunities to engage in instructional conversations is often recommended. As research indicates…
ERIC Educational Resources Information Center
Dryden, Eileen M.; Desmarais, Jeffrey; Arsenault, Lisa
2017-01-01
Background: Research shows that individuals with disabilities are more likely to experience abuse than their peers without disabilities. Yet, few evidenced-based abuse prevention interventions exist. This study examines whether positive outcomes identified previously in an evaluation of IMPACT:Ability were maintained 1 year later. Methods: A…
The Effect of Tomatis Therapy on Children with Autism: Eleven Case Studies
ERIC Educational Resources Information Center
Gerritsen, Jan
2010-01-01
This article presents a reanalysis of a previously reported study on the impact of the Tomatis Method of auditory stimulation on subjects with autism. When analyzed as individual case studies, the data showed that six of the 11 subjects with autism demonstrated significant improvement from 90 hours of Tomatis Therapy. Five subjects did not benefit…
USDA-ARS?s Scientific Manuscript database
Bacterial cold water disease (BCWD) causes significant economic loss in salmonid aquaculture. We previously detected genetic variation for BCWD resistance in our rainbow trout population, and a family-based selection program to improve resistance was initiated at the National Center for Cool and Col...
USDA-ARS?s Scientific Manuscript database
Bacterial cold water disease (BCWD) causes significant economic loss in salmonid aquaculture. We previously detected genetic variation for BCWD resistance in our rainbow trout population, and a family-based selection program to improve resistance was initiated at the NCCCWA in 2005. The main objec...
The Faintest WISE Debris Disks: Enhanced Methods for Detection and Verification
NASA Astrophysics Data System (ADS)
Patel, Rahul I.; Metchev, Stanimir A.; Heinze, Aren; Trollo, Joseph
2017-02-01
In an earlier study, we reported nearly 100 previously unknown dusty debris disks around Hipparcos main-sequence stars within 75 pc by selecting stars with excesses in individual WISE colors. Here, we further scrutinize the Hipparcos 75 pc sample to (1) gain sensitivity to previously undetected, fainter mid-IR excesses and (2) remove spurious excesses contaminated by previously unidentified blended sources. We improve on our previous method by adopting a more accurate measure of the confidence threshold for excess detection and by adding an optimally weighted color average that incorporates all shorter-wavelength WISE photometry, rather than using only individual WISE colors. The latter is equivalent to spectral energy distribution fitting, but only over WISE bandpasses. In addition, we leverage the higher-resolution WISE images available through the unWISE.me image service to identify contaminated WISE excesses based on photocenter offsets among the W3- and W4-band images. Altogether, we identify 19 previously unreported candidate debris disks. Combined with the results from our earlier study, we have found a total of 107 new debris disks around 75 pc Hipparcos main-sequence stars using precisely calibrated WISE photometry. This expands the 75 pc debris disk sample by 22% around Hipparcos main-sequence stars and by 20% overall (including non-main-sequence and non-Hipparcos stars).
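The "optimally weighted color average" can be read as an inverse-variance weighted mean of the per-color excesses; a minimal sketch under that assumption follows, with made-up excess values and uncertainties rather than real WISE photometry.

```python
import numpy as np

def combined_excess_significance(excesses, sigmas):
    # Inverse-variance ("optimal") weighting of per-color excesses.
    w = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    mean = np.sum(w * np.asarray(excesses)) / np.sum(w)
    sigma_mean = 1.0 / np.sqrt(np.sum(w))
    return mean / sigma_mean   # significance of the combined excess

# Hypothetical W1-W4, W2-W4, W3-W4 excesses (mag) and their 1-sigma errors
print(combined_excess_significance([0.12, 0.15, 0.20], [0.05, 0.06, 0.08]))
```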
Improving Upon String Methods for Transition State Discovery.
Chaffey-Millar, Hugh; Nikodem, Astrid; Matveev, Alexei V; Krüger, Sven; Rösch, Notker
2012-02-14
Transition state discovery via application of string methods has been researched on two fronts. The first front involves development of a new string method, named the Searching String method, while the second one aims at estimating transition states from a discretized reaction path. The Searching String method has been benchmarked against a number of previously existing string methods and the Nudged Elastic Band method. The developed methods have led to a reduction in the number of gradient calls required to optimize a transition state, as compared to existing methods. The Searching String method reported here places new beads on a reaction pathway at the midpoint between existing beads, such that the resolution of the path discretization in the region containing the transition state grows exponentially with the number of beads. This approach leads to favorable convergence behavior and generates more accurate estimates of transition states from which convergence to the final transition states occurs more readily. Several techniques for generating improved estimates of transition states from a converged string or nudged elastic band have been developed and benchmarked on 13 chemical test cases. Optimization approaches for string methods, and pitfalls therein, are discussed.
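A 1-D toy sketch of the midpoint bead-placement rule: each new bead bisects the segment next to the current energy maximum, so path resolution grows fastest around the barrier top. The double-well "surface" and the segment-selection heuristic are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def energy(x):
    # Toy double-well barrier; the transition state sits at x = 0.
    return (x**2 - 1.0) ** 2

def refine(beads, n_new=6):
    beads = list(beads)
    for _ in range(n_new):
        i = int(np.argmax([energy(b) for b in beads]))   # highest-energy bead
        # Bisect the neighbouring segment on the higher-energy side.
        left = i == len(beads) - 1 or (
            i > 0 and energy(beads[i - 1]) > energy(beads[i + 1]))
        j = i - 1 if left else i
        beads.insert(j + 1, 0.5 * (beads[j] + beads[j + 1]))
    return beads

path = refine(np.linspace(-1.0, 1.0, 3))   # endpoints are the two minima
print(np.round(path, 3))                   # beads cluster near x = 0
```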
Development of a Low-Noise High Common-Mode-Rejection Instrumentation Amplifier. M.S. Thesis
NASA Technical Reports Server (NTRS)
Rush, Kenneth; Blalock, T. V.; Kennedy, E. J.
1975-01-01
Several previously used instrumentation amplifier circuits were examined to find limitations and possibilities for improvement. One general configuration is analyzed in detail, and methods for improvement are enumerated. An improved amplifier circuit is described and analyzed with respect to common-mode rejection and noise. Experimental data are presented showing good agreement between calculated and measured common-mode rejection ratio and equivalent noise resistance. The amplifier is shown to be capable of common-mode rejection in excess of 140 dB for a trimmed circuit at frequencies below 100 Hz and equivalent white noise below 3.0 nV/√Hz above 1000 Hz.
Parker, Stephen; Meurk, Carla; Newman, Ellie; Fletcher, Clayton; Swinson, Isabella; Dark, Frances
2018-04-16
This study explores how consumers expect community-based residential mental health rehabilitation to compare with previous experiences of care. Understanding what consumers hope to receive from mental health services, and listening to their perspectives about what has and has not worked in previous care settings, may illuminate pathways to improved service engagement and outcomes. A mixed-methods research design taking a pragmatic approach to grounded theory guided the analysis of 24 semi-structured interviews with consumers on commencement at three Community Care Units (CCUs) in Australia. Two of these CCUs were trialling a staffing model integrating peer support work with clinical care. All interviews were conducted by an independent interviewer within the first 6 weeks of the consumer's stay. All participants expected the CCU to offer an improvement on previous experiences of care. Comparisons were made to acute and subacute inpatient settings, supported accommodation, and outpatient care. Consumers expected differences in the people (staff and co-residents), the focus of care, physical environ, and rules and regulations. Participants from the integrated staffing model sites articulated the expected value of a less clinical approach to care. Overall, consumers' expectations aligned with the principles articulated in policy frameworks for recovery-oriented practice. However, their reflections on past care suggest that these services continue to face significant challenges realizing these principles in practice. Paying attention to the kind of working relationship consumers want to have with mental health services, such as the provision of choice and maintaining a practical and therapeutic supportive focus, could improve their engagement and outcomes. © 2018 Australian College of Mental Health Nurses Inc.
Self-adjusting grid methods for one-dimensional hyperbolic conservation laws
NASA Technical Reports Server (NTRS)
Harten, A.; Hyman, J. M.
1983-01-01
The automatic adjustment of a grid which follows the dynamics of the numerical solution of hyperbolic conservation laws is given. The grid motion is determined by averaging the local characteristic velocities of the equations with respect to the amplitudes of the signals. The resulting algorithm is a simple extension of many currently popular Godunov-type methods. Computer codes using one of these methods can be easily modified to add the moving mesh as an option. Numerical examples are given that illustrate the improved accuracy of Godunov's and Roe's methods on a self-adjusting mesh. Previously announced in STAR as N83-15008
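A minimal sketch of the node-speed rule, assuming two characteristic families with known local speeds and signal amplitudes at each node; the amplitude measure and the toy numbers are illustrative assumptions.

```python
import numpy as np

def grid_speed(char_speeds, amplitudes, eps=1e-12):
    # Amplitude-weighted average of the local characteristic speeds.
    # char_speeds, amplitudes: arrays of shape (n_waves, n_nodes).
    w = np.abs(amplitudes)
    return np.sum(w * char_speeds, axis=0) / (np.sum(w, axis=0) + eps)

# Toy example: the two families u - c and u + c at five nodes;
# nodes then advance as x_new = x + dt * grid_speed(...)
u, c = np.linspace(0.0, 1.0, 5), 0.5
speeds = np.vstack([u - c, u + c])
amps = np.array([[1.0, 0.2, 0.0, 0.0, 0.0],
                 [0.0, 0.0, 0.1, 0.8, 1.0]])
print(grid_speed(speeds, amps))
```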
NASA Astrophysics Data System (ADS)
Delincée, Henry; Soika, Christiane
2002-03-01
Fruit may be irradiated at rather low doses, below 1 kGy, in combination treatments or for quarantine purposes. To improve the ESR detection sensitivity for irradiated fruit, de Jesus et al. (Int. J. Food Sci. Technol. 34 (1999) 173) proposed extracting the fruit pulp with 80% ethanol and measuring the residue with ESR at low power (0.25 mW) for the detection of 'cellulosic' radicals. The improvement in ESR sensitivity from the extraction procedure is confirmed in this paper for strawberries and papayas. In most cases, a radiation dose of 0.5 kGy could be detected in both fruits even after 2-3 weeks of storage. In addition, some herbs and spices were also tested, but the ESR detection of the 'cellulosic' signal was improved by prior alcoholic extraction for only a few of them. As alternatives to ESR measurement, other detection methods, namely the DNA Comet Assay and thermoluminescence, were also tested.
NASA Technical Reports Server (NTRS)
Ahn, Kyung H.
1994-01-01
The RNG-based algebraic turbulence model, with a new method of solving the cubic equation and applying new length scales, is introduced. An analysis is made of the RNG length scale which was previously reported and the resulting eddy viscosity is compared with those from other algebraic turbulence models. Subsequently, a new length scale is introduced which actually uses the two previous RNG length scales in a systematic way to improve the model performance. The performance of the present RNG model is demonstrated by simulating the boundary layer flow over a flat plate and the flow over an airfoil.
Novel Methods to Determine Feeder Locational PV Hosting Capacity and PV Impact Signatures
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reno, Matthew J.; Coogan, Kyle; Seuss, John
PV hosting capacity analysis is often performed for a limited number of distribution feeders. For medium-voltage distribution feeders, previous results generally analyze fewer than 20 feeders, and the results are then extrapolated to similar types of feeders. Previous hosting capacity research has often focused on determining a single hosting capacity value for the entire feeder. This research expands that work to investigate all the regions of the feeder, which may allow many different hosting capacity values, with an idea called locational hosting capacity (LHC): determining the largest PV size that can be interconnected at different locations (buses) on the study feeders. This report discusses novel methods for analyzing PV interconnections with advanced simulation methods. The focus is feeder- and location-specific impacts of PV that determine the locational PV hosting capacity. Feeder PV impact signatures are used to more precisely determine the local maximum hosting capacity of individual areas of the feeder. The feeder signature provides improved interconnection screening, with certain zones showing the risk of impact to the distribution feeder from PV interconnections.
NASA Astrophysics Data System (ADS)
Azmi, Nur Iffah Mohamed; Arifin Mat Piah, Kamal; Yusoff, Wan Azhar Wan; Romlay, Fadhlur Rahman Mohd
2018-03-01
A controller that uses PID parameters requires a good tuning method to improve the control system performance. PID tuning methods fall into two groups, namely classical methods and artificial intelligence methods. The particle swarm optimization (PSO) algorithm is one of the artificial intelligence methods, and researchers have previously integrated PSO algorithms into the PID parameter tuning process. This research aims to improve PSO-PID tuning algorithms by integrating the tuning process with the Variable Weight Grey-Taguchi Design of Experiment (DOE) method. This is done by conducting the DOE on two PSO parameters: the particle velocity limit and the weight distribution factor. Computer simulations and physical experiments were conducted using the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE and the classical Ziegler-Nichols method, implemented on a hydraulic positioning system. Simulation results show that the proposed PSO-PID with the Variable Weight Grey-Taguchi DOE reduced the rise time by 48.13% and the settling time by 48.57% compared with the Ziegler-Nichols method. Furthermore, the physical experiments also show that the proposed method responds better than Ziegler-Nichols tuning. In conclusion, this research has improved PSO-PID parameter tuning by applying the PSO-PID algorithm together with the Variable Weight Grey-Taguchi DOE method in a hydraulic positioning system.
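For orientation, here is a minimal sketch of a plain PSO-PID inner loop: each particle is a (Kp, Ki, Kd) triple scored by the ITAE of a simulated step response. The first-order plant, cost function, search ranges, and PSO constants are all illustrative assumptions, and the Grey-Taguchi DOE layer that tunes the PSO parameters themselves is not shown.

```python
import numpy as np

def itae_cost(gains, dt=0.01, steps=500):
    # Simulate a unit-step response of a first-order plant dy/dt = -y + u
    # under PID control and accumulate the ITAE criterion.
    Kp, Ki, Kd = gains
    y = integ = prev_err = 0.0
    cost = 0.0
    for k in range(steps):
        err = 1.0 - y
        integ += err * dt
        u = Kp * err + Ki * integ + Kd * (err - prev_err) / dt
        prev_err = err
        y += dt * (-y + u)
        if abs(y) > 1e6:
            return 1e9                     # diverged: penalize heavily
        cost += (k * dt) * abs(err) * dt   # ITAE
    return cost

rng = np.random.default_rng(0)
n, dims = 20, 3
pos = rng.uniform(0.0, 10.0, (n, dims))
vel = np.zeros((n, dims))
pbest, pbest_cost = pos.copy(), np.array([itae_cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()]
for _ in range(50):
    r1, r2 = rng.random((n, dims)), rng.random((n, dims))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    vel = np.clip(vel, -2.0, 2.0)          # the velocity limit (a DOE factor above)
    pos = np.clip(pos + vel, 0.0, 10.0)
    cost = np.array([itae_cost(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[pbest_cost.argmin()]
print("tuned Kp, Ki, Kd:", np.round(gbest, 2))
```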
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rampadarath, H.; Morgan, J. S.; Tingay, S. J.
2014-01-01
The results of multi-epoch observations of the southern starburst galaxy NGC 253 with the Australian Long Baseline Array at 2.3 GHz are presented. As with previous radio interferometric observations of this galaxy, no new sources were discovered. By combining the results of this survey with Very Large Array observations at higher frequencies from the literature, spectra were derived and a free-free absorption model was fitted for 20 known sources in NGC 253. The results were found to be consistent with previous studies. The supernova remnant 5.48-43.3 was imaged with the highest sensitivity and resolution to date, revealing a two-lobed morphology. Comparisons with previous observations of similar resolution give an upper limit of 10⁴ km s⁻¹ for the expansion speed of this remnant. We derive a supernova rate of <0.2 yr⁻¹ for the inner 300 pc using a model that improves on previous methods by incorporating an improved radio supernova peak luminosity distribution and by making use of multi-wavelength radio data spanning 21 yr. A star formation rate of SFR(M ≥ 5 M☉) < 4.9 M☉ yr⁻¹ was also estimated using the standard relation between supernova and star formation rates. Our improved estimates of supernova and star formation rates are consistent with studies at other wavelengths. The results of our study point to the possible existence of a small population of undetected supernova remnants, suggesting a low rate of radio supernova production in NGC 253.
Development of new methodologies for evaluating the energy performance of new commercial buildings
NASA Astrophysics Data System (ADS)
Song, Suwon
The concept of Measurement and Verification (M&V) of a new building continues to become more important because efficient design alone is often not sufficient to deliver an efficient building. Simulation models that are calibrated to measured data can be used to evaluate the energy performance of new buildings if they are compared to energy baselines such as similar buildings, energy codes, and design standards. Unfortunately, there is a lack of detailed M&V methods and analysis methods to measure energy savings from new buildings that would have hypothetical energy baselines. Therefore, this study developed and demonstrated several new methodologies for evaluating the energy performance of new commercial buildings using a case-study building in Austin, Texas. First, three new M&V methods were developed to enhance the previous generic M&V framework for new buildings, including: (1) The development of a method to synthesize weather-normalized cooling energy use from a correlation of Motor Control Center (MCC) electricity use when chilled water use is unavailable, (2) The development of an improved method to analyze measured solar transmittance against incidence angle for sample glazing using different solar sensor types, including Eppley PSP and Li-Cor sensors, and (3) The development of an improved method to analyze chiller efficiency and operation at part-load conditions. Second, three new calibration methods were developed and analyzed, including: (1) A new percentile analysis added to the previous signature method for use with a DOE-2 calibration, (2) A new analysis to account for undocumented exhaust air in DOE-2 calibration, and (3) An analysis of the impact of synthesized direct normal solar radiation using the Erbs correlation on DOE-2 simulation. Third, an analysis of the actual energy savings compared to three different energy baselines was performed, including: (1) Energy Use Index (EUI) comparisons with sub-metered data, (2) New comparisons against Standards 90.1-1989 and 90.1-2001, and (3) A new evaluation of the performance of selected Energy Conservation Design Measures (ECDMs). Finally, potential energy savings were also simulated from selected improvements, including: minimum supply air flow, undocumented exhaust air, and daylighting.
Towards a Better Corrosion Resistance and Biocompatibility Improvement of Nitinol Medical Devices
NASA Astrophysics Data System (ADS)
Rokicki, Ryszard; Hryniewicz, Tadeusz; Pulletikurthi, Chandan; Rokosz, Krzysztof; Munroe, Norman
2015-04-01
Haemocompatibility of Nitinol implantable devices and their corrosion resistance, as well as resistance to fracture, are very important features of advanced medical implants. The authors present some novel methods capable of improving Nitinol implantable devices to a marked degree beyond the currently used electropolishing (EP) processes; instead, a magnetoelectropolishing process is advocated. The polarization study shows that a magnetoelectropolished Nitinol surface is more corrosion resistant than one obtained after standard EP and has a unique ability to repassivate. Currently used sterilization processes for Nitinol implantable devices can dramatically change the physicochemical properties of a medical device and thereby influence its biocompatibility. The authors' experimental results clearly show a way to improve the biocompatibility of the NiTi alloy surface. A final sodium hypochlorite treatment, the rationale for which was given in our previous study, should replace the currently used sterilization methods for Nitinol implantable devices.
Image-optimized Coronal Magnetic Field Models
NASA Astrophysics Data System (ADS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-08-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and to the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Image-Optimized Coronal Magnetic Field Models
NASA Technical Reports Server (NTRS)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M.
2017-01-01
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work we presented early tests of the method which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane, and to the effect of errors in the localization of constraints on the outcome of the optimization. We find that substantial improvement in the model field can be achieved with this type of constraint, even when magnetic features in the images are located outside of the image plane.
On-the-fly transition search and applications to temperature-accelerated dynamics
NASA Astrophysics Data System (ADS)
Shim, Yunsic; Amar, Jacques
2015-03-01
Temperature-accelerated dynamics (TAD) is a powerful method to study non-equilibrium processes and has been providing surprising insights for a variety of systems. While serial TAD simulations have been limited by the roughly N^3 increase in the computational cost as a function of the number of atoms N in the system, recently we have shown that by carrying out parallel TAD simulations which combine spatial decomposition with our semi-rigorous synchronous sublattice algorithm, significantly improved scaling is possible. However, in this approach the size of activated events is limited by the processor size while the dynamics is not exact. Here we discuss progress in improving the scaling of serial TAD by combining the use of on-the-fly transition searching with our previously developed localized saddle-point method. We demonstrate improved performance for the cases of Ag/Ag(100) annealing and Cu/Cu(100) growth. Supported by NSF DMR-1410840.
Music-Based Magnetic Resonance Fingerprinting to Improve Patient Comfort During MRI Exams
Ma, Dan; Pierre, Eric Y.; Jiang, Yun; Schluchter, Mark D.; Setsompop, Kawin; Gulani, Vikas; Griswold, Mark A.
2015-01-01
Purpose: The unpleasant acoustic noise is an important drawback of almost every magnetic resonance imaging scan. Instead of reducing the acoustic noise to improve patient comfort, a method is proposed to mitigate the noise problem by producing musical sounds directly from the switching magnetic fields while simultaneously quantifying multiple important tissue properties. Theory and Methods: MP3 music files were converted to arbitrary encoding gradients, which were then used with varying flip angles and TRs in both 2D and 3D MRF exams. This new acquisition method, named MRF-Music, was used to quantify T1, T2, and proton density maps simultaneously while providing pleasing sounds to the patients. Results: The MRF-Music scans were shown to significantly improve patients' comfort during MRI scans. The T1 and T2 values measured from a phantom are in good agreement with those from standard spin echo measurements. T1 and T2 values from the brain scan are also close to previously reported values. Conclusions: The MRF-Music sequence provides a significant improvement in patient comfort compared with the MRF scan and other fast imaging techniques such as EPI and TSE, and it is also a fast and accurate quantitative method that quantifies multiple relaxation parameters simultaneously. PMID:26178439
Improvements to robotics-inspired conformational sampling in rosetta.
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new "next-generation KIC" method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions.
Improvements to Robotics-Inspired Conformational Sampling in Rosetta
Stein, Amelie; Kortemme, Tanja
2013-01-01
To accurately predict protein conformations in atomic detail, a computational method must be capable of sampling models sufficiently close to the native structure. All-atom sampling is difficult because of the vast number of possible conformations and extremely rugged energy landscapes. Here, we test three sampling strategies to address these difficulties: conformational diversification, intensification of torsion and omega-angle sampling and parameter annealing. We evaluate these strategies in the context of the robotics-based kinematic closure (KIC) method for local conformational sampling in Rosetta on an established benchmark set of 45 12-residue protein segments without regular secondary structure. We quantify performance as the fraction of sub-Angstrom models generated. While improvements with individual strategies are only modest, the combination of intensification and annealing strategies into a new “next-generation KIC” method yields a four-fold increase over standard KIC in the median percentage of sub-Angstrom models across the dataset. Such improvements enable progress on more difficult problems, as demonstrated on longer segments, several of which could not be accurately remodeled with previous methods. Given its improved sampling capability, next-generation KIC should allow advances in other applications such as local conformational remodeling of multiple segments simultaneously, flexible backbone sequence design, and development of more accurate energy functions. PMID:23704889
Cao, Zhipeng; Oh, Sukhoon; Otazo, Ricardo; Sica, Christopher T.; Griswold, Mark A.; Collins, Christopher M.
2014-01-01
Purpose: Introduce a novel compressed sensing reconstruction method to accelerate proton resonance frequency (PRF) shift temperature imaging for MRI-induced radiofrequency (RF) heating evaluation. Methods: A compressed sensing approach that exploits sparsity of the complex difference between post-heating and baseline images is proposed to accelerate PRF temperature mapping. The method exploits the intra- and inter-image correlations to promote sparsity and remove shared aliasing artifacts. Validations were performed on simulations and retrospectively undersampled data acquired in ex vivo and in vivo studies by comparing performance with previously proposed techniques. Results: The proposed complex-difference-constrained compressed sensing reconstruction method improved the reconstruction of smooth and local PRF temperature change images compared with various available reconstruction methods in a simulation study, a retrospective study with heating of a human forearm in vivo, and a retrospective study with heating of a sample of beef ex vivo. Conclusion: Complex-difference-based compressed sensing with utilization of a fully sampled baseline image improves the reconstruction accuracy of accelerated PRF thermometry. It can be used to improve the volumetric coverage and temporal resolution in evaluation of RF heating due to MRI, and may help facilitate and validate temperature-based methods for safety assurance. PMID:24753099
Underwater image enhancement based on the dark channel prior and attenuation compensation
NASA Astrophysics Data System (ADS)
Guo, Qingwen; Xue, Lulu; Tang, Ruichun; Guo, Lingrui
2017-10-01
To address the two main problems of underwater imaging, fog effects and color cast, an Improved Segmentation Dark Channel Prior (ISDCP) defogging method is proposed to counter the fog effects caused by the physical properties of water. Fog effects, which arise from the mass refraction of light during underwater imaging, lead to image blurring, while color cast is closely related to the different degrees of attenuation that light of different wavelengths undergoes while traveling through water. The proposed method integrates ISDCP and quantitative histogram stretching techniques into the image enhancement procedure. First, a threshold is set during the refinement of the transmission maps to identify the original mismatching and to conduct a differentiated defogging process. Second, a method of estimating the propagation distance of light is adopted to obtain the degree of energy attenuation during underwater propagation. Finally, the image histogram is stretched quantitatively in the red, green, and blue channels according to the degree of attenuation in each color channel. The proposed ISDCP method reduces computational complexity and improves defogging efficiency, meeting real-time requirements. Qualitative and quantitative comparisons over several different underwater scenes reveal that the proposed method significantly improves visibility compared with previous methods.
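The dark channel prior underlying such methods is straightforward to compute: for each pixel, take the minimum intensity over the color channels within a local patch, then estimate transmission from it. A minimal sketch of the generic dark-channel pipeline follows; the patch size, the airlight estimate, and the weight omega are conventional choices from the general dark-channel literature, not parameters of ISDCP itself.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Per-pixel minimum over RGB, followed by a local minimum filter."""
    min_rgb = img.min(axis=2)
    return minimum_filter(min_rgb, size=patch)

def estimate_transmission(img, airlight, omega=0.95, patch=15):
    """t(x) = 1 - omega * dark_channel(I / A)."""
    normalized = img / airlight[None, None, :]
    return 1.0 - omega * dark_channel(normalized, patch)

# Hypothetical normalized RGB image and airlight estimate.
img = np.random.rand(120, 160, 3)
airlight = np.array([0.8, 0.9, 0.95])
t = estimate_transmission(img, airlight)

# Recover scene radiance with a floor on t to avoid amplifying noise.
radiance = (img - airlight) / np.maximum(t[..., None], 0.1) + airlight
```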
NASA Astrophysics Data System (ADS)
Wähmer, M.; Anhalt, K.; Hollandt, J.; Klein, R.; Taubert, R. D.; Thornagel, R.; Ulm, G.; Gavrilov, V.; Grigoryeva, I.; Khlevnoy, B.; Sapritsky, V.
2017-10-01
Absolute spectral radiometry is currently the only established primary thermometric method for the temperature range above 1300 K. Up to now, the ongoing improvements of high-temperature fixed points and their formal implementation into an improved temperature scale with the mise en pratique for the definition of the kelvin have relied solely on single-wavelength absolute radiometry traceable to the cryogenic radiometer. Two alternative primary thermometric methods, yielding comparable or possibly even smaller uncertainties, have been proposed in the literature. They use ratios of irradiances to determine the thermodynamic temperature traceable to blackbody radiation and synchrotron radiation. At PTB, a project has been established in cooperation with VNIIOFI to use, for the first time, all three methods simultaneously for the determination of the phase transition temperatures of high-temperature fixed points. For this, a dedicated four-wavelength ratio filter radiometer was developed. With all three thermometric methods performed independently and in parallel, we aim to compare the potential and practical limitations of each method, disclose possibly undetected systematic effects of each method, and thereby confirm or improve the previous measurements traceable to the cryogenic radiometer. This will give further and independent confidence in the thermodynamic temperature determination of the high-temperature fixed points' phase transitions.
Raffo, Antonio; Carcea, Marina; Castagna, Claudia; Magrì, Andrea
2015-08-07
An improved method based on headspace solid phase microextraction combined with gas chromatography-mass spectrometry (HS-SPME/GC-MS) was proposed for the semi-quantitative determination of wheat bread volatile compounds isolated from both whole-slice and crust samples. A DVB/CAR/PDMS fibre was used to extract volatiles from the headspace of a powdered bread sample dispersed in a sodium chloride (20%) aqueous solution and kept for 60 min at 50°C under controlled stirring. Thirty-nine of the extracted volatiles were fully identified, and a tentative identification was proposed for 95 others, to give as complete a profile as possible of wheat bread volatile compounds. The use of an array of ten structurally and physicochemically similar internal standards markedly improved method precision with respect to previous HS-SPME/GC-MS methods for bread volatiles. Good linearity of the method was verified for a selection of volatiles from several chemical groups by calibration with matrix-matched extraction solutions. This simple, rapid, precise and sensitive method could represent a valuable tool for obtaining semi-quantitative information when investigating the influence of technological factors on volatile formation in wheat bread and other bakery products.
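Semi-quantification against matched internal standards reduces to a ratio calculation: the analyte's peak area is scaled by the known amount of the most structurally similar internal standard, assuming comparable response factors. The sketch below is a generic illustration of that arithmetic; the compound names, peak areas, and spiked amounts are hypothetical.

```python
# Semi-quantitative estimate: amount ~ (A_analyte / A_IS) * amount_IS,
# assuming equal response factors for the analyte and its matched
# internal standard (hypothetical deuterated standards shown).
internal_standards = {
    "hexanal-d12": {"area": 1.8e6, "amount_ng": 50.0},
    "2-phenylethanol-d5": {"area": 9.2e5, "amount_ng": 50.0},
}

def semi_quantify(analyte_area, matched_is):
    ref = internal_standards[matched_is]
    return analyte_area / ref["area"] * ref["amount_ng"]

print(semi_quantify(3.6e6, "hexanal-d12"))  # -> 100.0 ng
```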
On Bayesian Testing of Additive Conjoint Measurement Axioms Using Synthetic Likelihood.
Karabatsos, George
2018-06-01
This article introduces a Bayesian method for testing the axioms of additive conjoint measurement. The method is based on an importance sampling algorithm that performs likelihood-free, approximate Bayesian inference using a synthetic likelihood to overcome the analytical intractability of this testing problem. This new method improves upon previous methods because it provides an omnibus test of the entire hierarchy of cancellation axioms, beyond double cancellation. It does so while accounting for the posterior uncertainty that is inherent in the empirical orderings implied by these axioms taken together. The new method is illustrated through a test of the cancellation axioms on a classic survey data set and through the analysis of simulated data.
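Synthetic likelihood replaces an intractable likelihood with a Gaussian fitted to summary statistics of data simulated at each candidate parameter value. The sketch below illustrates that idea on a toy model; the simulator, the summaries, and the parameter grid are hypothetical and unrelated to the conjoint-measurement test itself.

```python
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(0)
observed = rng.normal(1.0, 2.0, size=200)
s_obs = np.array([observed.mean(), observed.std()])

def synthetic_loglik(theta, n_sim=100):
    """Fit a Gaussian to simulated summaries, evaluate s_obs under it."""
    sims = rng.normal(theta[0], theta[1], size=(n_sim, observed.size))
    stats = np.column_stack([sims.mean(axis=1), sims.std(axis=1)])
    mu = stats.mean(axis=0)
    cov = np.cov(stats, rowvar=False)
    return multivariate_normal.logpdf(s_obs, mean=mu, cov=cov)

for theta in [(0.5, 2.0), (1.0, 2.0), (1.5, 2.0)]:
    print(theta, synthetic_loglik(np.array(theta)))
```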
An improved, robust, axial line singularity method for bodies of revolution
NASA Technical Reports Server (NTRS)
Hemsch, Michael J.
1989-01-01
The failures encountered in attempts to increase the range of applicability of the axial line singularity method for representing incompressible, inviscid flow about an inclined, slender body of revolution are noted to be common to all efforts to solve Fredholm equations of the first kind. It is shown that a previously developed smoothing technique yields a robust method for the numerical solution of the governing equations; this technique is easily retrofitted to existing codes and allows the number of singularities to be increased until the most accurate line singularity solution is obtained.
Lattice hydrodynamic model based traffic control: A transportation cyber-physical system approach
NASA Astrophysics Data System (ADS)
Liu, Hui; Sun, Dihua; Liu, Weining
2016-11-01
The lattice hydrodynamic model is a typical continuum traffic flow model that describes the jamming transition of traffic flow properly. Previous studies of the lattice hydrodynamic model have shown that the use of control methods has the potential to improve traffic conditions. In this paper, a new control method is applied to the lattice hydrodynamic model from a transportation cyber-physical system approach, in which only one lattice site needs to be controlled. The simulations verify the feasibility and validity of this method, which can ensure the efficient and smooth operation of traffic flow.
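The lattice hydrodynamic model discretizes a road into lattice sites carrying density and flux, with each site's flux relaxing toward an optimal-velocity function of the downstream density. The sketch below integrates a standard Nagatani-type form of the model with a simple proportional feedback added at a single site; the control law, the simplified optimal-velocity function (safe density taken equal to the reference density), and all parameter values are illustrative assumptions, not the paper's scheme.

```python
import numpy as np

N, rho0, vmax, tau, dt = 100, 0.2, 2.0, 0.5, 0.05
k_fb = 0.3           # assumed feedback gain
ctrl_site = 50       # the single controlled lattice site

def V(rho):
    """Optimal velocity function of the Nagatani lattice model
    (safe density taken equal to rho0 for simplicity)."""
    return 0.5 * vmax * (np.tanh(2.0 / rho0 - rho / rho0**2 - 1.0 / rho0)
                         + np.tanh(1.0 / rho0))

rho = np.full(N, rho0) + 0.01 * (np.random.rand(N) - 0.5)  # perturbed start
q = rho0 * V(rho)
q_star = rho0 * V(rho0)   # steady-state flux, used as the control target

for _ in range(2000):
    drho = -rho0 * (q - np.roll(q, 1))               # continuity equation
    dq = (rho0 * V(np.roll(rho, -1)) - q) / tau      # flux relaxation
    dq[ctrl_site] += k_fb * (q_star - q[ctrl_site])  # feedback at one site
    rho, q = rho + dt * drho, q + dt * dq

print(rho.std())  # a smaller spread indicates suppressed jamming
```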
An analytical method to predict efficiency of aircraft gearboxes
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.; Black, J. D.
1984-01-01
A spur gear efficiency prediction method previously developed by the authors was extended to include the power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This, combined with the recent capability of predicting losses in spur gears of nonstandard proportions, allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large-scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.
Self-consistent analysis of high drift velocity measurements with the STARE system
NASA Technical Reports Server (NTRS)
Reinleitner, L. A.; Nielsen, E.
1985-01-01
The use of the STARE and SABRE coherent radar systems as valuable tools for geophysical research has been enhanced by a new technique called the Superimposed-Grid-Point method. This method permits an analysis of E-layer plasma irregularity phase velocity versus flow angle utilizing only STARE or SABRE data. As previous work with STARE has indicated, this analysis has clearly shown that the cosine law assumption breaks down for velocities near and exceeding the local ion acoustic velocities. Use of this method is improving understanding of naturally-occurring plasma irregularities in the E-layer.
NASA Astrophysics Data System (ADS)
Chen, Ming-Chih; Hsiao, Shen-Fu
In this paper, we propose an area-efficient design of an Advanced Encryption Standard (AES) processor by applying a new common-subexpression-elimination (CSE) method to the sub-functions of the various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with the proposed CSE method achieves a significant area improvement compared with previous designs.
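Bit-level CSE of this kind can be approximated greedily: repeatedly find the pair of inputs that co-occurs in the most XOR expressions and factor it out as a shared gate. The sketch below is a generic greedy pass, not the paper's algorithm, and the example equations are hypothetical.

```python
from itertools import combinations
from collections import Counter

# Each output bit is the XOR of a set of input signals (hypothetical).
exprs = [{"x0", "x1", "x3"}, {"x0", "x1", "x2"}, {"x1", "x2", "x3"},
         {"x0", "x1", "x2", "x3"}]
shared, gate_id = {}, 0

while True:
    pairs = Counter(p for e in exprs for p in combinations(sorted(e), 2))
    if not pairs:
        break
    (a, b), count = pairs.most_common(1)[0]
    if count < 2:            # no pair is shared by 2+ expressions: stop
        break
    name = f"t{gate_id}"     # new intermediate signal t = a XOR b
    gate_id += 1
    shared[name] = (a, b)
    exprs = [(e - {a, b}) | {name} if {a, b} <= e else e for e in exprs]

print(shared)   # factored-out XOR gates
print(exprs)    # rewritten expressions using the shared signals
```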
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wood, M. P.; Lawler, J. E.; Sneden, C.
2013-10-01
Atomic transition probability measurements for 364 lines of Ti II in the UV through near-IR are reported. Branching fractions from data recorded using a Fourier transform spectrometer (FTS) and a new echelle spectrometer are combined with published radiative lifetimes to determine these transition probabilities. The new results are in generally good agreement with previously reported FTS measurements. Use of the new echelle spectrometer, independent radiometric calibration methods, and independent data analysis routines enables a reduction of systematic errors and overall improvement in transition probability accuracy over previous measurements. The new Ti II data are applied to high-resolution visible and UV spectra of the Sun and the metal-poor star HD 84937 to derive new, more accurate Ti abundances. Lines covering a range of wavelength and excitation potential are used to search for non-LTE effects. The Ti abundances derived using Ti II for these two stars match those derived using Ti I and support the relative Ti/Fe abundance ratio versus metallicity seen in previous studies.
The review and results of different methods for facial recognition
NASA Astrophysics Data System (ADS)
Le, Yifan
2017-09-01
In recent years, facial recognition has drawn much attention due to its wide range of potential applications. As a unique technology in biometric identification, facial recognition represents a significant improvement because it can be operated without the cooperation of the people under detection. Hence, facial recognition can be applied to defense systems, medical detection, human behavior understanding, and other fields. Several theories and methods have been established to advance facial recognition: (1) a novel two-stage facial landmark localization method that achieves more accurate localization on specific databases; (2) a statistical face frontalization method that outperforms state-of-the-art methods for face landmark localization; (3) a general facial landmark detection algorithm that handles images with severe occlusion and large head poses; and (4) three methods for face alignment, including a shape-augmented regression method, a pose-indexed multi-view method, and a learning-based method that regresses local binary features. The aim of this paper is to analyze previous work on different aspects of facial recognition, focusing on concrete methods and their performance on various databases. In addition, some improvement measures and suggestions for potential applications are put forward.
Augmenting Conceptual Design Trajectory Tradespace Exploration with Graph Theory
NASA Technical Reports Server (NTRS)
Dees, Patrick D.; Zwack, Mathew R.; Steffens, Michael; Edwards, Stephen
2016-01-01
Within conceptual design, changes occur rapidly due to a combination of uncertainty and shifting requirements. To stay relevant in this fluid time, trade studies must also be performed rapidly. In order to drive down analysis time while improving the information gained by these studies, surrogate models can be created to represent the complex output of a tool or tools within a specified tradespace. To create such a model, however, a large amount of data must be collected in a short amount of time, which makes the historical approach of relying on subject matter experts to generate the required data infeasible on schedule. By implementing automation and distributed analysis, the required data can instead be generated in a fraction of the time. Previous work focused on setting up a tool called multiPOST, capable of orchestrating many simultaneous runs of an analysis tool and assessing these automated analyses using heuristics gleaned from the best practices of current subject matter experts. In this update to the previous work, elements of graph theory are included to further drive down analysis time by leveraging previously gathered data. The new approach is shown to outperform the previous method in both the time required and the quantity and quality of data produced.
Redesigning a risk-management process for tracking injuries.
Wenzel, G R
1998-01-01
The changing responsibilities of registered nurses are challenging even the most dedicated professionals. To survive within her newly-defined roles, one nurse used a total quality improvement model to understand, analyze, and improve a medical center's system for tracking inpatient injuries. This process led to the drafting of an original software design that implemented a nursing informatics tracking system. It has resulted in significant savings of time and money and has far surpassed the accuracy, efficiency, and scope of the previous method. This article presents an overview of the design process.
Nuclear techniques for the on-line bulk analysis of carbon in coal-fired power stations.
Sowerby, B D
2009-09-01
Carbon trading schemes usually require large emitters of CO2, such as coal-fired power stations, to monitor, report and be audited on their CO2 emissions. The emission price provides a significant additional incentive for power stations to improve efficiency. In the present paper, previous work on the bulk determination of carbon in coal is reviewed and assessed. The most favourable method is that based on neutron inelastic scattering. The potential role of on-line carbon analysers in improving boiler efficiency and in carbon accounting is discussed.
Watanabe, Takashi
2013-01-01
The wearable sensor system developed by our group, which measures lower limb angles using a Kalman-filtering-based method, was previously suggested to be useful in evaluating gait function for rehabilitation support. However, the variation in its measurement errors needed to be reduced. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude, as used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy, improving foot inclination angle measurement significantly while improving shank and thigh inclination angle measurements slightly. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than that reported in other studies, which attached markers of a camera-based motion measurement system to a rigid plate together with a sensor, or directly to the sensor. The proposed method was found to be effective for angle measurement with inertial sensors. PMID:24282442
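The idea of varying the Kalman gain with an accelerometer-derived angle error can be sketched as a one-dimensional filter whose measurement-noise covariance grows when the accelerometer disagrees with the current estimate, so the gain shrinks exactly when the accelerometer is least trustworthy. Everything below, including the state model, noise values, and the gain-adaptation rule, is an illustrative assumption, not the paper's filter.

```python
import numpy as np

dt, q_var, r_base = 0.01, 1e-4, 1e-2

def step(angle, p, gyro_rate, acc_angle):
    # Predict: integrate the gyro rate.
    angle += gyro_rate * dt
    p += q_var
    # Adapt measurement noise: trust the accelerometer less when its
    # implied angle error is large (assumed adaptation rule).
    err = abs(acc_angle - angle)
    r = r_base * (1.0 + 50.0 * err)
    # Update with the accelerometer-derived angle.
    k = p / (p + r)                  # variable Kalman gain
    angle += k * (acc_angle - angle)
    p *= (1.0 - k)
    return angle, p

angle, p = 0.0, 1.0
for t in range(1000):
    true = 0.5 * np.sin(2 * np.pi * 0.5 * t * dt)
    gyro = (0.5 * 2 * np.pi * 0.5 * np.cos(2 * np.pi * 0.5 * t * dt)
            + np.random.randn() * 0.05)
    acc = true + np.random.randn() * 0.05
    angle, p = step(angle, p, gyro, acc)
print(angle, true)  # the estimate should track the true angle
```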
NASA Astrophysics Data System (ADS)
Solovjov, Vladimir P.; Webb, Brent W.; Andre, Frederic
2018-07-01
Following previous theoretical development based on the assumption of a rank correlated spectrum, the Rank Correlated Full Spectrum k-distribution (RC-FSK) method is proposed. The method proves advantageous in modeling radiation transfer in high temperature gases in non-uniform media in two important ways. First, and perhaps most importantly, the method requires no specification of a reference gas thermodynamic state. Second, the spectral construction of the RC-FSK model is simpler than original correlated FSK models, requiring only two cumulative k-distributions. Further, although not exhaustive, example problems presented here suggest that the method may also yield improved accuracy relative to prior methods, and may exhibit less sensitivity to the blackbody source temperature used in the model predictions. This paper outlines the theoretical development of the RC-FSK method, comparing the spectral construction with prior correlated spectrum FSK method formulations. Further the RC-FSK model's relationship to the Rank Correlated Spectral Line Weighted-sum-of-gray-gases (RC-SLW) model is defined. The work presents predictions using the Rank Correlated FSK method and previous FSK methods in three different example problems. Line-by-line benchmark predictions are used to assess the accuracy.
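The core object in any FSK-type method is the cumulative k-distribution, obtained by reordering the absorption spectrum by magnitude so that a smooth function g(k) replaces millions of spectral lines. A minimal sketch of that reordering follows; the stand-in "spectrum" and the uniform weighting (a real model would weight by the Planck function at the source temperature) are assumptions for illustration.

```python
import numpy as np

# Hypothetical spectral absorption coefficient on a wavenumber grid.
nu = np.linspace(100.0, 10000.0, 50000)          # cm^-1
kappa = 0.1 * np.exp(2.0 * np.sin(nu / 300.0))   # stand-in for a real spectrum

def cumulative_k_distribution(kappa, weights=None):
    """g(k): weighted fraction of the spectrum with absorption below k."""
    if weights is None:
        weights = np.ones_like(kappa)            # Planck weights in practice
    order = np.argsort(kappa)
    g = np.cumsum(weights[order]) / weights.sum()
    return kappa[order], g

k_sorted, g = cumulative_k_distribution(kappa)
# Quadrature over g now replaces line-by-line integration over nu.
```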
Incremental Learning of Context Free Grammars by Parsing-Based Rule Generation and Rule Set Search
NASA Astrophysics Data System (ADS)
Nakamura, Katsuhiko; Hoshina, Akemi
This paper discusses recent improvements and extensions to the Synapse system for inductive inference of context-free grammars (CFGs) from sample strings. Synapse uses incremental learning, rule generation based on bottom-up parsing, and search over rule sets. The form of production rules in the previous system is extended from Revised Chomsky Normal Form A→βγ to Extended Chomsky Normal Form, which also includes A→B, where each of β and γ is either a terminal or a nonterminal symbol. From the result of bottom-up parsing, a rule generation mechanism synthesizes the minimum production rules required for parsing positive samples. Instead of the inductive CYK algorithm of the previous version of Synapse, the improved version uses a novel rule generation method, called "bridging," which bridges the missing part of the derivation tree for a positive string. The improved version also employs a novel search strategy, called serial search, in addition to the minimum-rule-set search. Synthesis of grammars by the serial search is faster than by the minimum-set search in most cases. On the other hand, the size of the generated CFGs is generally larger than that produced by the minimum-set search, and for some CFLs the serial search finds no appropriate grammar. The paper presents experimental results on the incremental learning of several fundamental CFGs and compares the rule generation methods and search strategies.
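The bottom-up parsing that drives Synapse's rule generation can be illustrated with a standard CYK parser for a grammar in Chomsky normal form (binary rules A→BC plus terminal rules A→a); the unit rules A→B of the extended form would add a closure step. The toy grammar below, recognizing a^n b^n, is a hypothetical example, not one learned by Synapse.

```python
from itertools import product

# Toy CNF grammar for a^n b^n: S -> AT | AB, T -> SB, A -> 'a', B -> 'b'.
binary = {("A", "T"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"T"}}
terminal = {"a": {"A"}, "b": {"B"}}

def cyk(s):
    n = len(s)
    # table[i][j]: nonterminals deriving the substring s[i : i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(s):
        table[i][0] = set(terminal.get(ch, set()))
    for length in range(2, n + 1):
        for i in range(n - length + 1):
            for split in range(1, length):
                lefts = table[i][split - 1]
                rights = table[i + split][length - split - 1]
                for l, r in product(lefts, rights):
                    table[i][length - 1] |= binary.get((l, r), set())
    return "S" in table[0][n - 1]

print(cyk("aabb"), cyk("aab"))  # True False
```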
Ferrantelli, Joseph R; Harrison, Deed E; Harrison, Donald D; Stewart, Denis
2005-01-01
To describe the treatment of a patient with chronic whiplash-associated disorders (WADs), previously unresponsive to multiple physical therapy and chiropractic treatments, which resolved following Clinical Biomechanics of Posture (CBP) rehabilitation methods. A 40-year-old man involved in a high-speed rear-impact collision developed chronic WADs, including cervicothoracic, shoulder, and arm pain and headache. The patient was diagnosed with a confirmed chip fracture of the C5 vertebra and cervical and thoracic disk herniations. He was treated with traditional chiropractic and physical therapy modalities but experienced only temporary symptomatic reduction, and he was later given a whole-body permanent impairment rating of 33% by an orthopedic surgeon. The patient was then treated with CBP mirror-image cervical spine adjustments, exercise, and traction to reduce forward head posture and cervical kyphosis. The presenting abnormal head protrusion resolved, and the cervical kyphosis returned to lordosis after treatment. His neck disability index improved from 46% initially to 0% at the end of care. Verbal pain rating scales for neck pain also improved (from 5/10 to 0/10). A patient with chronic WADs, abnormal head protrusion, cervical kyphosis, and disk herniation experienced an improvement in symptoms and function after the use of CBP rehabilitation protocols when other traditional chiropractic and physical therapy procedures had shown little or no lasting improvement.
Bladed wheels damage detection through Non-Harmonic Fourier Analysis improved algorithm
NASA Astrophysics Data System (ADS)
Neri, P.
2017-05-01
Recent papers introduced Non-Harmonic Fourier Analysis for bladed wheel damage detection. This technique showed its potential in estimating the frequency of sinusoidal signals even when the acquisition time is short with respect to the vibration period, provided that certain hypotheses are fulfilled. However, previously proposed algorithms showed severe limitations in detecting cracks at an early stage. The present paper proposes an improved algorithm that allows detection of a blade vibration frequency shift due to a crack that is very small compared with the blade width. Such a technique could be implemented for condition-based maintenance, allowing the use of non-contact methods for vibration measurements. A stator-fixed laser sensor could monitor all the blades as they pass in front of the spot, giving precious information about the wheel's health. This configuration determines an acquisition time for each blade that becomes shorter as the machine's rotational speed increases. In this situation, traditional Discrete Fourier Transform analysis yields poor frequency resolution and is not suitable for detecting small frequency shifts. Non-Harmonic Fourier Analysis, instead, showed high reliability in vibration frequency estimation even with data samples collected over a short time range. A description of the improved algorithm is provided in the paper, along with a comparison with the previous one. Finally, a validation of the method is presented, based on finite element simulation results.
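Non-Harmonic Fourier Analysis estimates the frequency of a short sinusoidal record by evaluating the Fourier projection on a continuous frequency grid rather than only at the DFT bins. The sketch below illustrates that idea with a dense grid search for the frequency maximizing the projection magnitude; the grid limits and signal parameters are illustrative, and the published algorithm refines this basic idea further.

```python
import numpy as np

fs, n = 10000.0, 64                      # short record: 6.4 ms of data
t = np.arange(n) / fs
f_true = 1234.7
x = np.sin(2 * np.pi * f_true * t) + 0.05 * np.random.randn(n)

# DFT bin spacing is fs/n ~ 156 Hz; search a continuous grid instead.
freqs = np.arange(1000.0, 1500.0, 0.1)
basis = np.exp(-2j * np.pi * freqs[:, None] * t[None, :])
amplitude = np.abs(basis @ x) / n
f_est = freqs[np.argmax(amplitude)]
print(f_est)   # close to 1234.7 despite the very short record
```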
Carreira, Mónica; Anarte, María Teresa; Linares, Francisca; Olveira, Gabriel; González Romero, Stella
2017-01-01
Background: In a previous study we demonstrated improvement in metabolic control and reduction in hypoglycemia in people with type 1 diabetes on multiple daily injections, after having used a bolus calculator for 4 months. Objective: To determine whether (1) extending its use or (2) introducing it in the control group, previously subjected to treatment intensification, could further improve metabolic control and related psychological issues. Methods: After the previous clinical trial, in which the subjects were randomized either to treatment with the calculator or to the control group for 4 months, both groups used the calculator during an additional 4-month period. Results: In the previous control group, after using the device, HbA1c did not improve (7.86% ± 0.87% vs. 8.01% ± 0.93%, P = 0.215), although a significant decrease in postprandial hypoglycemia was observed (2.3 ± 2 vs. 1.1 ± 1.2 per 2 weeks, P = 0.002). In the group in which the treatment was extended from 4 to 8 months, HbA1c did not improve either (7.61 ± 0.58 vs. 7.73 ± 0.65, P = 0.209); however, this group had greater perceived treatment satisfaction (12.03 ± 4.26 vs. 13.71 ± 3.75, P = 0.007) and a significant decrease in fear of hypoglycemia (28.24 ± 8.18 at baseline vs. 25.66 ± 8.02 at 8 months, P = 0.026). Conclusions: The extension of the use of the calculator, or its introduction in a previously intensified control group, did not improve metabolic control, although it did confirm a decrease in hypoglycemic episodes in the short term, while the extension of its use to 8 months was associated with a reduction in fear of hypoglycemia and greater treatment satisfaction. PMID:28594575
Improving Management of Green Retrofits from a Stakeholder Perspective: A Case Study in China.
Liang, Xin; Shen, Geoffrey Qiping; Guo, Li
2015-10-28
Green retrofits, which improve the environment and energy efficiency of buildings, are considered a potential solution for reducing energy consumption as well as improving human health and productivity. They represent some of the riskiest, most complex, and most uncertain projects to manage. As the foundation of project management, critical success factors (CSFs) have been emphasized by previous research. However, most studies identified and prioritized CSFs independently of stakeholders. This differs from reality, where the success of green retrofits is tightly interrelated with the stakeholders of projects. To improve the analysis from a stakeholder perspective, the present study proposed an innovative method based on a two-mode social network analysis to integrate CSF analysis with stakeholders. The results of this method can provide further understanding of the interactions between stakeholders and CSFs, and of the underlying relationships among CSFs through stakeholders. A pilot study was conducted to apply the proposed method and assess the CSFs for green retrofits in China. The five most significant CSFs in the management of green retrofits are identified. Furthermore, the interrelations between stakeholders and CSFs, as well as the coefficients and clusters of CSFs, are discussed.
Improving Management of Green Retrofits from a Stakeholder Perspective: A Case Study in China
Liang, Xin; Shen, Geoffrey Qiping; Guo, Li
2015-01-01
Green retrofits, which improve the environment and energy efficiency of buildings, are considered a potential solution for reducing energy consumption as well as improving human health and productivity. They represent some of the riskiest, most complex, and most uncertain projects to manage. As the foundation of project management, critical success factors (CSFs) have been emphasized by previous research. However, most studies identified and prioritized CSFs independently of stakeholders. This differs from reality, where the success of green retrofits is tightly interrelated with the stakeholders of projects. To improve the analysis from a stakeholder perspective, the present study proposed an innovative method based on a two-mode social network analysis to integrate CSF analysis with stakeholders. The results of this method can provide further understanding of the interactions between stakeholders and CSFs, and of the underlying relationships among CSFs through stakeholders. A pilot study was conducted to apply the proposed method and assess the CSFs for green retrofits in China. The five most significant CSFs in the management of green retrofits are identified. Furthermore, the interrelations between stakeholders and CSFs, as well as the coefficients and clusters of CSFs, are discussed. PMID:26516897
Optics-Only Calibration of a Neural-Net Based Optical NDE Method for Structural Health Monitoring
NASA Technical Reports Server (NTRS)
Decker, Arthur J.
2004-01-01
A calibration process is presented that uses optical measurements alone to calibrate a neural-net-based NDE method. The method itself detects small changes in the vibration mode shapes of structures. The optics-only calibration process confirms previous work showing that the sensitivity to vibration-amplitude changes can be as small as 10 nanometers. A more practical value in an NDE service laboratory is shown to be 50 nanometers. Both model-generated and experimental calibrations are demonstrated using two implementations of the calibration technique. The implementations are based on previously published demonstrations of the NDE method and an alternative calibration procedure that depends on comparing neural-net and point-sensor measurements. The optics-only calibration method, unlike the alternative method, does not require modifications of the structure being tested or the creation of calibration objects. The calibration process can be used to test improvements in the NDE process and to develop a vibration-mode independence of damage-detection sensitivity. The calibration effort was intended to support NASA's objective of promoting safety in the operation of ground test facilities and in aviation generally, by allowing the detection of the gradual onset of structural changes and damage.
Bernardes, Juliana; Zaverucha, Gerson; Vaquero, Catherine; Carbone, Alessandra
2016-01-01
Traditional protein annotation methods describe known domains with probabilistic models representing consensus among homologous domain sequences. However, when relevant signals become too weak to be identified by a global consensus, attempts at annotation fail. Here we address the fundamental question of domain identification for highly divergent proteins. By using high performance computing, we demonstrate that the limits of state-of-the-art annotation methods can be bypassed. We design a new strategy based on the observation that many structural and functional protein constraints are not globally conserved through all species but might be locally conserved in separate clades. We propose a novel exploitation of the large amount of data available: 1. for each known protein domain, several probabilistic clade-centered models are constructed from a large and differentiated panel of homologous sequences, 2. a decision-making protocol combines outcomes obtained from multiple models, 3. a multi-criteria optimization algorithm finds the most likely protein architecture. The method is evaluated for domain and architecture prediction over several datasets and under several statistical hypothesis tests. Its performance is compared against HMMScan and HHblits, two widely used search methods based on sequence-profile and profile-profile comparison. Due to their closeness to actual protein sequences, clade-centered models are shown to be more specific and functionally predictive than the broadly used consensus models. Based on them, we improved annotation of Plasmodium falciparum protein sequences on a scale not previously possible. We successfully predict at least one domain for 72% of P. falciparum proteins against 63% achieved previously, corresponding to a 30% improvement over the total number of Pfam domain predictions on the whole genome. The method is applicable to any genome and opens new avenues to tackle evolutionary questions such as the reconstruction of ancient domain duplications, the reconstruction of the history of protein architectures, and the estimation of protein domain age. Website and software: http://www.lcqb.upmc.fr/CLADE. PMID:27472895
Real-time sound speed correction using golden section search to enhance ultrasound imaging quality
NASA Astrophysics Data System (ADS)
Yoon, Chong Ook; Yoon, Changhan; Yoo, Yangmo; Song, Tai-Kyong; Chang, Jin Ho
2013-03-01
In medical ultrasound imaging, high-performance beamforming is important for enhancing spatial and contrast resolution. A modern receive dynamic beamformer uses a constant sound speed, typically assumed to be 1540 m/s, in generating receive focusing delays [1], [2]. However, this assumption degrades spatial and contrast resolution, particularly when imaging obese patients or the breast, since the true sound speed is significantly lower than the assumed one [3]; the sound speed in fatty tissue is around 1450 m/s. In our previous study, it was demonstrated that the modified nonlinear anisotropic diffusion is capable of determining an optimal sound speed and that the proposed method is a useful tool for improving ultrasound image quality [4], [5]. In that study, however, we required at least 21 iterations to find an optimal sound speed, which may not be viable for real-time applications. In this paper, we demonstrate that the number of iterations can be dramatically reduced using the golden section search (GSS) method with minimal error. To evaluate the performance of the proposed method, in vitro experiments were conducted with a tissue-mimicking phantom immersed in water to emulate a heterogeneous medium. In the experiments, the number of iterations was reduced from 21 to 7 with the GSS method, and the maximum error in lateral resolution between the direct search and the GSS method was less than 1%. These results indicate that the proposed method can be implemented in real time to improve image quality in medical ultrasound imaging.
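Golden section search locates the extremum of a unimodal function with far fewer evaluations than a uniform sweep, which is the saving reported here (21 iterations reduced to 7). The sketch below applies a textbook GSS to a stand-in image-quality cost evaluated over candidate sound speeds; the quadratic cost function is a hypothetical placeholder for the beamforming-quality metric.

```python
import math

def image_quality_cost(c):
    """Placeholder for the focus-quality metric evaluated after
    beamforming with sound speed c (minimum at the optimal speed)."""
    return (c - 1463.0) ** 2   # assumed true speed of 1463 m/s

def golden_section_search(f, lo, hi, iters=7):
    inv_phi = (math.sqrt(5) - 1) / 2
    a, b = lo + (1 - inv_phi) * (hi - lo), lo + inv_phi * (hi - lo)
    fa, fb = f(a), f(b)
    for _ in range(iters):
        if fa < fb:             # minimum lies in [lo, b]
            hi, b, fb = b, a, fa
            a = lo + (1 - inv_phi) * (hi - lo)
            fa = f(a)
        else:                   # minimum lies in [a, hi]
            lo, a, fa = a, b, fb
            b = lo + inv_phi * (hi - lo)
            fb = f(b)
    return (lo + hi) / 2

print(golden_section_search(image_quality_cost, 1400.0, 1600.0))
```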
Unwed Fathers’ Ability to Pay Child Support: New Estimates Accounting for Multiple-Partner Fertility
SINKEWICZ, MARILYN; GARFINKEL, IRWIN
2009-01-01
We present new estimates of unwed fathers’ ability to pay child support. Prior research relied on surveys that drastically undercounted nonresident unwed fathers and provided no link to their children who lived in separate households. To overcome these limitations, previous research assumed assortative mating and that each mother partnered with one father who was actually eligible to pay support and had no other child support obligations. Because the Fragile Families and Child Wellbeing Study contains data on couples, multiple-partner fertility, and a rich array of other previously unmeasured characteristics of fathers, it is uniquely suited to address the limitations of previous research. We also use an improved method of dealing with missing data. Our findings suggest that previous research overestimated the aggregate ability of unwed nonresident fathers to pay child support by 33% to 60%. PMID:21305392
Motion-compensated speckle tracking via particle filtering
NASA Astrophysics Data System (ADS)
Liu, Lixin; Yagi, Shin-ichi; Bian, Hongyu
2015-07-01
Recently, an improved motion compensation method that uses the sum of absolute differences (SAD) has been applied to frame persistence in conventional ultrasonic imaging because of its high accuracy and relative simplicity of implementation. However, high time consumption remains a significant drawback of this space-domain method. To find a faster motion compensation method and to verify whether conventional traversal correlation can be eliminated, motion-compensated speckle tracking between two temporally adjacent B-mode frames based on particle filtering is discussed. The optimal initial density of particles, the least number of iterations, and the optimal transition radius of the second iteration are analyzed from simulation results in order to evaluate the proposed method quantitatively. The speckle tracking results obtained using the optimized parameters indicate that the proposed method is capable of tracking the micromotion of speckle, superposed with global motion, throughout the region of interest (ROI). The computational cost of the proposed method is reduced by 25% compared with that of the previous algorithm, though further improvement is necessary.
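A particle filter replaces exhaustive block matching with a cloud of candidate displacements that is weighted by patch similarity and resampled, so only a fraction of the positions is ever evaluated. A generic sketch follows; the SAD-based weighting, particle count, and transition radius are illustrative choices, not the paper's tuned values.

```python
import numpy as np

rng = np.random.default_rng(0)
prev = rng.random((200, 200))                     # previous B-mode frame
true_shift = np.array([3, -2])
curr = np.roll(prev, true_shift, axis=(0, 1))     # current frame, known shift

patch = prev[90:110, 90:110]                      # speckle patch to track
n_particles, radius = 50, 6
particles = rng.integers(-radius, radius + 1, size=(n_particles, 2))

for _ in range(3):                                # a few iterations
    weights = np.empty(n_particles)
    for i, (dy, dx) in enumerate(particles):
        cand = curr[90 + dy:110 + dy, 90 + dx:110 + dx]
        sad = np.abs(cand - patch).sum()          # sum of absolute differences
        weights[i] = np.exp(-sad / 10.0)          # similarity -> weight
    weights /= weights.sum()
    idx = rng.choice(n_particles, size=n_particles, p=weights)  # resample
    particles = particles[idx] + rng.integers(-1, 2, size=(n_particles, 2))

print(particles.mean(axis=0))  # clusters near the true displacement (3, -2)
```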
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wisotzky, Eric, E-mail: eric.wisotzky@charite.de, E-mail: eric.wisotzky@ipk.fraunhofer.de; O’Brien, Ricky; Keall, Paul J., E-mail: paul.keall@sydney.edu.au
2016-01-15
Purpose: Multileaf collimator (MLC) tracking radiotherapy is complex because the beam pattern needs to be modified in response to the planned intensity modulation as well as the real-time target motion. The target motion cannot be planned; therefore, the modified beam pattern differs from the original plan and the MLC sequence needs to be recomputed online. Current MLC tracking algorithms use a greedy heuristic in that they optimize for a given time but ignore past errors. To overcome this problem, the authors have developed an improved algorithm that minimizes large underdose and overdose regions. Additionally, previous underdose and overdose events are taken into account to avoid regions with a high quantity of dose events. Methods: The authors improved the existing MLC motion control algorithm by introducing a cumulative underdose/overdose map. This map represents the actual projection of the planned tumor shape and logs the dose events occurring at each specific region. These events affect the dose cost calculation and reduce the recurrence of dose events at each region. The authors studied the improvement from the new temporal optimization algorithm, in terms of the L1-norm of the sum of overdose and underdose, relative to not accounting for previous dose events. For evaluation, the authors simulated the delivery of 5 conformal and 14 intensity-modulated radiotherapy (IMRT) plans with 7 3D patient-measured tumor motion traces. Results: Simulations with conformal shapes showed an improvement in the L1-norm of up to 8.5% after 100 MLC modification steps. Experiments showed comparable improvements with the same type of treatment plans. Conclusions: A novel leaf sequencing optimization algorithm that considers previous dose events for MLC tracking radiotherapy has been developed and investigated. Reductions in underdose/overdose are observed for conformal and IMRT delivery.
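The cumulative underdose/overdose map idea can be shown with a one-dimensional toy: for each candidate leaf pair, the cost sums uncovered target pixels (underdose) and exposed non-target pixels (overdose), each weighted upward where errors have already accumulated. The weighting scheme and all numbers below are illustrative assumptions, not the published algorithm.

```python
import numpy as np

n = 40
target = np.zeros(n, bool); target[12:28] = True    # projected tumor aperture
cum_err = np.zeros(n)                               # cumulative dose-event map

def leaf_cost(left, right):
    open_ = np.zeros(n, bool); open_[left:right] = True
    under = target & ~open_        # target blocked by the leaves
    over = ~target & open_         # healthy region left exposed
    # Penalize regions with many past events more strongly (assumed rule).
    w = 1.0 + cum_err
    return (w[under].sum() + w[over].sum()), under, over

best = min((leaf_cost(l, r) + (l, r)
            for l in range(n) for r in range(l, n)), key=lambda t: t[0])
cost, under, over, l, r = best
cum_err += under + over            # log this step's events for future steps
print(l, r, cost)                  # ideal static case: leaves at 12, 28
```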
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caldwell, W.S.; Conner, J.M.
Studies in our laboratory revealed artifactual formation of N-nitrosamines during trapping of mainstream and sidestream tobacco smoke by the method of Hoffmann and coworkers. Both volatile and tobacco-specific N-nitrosamines were produced. This artifact formation took place on the Cambridge filter, which is part of the collection train used in the previously published procedure. When the filter was treated with ascorbic acid before smoke collection, artifact formation was inhibited. The improved method resulting from these studies was applied to a comparative analysis of N-nitrosamines in smoke from cigarettes that heat, but do not burn, tobacco (the test cigarette) and several reference cigarettes. Concentrations of volatile and tobacco-specific N-nitrosamines in both mainstream and sidestream smoke from the test cigarette were substantially lower than in the reference cigarettes.
Studies on Hot-Melt Prepregging of PMR-II-50 Polyimide Resin with Graphite Fibers
NASA Technical Reports Server (NTRS)
Shin, E. Eugene; Sutter, James K.; Juhas, John; Veverka, Adrienne; Klans, Ojars; Inghram, Linda; Scheiman, Dan; Papadopoulos, Demetrios; Zoha, John; Bubnick, Jim
2004-01-01
A second-generation PMR (in situ Polymerization of Monomer Reactants) polyimide resin, PMR-II-50, has been considered for high temperature and high stiffness space propulsion composites applications for its improved high temperature performance. As part of composite processing optimization, two commercial prepregging methods, solution vs. hot-melt, were investigated with M40J fabrics from Toray. In a previous study, a systematic chemical, physical, thermal, and mechanical characterization of these composites indicated that poor resin-fiber interfacial wetting, especially for the hot-melt process, resulted in poor composite quality. In order to improve the interfacial wetting, optimization of the resin viscosity and process variables was attempted in a commercial hot-melt prepregging line. In addition to presenting the results from the prepreg quality optimization trials, the combined effects of the prepregging method and two different composite cure methods, i.e., hot press vs. autoclave, on composite quality and properties are discussed.
Studies on Hot-Melt Prepregging of PMR-II-50 Polyimide Resin with Graphite Fibers
NASA Technical Reports Server (NTRS)
Shin, E. Eugene; Sutter, James K.; Juhas, John; Veverka, Adrienne; Klans, Ojars; Inghram, Linda; Scheiman, Dan; Papadopoulos, Demetrios; Zoha, John; Bubnick, Jim
2003-01-01
A second-generation PMR (in situ Polymerization of Monomer Reactants) polyimide resin, PMR-II-50, has been considered for high temperature and high stiffness space propulsion composites applications for its improved high temperature performance. As part of composite processing optimization, two commercial prepregging methods, solution vs. hot-melt, were investigated with M40J fabrics from Toray. In a previous study, a systematic chemical, physical, thermal, and mechanical characterization of these composites indicated that poor resin-fiber interfacial wetting, especially for the hot-melt process, resulted in poor composite quality. In order to improve the interfacial wetting, optimization of the resin viscosity and process variables was attempted in a commercial hot-melt prepregging line. In addition to presenting the results from the prepreg quality optimization trials, the combined effects of the prepregging method and two different composite cure methods, i.e., hot press vs. autoclave, on composite quality and properties are discussed.
NASA Astrophysics Data System (ADS)
Ding, Hao; Cao, Ming; DuPont, Andrew W.; Scott, Larry D.; Guha, Sushovan; Singhal, Shashideep; Younes, Mamoun; Pence, Isaac; Herline, Alan; Schwartz, David; Xu, Hua; Mahadevan-Jansen, Anita; Bi, Xiaohong
2016-03-01
Inflammatory bowel disease (IBD) is an idiopathic disease that is typically characterized by chronic inflammation of the gastrointestinal tract. Recently much effort has been devoted to the development of novel diagnostic tools that can assist physicians for fast, accurate, and automated diagnosis of the disease. Previous research based on Raman spectroscopy has shown promising results in differentiating IBD patients from normal screening cases. In the current study, we examined IBD patients in vivo through a colonoscope-coupled Raman system. Optical diagnosis for IBD discrimination was conducted based on full-range spectra using multivariate statistical methods. Further, we incorporated several feature selection methods in machine learning into the classification model. The diagnostic performance for disease differentiation was significantly improved after feature selection. Our results showed that improved IBD diagnosis can be achieved using Raman spectroscopy in combination with multivariate analysis and feature selection.
Learning and tuning fuzzy logic controllers through reinforcements.
Berenji, H R; Khedkar, P
1992-01-01
A method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. It is shown that the generalized approximate-reasoning-based intelligent control (GARIC) architecture: learns and tunes a fuzzy logic controller even when only weak reinforcement, such as a binary failure signal, is available; introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. The GARIC architecture is applied to a cart-pole balancing system and demonstrates significant improvements over previous schemes for cart-pole balancing in terms of the speed of learning and robustness to changes in the dynamic system's parameters.
Process for measuring degradation of sulfur hexafluoride in high voltage systems
Sauers, Isidor
1986-01-01
This invention is a method of detecting the presence of toxic and corrosive by-products in high voltage systems produced by electrically induced degradation of SF6 insulating gas in the presence of certain impurities. It is an improvement over previous methods because it is extremely sensitive, detecting by-products present in parts-per-billion concentrations, and because the device employed is of a simple design and takes advantage of the by-products' natural affinity for fluoride ions. The method employs an ion-molecule reaction cell in which negative ions of the by-products are produced by fluorine attachment. These ions are admitted to a negative ion mass spectrometer and identified by their spectra. This spectrometry technique is an improvement over conventional techniques because the negative ion peaks are strong and not obscured by the major ion spectra of the SF6 component, as is the case in positive ion mass spectrometry.
Process for measuring degradation of sulfur hexafluoride in high voltage systems
Sauers, I.
1985-04-23
This invention is a method of detecting the presence of toxic and corrosive by-products in high voltage systems produced by electrically induced degradation of SF6 insulating gas in the presence of certain impurities. It is an improvement over previous methods because it is extremely sensitive, detecting by-products present in parts-per-billion concentrations, and because the device employed is of a simple design and takes advantage of the by-products' natural affinity for fluoride ions. The method employs an ion-molecule reaction cell in which negative ions of the by-products are produced by fluorine attachment. These ions are admitted to a negative ion mass spectrometer and identified by their spectra. This spectrometry technique is an improvement over conventional techniques because the negative ion peaks are strong and not obscured by the major ion spectra of the SF6 component, as is the case in positive ion mass spectrometry.
Detection of forced oscillations in power systems with multichannel methods
DOE Office of Scientific and Technical Information (OSTI.GOV)
Follum, James D.
2015-09-30
The increasing availability of high-fidelity, geographically dispersed measurements in power systems improves the ability of researchers and engineers to study dynamic behaviors in the grid. One such behavior that is garnering increased attention is the presence of forced oscillations. Power system engineers are interested in forced oscillations because they are often symptomatic of the malfunction or misoperation of equipment. Though the resulting oscillation is not always large in amplitude, the root cause may be serious. In this report, multi-channel forced oscillation detection methods are developed. These methods leverage previously developed detection approaches based on the periodogram and spectral coherence. Making use of geographically distributed channels of data is shown to improve detection performance and shorten the delay before an oscillation can be detected in the online environment. Results from simulated and measured power system data are presented.
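Periodogram-based detection of a forced oscillation reduces to testing spectral peaks against a noise floor, and averaging periodograms across channels lowers the variance of that floor. The sketch below illustrates the idea with scipy on synthetic two-channel data; the median-based threshold rule is a simple illustrative choice, not the report's calibrated detector.

```python
import numpy as np
from scipy.signal import periodogram

fs = 30.0                                            # 30 Hz PMU data
t = np.arange(0, 120, 1 / fs)                        # 2 minutes of samples
rng = np.random.default_rng(2)
forced = 0.05 * np.sin(2 * np.pi * 0.86 * t)         # small forced oscillation
ch1 = forced + 0.2 * rng.standard_normal(t.size)
ch2 = 0.7 * forced + 0.2 * rng.standard_normal(t.size)

f, p1 = periodogram(ch1, fs=fs)
_, p2 = periodogram(ch2, fs=fs)
p_avg = (p1 + p2) / 2                                # multichannel averaging

# Flag frequencies whose averaged power exceeds an assumed noise threshold.
threshold = 10 * np.median(p_avg)
detected = f[p_avg > threshold]
print(detected)   # should include a bin near 0.86 Hz
```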
Enhancement of succinate yield by manipulating NADH/NAD+ ratio and ATP generation.
Li, Jiaojiao; Li, Yikui; Cui, Zhiyong; Liang, Quanfeng; Qi, Qingsheng
2017-04-01
We previously engineered Escherichia coli YL104 to efficiently produce succinate from glucose. In this study, we investigated the relationships between the NADH/NAD+ ratio, ATP level, and overall yield of succinate production by using glucose as the carbon source in YL104. First, the use of sole NADH dehydrogenases increased the overall yield of succinate by 7% and substantially decreased the NADH/NAD+ ratio. Second, the soluble fumarate reductase from Saccharomyces cerevisiae was overexpressed to manipulate the anaerobic NADH/NAD+ ratio and ATP level. Third, another strategy for reducing the ATP level was applied by introducing ATP futile cycling for improving succinate production. Finally, a combination of these methods exerted a synergistic effect on improving the overall yield of succinate, which was 39% higher than that of the previously engineered strain YL104. The study results indicated that regulation of the NADH/NAD+ ratio and ATP level is an efficient strategy for succinate production.
Cho, Eugene N; Zhitomirsky, David; Han, Grace G D; Liu, Yun; Grossman, Jeffrey C
2017-03-15
Solar thermal fuels (STFs) harvest and store solar energy in a closed-cycle system through conformational change of molecules and can release the energy in the form of heat on demand. With the aim of developing tunable and optimized STFs for solid-state applications, we designed three azobenzene derivatives functionalized with bulky aromatic groups (phenyl, biphenyl, and tert-butyl phenyl groups). In contrast to pristine azobenzene, which crystallizes and makes nonuniform films, the bulky azobenzene derivatives formed uniform amorphous films that can be charged and discharged with light and heat for many cycles. Thermal stability of the films, a critical metric for thermally triggerable STFs, was greatly increased by the bulky functionalization (up to 180 °C), and we were able to achieve a record-high energy density of 135 J/g for solid-state STFs, over a 30% improvement compared to previous solid-state reports. Furthermore, the chargeability in the solid state was improved, up to 80% charged compared with 40% charged in previous solid-state reports. Our results point toward molecular engineering as an effective method to increase energy storage in STFs, improve chargeability, and improve the thermal stability of the thin film.
NASA Astrophysics Data System (ADS)
Masalmah, Yahya M.; Vélez-Reyes, Miguel
2007-04-01
The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. A good initialization scheme can improve convergence speed and can determine whether a global minimum is found and whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
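To make the comparison concrete, the following minimal sketch contrasts a random initialization with a longest-norm pixel initialization for a plain nonnegative matrix factorization. It is only a stand-in for the authors' cPMF (no positivity penalties or Gauss-Seidel iterations), and the toy data, sizes, and the helper longest_norm_init are hypothetical.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy hyperspectral cube flattened to (pixels, bands): 3 endmembers mixed linearly.
rng = np.random.default_rng(0)
true_E = rng.random((3, 50))                    # endmember spectra (3 x bands)
A = rng.dirichlet(np.ones(3), size=1000)        # abundances (pixels x 3)
X = A @ true_E + 0.01 * rng.random((1000, 50))  # noisy mixed pixels

def longest_norm_init(X, k):
    """Pick the k pixels with the largest Euclidean norm as initial endmembers."""
    idx = np.argsort(np.linalg.norm(X, axis=1))[-k:]
    H0 = X[idx].copy()
    # Least-squares abundances, clipped to stay nonnegative for a feasible start.
    W0 = np.clip(X @ np.linalg.pinv(H0), 1e-6, None)
    return W0, H0

for name in ("random", "longest-norm"):
    if name == "random":
        model = NMF(n_components=3, init="random", max_iter=500, random_state=0)
        W = model.fit_transform(X)
    else:
        W0, H0 = longest_norm_init(X, 3)
        model = NMF(n_components=3, init="custom", max_iter=500)
        W = model.fit_transform(X, W=W0, H=H0)
    print(name, "reconstruction error:", model.reconstruction_err_)
```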
Percolation in real multiplex networks
NASA Astrophysics Data System (ADS)
Bianconi, Ginestra; Radicchi, Filippo
2016-12-01
We present an exact mathematical framework able to describe site-percolation transitions in real multiplex networks. Specifically, we consider the average percolation diagram valid over an infinite number of random configurations where nodes are present in the system with given probability. The approach relies on the locally treelike ansatz, so that it is expected to accurately reproduce the true percolation diagram of sparse multiplex networks with a negligible number of short loops. The performance of our theory is tested in social, biological, and transportation multiplex graphs. When compared against previously introduced methods, we observe improvements in the prediction of the percolation diagrams in all networks analyzed. Results from our method confirm previous claims about the robustness of real multiplex networks, in the sense that the average connectedness of the system does not exhibit any significant abrupt change as its individual components are randomly destroyed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dai, Xiubin; Gao, Yaozong; Shen, Dinggang, E-mail: dgshen@med.unc.edu
2015-05-15
Purpose: In image guided radiation therapy, it is crucial to localize the prostate quickly and accurately in the daily treatment images. To this end, the authors propose an online update scheme for landmark-guided prostate segmentation, which can fully exploit valuable patient-specific information contained in the previous treatment images and can achieve improved performance in landmark detection and prostate segmentation. Methods: To localize the prostate in the daily treatment images, the authors first automatically detect six anatomical landmarks on the prostate boundary by adopting a context-aware landmark detection method. Specifically, in this method, a two-layer regression forest is trained as a detector for each target landmark. Once all the newly detected landmarks from new treatment images are reviewed or adjusted (if necessary) by clinicians, they are further included into the training pool as new patient-specific information to update all the two-layer regression forests for the next treatment day. As more and more treatment images of the current patient are acquired, the two-layer regression forests can be continually updated by incorporating the patient-specific information into the training procedure. After all target landmarks are detected, a multiatlas random sample consensus (multiatlas RANSAC) method is used to segment the entire prostate by fusing multiple previously segmented prostates of the current patient after they are aligned to the current treatment image. Subsequently, the segmented prostate of the current treatment image is again reviewed (or even adjusted if needed) by clinicians before including it as a new shape example into the prostate shape dataset for helping localize the entire prostate in the next treatment image. Results: The experimental results on 330 images of 24 patients show the effectiveness of the authors' proposed online update scheme in improving the accuracies of both landmark detection and prostate segmentation. Besides, compared to the other state-of-the-art prostate segmentation methods, the authors' method achieves the best performance. Conclusions: By appropriate use of valuable patient-specific information contained in the previous treatment images, the authors' proposed online update scheme can obtain satisfactory results for both landmark detection and prostate segmentation.
NASA Astrophysics Data System (ADS)
Watanabe, Ryusuke; Muramatsu, Chisako; Ishida, Kyoko; Sawada, Akira; Hatanaka, Yuji; Yamamoto, Tetsuya; Fujita, Hiroshi
2017-03-01
Early detection of glaucoma is important to slow down progression of the disease and to prevent total vision loss. We have been studying an automated scheme for detection of a retinal nerve fiber layer defect (NFLD), which is one of the earliest signs of glaucoma on retinal fundus images. In our previous study, we proposed a multi-step detection scheme consisting of Gabor filtering, clustering, and adaptive thresholding. The problems of the previous method were that the number of false positives (FPs) was still large and that the method included too many rules. In an attempt to solve these problems, we investigated an end-to-end learning system without pre-specified features. A deep convolutional neural network (DCNN) with deconvolutional layers was trained to detect NFLD regions. In this preliminary investigation, we studied effective ways of preparing the input images and compared the detection results. The optimal result was then compared with the result obtained by the previous method. DCNN training was carried out using original images of abnormal cases, original images of both normal and abnormal cases, ellipse-based polar transformed images, and transformed half images. The results showed that use of both normal and abnormal cases increased the sensitivity as well as the number of FPs. Although NFLDs are visualized with the highest contrast in the green plane, the use of color images provided higher sensitivity than the use of the green plane only. The free-response receiver operating characteristic curve using the transformed color images, which was the best among the seven different sets studied, was comparable to that of the previous method. Use of a DCNN has the potential to improve the generalizability of automated detection of NFLDs and may be useful in assisting glaucoma diagnosis on retinal fundus images.
ERIC Educational Resources Information Center
Leming, Katie P.
2016-01-01
Previous qualitative research on educational practices designed to improve critical thinking has relied on anecdotal or student self-reports of gains in critical thinking. Unfortunately, student self-report data have been found to be unreliable proxies for measuring critical thinking gains. Therefore, in the current interpretivist study, five…
ERIC Educational Resources Information Center
Allen, Richard K.
In an attempt to discover improved classroom teaching methods, a class was turned into a business organization as a way of bringing life to the previously covered lectures and textual materials. The simulated games were an attempt to get people to work toward a common goal with all of the power plays, secret meetings, brainstorming, anger, and…
ERIC Educational Resources Information Center
Stringfield, Sam; Reynolds, David; Schaffer, Eugene
2016-01-01
This chapter presents data from a 15-year, mixed-methods school improvement effort. The High Reliability Schools (HRS) reform made use of previous research on school effects and on High Reliability Organizations (HROs). HROs are organizations in various parts of our cultures that are required to operate successfully "the first time, every…
Providing Behavioral Feedback to Students in an Alternative High School Setting
ERIC Educational Resources Information Center
Whitcomb, Sara A.; Hefter, Sheera; Barker, Elizabeth
2016-01-01
This column provides an example method for improving the consistency and quality of daily behavioral feedback provided to students in an alternative high school setting. Often, homeroom or advisory periods are prime points in the day for students to review their behavior from the previous day and set goals for a successful day to come. The method…
Collaboration and Networking among Rural Schools: Can It Work and When? Evidence from England
ERIC Educational Resources Information Center
Muijs, Daniel
2015-01-01
School-to-school collaboration as a school improvement method has grown in importance in England in recent years, and there is some evidence that such collaboration can have a positive impact on both capacity to change and student attainment. Most previous work in the area has focused on the urban context, however, despite the fact that increasing…
The Impact of a Therapy Dog Program on Children's Reading Skills and Attitudes toward Reading
ERIC Educational Resources Information Center
Kirnan, Jean; Siminerio, Steven; Wong, Zachary
2016-01-01
An existing school program in which therapy dogs are integrated into the reading curriculum was analyzed to determine the effect on student reading. Previous literature suggests an improvement in both reading skills and attitudes towards reading when students read in the presence of a therapy dog. Using a mixed method model, the researchers…
ERIC Educational Resources Information Center
Cho, Yonjoo; Jo, Sung Jun; Park, Sunyoung; Kang, Ingu; Chen, Zengguan
2011-01-01
This study conducted a citation network analysis (CNA) of human performance technology (HPT) to examine its current state of the field. Previous reviews of the field have used traditional research methods, such as content analysis, survey, Delphi, and citation analysis. The distinctive features of CNA come from using a social network analysis…
Improving operational plume forecasts
NASA Astrophysics Data System (ADS)
Balcerak, Ernie
2012-04-01
Forecasting how plumes of particles, such as radioactive particles from a nuclear disaster, will be transported and dispersed in the atmosphere is an important but computationally challenging task. During the Fukushima nuclear disaster in Japan, operational plume forecasts were produced each day, but as the emissions continued, previous emissions were not included in the simulations used for forecasts because it became impractical to rerun the simulations each day from the beginning of the accident. Draxler and Rolph examine whether it is possible to improve plume simulation speed and flexibility as conditions and input data change. The authors use a method known as a transfer coefficient matrix approach that allows them to simulate many radionuclides using only a few generic species for the computation. Their simulations work faster by dividing the computation into separate independent segments in such a way that the most computationally time consuming pieces of the calculation need to be done only once. This makes it possible to provide real-time operational plume forecasts by continuously updating the previous simulations as new data become available. They tested their method using data from the Fukushima incident to show that it performed well. (Journal of Geophysical Research-Atmospheres, doi:10.1029/2011JD017205, 2012)
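As an illustration of the transfer coefficient matrix idea, the sketch below precomputes the receptor response to a unit emission in each release segment, so that any revised source term becomes a cheap matrix product rather than a rerun of the dispersion model. All sizes, values, and the forecast helper are hypothetical stand-ins; this is not the operational implementation used by Draxler and Rolph.

```python
import numpy as np

# Hypothetical sizes: 10 daily release segments, 500 receptor grid cells.
n_segments, n_receptors = 10, 500
rng = np.random.default_rng(1)

# Transfer coefficient matrix: TCM[i, j] is the concentration at receptor j
# produced by a UNIT emission during segment i. Each row stands for one
# expensive dispersion run, but it never has to be recomputed.
TCM = rng.random((n_segments, n_receptors)) * np.exp(-rng.random((n_segments, n_receptors)))

def forecast(emissions):
    """Concentration field for any emission time series: a cheap matrix product."""
    return emissions @ TCM

# Initial emission estimate, revised later as monitoring data arrive.
q0 = np.full(n_segments, 1.0e12)      # Bq per segment (hypothetical)
q1 = q0.copy()
q1[3:6] *= 0.5                        # updated source term for segments 3-5

c0, c1 = forecast(q0), forecast(q1)   # both forecasts reuse the same runs
print("max change after source-term update:", np.abs(c1 - c0).max())
```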
2016-01-01
Many excellent methods exist that incorporate cryo-electron microscopy (cryoEM) data to constrain computational protein structure prediction and refinement. Previously, it was shown that iteration of two such orthogonal sampling and scoring methods – Rosetta and molecular dynamics (MD) simulations – facilitated exploration of conformational space in principle. Here, we go beyond a proof-of-concept study and address significant remaining limitations of the iterative MD–Rosetta protein structure refinement protocol. Specifically, all parts of the iterative refinement protocol are now guided by medium-resolution cryoEM density maps, and previous knowledge about the native structure of the protein is no longer necessary. Models are identified solely based on score or simulation time. All four benchmark proteins showed substantial improvement through three rounds of the iterative refinement protocol. The best-scoring final models of two proteins had sub-Ångstrom RMSD to the native structure over residues in secondary structure elements. Molecular dynamics was most efficient in refining secondary structure elements and was thus highly complementary to the Rosetta refinement which is most powerful in refining side chains and loop regions. PMID:25883538
NASA Astrophysics Data System (ADS)
Ma, Yulong; Liu, Heping
2017-12-01
Atmospheric flow over complex terrain, particularly recirculation flows, greatly influences wind-turbine siting, forest-fire behaviour, and trace-gas and pollutant dispersion. However, there is a large uncertainty in the simulation of flow over complex topography, which is attributable to the type of turbulence model, the subgrid-scale (SGS) turbulence parametrization, terrain-following coordinates, and numerical errors in finite-difference methods. Here, we upgrade the large-eddy simulation module within the Weather Research and Forecasting model by incorporating the immersed-boundary method into the module to improve simulations of the flow and recirculation over complex terrain. Simulations over the Bolund Hill show reduced mean absolute speed-up errors with respect to previous studies, as well as an improved simulation of the recirculation zone behind the escarpment of the hill. With regard to the SGS parametrization, the Lagrangian-averaged scale-dependent Smagorinsky model performs better than the classic Smagorinsky model in reproducing both velocity and turbulent kinetic energy. A finer grid resolution also improves the strength of the recirculation in flow simulations, with a higher horizontal grid resolution improving simulations just behind the escarpment, and a higher vertical grid resolution improving results on the lee side of the hill. Our modelling approach has broad applications for the simulation of atmospheric flows over complex topography.
Improvements to direct quantitative analysis of multiple microRNAs facilitating faster analysis.
Ghasemi, Farhad; Wegman, David W; Kanoatov, Mirzo; Yang, Burton B; Liu, Stanley K; Yousef, George M; Krylov, Sergey N
2013-11-05
Studies suggest that patterns of deregulation in sets of microRNA (miRNA) can be used as cancer diagnostic and prognostic biomarkers. Establishing a "miRNA fingerprint"-based diagnostic technique requires a suitable miRNA quantitation method. The appropriate method must be direct, sensitive, capable of simultaneous analysis of multiple miRNAs, rapid, and robust. Direct quantitative analysis of multiple microRNAs (DQAMmiR) is a recently introduced capillary electrophoresis-based hybridization assay that satisfies most of these criteria. Previous implementations of the method suffered, however, from slow analysis time and required lengthy and stringent purification of hybridization probes. Here, we introduce a set of critical improvements to DQAMmiR that address these technical limitations. First, we have devised an efficient purification procedure that achieves the required purity of the hybridization probe in a fast and simple fashion. Second, we have optimized the concentrations of the DNA probe to decrease the hybridization time to 10 min. Lastly, we have demonstrated that the increased probe concentrations and decreased incubation time removed the need for masking DNA, further simplifying the method and increasing its robustness. The presented improvements bring DQAMmiR closer to use in a clinical setting.
Improved regulatory element prediction based on tissue-specific local epigenomic signatures
He, Yupeng; Gorkin, David U.; Dickel, Diane E.; Nery, Joseph R.; Castanon, Rosa G.; Lee, Ah Young; Shen, Yin; Visel, Axel; Pennacchio, Len A.; Ren, Bing; Ecker, Joseph R.
2017-01-01
Accurate enhancer identification is critical for understanding the spatiotemporal transcriptional regulation during development as well as the functional impact of disease-related noncoding genetic variants. Computational methods have been developed to predict the genomic locations of active enhancers based on histone modifications, but the accuracy and resolution of these methods remain limited. Here, we present an algorithm, regulatory element prediction based on tissue-specific local epigenetic marks (REPTILE), which integrates histone modification and whole-genome cytosine DNA methylation profiles to identify the precise location of enhancers. We tested the ability of REPTILE to identify enhancers previously validated in reporter assays. Compared with existing methods, REPTILE shows consistently superior performance across diverse cell and tissue types, and the enhancer locations are significantly more refined. We show that, by incorporating base-resolution methylation data, REPTILE greatly improves upon current methods for annotation of enhancers across a variety of cell and tissue types. REPTILE is available at https://github.com/yupenghe/REPTILE/. PMID:28193886
Bergquist, J; Vona, M J; Stiller, C O; O'Connor, W T; Falkenberg, T; Ekman, R
1996-03-01
The use of capillary electrophoresis with laser-induced fluorescence detection (CE-LIF) for the analysis of microdialysate samples from the periaqueductal grey matter (PAG) of freely moving rats is described. By employing 3-(4-carboxybenzoyl)-2-quinoline-carboxaldehyde (CBQCA) as a derivatization agent, we simultaneously monitored the concentrations of 8 amino acids (arginine, glutamine, valine, gamma-amino-n-butyric acid (GABA), alanine, glycine, glutamate, and aspartate), with nanomolar and subnanomolar detection limits. Two of the amino acids (GABA and glutamate) were analysed in parallel by conventional high-performance liquid chromatography (HPLC) in order to directly compare the two analytical methods. Other CE methods for analysis of microdialysate have been previously described, and this improved method offers greater sensitivity, ease of use, and the possibility to monitor several amino acids simultaneously. By using this technique together with an optimised form of microdialysis technique, the tiny sample consumption and the improved detection limits permit the detection of fast and transient transmitter changes.
Bayesian Approach to Spectral Function Reconstruction for Euclidean Quantum Field Theories
NASA Astrophysics Data System (ADS)
Burnier, Yannis; Rothkopf, Alexander
2013-11-01
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression, which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements in the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimation at T=2.33TC.
Morphological rational operator for contrast enhancement.
Peregrina-Barreto, Hayde; Herrera-Navarro, Ana M; Morales-Hernández, Luis A; Terol-Villalobos, Iván R
2011-03-01
Contrast enhancement is an important task in image processing that is commonly used as a preprocessing step to improve the images for other tasks such as segmentation. However, some methods for contrast improvement that work well in low-contrast regions affect good contrast regions as well. This occurs due to the fact that some elements may vanish. A method focused on images with different luminance conditions is introduced in the present work. The proposed method is based on morphological transformations by reconstruction and rational operations, which, altogether, allow a more accurate contrast enhancement resulting in regions that are in harmony with their environment. Furthermore, due to the properties of these morphological transformations, the creation of new elements on the image is avoided. The processing is carried out on luminance values in the u'v'Y color space, which avoids the creation of new colors. As a result of the previous considerations, the proposed method keeps the natural color appearance of the image.
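As a rough illustration of contrast enhancement built on morphological reconstruction (not the authors' rational operator, and operating on a grayscale stand-in rather than the u'v'Y luminance channel), the following sketch uses scikit-image's reconstruction to compute opening- and closing-by-reconstruction and combines them into a simple contrast mapping. The structuring element size and test image are arbitrary assumptions.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.morphology import disk, erosion, dilation, reconstruction

image = img_as_float(data.camera())   # stand-in for the luminance channel
se = disk(5)

# Opening by reconstruction: removes small bright details without shifting edges.
open_rec = reconstruction(erosion(image, se), image, method='dilation')
# Closing by reconstruction: removes small dark details the same way.
close_rec = reconstruction(dilation(image, se), image, method='erosion')

# Morphological contrast: add back bright details, subtract dark ones. Because
# reconstruction never creates new extrema, no new image elements appear.
tophat = image - open_rec
bothat = close_rec - image
enhanced = np.clip(image + tophat - bothat, 0.0, 1.0)
print("contrast (std) before/after:", image.std(), enhanced.std())
```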
Automated railroad reconstruction from remote sensing image based on texture filter
NASA Astrophysics Data System (ADS)
Xiao, Jie; Lu, Kaixia
2018-03-01
Techniques of remote sensing have improved incredibly in recent years, and very accurate results and high resolution images can be acquired. Such data offer a possible way to reconstruct railroads. In this paper, an automated railroad reconstruction method from remote sensing images based on the Gabor filter is proposed. The method is divided into three steps. First, the edge-oriented railroad characteristics (such as line features) in a remote sensing image are detected using the Gabor filter. Second, two response images with filtering orientations perpendicular to each other are fused to suppress the noise and acquire a long, smooth stripe region of railroads. Third, a set of smooth regions is extracted by computing a global threshold for the previous result image using Otsu's method and then converting it to a binary image based on that threshold. This workflow was tested on a set of remote sensing images and found to deliver very accurate results in a quick and highly automated manner.
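A minimal sketch of this three-step pipeline on a stand-in image is shown below, using scikit-image's gabor and threshold_otsu. The fusion rule (magnitude of the two perpendicular responses) and the filter frequency are assumptions, since the abstract does not specify them.

```python
import numpy as np
from skimage import data, img_as_float
from skimage.filters import gabor, threshold_otsu

image = img_as_float(data.camera())   # stand-in for a remote sensing image

# Step 1: Gabor responses at two perpendicular orientations.
real_0, _ = gabor(image, frequency=0.15, theta=0)
real_90, _ = gabor(image, frequency=0.15, theta=np.pi / 2)

# Step 2: fuse the two responses to suppress orientation-dependent noise.
fused = np.sqrt(real_0 ** 2 + real_90 ** 2)

# Step 3: global Otsu threshold, then binarize to candidate rail regions.
t = threshold_otsu(fused)
binary = fused > t
print("candidate railroad pixels:", int(binary.sum()))
```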
NASA Astrophysics Data System (ADS)
Wilting, Jens; Lehnertz, Klaus
2015-08-01
We investigate a recently published analysis framework based on Bayesian inference for the time-resolved characterization of interaction properties of noisy, coupled dynamical systems. It promises wide applicability and a better time resolution than well-established methods. Using representative model systems, we show that the analysis framework has the same weaknesses as previous methods, particularly when investigating interacting, structurally different non-linear oscillators. We also inspect the tracking of time-varying interaction properties and propose a further modification of the algorithm, which improves the reliability of the obtained results. As an example, we investigate the suitability of this algorithm for inferring the strength and direction of interactions between various regions of the human brain during an epileptic seizure. Within the limitations of the applicability of this analysis tool, we show that the modified algorithm indeed allows a better time resolution through Bayesian inference when compared to previous methods based on least-squares fits.
NASA Astrophysics Data System (ADS)
Zhang, Yuanyuan; Gao, Zhiqiang; Liu, Xiangyang; Xu, Ning; Liu, Chaoshun; Gao, Wei
2016-09-01
Reclamation has caused significant dynamic change in the coastal zone. The tidal flat is an unstable reserve land resource, and studying it is of considerable importance. To enable efficient extraction of tidal flat area information, this paper takes Rudong County in Jiangsu Province as the research area, using HJ1A/1B images as the data source. On the basis of previous research experience and a literature review, object-oriented classification is chosen as a semi-automatic extraction method to generate waterlines. The waterlines are then analyzed with the DSAS software to obtain tide points, and the outer boundary points are extracted automatically with Python to determine the extent of the tidal flats of Rudong County in 2014. The extracted area was 55,182 hm²; a confusion matrix was used to verify the accuracy, and the results show a kappa coefficient of 0.945. The method addresses deficiencies of previous studies, and the free availability of the data on the Internet makes it easy to generalize.
Navarro, María; Kontoudakis, Nikolaos; Canals, Joan Miquel; García-Romero, Esteban; Gómez-Alonso, Sergio; Zamora, Fernando; Hermosín-Gutiérrez, Isidro
2017-07-01
A new method for the analysis of ellagitannins observed in oak-aged wine is proposed, exhibiting interesting advantages with regard to previously reported analytical methods. The necessary extraction of ellagitannins from wine was simplified to a single step of solid phase extraction (SPE) using size exclusion chromatography with Sephadex LH-20 without the need for any previous SPE of phenolic compounds using reversed-phase materials. The quantitative recovery of wine ellagitannins requires a combined elution with methanol and ethyl acetate, especially for increasing the recovery of the less polar acutissimins. The chromatographic method was performed using a fused-core C18 column, thereby avoiding the coelution of main ellagitannins, such as vescalagin and roburin E. However, the very polar ellagitannins, namely, the roburins A, B and C, still partially coeluted, and their quantification was assisted by the MS detector. This methodology also enabled the analysis of free gallic and ellagic acids in the same chromatographic run.
Deveau, Jason S.T.; Grodzinski, Bernard
2005-01-01
We describe an improved, efficient and reliable method for the vapour-phase silanization of multi-barreled, ion-selective microelectrodes of which the silanized barrel(s) are to be filled with neutral liquid ion-exchanger (LIX). The technique employs a metal manifold to exclusively and simultaneously deliver dimethyldichlorosilane to only the ion-selective barrels of several multi-barreled microelectrodes. Compared to previously published methods the technique requires fewer procedural steps, less handling of individual microelectrodes, improved reproducibility of silanization of the selected microelectrode barrels and employs standard borosilicate tubing rather than the less-conventional theta-type glass. The electrodes remain stable for up to 3 weeks after the silanization procedure. The efficacy of a double-barreled electrode containing a proton ionophore in the ion-selective barrel is demonstrated in situ in the leaf apoplasm of pea (Pisum) and sunflower (Helianthus). Individual leaves were penetrated to depth of ~150 μm through the abaxial surface. Microelectrode readings remained stable after multiple impalements without the need for a stabilizing PVC matrix. PMID:16136222
Integrating SAS and GIS software to improve habitat-use estimates from radiotelemetry data
Kenow, K.P.; Wright, R.G.; Samuel, M.D.; Rasmussen, P.W.
2001-01-01
Radiotelemetry has been used commonly to remotely determine habitat use by a variety of wildlife species. However, habitat misclassification can occur because the true location of a radiomarked animal can only be estimated. Analytical methods that provide improved estimates of habitat use from radiotelemetry location data using a subsampling approach have been proposed previously. We developed software, based on these methods, to conduct improved habitat-use analyses. A Statistical Analysis System (SAS)-executable file generates a random subsample of points from the error distribution of an estimated animal location and formats the output into ARC/INFO-compatible coordinate and attribute files. An associated ARC/INFO Arc Macro Language (AML) creates a coverage of the random points, determines the habitat type at each random point from an existing habitat coverage, sums the number of subsample points by habitat type for each location, and outputs the results in ASCII format. The proportion and precision of habitat types used are calculated from the subsample of points generated for each radiotelemetry location. We illustrate the method and software by analysis of radiotelemetry data for a female wild turkey (Meleagris gallopavo).
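The subsampling idea is language-agnostic; the following minimal Python sketch (a stand-in for the SAS/ARC-INFO toolchain, with a hypothetical habitat raster and an assumed bivariate normal error model) estimates habitat-use proportions for a single telemetry fix.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical 100 x 100 habitat raster with 3 habitat types (0, 1, 2).
habitat = rng.integers(0, 3, size=(100, 100))

def habitat_use(location, cov, n_sub=500):
    """Subsample the telemetry error distribution and tally habitat types."""
    # Draw candidate true locations from the (assumed bivariate normal) error model.
    pts = rng.multivariate_normal(location, cov, size=n_sub)
    rows = np.clip(pts[:, 0].astype(int), 0, habitat.shape[0] - 1)
    cols = np.clip(pts[:, 1].astype(int), 0, habitat.shape[1] - 1)
    types = habitat[rows, cols]
    # Proportion of subsample points falling in each habitat type.
    return np.bincount(types, minlength=3) / n_sub

est = habitat_use(location=np.array([50.0, 50.0]),
                  cov=np.array([[9.0, 0.0], [0.0, 9.0]]))
print("estimated habitat-use proportions:", est)
```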
Poor methodological detail precludes experimental repeatability and hampers synthesis in ecology.
Haddaway, Neal R; Verhoeven, Jos T A
2015-10-01
Despite the scientific method's central tenets of reproducibility (the ability to obtain similar results when repeated) and repeatability (the ability to replicate an experiment based on the methods described), published ecological research continues to fail to provide sufficient methodological detail to allow either repeatability or verification. Recent systematic reviews highlight the problem, with one example demonstrating that an average of 13% of studies per year (±8.0 [SD]) failed to report sample sizes. The problem affects the ability to verify the accuracy of any analysis, to repeat the methods used, and to assimilate the study findings into powerful and useful meta-analyses. The problem is common in a variety of ecological topics examined to date, and despite previous calls for improved reporting and metadata archiving, which could indirectly alleviate the problem, there is no indication of an improvement in reporting standards over time. Here, we call on authors, editors, and peer reviewers to consider repeatability as a top priority when evaluating research manuscripts, bearing in mind that legacy and integration into the evidence base can drastically improve the impact of individual research reports.
Gong, Kuang; Cheng-Liao, Jinxiu; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2018-04-01
Positron emission tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neuroscience. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information into image reconstruction. Previously, kernel learning has been successfully embedded into static and dynamic PET image reconstruction using either PET temporal or MRI information. Here, we combine both PET temporal and MRI information adaptively to improve the quality of direct Patlak reconstruction. We examined different approaches to combine the PET and MRI information in kernel learning to address the issue of potential mismatches between MRI and PET signals. Computer simulations and hybrid real-patient data acquired on a simultaneous PET/MR scanner were used to evaluate the proposed methods. Results show that the method that combines PET temporal information and MRI spatial information adaptively based on the structure similarity index has the best performance in terms of noise reduction and resolution improvement.
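For intuition about the kernel approach mentioned above, the sketch below builds a generic MRI-derived kernel matrix (k-nearest neighbours in a feature space, Gaussian weights) and parameterizes the image as x = K·alpha. This is only the common building block of kernelized PET reconstruction, not the authors' adaptive combination of PET temporal and MRI information; all sizes and features are hypothetical.

```python
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
n_vox, n_feat, k = 2000, 9, 8            # voxels, MRI patch features, neighbours

mri_feats = rng.random((n_vox, n_feat))  # stand-in for MRI patch features

# k-nearest neighbours in MRI feature space define the kernel sparsity pattern.
nbrs = NearestNeighbors(n_neighbors=k).fit(mri_feats)
dist, idx = nbrs.kneighbors(mri_feats)

# Gaussian weights, row-normalized, assembled into a sparse kernel matrix K.
sigma = np.median(dist)
w = np.exp(-(dist ** 2) / (2 * sigma ** 2))
w /= w.sum(axis=1, keepdims=True)
rows = np.repeat(np.arange(n_vox), k)
K = csr_matrix((w.ravel(), (rows, idx.ravel())), shape=(n_vox, n_vox))

# In kernelized reconstruction the image is parameterized as x = K @ alpha,
# so MRI-defined neighbourhoods regularize the PET estimate.
alpha = rng.random(n_vox)
x = K @ alpha
print("kernelized image vector shape:", x.shape)
```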
Experimental Detection and Characterization of Void using Time-Domain Reflection Wave
NASA Astrophysics Data System (ADS)
Zahari, M. N. H.; Madun, A.; Dahlan, S. H.; Joret, A.; Zainal Abidin, M. H.; Mohammad, A. H.; Omar, A. H.
2018-04-01
Recent technological advances in engineering have brought significant improvements in performance and precision. One such improvement is in geophysical studies for underground detection. The reflection method has been demonstrated in previous studies to be able to detect and locate subsurface anomalies, including voids. Conventional methods involve field testing over limited areas only, which may leave void positions undiscovered. Problems arise when voids are not recognised at an early stage, causing hazards and cost increments and potentially leading to serious accidents and structural damage. Therefore, to achieve better certainty in site investigation, a dynamic approach needs to be implemented. To better estimate and characterize the anomaly signal, an air-filled void was modelled in experimental testing at a site. Robust, low-cost detection and characterization of voids using the reflection method are proposed to improve detectability. The results show 2-dimensional and 3-dimensional analyses of the void based on reflection data, with a P-wave velocity of 454.54 m/s.
West, Jamie; Atherton, Jennifer; Costelloe, Seán J; Pourmahram, Ghazaleh; Stretton, Adam; Cornes, Michael
2017-01-01
Preanalytical errors have previously been shown to account for a significant proportion of errors in laboratory processes and to contribute to a number of patient safety risks. Accreditation against ISO 15189:2012 requires that laboratory Quality Management Systems consider the impact of preanalytical processes in areas such as the identification and control of non-conformances, continual improvement, internal audit and quality indicators. Previous studies have shown that there is wide variation in the definition, repertoire and collection methods for preanalytical quality indicators. The International Federation of Clinical Chemistry Working Group on Laboratory Errors and Patient Safety has defined a number of quality indicators for the preanalytical stage, and the adoption of harmonized definitions will support interlaboratory comparisons and continual improvement. There are a variety of data collection methods, including audit, manual recording processes, incident reporting mechanisms and laboratory information systems. Quality management processes such as benchmarking, statistical process control, Pareto analysis and failure mode and effect analysis can be used to review data and should be incorporated into clinical governance mechanisms. In this paper, The Association for Clinical Biochemistry and Laboratory Medicine PreAnalytical Specialist Interest Group reviews the various data collection methods available. Our recommendation is the use of laboratory information management systems as the recording mechanism for preanalytical errors, as this provides the easiest and most standardized mechanism of data capture.
Comparing K-mer based methods for improved classification of 16S sequences.
Vinje, Hilde; Liland, Kristian Hovde; Almøy, Trygve; Snipen, Lars
2015-07-01
The need for precise and stable taxonomic classification is highly relevant in modern microbiology. Parallel to the explosion in the amount of sequence data accessible, there has also been a shift in focus for classification methods. Previously, alignment-based methods were the most applicable tools. Now, methods based on counting K-mers by sliding windows are the most interesting classification approach with respect to both speed and accuracy. Here, we present a systematic comparison of five different K-mer based classification methods for the 16S rRNA gene. The methods differ from each other in both data usage and modelling strategies. We have based our study on the commonly known and well-used naïve Bayes classifier from the RDP project, and four other methods were implemented and tested on two different data sets, on full-length sequences as well as fragments of typical read-length. The differences in classification error between the methods were small but stable across both data sets tested. The preprocessed nearest-neighbour (PLSNN) method performed best for full-length 16S rRNA sequences, significantly better than the naïve Bayes RDP method. On fragmented sequences the naïve Bayes Multinomial method performed best, significantly better than all other methods. For both data sets explored, and on both full-length and fragmented sequences, all five methods reached an error plateau. We conclude that no K-mer based method is universally best for classifying both full-length sequences and fragments (reads). All methods approach an error plateau, indicating that improved training data are needed for further gains in classification accuracy. Classification errors occur most frequently for genera with few sequences present. For improving the taxonomy and testing new classification methods, a better, more universal and more robust training data set is crucial.
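To make the sliding-window K-mer idea concrete, here is a minimal sketch that counts overlapping 4-mers and feeds them to a multinomial naïve Bayes classifier, the model family behind the best-performing method on fragments. The toy sequences and genus labels are fabricated for illustration only.

```python
from itertools import product
import numpy as np
from sklearn.naive_bayes import MultinomialNB

K = 4
KMERS = {"".join(p): i for i, p in enumerate(product("ACGT", repeat=K))}

def kmer_counts(seq):
    """Count overlapping K-mers with a sliding window of step 1."""
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - K + 1):
        j = KMERS.get(seq[i:i + K])
        if j is not None:          # skip windows with ambiguous bases
            v[j] += 1
    return v

# Hypothetical toy training set: two "genera" with biased base composition.
rng = np.random.default_rng(0)
def fake_seq(bias, n=300):
    return "".join(rng.choice(list("ACGT"), size=n, p=bias))

X = [kmer_counts(fake_seq([.4, .1, .1, .4])) for _ in range(20)] + \
    [kmer_counts(fake_seq([.1, .4, .4, .1])) for _ in range(20)]
y = ["GenusA"] * 20 + ["GenusB"] * 20

clf = MultinomialNB().fit(np.array(X), y)
print(clf.predict([kmer_counts(fake_seq([.4, .1, .1, .4]))]))
```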
Nakatsui, M; Horimoto, K; Lemaire, F; Ürgüplü, A; Sedoglavic, A; Boulier, F
2011-09-01
Recent remarkable advances in computer performance have enabled us to estimate parameter values by the sheer power of numerical computation, so-called 'brute force', resulting in the high-speed simultaneous estimation of a large number of parameter values. However, these advancements have not been fully utilised to improve the accuracy of parameter estimation. Here the authors review a novel method for parameter estimation using symbolic computation power, 'Bruno force', named after Bruno Buchberger, who introduced Gröbner bases. In the method, objective functions are formulated by combining symbolic computation techniques. First, the authors utilise a symbolic computation technique, differential elimination, which symbolically reduces the system of differential equations in a given model to an equivalent system. Second, since this equivalent system is frequently composed of large equations, it is further simplified by another symbolic computation. The performance of the authors' method for improving parameter accuracy is illustrated by two representative models in biology, a simple cascade model and a negative feedback model, in comparison with previous numerical methods. Finally, the limits and extensions of the authors' method are discussed, in terms of the possible power of 'Bruno force' for the development of a new horizon in parameter estimation.
NASA Astrophysics Data System (ADS)
Teal, Paul D.; Eccles, Craig
2015-04-01
The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior point method using preconditioned conjugate gradient. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization process progresses, improving the accuracy as the optimal value is approached. A variation on the interior point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
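The compression step common to these solvers is easy to illustrate: truncate a factorization of each exponential kernel and project the data onto the retained basis. The sketch below uses a truncated SVD for clarity (the RRQR and LDL variants discussed above would slot into the same role); the grids, tolerance, and random stand-in data are hypothetical.

```python
import numpy as np

# Exponential kernels for a T1-T2 experiment (hypothetical grids).
tau1 = np.linspace(1e-3, 1.0, 120)[:, None]   # inversion-recovery times
tau2 = np.linspace(1e-4, 0.5, 150)[:, None]   # echo times
T1 = np.logspace(-3, 1, 60)[None, :]
T2 = np.logspace(-4, 0.5, 60)[None, :]
K1 = 1 - 2 * np.exp(-tau1 / T1)               # 120 x 60
K2 = np.exp(-tau2 / T2)                       # 150 x 60

def truncate(K, tol=1e-10):
    """Truncated SVD: keep singular values above tol relative to the largest."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    r = int(np.sum(s / s[0] > tol))            # effective rank
    return U[:, :r], r

U1, r1 = truncate(K1)
U2, r2 = truncate(K2)
print("effective ranks:", r1, r2)

# Compress the (120 x 150) data matrix to (r1 x r2) before optimization.
rng = np.random.default_rng(0)
M = rng.random((120, 150))                     # stand-in for measured data
M_c = U1.T @ M @ U2                            # compressed data
print("compressed data shape:", M_c.shape)
```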
Hawes, Martha C; Brooks, William J
2002-01-01
This report describes improved signs and symptoms of previously untreated symptomatic spinal deformity in an adult female diagnosed with moderately severe thoracic scoliosis at the age of .7 years. Current treatment initiated at the age of forty included massage therapy, manual traction, ischemic pressure, and comprehensive manipulative medicine (CMM). A left-right chest circumference inequity was reduced by >10 cm, in correlation with improved appearance of the ribcage deformity and a 40% reduction in magnitude of Cobb angle, which had been stable for 30 years. The changes occurred gradually over an eight-year period, with the most rapid improvement occurring during two periods when CMM was employed.
The magnetofection method: using magnetic force to enhance gene delivery.
Plank, Christian; Schillinger, Ulrike; Scherer, Franz; Bergemann, Christian; Rémy, Jean-Serge; Krötz, Florian; Anton, Martina; Lausier, Jim; Rosenecker, Joseph
2003-05-01
In order to enhance and target gene delivery we have previously established a novel method, termed magnetofection, which uses magnetic force acting on gene vectors that are associated with magnetic particles. Here we review the benefits, the mechanism and the potential of the method with regard to overcoming physical limitations to gene delivery. Magnetic particle chemistry and physics are discussed, followed by a detailed presentation of vector formulation and optimization work. While magnetofection does not necessarily improve the overall performance of any given standard gene transfer method in vitro, its major potential lies in the extraordinarily rapid and efficient transfection at low vector doses and the possibility of remotely controlled vector targeting in vivo.
Bulk and surface event identification in p-type germanium detectors
NASA Astrophysics Data System (ADS)
Yang, L. T.; Li, H. B.; Wong, H. T.; Agartioglu, M.; Chen, J. H.; Jia, L. P.; Jiang, H.; Li, J.; Lin, F. K.; Lin, S. T.; Liu, S. K.; Ma, J. L.; Sevda, B.; Sharma, V.; Singh, L.; Singh, M. K.; Singh, M. K.; Soma, A. K.; Sonay, A.; Yang, S. W.; Wang, L.; Wang, Q.; Yue, Q.; Zhao, W.
2018-04-01
P-type point-contact germanium detectors have been adopted for light WIMP dark matter searches and studies of low-energy neutrino physics. These detectors exhibit anomalous behavior for events located in the surface layer. The previous spectral shape method of identifying these surface events from the bulk signals relies on spectral shape assumptions and the use of external calibration sources. We report an improved method of separating them by taking the ratios among different categories of in situ event samples as calibration sources. Data from the CDEX-1 and TEXONO experiments are re-examined using the ratio method. Results are shown to be consistent with the spectral shape method.
A spot-matching method using cumulative frequency matrix in 2D gel images
Han, Chan-Myeong; Park, Joon-Ho; Chang, Chu-Seok; Ryoo, Myung-Chun
2014-01-01
A new method for spot matching in two-dimensional gel electrophoresis images using a cumulative frequency matrix is proposed. The method improves on the weak points of the previous method called 'spot matching by topological patterns of neighbour spots'. It accumulates the frequencies of neighbour spot pairs produced throughout the entire matching process and determines spot pairs one by one in decreasing order of frequency. Spot matching by frequencies of neighbour spot pairs shows considerably better performance. Moreover, it gives researchers an indication of whether the matching results are trustworthy, which can save a great deal of verification effort. PMID:26019609
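A toy sketch of the accumulate-then-decide idea follows: candidate pairs earn support from neighbourhood-topology agreement over several neighbourhood sizes, and pairs are fixed greedily in decreasing order of accumulated frequency. The matching criteria used here (nearest-centroid candidates, index-overlap support) are simplifications invented for illustration, not the published algorithm.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(0)

# Hypothetical spot centroids: a reference gel and a slightly distorted target gel.
ref = rng.random((30, 2)) * 100.0
tgt = ref + rng.normal(0.0, 0.5, ref.shape)

def neighbours(spots, i, n):
    """Indices of the n nearest neighbour spots of spot i."""
    d = np.linalg.norm(spots - spots[i], axis=1)
    return set(np.argsort(d)[1:n + 1])

# Accumulate support for each candidate (ref, tgt) pair over several
# neighbourhood sizes; agreement of neighbour sets counts as evidence.
freq = Counter()
for n in range(3, 9):
    for i in range(len(ref)):
        j = int(np.argmin(np.linalg.norm(tgt - ref[i], axis=1)))
        freq[(i, j)] += len(neighbours(ref, i, n) & neighbours(tgt, j, n))

# Decide spot pairs one by one in decreasing order of accumulated frequency.
used_i, used_j, matches = set(), set(), []
for (i, j), f in freq.most_common():
    if i not in used_i and j not in used_j:
        matches.append((i, j))
        used_i.add(i)
        used_j.add(j)
print("matched", len(matches), "of", len(ref), "spots")
```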
NASA Astrophysics Data System (ADS)
Mundis, Nathan L.; Mavriplis, Dimitri J.
2017-09-01
The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
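The key observation, that the time-spectral coupling operator is diagonal in frequency space, can be demonstrated in a few lines. The sketch below is a scalar toy problem rather than the authors' coupled spatial-temporal solver: it applies the spectral time-derivative operator via FFTs and solves (vI + D)u = r exactly by a diagonal division in frequency space. N, omega, and v are arbitrary assumptions.

```python
import numpy as np

N = 16                    # number of time instances (even)
omega = 2.0 * np.pi       # fundamental frequency for period T = 1 (hypothetical)

# Wave numbers of the discrete Fourier modes for an even number of instances.
k = np.fft.fftfreq(N, d=1.0 / N)          # 0, 1, ..., N/2-1, -N/2, ..., -1

def apply_time_spectral_derivative(u):
    """D u computed spectrally: exact for periodic data on N instances."""
    return np.real(np.fft.ifft(1j * k * omega * np.fft.fft(u)))

def precondition_solve(r, v):
    """Solve (v*I + D) u = r in frequency space, where D is diagonal."""
    return np.real(np.fft.ifft(np.fft.fft(r) / (v + 1j * k * omega)))

# Verify: the frequency-space solve inverts the decoupled temporal operator.
rng = np.random.default_rng(0)
r = rng.random(N)
u = precondition_solve(r, v=3.0)
residual = 3.0 * u + apply_time_spectral_derivative(u) - r
print("max residual:", np.abs(residual).max())   # ~ machine precision
```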
Error analysis in inverse scatterometry. I. Modeling.
Al-Assaad, Rayan M; Byrne, Dale M
2007-02-01
Scatterometry is an optical technique that has been studied and tested in recent years in semiconductor fabrication metrology for critical dimensions. Previous work presented an iterative linearized method to retrieve surface-relief profile parameters from reflectance measurements upon diffraction. With the iterative linear solution model in this work, rigorous models are developed to represent the random and deterministic or offset errors in scatterometric measurements. The propagation of different types of error from the measurement data to the profile parameter estimates is then presented. The improvement in solution accuracies is then demonstrated with theoretical and experimental data by adjusting for the offset errors. In a companion paper (in process) an improved optimization method is presented to account for unknown offset errors in the measurements based on the offset error model.
Collocated electrodynamic FDTD schemes using overlapping Yee grids and higher-order Hodge duals
NASA Astrophysics Data System (ADS)
Deimert, C.; Potter, M. E.; Okoniewski, M.
2016-12-01
The collocated Lebedev grid has previously been proposed as an alternative to the Yee grid for electromagnetic finite-difference time-domain (FDTD) simulations. While it performs better in anisotropic media, it performs poorly in isotropic media because it is equivalent to four overlapping, uncoupled Yee grids. We propose to couple the four Yee grids and fix the Lebedev method using discrete exterior calculus (DEC) with higher-order Hodge duals. We find that higher-order Hodge duals do improve the performance of the Lebedev grid, but they also improve the Yee grid by a similar amount. The effectiveness of coupling overlapping Yee grids with a higher-order Hodge dual is thus questionable. However, the theoretical foundations developed to derive these methods may be of interest in other problems.
Ochratoxin A in cocoa and chocolate sampled in Canada.
Turcotte, A-M; Scott, P M
2011-06-01
In order to determine the levels of ochratoxin A (OTA) in cocoa and cocoa products available in Canada, a previously published analytical method, with minor modifications to the extraction and immunoaffinity clean-up and inclusion of an evaporation step, was initially used (Method I). To improve the low method recoveries (46-61%), 40% methanol was then included in the aqueous sodium bicarbonate extraction solvent (pH 7.8) (Method II). Clean-up was on an Ochratest™ immunoaffinity column and OTA was determined by liquid chromatography (LC) with fluorescence detection. Recoveries of OTA from spiked cocoa powder (0.5 and 5 ng g(-1)) were 75-84%; while recoveries from chocolate were 93-94%. The optimized method was sensitive (limit of quantification (LOQ) = 0.07-0.08 ng g(-1)), accurate (recovery = 75-94%) and precise (coefficient of variation (CV) < 5%). It is applicable to cocoa and chocolate. Analysis of 32 samples of cocoa powder (16 alkalized and 16 natural) for OTA showed an incidence of 100%, with concentrations ranging from 0.25 to 7.8 ng g(-1); in six samples the OTA level exceeded 2 ng g(-1), the previously considered European Union limit for cocoa. The frequency of detection of OTA in 28 chocolate samples (21 dark or baking chocolate and seven milk chocolate) was also 100% with concentrations ranging from 0.05 to 1.4 ng g(-1); one sample had a level higher than the previously considered European Union limit for chocolate (1 ng g(-1)).
NASA Technical Reports Server (NTRS)
Barranger, John P.
1990-01-01
A novel optical method of measuring 2-D surface strain is proposed. Two linear strains along orthogonal axes and the shear strain between those axes are determined by a variation of Yamaguchi's laser-speckle strain gage technique. It offers the advantages of shorter data acquisition times, less stringent alignment requirements, and reduced decorrelation effects when compared to a previously implemented optical strain rosette technique. The method automatically cancels the translational and rotational components of rigid body motion while simplifying the optical system and improving the speed of response.
Protein detection through different platforms of immuno-loop-mediated isothermal amplification
NASA Astrophysics Data System (ADS)
Pourhassan-Moghaddam, Mohammad; Rahmati-Yamchi, Mohammad; Akbarzadeh, Abolfazl; Daraee, Hadis; Nejati-Koshki, Kazem; Hanifehpour, Younes; Joo, Sang Woo
2013-11-01
Different immunoassay-based methods have been devised to detect protein targets. These methods have challenges that make them inefficient for assaying proteins present at ultra-low amounts. ELISA, iPCR, iRCA, and iNASBA are the common immunoassay-based methods of protein detection, each of which has specific and common technical challenges, making it necessary to introduce a novel method that avoids their problems in the detection of target proteins. Here we propose a new method, termed 'immuno-loop-mediated isothermal amplification' or 'iLAMP'. This new method is free from the problems of the previous methods and has significant advantages over them. In this paper we also offer various configurations in order to improve the applicability of this method in real-world sample analyses. Important potential applications of this method are stated as well.
Classification of ligand molecules in PDB with graph match-based structural superposition.
Shionyu-Mitsuyama, Clara; Hijikata, Atsushi; Tsuji, Toshiyuki; Shirai, Tsuyoshi
2016-12-01
The fast heuristic graph match algorithm for small molecules, COMPLIG, was improved by adding a structural superposition process to verify the atom-atom matching. The modified method was used to classify the small molecule ligands in the Protein Data Bank (PDB) by their three-dimensional structures, and 16,660 types of ligands in the PDB were classified into 7561 clusters. In contrast, a classification by a previous method (without structure superposition) generated 3371 clusters from the same ligand set. The characteristic feature in the current classification system is the increased number of singleton clusters, which contained only one ligand molecule in a cluster. Inspections of the singletons in the current classification system but not in the previous one implied that the major factors for the isolation were differences in chirality, cyclic conformations, separation of substructures, and bond length. Comparisons between current and previous classification systems revealed that the superposition-based classification was effective in clustering functionally related ligands, such as drugs targeted to specific biological processes, owing to the strictness of the atom-atom matching.
Nonlinear PET parametric image reconstruction with MRI information using kernel method
NASA Astrophysics Data System (ADS)
Gong, Kuang; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2017-03-01
Positron Emission Tomography (PET) is a functional imaging modality widely used in oncology, cardiology, and neurology. It is highly sensitive, but suffers from relatively poor spatial resolution, as compared with anatomical imaging modalities, such as magnetic resonance imaging (MRI). With the recent development of combined PET/MR systems, we can improve the PET image quality by incorporating MR information. Previously we have used kernel learning to embed MR information in static PET reconstruction and direct Patlak reconstruction. Here we extend this method to direct reconstruction of nonlinear parameters in a compartment model by using the alternating direction of multiplier method (ADMM) algorithm. Simulation studies show that the proposed method can produce superior parametric images compared with existing methods.
Experiences Using Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1996-01-01
This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
Deformable image registration for tissues with large displacements
Huang, Xishi; Ren, Jing; Green, Mark
2017-01-01
Abstract. Image registration for internal organs and soft tissues is considered extremely challenging due to organ shifts and tissue deformation caused by patients’ movements such as respiration and repositioning. In our previous work, we proposed a fast registration method for deformable tissues with small rotations. Here we extend that method to deformable registration of soft tissues with large displacements. We analyzed the deformation field of the liver by decomposing the deformation into shift, rotation, and pure deformation components and concluded that in many clinical cases the liver deformation contains large rotations and small deformations. This analysis justified the use of linear elastic theory in our image registration method. We also proposed a region-based neuro-fuzzy transformation model to seamlessly stitch together local affine and local rigid models in different regions. We performed experiments on a liver MRI image set and showed the effectiveness of the proposed registration method. We also compared the performance of the proposed method with the previous method on tissues with large rotations and showed that the proposed method outperformed the previous one when dealing with the combination of pure deformation and large rotations. Validation results show that we can achieve a target registration error of 1.87±0.87 mm and an average centerline distance error of 1.28±0.78 mm. The proposed technique has the potential to significantly improve registration capabilities and the quality of intraoperative image guidance. To the best of our knowledge, this is the first time that the complex displacement of the liver is explicitly separated into local pure deformation and rigid motion. PMID:28149924
Daniels, Noah M; Hosur, Raghavendra; Berger, Bonnie; Cowen, Lenore J
2012-05-01
One of the most successful methods to date for recognizing protein sequences that are evolutionarily related has been profile hidden Markov models (HMMs). However, these models do not capture pairwise statistical preferences of residues that are hydrogen bonded in beta sheets. These dependencies have been partially captured in the HMM setting by simulated evolution in the training phase and can be fully captured by Markov random fields (MRFs). However, MRFs can be computationally prohibitive when beta strands are interleaved in complex topologies. We introduce SMURFLite, a method that combines both simplified MRFs and simulated evolution to substantially improve remote homology detection for beta structures. Unlike previous MRF-based methods, SMURFLite is computationally feasible on any beta-structural motif. We test SMURFLite on all propeller and barrel folds in the mainly-beta class of the SCOP hierarchy in stringent cross-validation experiments. We show a mean 26% (median 16%) improvement in area under curve (AUC) for beta-structural motif recognition as compared with HMMER (a well-known HMM method), a mean 33% (median 19%) improvement as compared with RAPTOR (a well-known threading method), and even a mean 18% (median 10%) improvement in AUC over HHpred (a profile-profile HMM method), despite HHpred's use of extensive additional training data. We demonstrate SMURFLite's ability to scale to whole genomes by running a SMURFLite library of 207 beta-structural SCOP superfamilies against the entire genome of Thermotoga maritima, making over 100 new fold predictions. Availability and implementation: A web server that runs SMURFLite is available at http://smurf.cs.tufts.edu/smurflite/
Torsional anharmonicity in the conformational thermodynamics of flexible molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F., III; Clary, David C.
We present an algorithm for calculating the conformational thermodynamics of large, flexible molecules that combines ab initio electronic structure theory calculations with a torsional path integral Monte Carlo (TPIMC) simulation. The new algorithm overcomes the previous limitations of the TPIMC method by including the thermodynamic contributions of non-torsional vibrational modes and by affordably incorporating the ab initio calculation of conformer electronic energies, and it improves the conventional ab initio treatment of conformational thermodynamics by accounting for the anharmonicity of the torsional modes. Using previously published ab initio results and new TPIMC calculations, we apply the algorithm to the conformers of the adrenaline molecule.
Selection of Thermal Worst-Case Orbits via Modified Efficient Global Optimization
NASA Technical Reports Server (NTRS)
Moeller, Timothy M.; Wilhite, Alan W.; Liles, Kaitlin A.
2014-01-01
Efficient Global Optimization (EGO) was used to select orbits with worst-case hot and cold thermal environments for the Stratospheric Aerosol and Gas Experiment (SAGE) III. The SAGE III system thermal model changed substantially since the previous selection of worst-case orbits (which did not use the EGO method), so the selections were revised to ensure the worst cases are being captured. The EGO method consists of first conducting an initial set of parametric runs, generated with a space-filling Design of Experiments (DoE) method, then fitting a surrogate model to the data and searching for points of maximum Expected Improvement (EI) to conduct additional runs. The general EGO method was modified by using a multi-start optimizer to identify multiple new test points at each iteration. This modification facilitates parallel computing and decreases the burden of user interaction when the optimizer code is not integrated with the model. Thermal worst-case orbits for SAGE III were successfully identified and shown by direct comparison to be more severe than those identified in the previous selection. The EGO method is a useful tool for this application and can result in computational savings if the initial DoE is selected appropriately.
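The Expected Improvement criterion at the heart of EGO has a closed form under a Gaussian surrogate. Below is a minimal sketch for a minimization problem; the surrogate mean mu, standard deviation sigma, and incumbent best f_best are assumed to come from a fitted Kriging/Gaussian-process model, and a multi-start optimizer would then maximize this quantity over candidate orbits.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form EI for minimization: E[max(f_best - F, 0)] with
    F ~ N(mu, sigma^2). Points with sigma == 0 get zero EI."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    with np.errstate(divide="ignore", invalid="ignore"):
        z = (f_best - mu) / sigma
        ei = (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
    return np.where(sigma > 0, ei, 0.0)
```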
A gas-kinetic BGK scheme for the compressible Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Xu, Kun
2000-01-01
This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.
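For readers unfamiliar with the BGK model itself, the collision operator simply relaxes the gas distribution function toward a local Maxwellian on a particle-velocity grid while conserving mass, momentum, and energy. The toy relaxation step below illustrates only that core idea; it is not the gas-kinetic Navier-Stokes scheme of the paper, and all numbers are placeholders.

```python
import numpy as np

v = np.linspace(-8.0, 8.0, 161)              # discrete particle velocities
dv = v[1] - v[0]

def moments(f):
    """Density, bulk velocity, and temperature of the distribution f."""
    rho = f.sum() * dv
    u = (f * v).sum() * dv / rho
    T = (f * (v - u) ** 2).sum() * dv / rho  # units with kB = m = 1
    return rho, u, T

def maxwellian(rho, u, T):
    return rho / np.sqrt(2 * np.pi * T) * np.exp(-(v - u) ** 2 / (2 * T))

# start from a bimodal (non-equilibrium) state and relax: df/dt = (g - f)/tau
f = 0.5 * (maxwellian(1.0, -1.5, 0.5) + maxwellian(1.0, 1.5, 0.5))
tau, dt = 0.5, 0.05
for _ in range(200):
    rho, u, T = moments(f)                   # conserved by construction
    f += dt / tau * (maxwellian(rho, u, T) - f)
```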
Adaptive enhanced sampling by force-biasing using neural networks
NASA Astrophysics Data System (ADS)
Guo, Ashley Z.; Sevgen, Emre; Sidky, Hythem; Whitmer, Jonathan K.; Hubbell, Jeffrey A.; de Pablo, Juan J.
2018-04-01
A machine learning assisted method is presented for molecular simulation of systems with rugged free energy landscapes. The method is general and can be combined with other advanced sampling techniques. In the particular implementation proposed here, it is illustrated in the context of an adaptive biasing force approach where, rather than relying on discrete force estimates, one can resort to a self-regularizing artificial neural network to generate continuous, estimated generalized forces. By doing so, the proposed approach addresses several shortcomings common to adaptive biasing force and other algorithms. Specifically, the neural network enables (1) smooth estimates of generalized forces in sparsely sampled regions, (2) force estimates in previously unexplored regions, and (3) continuous force estimates with which to bias the simulation, as opposed to biases generated at specific points of a discrete grid. The usefulness of the method is illustrated with three different examples, chosen to highlight the wide range of applicability of the underlying concepts. In all three cases, the new method is found to enhance considerably the underlying traditional adaptive biasing force approach. The method is also found to provide improvements over previous implementations of neural network assisted algorithms.
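A stripped-down version of the idea, with an off-the-shelf regressor standing in for the paper's self-regularizing network: noisy binned force estimates along a collective variable are fitted by a small neural network, which then supplies smooth, continuous biasing forces, including in sparsely sampled regions. All data here are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
# synthetic noisy mean-force estimates on a sparse grid of a collective variable
cv_samples = rng.uniform(-np.pi, np.pi, size=200)[:, None]
noisy_force = -2.0 * np.sin(2.0 * cv_samples.ravel()) + rng.normal(0, 0.5, 200)

net = MLPRegressor(hidden_layer_sizes=(32, 32), activation="tanh",
                   alpha=1e-3, max_iter=5000, random_state=0)
net.fit(cv_samples, noisy_force)             # smooth fit to discrete estimates

def bias_force(cv_value):
    """Continuous biasing force queried at an arbitrary CV value,
    including regions the simulation has not yet visited."""
    return net.predict(np.atleast_2d(cv_value))[0]
```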
Modeling and quantification of repolarization feature dependency on heart rate.
Minchole, A; Zacur, E; Pueyo, E; Laguna, P
2014-01-01
This article is part of the Focus Theme of Methods of Information in Medicine on "Biosignal Interpretation: Advanced Methods for Studying Cardiovascular and Respiratory Systems". This work aims at providing an efficient method to estimate the parameters of a nonlinear model with memory, previously proposed to characterize rate adaptation of repolarization indices. The physiological restrictions on the model parameters have been included in the cost function in such a way that unconstrained optimization techniques, such as descent methods, can be used for parameter estimation. The proposed method has been evaluated on electrocardiogram (ECG) recordings of healthy subjects performing a tilt test, where rate adaptation of QT and Tpeak-to-Tend (Tpe) intervals has been characterized. The proposed strategy results in an efficient methodology to characterize rate adaptation of repolarization features, improving the convergence time with respect to previous strategies. Moreover, the Tpe interval adapts faster to changes in heart rate than the QT interval. In this work an efficient estimation of the parameters of a model aimed at characterizing rate adaptation of repolarization features has been proposed. The Tpe interval has been shown to be rate related and to have a shorter memory lag than the QT interval.
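One common way to fold physiological bounds into the cost function so that an unconstrained descent method applies is to reparameterize each bounded parameter through a smooth squashing function. The following is a generic sketch of that device, not the authors' exact formulation; the model, data, and bounds are placeholders.

```python
import numpy as np
from scipy.optimize import minimize

lo = np.array([0.0, 0.0])        # hypothetical lower bounds on the parameters
hi = np.array([1.0, 300.0])      # hypothetical upper bounds

def squash(theta):
    """Map unconstrained theta to parameters guaranteed to lie in [lo, hi]."""
    return lo + (hi - lo) / (1.0 + np.exp(-theta))

def cost(theta, t, qt_obs, model):
    """Unconstrained least-squares cost; bounds are enforced by squash()."""
    p = squash(theta)
    return np.sum((qt_obs - model(t, p)) ** 2)

# example: fit a placeholder exponential adaptation model to synthetic data
model = lambda t, p: p[0] * (1.0 - np.exp(-t / p[1]))
t = np.linspace(0, 120, 60)
qt_obs = model(t, [0.4, 40.0]) + np.random.default_rng(2).normal(0, 0.01, 60)
res = minimize(cost, x0=np.zeros(2), args=(t, qt_obs, model), method="BFGS")
fitted_params = squash(res.x)
```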
NASA Technical Reports Server (NTRS)
Halford, G. R.
1983-01-01
The presentation focuses primarily on progress made at NASA Lewis Research Center in understanding the phenomenological processes of high-temperature fatigue of metals for the purpose of calculating the lives of turbine engine hot section components. Improved understanding resulted in the development of accurate and physically correct life prediction methods such as Strain-Range Partitioning for calculating creep-fatigue interactions and the Double Linear Damage Rule for predicting potentially severe interactions between high- and low-cycle fatigue. Examples of other life prediction methods are also discussed. Previously announced in STAR as A83-12159.
Research on facial expression simulation based on depth image
NASA Astrophysics Data System (ADS)
Ding, Sha-sha; Duan, Jin; Zhao, Yi-wu; Xiao, Bo; Wang, Hao
2017-11-01
Nowadays, facial expression simulation is widely used in film and television special effects, human-computer interaction, and many other fields. Facial expressions are captured with a Kinect camera. An AAM algorithm based on statistical information is employed to detect and track faces, and a 2D regression algorithm is applied to align the feature points; the facial feature points are detected automatically, while the feature points of the 3D cartoon model are marked manually. The aligned feature points are mapped using keyframe techniques. To improve the animation effect, non-feature points are interpolated based on empirical models, and the mapping and interpolation are completed under the constraint of Bézier curves. In this way the feature points on the cartoon face model can be driven as the facial expression varies, achieving real-time cartoon facial expression simulation. The experimental results show that the method proposed in this paper can accurately simulate facial expressions. Finally, our method is compared with the previous method; actual data show that our method greatly improves implementation efficiency.
Improved methods for the measurement and analysis of stellar magnetic fields
NASA Technical Reports Server (NTRS)
Saar, Steven H.
1988-01-01
The paper presents several improved methods for the measurement of magnetic fields on cool stars which take into account simple radiative transfer effects and the exact Zeeman patterns. Using these methods, high-resolution, low-noise data can be fitted with theoretical line profiles to determine the mean magnetic field strength in stellar active regions and a model-dependent fraction of the stellar surface (filling factor) covered by these regions. Random errors in the derived field strength and filling factor are parameterized in terms of signal-to-noise ratio, wavelength, spectral resolution, stellar rotation rate, and the magnetic parameters themselves. Weak line blends, if left uncorrected, can have significant systematic effects on the derived magnetic parameters, and thus several methods are developed to compensate partially for them. The magnetic parameters determined by previous methods likely have systematic errors because of such line blends and because of line saturation effects. Other sources of systematic error are explored in detail. These sources of error currently make it difficult to determine the magnetic parameters of individual stars to better than about ±20 percent.
Dynamic PET Image reconstruction for parametric imaging using the HYPR kernel method
NASA Astrophysics Data System (ADS)
Spencer, Benjamin; Qi, Jinyi; Badawi, Ramsey D.; Wang, Guobao
2017-03-01
Dynamic PET image reconstruction is a challenging problem because of the ill-conditioned nature of PET and the low counting statistics resulting from the short time frames used in dynamic imaging. The kernel method for image reconstruction has been developed to improve image reconstruction of low-count PET data by incorporating prior information derived from high-count composite data. In contrast to most of the existing regularization-based methods, the kernel method embeds image prior information in the forward projection model and does not require an explicit regularization term in the reconstruction formula. Inspired by the existing highly constrained back-projection (HYPR) algorithm for dynamic PET image denoising, we propose in this work a new type of kernel that is simpler to implement and further improves kernel-based dynamic PET image reconstruction. Our evaluation study, using a physical phantom scan with synthetic FDG tracer kinetics, demonstrates that the new HYPR kernel-based reconstruction can achieve a better region-of-interest (ROI) bias versus standard deviation trade-off for dynamic PET parametric imaging than the post-reconstruction HYPR denoising method and the previously used nonlocal-means kernel.
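The HYPR-LR operation that inspired the new kernel can be written in a few lines: a low-count frame is smoothed, ratioed against the smoothed high-count composite, and the composite is modulated by that ratio. This sketch shows the denoising operation only, not the reconstruction kernel built from it; the Gaussian filter width is an arbitrary choice.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hypr_lr(frame, composite, sigma=3.0, eps=1e-8):
    """HYPR-LR denoising: I_hypr = C * (F ⊗ frame) / (F ⊗ C),
    where F is a low-pass (here Gaussian) filter and C the composite."""
    num = gaussian_filter(frame, sigma)
    den = gaussian_filter(composite, sigma) + eps   # avoid division by zero
    return composite * (num / den)
```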
Development of improved-durability plasma sprayed ceramic coatings for gas turbine engines
NASA Technical Reports Server (NTRS)
Sumner, I. E.; Ruckle, D. L.
1980-01-01
As part of a NASA program to reduce fuel consumption of current commercial aircraft engines, methods were investigated for improving the durability of plasma sprayed ceramic coatings for use on vane platforms in the JT9D turbofan engine. Increased durability concepts under evaluation include use of improved strain tolerant microstructures and control of the substrate temperature during coating application. Initial burner rig tests conducted at temperatures of 1010 C (1850 F) indicate that improvements in cyclic life greater than 20:1 over previous ceramic coating systems were achieved. Three plasma sprayed coating systems applied to first stage vane platforms in the high pressure turbine were subjected to a 100-cycle JT9D engine endurance test with only minor damage occurring to the coatings.
Mezouari, S; Liu, W Yun; Pace, G; Hartman, T G
2015-01-01
The objective of this study was to develop an improved analytical method for the determination of 3-chloro-1,2-propanediol (3-MCPD) and 1,3-dichloropropanol (1,3-DCP) in paper-type food packaging. The established method includes aqueous extraction, matrix spiking of a deuterated surrogate internal standard (3-MCPD-d₅), clean-up using Extrelut solid-phase extraction, derivatisation using a silylation reagent, and GC-MS analysis of the chloropropanols as their corresponding trimethylsilyl ethers. The new method is applicable to food-grade packaging samples using European Commission standard aqueous extraction and aqueous food simulant migration tests. In this improved method, the derivatisation procedure was optimised; the cost and time of the analysis were reduced by using 10 times less sample, solvent and reagent than in previously described methods. Overall the validation data demonstrate that the method is precise and reliable. The limit of detection (LOD) of the aqueous extract was 0.010 mg kg⁻¹ (w/w) for both 3-MCPD and 1,3-DCP. Analytical precision had a relative standard deviation (RSD) of 3.36% for 3-MCPD and 7.65% for 1,3-DCP. The new method was satisfactorily applied to the analysis of over 100 commercial paperboard packaging samples. The data are being used to guide the product development of a next generation of wet-strength resins with reduced chloropropanol content, and also for risk assessments to calculate the virtual safe dose (VSD).
NASA Astrophysics Data System (ADS)
Sun, Qianlai; Wang, Yin; Sun, Zhiyi
2018-05-01
For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. In the SVD-based method, the image to be inspected was projected onto its first left and right singular vectors respectively; if there were defects in the image, there would be sharp changes in the projections, so the defects could be determined and located according to sharp changes in the projections of each inspected image. This method was simple and practical, but the SVD had to be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it did not have a significant advantage in terms of time consumption over image segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image acquired under the same environment as the images to be inspected is taken as a reference image. The singular vectors of each image to be inspected are replaced by the singular vectors of the reference image, and the SVD is performed only once, off-line, for the reference image before defect detection, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
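A small sketch of the improved scheme as described: the SVD is computed once, offline, for a defect-free reference image, and each inspected image is then projected onto the reference's first left and right singular vectors; abrupt changes in those projections flag and roughly locate a defect. The outlier threshold here is an arbitrary placeholder.

```python
import numpy as np

def prepare_reference(ref_img):
    """Offline step: SVD of the defect-free reference image (done once)."""
    U, s, Vt = np.linalg.svd(ref_img.astype(float), full_matrices=False)
    return U[:, 0], Vt[0, :]          # first left / right singular vectors

def detect_defect(img, u1, v1, thresh=4.0):
    """Online step: project the inspected image onto the reference
    singular vectors and look for sharp changes in the projections."""
    img = img.astype(float)
    row_proj = img @ v1               # one value per image row
    col_proj = u1 @ img               # one value per image column
    def outliers(p):
        d = np.abs(np.diff(p))
        return np.where(d > d.mean() + thresh * d.std())[0]
    return outliers(row_proj), outliers(col_proj)   # candidate rows / columns
```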
Improved colour matching technique for fused nighttime imagery with daytime colours
NASA Astrophysics Data System (ADS)
Hogervorst, Maarten A.; Toet, Alexander
2016-10-01
Previously, we presented a method for applying daytime colours to fused nighttime (e.g., intensified and LWIR) imagery (Toet and Hogervorst, Opt. Eng. 51(1), 2012). Our colour mapping not only imparts a natural daylight appearance to multiband nighttime images but also enhances the contrast and visibility of otherwise obscured details. As a result, this colourizing method leads to increased ease of interpretation, better discrimination and identification of materials, faster reaction times and, ultimately, improved situational awareness (Toet e.a., Opt. Eng. 53(4), 2014). A crucial step in this colouring process is the choice of a suitable colour mapping scheme. When daytime colour images and multiband sensor images of the same scene are available, the colour mapping can be derived from matching image samples (i.e., by relating colour values to sensor signal intensities). When no exact matching reference images are available, the colour transformation can be derived from the first-order statistical properties of the reference image and the multiband sensor image (Toet, Info. Fus. 4(3), 2003). In the current study we investigated new colour fusion schemes that combine the advantages of both methods, using the correspondence between multiband sensor values and daytime colours (first method) in a smooth transformation (second method). We designed and evaluated three new fusion schemes that focus on: i) a closer match with the daytime luminances, ii) improved saliency of hot targets, and iii) improved discriminability of materials.
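The statistics-based half of such colour mappings is essentially first-order moment matching per channel, in the spirit of the cited Toet (Info. Fus. 4(3), 2003) approach. A minimal sketch, assuming the false-colour multiband image and the daytime reference are given as float arrays in the same (ideally decorrelated) colour space:

```python
import numpy as np

def match_first_order_stats(src, ref):
    """Shift and scale each channel of `src` so its mean and standard
    deviation match those of the daytime reference image `ref`.
    Both arrays are (H, W, C) floats in the same colour space."""
    out = np.empty_like(src)
    for c in range(src.shape[2]):
        s_mu, s_sd = src[..., c].mean(), src[..., c].std() + 1e-8
        r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
        out[..., c] = (src[..., c] - s_mu) * (r_sd / s_sd) + r_mu
    return out
```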
NASA Astrophysics Data System (ADS)
Gibergans-Báguena, J.; Llasat, M. C.
2007-12-01
The objective of this paper is to present the improvement of quantitative forecasting of daily rainfall in Catalonia (NE Spain) from an analogues technique, taking into account synoptic and local data. This method is based on an analogues sorting technique: meteorological situations similar to the current one, in terms of 700 and 1000 hPa geopotential fields at 00 UTC, complemented with the inclusion of some thermodynamic parameters extracted from an historical data file. Thermodynamic analysis acts as a highly discriminating feature for situations in which the synoptic situation fails to explain either atmospheric phenomena or rainfall distribution. This is the case in heavy rainfall situations, where the existence of instability and high water vapor content is essential. With the objective of including these vertical thermodynamic features, information provided by the Palma de Mallorca radiosounding (Spain) has been used. Previously, a selection of the most discriminating thermodynamic parameters for the daily rainfall was made, and then the analogues technique applied to them. Finally, three analog forecasting methods were applied for the quantitative daily rainfall forecasting in Catalonia. The first one is based on analogies from geopotential fields to synoptic scale; the second one is exclusively based on the search of similarity from local thermodynamic information and the third method combines the other two methods. The results show that this last method provides a substantial improvement of quantitative rainfall estimation.
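The analog step itself reduces to a nearest-neighbour search in the space of predictor fields. A schematic sketch under simplifying assumptions, with flattened 700/1000 hPa geopotential fields (optionally augmented with standardized thermodynamic parameters) for an archive of past days with known rainfall:

```python
import numpy as np

def analog_forecast(today_predictors, archive_predictors, archive_rain, k=10):
    """Forecast rainfall as the average over the k most similar past days
    (Euclidean distance on standardized predictor vectors)."""
    mu = archive_predictors.mean(axis=0)
    sd = archive_predictors.std(axis=0) + 1e-8
    A = (archive_predictors - mu) / sd        # standardized archive
    x = (today_predictors - mu) / sd          # standardized current day
    dist = np.linalg.norm(A - x, axis=1)
    nearest = np.argsort(dist)[:k]            # indices of the k analogs
    return archive_rain[nearest].mean()
```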
NASA Astrophysics Data System (ADS)
Brewick, Patrick T.; Smyth, Andrew W.
2016-12-01
The authors have previously shown that many traditional approaches to operational modal analysis (OMA) struggle to properly identify the modal damping ratios for bridges under traffic loading due to the interference caused by the driving frequencies of the traffic loads. This paper presents a novel methodology for modal parameter estimation in OMA that overcomes the problems presented by driving frequencies and significantly improves the damping estimates. This methodology is based on finding the power spectral density (PSD) of a given modal coordinate, and then dividing the modal PSD into separate regions, left- and right-side spectra. The modal coordinates were found using a blind source separation (BSS) algorithm and a curve-fitting technique was developed that uses optimization to find the modal parameters that best fit each side spectra of the PSD. Specifically, a pattern-search optimization method was combined with a clustering analysis algorithm and together they were employed in a series of stages in order to improve the estimates of the modal damping ratios. This method was used to estimate the damping ratios from a simulated bridge model subjected to moving traffic loads. The results of this method were compared to other established OMA methods, such as Frequency Domain Decomposition (FDD) and BSS methods, and they were found to be more accurate and more reliable, even for modes that had their PSDs distorted or altered by driving frequencies.
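The curve-fitting stage can be pictured as fitting a single-degree-of-freedom PSD shape to one side of the modal coordinate's spectrum, avoiding bins distorted by driving frequencies. A simplified sketch with ordinary least squares in place of the paper's pattern-search-plus-clustering scheme; the frequency band and initial guesses are placeholders.

```python
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import welch

def sdof_psd(f, fn, zeta, scale):
    """PSD shape of a SDOF modal coordinate under broadband excitation."""
    r = (fn**2 - f**2) ** 2 + (2.0 * zeta * fn * f) ** 2
    return scale / r

def fit_side_spectrum(y, fs, band, x0=(1.0, 0.01, 1.0)):
    """Fit (fn, zeta, scale) to one side (a frequency sub-band) of the
    PSD of a modal coordinate y; `band` selects the clean half-spectrum."""
    f, pxx = welch(y, fs=fs, nperseg=4096)
    sel = (f >= band[0]) & (f <= band[1])
    resid = lambda p: np.log(sdof_psd(f[sel], *p)) - np.log(pxx[sel])
    return least_squares(resid, x0=x0,
                         bounds=([1e-3, 1e-4, 1e-12],
                                 [np.inf, 0.5, np.inf])).x
```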
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shen, Yufeng; Tolic, Nikola; Purvine, Samuel O.
2011-11-07
The peptidome (i.e. processed and degraded forms of proteins) of e.g. blood can potentially provide insights into disease processes, as well as a source of candidate biomarkers that are unobtainable using conventional bottom-up proteomics approaches. MS dissociation methods, including CID, HCD, and ETD, can each contribute distinct identifications using conventional peptide identification methods (Shen et al. J. Proteome Res. 2011), but such samples still pose significant analysis and informatics challenges. In this work, we explored a simple approach for better utilization of the high accuracy fragment ion mass measurements provided e.g. by FT MS/MS and demonstrate significant improvements relative to conventional descriptive and probabilistic scoring methods. For example, at the same FDR level we identified 20-40% more peptides than SEQUEST and Mascot scoring methods by using high accuracy fragment ion information (e.g., <10 ppm mass errors) from CID, HCD, and ETD spectra. Species identified covered >90% of all those identified from the SEQUEST, Mascot, and MS-GF scoring methods. Additionally, we found that merging the different fragment spectra provided >60% more species using the UStags method than achieved previously, and enabled >1000 peptidome components to be identified from a single human blood plasma sample with a 0.6% peptide-level FDR, providing an improved basis for investigation of potentially disease-related peptidome components.
Ground-state energy of HeH⁺
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhou Binglu; Zhu Jiongming; Yan Zongchao
2006-06-15
The nonrelativistic ground-state energy of ⁴HeH⁺ is calculated using a variational method in Hylleraas coordinates. Convergence to a few parts in 10¹⁰ is achieved, which improves on the best previous result of Pavanello et al. [J. Chem. Phys. 123, 104306 (2005)]. Expectation values of the interparticle distances are evaluated. Similar results for ³HeH⁺ are also presented.
An EVS Clicker Based Hybrid Assessment to Engage Students with Marking Criteria
ERIC Educational Resources Information Center
Bennett, Steve; Barker, Trevor; Lilley, Mariana
2014-01-01
Over four iterations of a large (>180 students) introductory emedia design unit within a first-year computer science course, we have seen year-on-year improvement. We believe this is due to the use of EVS clickers for feed-forward assessment: that is to say, a method of getting the whole class to evaluate previous cohorts' submissions in public…
Exchange and simple transfusion in sickle-cell diseases in pregnancy
Buckle, A. E. R.; Price, T. M. L.; Whitmore, D. N.
1969-01-01
The management of sickle-cell crisis in a pregnant patient by exchange transfusion is described, the procedure leading to immediate and dramatic improvement in the condition. Partial exchange transfusion in three other patients with sickle-cell anaemia, judged by episodes of crisis in previous pregnancies to be at particular risk, is also reported and the value of this method of management discussed. PMID:5359314
NASA Astrophysics Data System (ADS)
Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.
2018-01-01
A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample; the velocity was measured to within ±2 m/s and the scanning time per point was on the order of 1.0 s, both improvements over the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation within certain large grains on the sample. A new behavior was observed with the b-scan analysis technique, in which the amplitude of the surface wave decayed dramatically for certain crystallographic orientations. The new technique was also compared with previous results and found to be much more reliable and to offer higher contrast than previously possible with impulse excitation.
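Two standard ingredients underlie this kind of b-scan post-processing: envelope detection via the Hilbert transform to time the Rayleigh arrival at each defocus, and the classical defocus relation linking the arrival-time slope to the Rayleigh velocity. A hedged sketch with synthetic inputs; the closing formula is the usual time-resolved V(z) relation for a coupling fluid of sound speed v_w, not necessarily the exact algorithm of the paper.

```python
import numpy as np
from scipy.signal import hilbert

def arrival_time(trace, fs):
    """Time of the Rayleigh arrival: peak of the Hilbert envelope."""
    env = np.abs(hilbert(trace))
    return np.argmax(env) / fs

def rayleigh_velocity(defocus_z, traces, fs, v_water=1480.0):
    """Estimate the RSW velocity from the slope of arrival time vs defocus:
    tau(z) = (2 z / v_w) * (1 - cos(theta_R)),  sin(theta_R) = v_w / v_R."""
    t = np.array([arrival_time(tr, fs) for tr in traces])
    slope = np.polyfit(defocus_z, t, 1)[0]          # d(tau)/dz
    cos_th = 1.0 - slope * v_water / 2.0
    return v_water / np.sqrt(1.0 - cos_th**2)       # v_w / sin(theta_R)
```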
NASA Astrophysics Data System (ADS)
Fellers, R. S.; Braly, L. B.; Saykally, R. J.; Leforestier, C.
1999-04-01
The SWPS method is improved by the addition of H.E.G. contractions for generating a more compact basis. An error in the definition of the internal fragment axis system used in our previous calculation is described and corrected. Fully coupled 6D (rigid monomers) VRT states are computed for several new water dimer potential surfaces and compared with experiment and our earlier SWPS results. This work sets the stage for refinement of such potential surfaces via regression analysis of VRT spectroscopic data.
Experiences Using Lightweight Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1997-01-01
This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
Kumar, Dushyant; Hariharan, Hari; Faizy, Tobias D; Borchert, Patrick; Siemonsen, Susanne; Fiehler, Jens; Reddy, Ravinder; Sedlacik, Jan
2018-05-12
We present a computationally feasible and iterative multi-voxel spatially regularized algorithm for myelin water fraction (MWF) reconstruction. This method utilizes 3D spatial correlations present in anatomical/pathological tissues and the underlying B1+ (flip angle) inhomogeneity to enhance the noise robustness of the reconstruction while intrinsically accounting for stimulated echo contributions using T2-distribution data alone. Simulated data and in vivo data acquired using 3D non-selective multi-echo spin echo (3DNS-MESE) were used to compare the reconstruction quality of the proposed approach against those of the popular algorithm (the method by Prasloski et al.) and our previously proposed 2D multi-slice spatial regularization approach. We also investigated whether inter-sequence correlations and agreements improved as a result of the proposed approach. MWF quantifications from two sequences, 3DNS-MESE vs 3DNS-gradient and spin echo (3DNS-GRASE), were compared for both reconstruction approaches to assess correlations and agreements between inter-sequence MWF-value pairs. MWF values from whole-brain data of six volunteers and two multiple sclerosis patients are reported as well. In comparison with competing approaches such as Prasloski's method or our previously proposed 2D multi-slice spatial regularization method, the proposed method showed better agreement with simulated truths in regression and Bland-Altman analyses. For 3DNS-MESE data, MWF maps reconstructed using the proposed algorithm provided better depictions of white matter structures in subcortical areas adjoining gray matter, which agreed more closely with corresponding contrasts on T2-weighted images than MWF maps reconstructed with the method by Prasloski et al. We also achieved a higher level of correlation and agreement between inter-sequence (3DNS-MESE vs 3DNS-GRASE) MWF-value pairs. The proposed algorithm provides more noise-robust fits to T2-decay data and improves MWF quantification in white matter structures, especially in the sub-cortical white matter and major white matter tract regions. Copyright © 2018 Elsevier Inc. All rights reserved.
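At the single-voxel level, before any spatial regularization or stimulated-echo correction, MWF estimation from a multi-echo T2 decay is a regularized non-negative least-squares fit to a dictionary of exponentials, with the MWF taken as the short-T2 fraction. A minimal per-voxel sketch under those simplifying assumptions; the 40 ms myelin-water cutoff is a common convention, not a value from this paper.

```python
import numpy as np
from scipy.optimize import nnls

def mwf_single_voxel(echo_times, signal, lam=0.1, t2_cutoff=0.040):
    """Fit a T2 distribution with Tikhonov-regularized NNLS and return
    the myelin water fraction (amplitude below t2_cutoff, in seconds)."""
    echo_times = np.asarray(echo_times, float)
    signal = np.asarray(signal, float)
    t2_grid = np.logspace(np.log10(0.010), np.log10(2.0), 60)
    A = np.exp(-echo_times[:, None] / t2_grid[None, :])   # decay dictionary
    # augment the system to impose lam * ||x||^2 regularization
    A_aug = np.vstack([A, np.sqrt(lam) * np.eye(len(t2_grid))])
    b_aug = np.concatenate([signal, np.zeros(len(t2_grid))])
    x, _ = nnls(A_aug, b_aug)
    total = x.sum()
    return x[t2_grid <= t2_cutoff].sum() / total if total > 0 else 0.0
```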
An improved cellular automaton method to model multispecies biofilms.
Tang, Youneng; Valocchi, Albert J
2013-10-01
Biomass-spreading rules used in previous cellular automaton methods to simulate multispecies biofilm introduced extensive mixing between different biomass species or resulted in spatially discontinuous biomass concentration and distribution; this caused results based on the cellular automaton methods to deviate from experimental results and those from the more computationally intensive continuous method. To overcome the problems, we propose new biomass-spreading rules in this work: Excess biomass spreads by pushing a line of grid cells that are on the shortest path from the source grid cell to the destination grid cell, and the fractions of different biomass species in the grid cells on the path change due to the spreading. To evaluate the new rules, three two-dimensional simulation examples are used to compare the biomass distribution computed using the continuous method and three cellular automaton methods, one based on the new rules and the other two based on rules presented in two previous studies. The relationship between the biomass species is syntrophic in one example and competitive in the other two examples. Simulation results generated using the cellular automaton method based on the new rules agree much better with the continuous method than do results using the other two cellular automaton methods. The new biomass-spreading rules are no more complex to implement than the existing rules. Copyright © 2013 Elsevier Ltd. All rights reserved.
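The new spreading rule can be paraphrased in a few lines of grid code: when a cell exceeds its carrying capacity, a breadth-first search finds the nearest grid cell with spare capacity, and biomass is pushed along that shortest path. A schematic single-species sketch (capacity and geometry are placeholders); in the multispecies method every cell on the path hands biomass to the next one, which is what updates the per-cell species fractions along the line, while the net effect for a single species is a source-to-destination transfer.

```python
from collections import deque
import numpy as np

def spread_excess(grid, src, capacity=1.0):
    """Move excess biomass at `src` to the nearest cell with spare capacity,
    found by BFS over the 4-neighbourhood (i.e. along a shortest path)."""
    if grid[src] <= capacity:
        return None                       # nothing to spread
    nrow, ncol = grid.shape
    seen, queue = {src}, deque([src])
    while queue:
        cell = queue.popleft()
        if cell != src and grid[cell] < capacity:
            excess = grid[src] - capacity
            moved = min(excess, capacity - grid[cell])
            grid[src] -= moved            # source drops back toward capacity
            grid[cell] += moved           # destination absorbs the push
            return cell
        r, c = cell
        for nb in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nb[0] < nrow and 0 <= nb[1] < ncol and nb not in seen:
                seen.add(nb)
                queue.append(nb)
    return None                           # grid is completely full
```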
NASA Astrophysics Data System (ADS)
Leonard, Kevin Raymond
This dissertation concentrates on the development of two new tomographic techniques that enable wide-area inspection of pipe-like structures. By envisioning a pipe as a plate wrapped around upon itself, the previous Lamb Wave Tomography (LWT) techniques are adapted to cylindrical structures. Helical Ultrasound Tomography (HUT) uses Lamb-like guided wave modes transmitted and received by two circumferential arrays in a single crosshole geometry. Meridional Ultrasound Tomography (MUT) creates the same crosshole geometry with a linear array of transducers along the axis of the cylinder. However, even though these new scanning geometries are similar to plates, additional complexities arise because they are cylindrical structures. First, because it is a single crosshole geometry, the wave vector coverage is poorer than in the full LWT system. Second, since waves can travel in both directions around the circumference of the pipe, modes can constructively and destructively interfere with each other. These complexities necessitate improved signal processing algorithms to produce accurate and unambiguous tomographic reconstructions. Consequently, this work also describes a new algorithm for improving the extraction of multi-mode arrivals from guided wave signals. Previous work has relied solely on the first arriving mode for the time-of-flight measurements. In order to improve the LWT, HUT and MUT system reconstructions, improved signal processing methods are needed to extract information about the arrival times of the later arriving modes. Because each mode has different through-thickness displacement values, the modes are sensitive to different types of flaws, and the information gained from multi-mode analysis improves understanding of the structural integrity of the inspected material. Both tomographic frequency compounding and mode sorting algorithms are introduced. It is also shown that each of these methods improves the reconstructed images both qualitatively and quantitatively.
Tsai, Po-Yen; Lee, I-Chin; Hsu, Hsin-Yun; Huang, Hong-Yuan; Fan, Shih-Kang; Liu, Cheng-Hsien
2016-01-01
Here, we describe a technique to manipulate a low number of beads to achieve high washing efficiency with zero bead loss in the washing process of a digital microfluidic (DMF) immunoassay. Previously, two magnetic bead extraction methods had been reported for the DMF platform: (1) a single-side electrowetting method and (2) a double-side electrowetting method. The first approach could provide high washing efficiency, but it required a large number of beads. The second approach could reduce the required number of beads, but it was inefficient where multiple washes were required. More importantly, bead loss during the washing process was unavoidable in both methods. Here, an improved double-side electrowetting method is proposed for bead extraction by utilizing a series of unequal electrodes. It is shown that, with a proper electrode size ratio, only one wash step is required to achieve a 98% washing rate without any bead loss when fewer than 100 beads are present in a droplet. This allows using only about 25 magnetic beads in the DMF immunoassay, effectively increasing the number of captured analytes on each bead. In our human soluble tumor necrosis factor receptor I (sTNF-RI) model immunoassay, the experimental results show that, compared with our previous results obtained without the proposed bead extraction technique, the immunoassay with a low bead number significantly enhances the fluorescence signal to provide a better limit of detection (3.14 pg/ml) with smaller reagent volumes (200 nl) and shorter analysis time (<1 h). This improved bead extraction technique can be used not only in DMF immunoassays but also in any other bead-based DMF system for different applications. PMID:26858807
Bitton, Rachel R.; Webb, Taylor D.; Pauly, Kim Butts; Ghanouni, Pejman
2015-01-01
Purpose To investigate thermal dose volume (TDV) and non-perfused volume (NPV) of magnetic resonance-guided focused ultrasound (MRgFUS) treatments in patients with soft tissue tumors, and describe a method for MR thermal dosimetry using a baseline reference. Materials and Methods Agreement between TDV and immediate post treatment NPV was evaluated from MRgFUS treatments of five patients with biopsy-proven desmoid tumors. Thermometry data (gradient echo, 3T) were analyzed over the entire course of the treatments to discern temperature errors in the standard approach. The technique searches previously acquired baseline images for a match using 2D normalized cross-correlation and a weighted mean of phase difference images. Thermal dose maps and TDVs were recalculated using the matched baseline and compared to NPV. Results TDV and NPV showed between 47%–91% disagreement, using the standard immediate baseline method for calculating TDV. Long-term thermometry showed a nonlinear local temperature accrual, where peak additional temperature varied between 4–13°C (mean = 7.8°C) across patients. The prior baseline method could be implemented by finding a previously acquired matching baseline 61% ± 8% (mean ± SD) of the time. We found 7%–42% of the disagreement between TDV and NPV was due to errors in thermometry caused by heat accrual. For all patients, the prior baseline method increased the estimated treatment volume and reduced the discrepancies between TDV and NPV (P = 0.023). Conclusion This study presents a mismatch between in-treatment and post treatment efficacy measures. The prior baseline approach accounts for local heating and improves the accuracy of thermal dose-predicted volume. PMID:26119129
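Two pieces of that pipeline are easy to sketch: picking the best-matching previously acquired baseline by normalized correlation of magnitude images, and converting a phase difference to a temperature change with the standard PRF-shift relation. Constants and array shapes are illustrative, and the paper's weighted multi-baseline combination is omitted.

```python
import numpy as np

GAMMA = 42.58e6          # proton gyromagnetic ratio, Hz/T
ALPHA = -0.01e-6         # PRF thermal coefficient, -0.01 ppm/degC

def best_baseline(current_mag, baseline_mags):
    """Pick the stored baseline whose magnitude image best matches the
    current one (normalized correlation over the whole frame)."""
    cur = (current_mag - current_mag.mean()) / (current_mag.std() + 1e-8)
    scores = []
    for b in baseline_mags:
        bb = (b - b.mean()) / (b.std() + 1e-8)
        scores.append((cur * bb).mean())
    return int(np.argmax(scores))

def prf_temperature_change(phase, baseline_phase, b0=3.0, te=0.012):
    """PRF-shift thermometry: dT = dphi / (2*pi*gamma*alpha*B0*TE)."""
    dphi = np.angle(np.exp(1j * (phase - baseline_phase)))  # wrap to [-pi, pi]
    return dphi / (2 * np.pi * GAMMA * ALPHA * b0 * te)
```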
DOE Office of Scientific and Technical Information (OSTI.GOV)
Herrero, M.; Martinez-Gallegos, S.; Labajos, F.M.
2011-11-15
Conventional and microwave heating routes have been used to prepare PET-LDH (polyethylene terephthalate-layered double hydroxide) composites with 1-10 wt% LDH by in situ polymerization. To enhance the compatibility between PET and the LDH, terephthalate or dodecyl sulphate had been previously intercalated in the LDH. PXRD and TEM were used to detect the degree of dispersion of the filler and the type of the polymeric composites obtained, and FTIR spectroscopy confirmed that the polymerization process had taken place. The thermal stability of these composites, as studied by thermogravimetric analysis, was enhanced when the microwave heating method was applied. Dodecyl sulphate was more effective than terephthalate to exfoliate the samples, which only occurred for the terephthalate ones under microwave irradiation. Highlights: LDH-PET compatibility is enhanced by pre-intercalation of organic anions; dodecyl sulphate performs much better than terephthalate; microwave heating improves both the thermal stability of the composites and the dispersion of the inorganic phase.
California Drought Recovery Assessment Using GRACE Satellite Gravimetry Information
NASA Astrophysics Data System (ADS)
Love, C. A.; Aghakouchak, A.; Madadgar, S.; Tourian, M. J.
2015-12-01
California has been experiencing its most extreme drought in recent history due to a combination of record high temperatures and exceptionally low precipitation. An estimate for when the drought can be expected to end is needed for risk mitigation and water management. A crucial component of drought recovery assessments is the estimation of terrestrial water storage (TWS) deficit. Previous studies on drought recovery have been limited to surface water hydrology (precipitation and/or runoff) for estimating changes in TWS, neglecting the contribution of groundwater deficits to the recovery time of the system. Groundwater requires more time to recover than surface water storage; therefore, the inclusion of groundwater storage in drought recovery assessments is essential for understanding the long-term vulnerability of a region. Here we assess the probability, for varying timescales, of California's current TWS deficit returning to its long-term historical mean. Our method consists of deriving the region's fluctuations in TWS from changes in the gravity field observed by NASA's Gravity Recovery and Climate Experiment (GRACE) satellites. We estimate the probability that meteorological inputs, precipitation minus evaporation and runoff, over different timespans will balance the current GRACE-derived TWS deficit (e.g. in 3, 6, 12 months). This method improves upon previous techniques as the GRACE-derived water deficit comprises all hydrologic sources, including surface water, groundwater, and snow cover. With this empirical probability assessment we expect to improve current estimates of California's drought recovery time, thereby improving risk mitigation.
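The probabilistic core of that assessment can be sketched simply: for each candidate horizon, collect the historical k-month accumulations of net flux (precipitation minus evaporation minus runoff) and take the empirical probability that such an accumulation would erase the current GRACE-derived storage deficit. Synthetic inputs are assumed and no GRACE-specific processing is shown.

```python
import numpy as np

def recovery_probability(net_flux, deficit, horizon_months):
    """Empirical probability that a `horizon_months`-long accumulation of
    historical net flux (P - E - R anomalies, one value per month) meets
    or exceeds the current TWS deficit (same units, e.g. km^3)."""
    k = horizon_months
    sums = np.convolve(net_flux, np.ones(k), mode="valid")  # all k-month sums
    return float((sums >= deficit).mean())
```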
Ma, Xin; Guo, Jing; Sun, Xiao
2016-01-01
DNA-binding proteins are fundamentally important in cellular processes. Several computational methods have been developed in recent years to improve the prediction of DNA-binding proteins. However, insufficient work has been done on predicting DNA-binding proteins from protein sequence information alone. In this paper, a novel predictor, DNABP (DNA-binding proteins), was designed to predict DNA-binding proteins using the random forest (RF) classifier with a hybrid feature. The hybrid feature contains two types of novel sequence features, which reflect information about the conservation of physicochemical properties of the amino acids, and the binding propensity of DNA-binding residues and non-binding propensities of non-binding residues. The comparisons with each feature demonstrated that these two novel features contributed most to the improvement in predictive ability. Furthermore, to improve the prediction performance of the DNABP model, feature selection using the minimum redundancy maximum relevance (mRMR) method combined with incremental feature selection (IFS) was carried out during the model construction. The results showed that the DNABP model could achieve 86.90% accuracy, 83.76% sensitivity, 90.03% specificity and a Matthews correlation coefficient of 0.727. High prediction accuracy and performance comparisons with previous research suggest that DNABP could be a useful approach to identify DNA-binding proteins from sequence information. The DNABP web server system is freely available at http://www.cbi.seu.edu.cn/DNABP/.
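The model-construction loop described, a random forest plus feature ranking with incremental feature selection, looks roughly like the following scikit-learn sketch. Mutual information stands in here for the mRMR criterion, and the feature matrix and labels are placeholders, not the DNABP training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score

def incremental_feature_selection(X, y, step=5):
    """Rank features, then grow the feature set in increments, keeping
    the subset with the best cross-validated accuracy."""
    order = np.argsort(mutual_info_classif(X, y))[::-1]   # best first
    best_acc, best_subset = 0.0, order[:step]
    for n in range(step, X.shape[1] + 1, step):
        subset = order[:n]
        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        acc = cross_val_score(clf, X[:, subset], y, cv=5).mean()
        if acc > best_acc:
            best_acc, best_subset = acc, subset
    return best_subset, best_acc
```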
Blunt, L A; Bills, P J; Jiang, X-Q; Chakrabarty, G
2008-04-01
Total joint replacement is one of the most common elective surgical procedures performed worldwide, with an estimated 1.5×10⁶ operations performed annually. Currently, joint replacements are expected to function for 10-15 years; however, with an increase in life expectancy, and a greater call for knee replacement due to increased activity levels, there is a requirement to improve their function to offer longer-term improved quality of life for patients. Wear analysis of total joint replacements has long been an important means of determining failure mechanisms and improving the longevity of these devices. The effectiveness of the coordinate-measuring machine (CMM) technique for assessing volumetric material loss during simulated life testing of a replacement knee joint has been proved previously by the present authors. The purpose of the current work is to present an improvement to this method for situations where no pre-wear data are available. To validate the method, simulator tests were run and gravimetric measurements taken throughout the test, such that the components measured had a known wear value. The implications of the results are then discussed in terms of assessment of joint functionality and development of standardized CMM-based product standards. The method was then expanded to allow assessment of clinically retrieved bearings so as to ascertain a measure of true clinical wear.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morshed, Nader; Lawrence Berkeley National Laboratory, Berkeley, CA 94720; Echols, Nathaniel, E-mail: nechols@lbl.gov
2015-05-01
A method to automatically identify possible elemental ions in X-ray crystal structures has been extended to use support vector machine (SVM) classifiers trained on selected structures in the PDB, with significantly improved sensitivity over manually encoded heuristics. In the process of macromolecular model building, crystallographers must examine electron density for isolated atoms and differentiate sites containing structured solvent molecules from those containing elemental ions. This task requires specific knowledge of metal-binding chemistry and scattering properties and is prone to error. A method has previously been described to identify ions based on manually chosen criteria for a number of elements. Here, the use of support vector machines (SVMs) to automatically classify isolated atoms as either solvent or one of various ions is described. Two data sets of protein crystal structures, one containing manually curated structures deposited with anomalous diffraction data and another with automatically filtered, high-resolution structures, were constructed. On the manually curated data set, an SVM classifier was able to distinguish calcium from manganese, zinc, iron and nickel, as well as all five of these ions from water molecules, with a high degree of accuracy. Additionally, SVMs trained on the automatically curated set of high-resolution structures were able to successfully classify most common elemental ions in an independent validation test set. This method is readily extensible to other elemental ions and can also be used in conjunction with previous methods based on a priori expectations of the chemical environment and X-ray scattering.
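In outline, the classifier side of this method reduces to a standard multi-class SVM over per-site features (e.g. density peak heights, coordination distances, geometry descriptors). A schematic scikit-learn sketch; the feature names and training arrays are placeholders, not the curated PDB sets used in the paper.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per candidate site, e.g. (2mFo-DFc peak height, anomalous peak
#    height, mean coordination distance, number of coordinating O/N atoms, ...)
# y: labels such as "HOH", "CA", "MN", "ZN", "FE", "NI"
def train_ion_classifier(X, y):
    """Scale features and fit an RBF-kernel multi-class SVM."""
    clf = make_pipeline(StandardScaler(),
                        SVC(kernel="rbf", C=10.0, probability=True))
    clf.fit(X, y)
    return clf

# usage: label = clf.predict(site_features.reshape(1, -1))[0]
```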
A dichoptic custom-made action video game as a treatment for adult amblyopia.
Vedamurthy, Indu; Nahum, Mor; Huang, Samuel J; Zheng, Frank; Bayliss, Jessica; Bavelier, Daphne; Levi, Dennis M
2015-09-01
Previous studies have employed different experimental approaches to enhance visual function in adults with amblyopia, including perceptual learning, videogame play, and dichoptic training. Here, we evaluated the efficacy of a novel dichoptic action videogame combining all three approaches. This experimental intervention was compared to a conventional, yet unstudied, method of supervised occlusion while watching movies. Adults with unilateral amblyopia were assigned either to play the dichoptic action game (n=23; 'game' group) or to watch movies monocularly while the fellow eye was patched (n=15; 'movies' group) for a total of 40 hours. Following training, visual acuity (VA) improved on average by ≈0.14 logMAR (≈28%) in the game group, with improvements noted in both anisometropic and strabismic patients. This improvement is similar to that obtained following perceptual learning, video game play or dichoptic training. Surprisingly, patients with anisometropic amblyopia in the movies group showed similar improvement, revealing a greater impact of supervised occlusion in adults than typically thought. Stereoacuity, reading speed, and contrast sensitivity improved more for game group participants than for movies group participants. Most improvements were largely retained following a 2-month no-contact period. This novel video game, which combines action gaming, perceptual learning and dichoptic presentation, results in VA improvements equivalent to those previously documented with each of these techniques alone. Our game intervention led to greater improvement than control training in a variety of visual functions, thus suggesting that this approach has promise for the treatment of adult amblyopia. Copyright © 2015 Elsevier Ltd. All rights reserved.
A dichoptic custom-made action video game as a treatment for adult amblyopia
Vedamurthy, Indu; Nahum, Mor; Huang, Samuel J.; Zheng, Frank; Bayliss, Jessica; Bavelier, Daphne; Levi, Dennis M.
2015-01-01
Previous studies have employed different experimental approaches to enhance visual function in adults with amblyopia including perceptual learning, videogame play, and dichoptic training. Here, we evaluated the efficacy of a novel dichoptic action videogame combining all three approaches. This experimental intervention was compared to a conventional, yet unstudied method of supervised occlusion while watching movies. Adults with unilateral amblyopia were assigned to either playing the dichoptic action game (n = 23; ‘game’ group), or to watching movies monocularly while the fellow eye was patched (n = 15; ‘movies’ group) for a total of 40 h. Following training, visual acuity (VA) improved on average by ≈0.14 logMAR (≈27%) in the game group, with improvements noted in both anisometropic and strabismic patients. This improvement is similar to that described after perceptual learning, video game play or dichoptic training. Surprisingly, patients with anisometropic amblyopia in the movies group showed similar improvement, revealing a greater impact of supervised occlusion in adults than typically thought. Stereoacuity, reading speed, and contrast sensitivity improved more for game group participants compared with movies group participants. Most improvements were largely retained following a 2-month no-contact period. This novel video game, which combines action gaming, perceptual learning and dichoptic presentation, results in VA improvements equivalent to those previously documented with each of these techniques alone. Interestingly, however, our game intervention led to greater improvement than control training in a variety of visual functions, thus suggesting that this approach has promise for the treatment of adult amblyopia. PMID:25917239
ADVANCEMENTS IN TIME-SPECTRA ANALYSIS METHODS FOR LEAD SLOWING-DOWN SPECTROSCOPY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Leon E.; Anderson, Kevin K.; Gesh, Christopher J.
2010-08-11
Direct measurement of Pu in spent nuclear fuel remains a key challenge for safeguarding nuclear fuel cycles of today and tomorrow. Lead slowing-down spectroscopy (LSDS) is an active nondestructive assay method that has the potential to provide independent, direct measurement of Pu and U isotopic mass with an uncertainty lower than the approximately 10 percent typical of today’s confirmatory assay methods. Pacific Northwest National Laboratory’s (PNNL) previous work to assess the viability of LSDS for the assay of pressurized water reactor (PWR) assemblies indicated that the method could provide direct assay of Pu-239 and U-235 (and possibly Pu-240 and Pu-241) with uncertainties less than a few percent, assuming suitably efficient instrumentation, an intense pulsed neutron source, and improvements in the time-spectra analysis methods used to extract isotopic information from a complex LSDS signal. This previous simulation-based evaluation used relatively simple PWR fuel assembly definitions (e.g. constant burnup across the assembly) and a constant initial enrichment and cooling time. The time-spectra analysis method was founded on a preliminary analytical model of self-shielding intended to correct for assay-signal nonlinearities introduced by attenuation of the interrogating neutron flux within the assembly.
Improved spectrophotometric analysis of fullerenes C60 and C70 in high-solubility organic solvents.
Törpe, Alexander; Belton, Daniel J
2015-01-01
Fullerenes are among a number of recently discovered carbon allotropes that exhibit unique and versatile properties. The analysis of these materials is of great importance and interest. We present previously unreported spectroscopic data for C60 and C70 fullerenes in high-solubility solvents, including error bounds, so as to allow reliable colorimetric analysis of these materials. The Beer-Lambert-Bouguer law is found to be valid at all wavelengths. The measured data were highly reproducible, and yielded high-precision molar absorbance coefficients for C60 and C70 in o-xylene and o-dichlorobenzene, which both exhibit a high solubility for these fullerenes, and offer the prospect of improved extraction efficiency. A photometric method for a C60/C70 mixture analysis was validated with standard mixtures, and subsequently improved for real samples by correcting for light scattering, using a power-law fit. The method was successfully applied to the analysis of C60/C70 mixtures extracted from fullerene soot.
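The two-component photometric analysis relies on the additivity of absorbances under the Beer-Lambert-Bouguer law: measuring at two wavelengths gives a 2×2 linear system in the C60 and C70 concentrations, and a power-law background fitted in a feature-free region approximates the scattering correction described. The sketch below illustrates only this generic scheme; all numerical inputs are placeholders, not the published coefficients.

```python
import numpy as np

def mixture_concentrations(A, eps, path_cm=1.0):
    """Solve A = (eps @ c) * l for the concentrations c (mol/L).
    A:   scatter-corrected absorbances at two wavelengths;
    eps: 2x2 molar absorptivity matrix [[e60_l1, e70_l1],
                                        [e60_l2, e70_l2]] in L/mol/cm."""
    return np.linalg.solve(np.asarray(eps, float) * path_cm,
                           np.asarray(A, float))

def scatter_baseline(wl, absorbance, fit_region):
    """Fit A_scatter = a * wl**n (n < 0) in a region free of fullerene
    bands, for subtraction prior to the mixture analysis."""
    sel = (wl >= fit_region[0]) & (wl <= fit_region[1])
    n, log_a = np.polyfit(np.log(wl[sel]), np.log(absorbance[sel]), 1)
    return np.exp(log_a) * wl ** n      # baseline over the full wl axis
```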
Image-optimized Coronal Magnetic Field Models
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, Shaela I.; Uritsky, Vadim; Davila, Joseph M., E-mail: shaela.i.jones-mecholsky@nasa.gov, E-mail: shaela.i.jonesmecholsky@nasa.gov
We have reported previously on a new method we are developing for using image-based information to improve global coronal magnetic field models. In that work, we presented early tests of the method, which proved its capability to improve global models based on flawed synoptic magnetograms, given excellent constraints on the field in the model volume. In this follow-up paper, we present the results of similar tests given field constraints of a nature that could realistically be obtained from quality white-light coronagraph images of the lower corona. We pay particular attention to difficulties associated with the line-of-sight projection of features outside of the assumed coronagraph image plane and the effect on the outcome of the optimization of errors in the localization of constraints. We find that substantial improvement in the model field can be achieved with these types of constraints, even when magnetic features in the images are located outside of the image plane.
Amini, Reza; Sabourin, Catherine; De Koninck, Joseph
2011-12-01
Scientific study of dreams requires the most objective methods to reliably analyze dream content. In this context, artificial intelligence should prove useful for an automatic and non-subjective scoring technique. Past research has utilized word search and emotional affiliation methods to model and automatically match human judges' scoring of dream reports' negative emotional tone. The current study added word associations to improve the model's accuracy. Word associations were established using words' frequency of co-occurrence with their defining words as found in a dictionary and an encyclopedia. It was hypothesized that this addition would facilitate the machine learning model and improve its predictability beyond those of previous models. With a sample of 458 dreams, this model demonstrated an improvement in accuracy from 59% to 63% (kappa=.485) on the negative emotional tone scale, and for the first time reached an accuracy of 77% (kappa=.520) on the positive scale. Copyright © 2011 Elsevier Inc. All rights reserved.
An improved method for detecting circulating microRNAs with S-Poly(T) Plus real-time PCR
Niu, Yanqin; Zhang, Limin; Qiu, Huiling; Wu, Yike; Wang, Zhiwei; Zai, Yujia; Liu, Lin; Qu, Junle; Kang, Kang; Gou, Deming
2015-01-01
We herein describe a simple, sensitive and specific method for analysis of circulating microRNAs (miRNA), termed S-Poly(T) Plus real-time PCR assay. This new method is based on our previously developed S-Poly(T) method, in which a unique S-Poly(T) primer is used during reverse-transcription to increase sensitivity and specificity. Further increased sensitivity and simplicity of S-Poly(T) Plus, in comparison with the S-Poly(T) method, were achieved by a single-step, multiple-stage reaction, where RNAs were polyadenylated and reverse-transcribed at the same time. The sensitivity of circulating miRNA detection was further improved by a modified method of total RNA isolation from serum/plasma, S/P miRsol, in which glycogen was used to increase the RNA yield. We validated our methods by quantifying miRNA expression profiles in the sera of the patients with pulmonary arterial hypertension associated with congenital heart disease. In conclusion, we developed a simple, sensitive, and specific method for detecting circulating miRNAs that allows the measurement of 266 miRNAs from 100 μl of serum or plasma. This method presents a promising tool for basic miRNA research and clinical diagnosis of human diseases based on miRNA biomarkers. PMID:26459910
Mansouri, Majdi; Nounou, Mohamed N; Nounou, Hazem N
2017-09-01
In our previous work, we demonstrated the effectiveness of the linear multiscale principal component analysis (PCA)-based moving window (MW)-generalized likelihood ratio test (GLRT) technique over the classical PCA and multiscale principal component analysis (MSPCA)-based GLRT methods. The developed fault detection algorithm provided optimal properties by maximizing the detection probability for a particular false alarm rate (FAR) across different window sizes. However, most real systems are nonlinear, and the linear PCA method cannot tackle nonlinearity to a great extent. Thus, in this paper, we first apply a nonlinear PCA to obtain an accurate principal component of a set of data and handle a wide range of nonlinearities using the kernel principal component analysis (KPCA) model, which is among the most popular nonlinear statistical methods. Second, we extend the MW-GLRT technique to one that applies exponential weights to the residuals in the moving window (instead of equal weighting), since this can further improve fault detection performance by reducing the FAR through an exponentially weighted moving average (EWMA). The developed detection method, called EWMA-GLRT, provides improved properties, such as smaller missed detection rates, smaller FARs, and a smaller average run length. The idea behind the developed EWMA-GLRT is to compute a new GLRT statistic that integrates current and previous data information in a decreasing exponential fashion, giving more weight to the more recent data. This provides a more accurate estimation of the GLRT statistic and a stronger memory that enables better decision making with respect to fault detection. Therefore, in this paper, a KPCA-based EWMA-GLRT method is developed and applied to improve fault detection in biological phenomena modeled by S-systems and to enhance monitoring of the process mean. The idea is to combine the advantages brought forward by the proposed EWMA-GLRT fault detection chart with the KPCA model. It is used to enhance fault detection of the Cad System in E. coli model through monitoring some of the key variables involved in this model, such as enzymes, transport proteins, regulatory proteins, lysine, and cadaverine. The results demonstrate the effectiveness of the proposed KPCA-based EWMA-GLRT method over the Q, GLRT, EWMA, Shewhart, and moving window-GLRT methods. The detection performance is assessed and evaluated in terms of FAR, missed detection rates, and average run length (ARL1) values.
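As a rough illustration of the EWMA-GLRT idea, the sketch below applies exponentially decaying weights to a residual stream and flags a fault when the normalized squared EWMA exceeds a threshold. The forgetting factor, threshold, and Gaussian-residual assumption are ours, not the paper's, and the KPCA residual-generation step is omitted.

```python
import numpy as np

def ewma_glrt(residuals, lam=0.2, sigma=1.0):
    """GLRT-style statistic on exponentially weighted residuals.

    A mean shift in N(0, sigma^2) residuals is flagged when the squared
    EWMA, normalized by its steady-state variance, exceeds a threshold.
    """
    z = 0.0
    stats = []
    var_z = sigma**2 * lam / (2.0 - lam)   # steady-state EWMA variance
    for r in residuals:
        z = lam * r + (1.0 - lam) * z      # more weight on recent data
        stats.append(z**2 / var_z)
    return np.array(stats)

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 50)])  # fault at t=200
alarm = ewma_glrt(x) > 10.0
print("first alarm at sample", int(np.argmax(alarm)))
```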
Jo, Bum Seak; Myong, Jun Pyo; Rhee, Chin Kook; Yoon, Hyoung Kyu; Koo, Jung Wan; Kim, Hyoung Ryoul
2018-01-15
The present study aimed to update the prediction equations for spirometry and their lower limits of normal (LLN) by using the lambda, mu, sigma (LMS) method and to compare the outcomes with the values of previous spirometric reference equations. Spirometric data of 10,249 healthy non-smokers (8,776 females) were extracted from the fourth and fifth versions of the Korea National Health and Nutrition Examination Survey (KNHANES IV, 2007-2009; V, 2010-2012). Reference equations were derived using the LMS method, which allows modeling skewness (lambda [L]), mean (mu [M]), and coefficient of variation (sigma [S]). The outcome equations were compared with previous reference values. Prediction equations were presented in the following form: predicted value = exp(a + b × ln(height) + c × ln(age) + M-spline). The new predicted values for spirometry and their LLN derived using the LMS method were shown to more accurately reflect transitions in pulmonary function in young adults than previous prediction equations derived using conventional regression analysis in 2013. There were partial discrepancies between the new reference values and the reference values from the Global Lung Function Initiative in 2012. The results should be interpreted with caution for young adults and elderly males, particularly in terms of the LLN for forced expiratory volume in one second/forced vital capacity in elderly males. Serial spirometry follow-up, together with correlations with other clinical findings, should be emphasized in evaluating the pulmonary function of individuals. Future studies are needed to improve the accuracy of reference data and to develop continuous reference values for spirometry across all ages. © 2018 The Korean Academy of Medical Sciences.
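A minimal sketch of how LMS outputs turn into a predicted value and its LLN: the predicted value follows the published functional form above, and the LLN is the 5th percentile M(1 + L·S·z)^(1/L) with z = -1.645, the standard LMS percentile formula. All coefficient values below are placeholders (not the KNHANES-derived ones), and the age-dependent M-spline is collapsed to a constant.

```python
import math

def lms_predicted(height_cm, age_yr, a, b, c, mspline=0.0):
    """Predicted value = exp(a + b*ln(height) + c*ln(age) + M-spline)."""
    return math.exp(a + b * math.log(height_cm) + c * math.log(age_yr) + mspline)

def lms_lln(M, L, S, z=-1.645):
    """Lower limit of normal (5th percentile) from the LMS parameters."""
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Placeholder coefficients for illustration only.
M = lms_predicted(170.0, 40.0, a=-10.4, b=2.3, c=-0.05)
print(f"predicted FVC ~ {M:.2f} L, LLN ~ {lms_lln(M, L=1.2, S=0.12):.2f} L")
```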
Improved Modeling of Side-Chain–Base Interactions and Plasticity in Protein–DNA Interface Design
Thyme, Summer B.; Baker, David; Bradley, Philip
2012-01-01
Combinatorial sequence optimization for protein design requires libraries of discrete side-chain conformations. The discreteness of these libraries is problematic, particularly for long, polar side chains, since favorable interactions can be missed. Previously, an approach to loop remodeling where protein backbone movement is directed by side-chain rotamers predicted to form interactions previously observed in native complexes (termed “motifs”) was described. Here, we show how such motif libraries can be incorporated into combinatorial sequence optimization protocols and improve native complex recapitulation. Guided by the motif rotamer searches, we made improvements to the underlying energy function, increasing recapitulation of native interactions. To further test the methods, we carried out a comprehensive experimental scan of amino acid preferences in the I-AniI protein–DNA interface and found that many positions tolerated multiple amino acids. This sequence plasticity is not observed in the computational results because of the fixed-backbone approximation of the model. We improved modeling of this diversity by introducing DNA flexibility and reducing the convergence of the simulated annealing algorithm that drives the design process. In addition to serving as a benchmark, this extensive experimental data set provides insight into the types of interactions essential to maintain the function of this potential gene therapy reagent. PMID:22426128
A comparative intelligibility study of single-microphone noise reduction algorithms.
Hu, Yi; Loizou, Philipos C
2007-09-01
The evaluation of intelligibility of noise reduction algorithms is reported. IEEE sentences and consonants were corrupted by four types of noise including babble, car, street and train at two signal-to-noise ratio levels (0 and 5 dB), and then processed by eight speech enhancement methods encompassing four classes of algorithms: spectral subtractive, subspace, statistical model based and Wiener-type algorithms. The enhanced speech was presented to normal-hearing listeners for identification. With the exception of a single noise condition, no algorithm produced significant improvements in speech intelligibility. Information transmission analysis of the consonant confusion matrices indicated that no algorithm significantly improved the place feature score, which is critically important for speech recognition. The algorithms that were found in previous studies to perform best in terms of overall quality were not the same algorithms that performed best in terms of speech intelligibility. The subspace algorithm, for instance, was previously found to perform the worst in terms of overall quality, but performed well in the present study in terms of preserving speech intelligibility. Overall, the analysis of consonant confusion matrices suggests that in order for noise reduction algorithms to improve speech intelligibility, they need to improve the place and manner feature scores.
Zhang, Jiayong; Zhang, Hongwu; Ye, Hongfei; Zheng, Yonggang
2016-09-07
A free-end adaptive nudged elastic band (FEA-NEB) method is presented for finding transition states on minimum energy paths, where the energy barrier is very narrow compared to the whole path. The previously proposed free-end nudged elastic band method may suffer from convergence problems because of the kinks arising on the elastic band if the initial elastic band is far from the minimum energy path and weak springs are adopted. We analyze the origin of the formation of kinks and present an improved free-end algorithm to avoid the convergence problem. Moreover, by coupling the improved free-end algorithm and an adaptive strategy, we develop a FEA-NEB method to accurately locate the transition state with the elastic band cut off repeatedly and the density of images near the transition state increased. Several representative numerical examples, including the dislocation nucleation in a penta-twinned nanowire, the twin boundary migration under a shear stress, and the cross-slip of screw dislocation in face-centered cubic metals, are investigated by using the FEA-NEB method. Numerical results demonstrate both the stability and efficiency of the proposed method.
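For readers unfamiliar with the band construction, the sketch below shows a bare-bones nudged elastic band step: each interior image feels the perpendicular component of the true force plus a parallel spring force. This is a generic NEB on a toy 2D double-well potential; the free-end and adaptive refinements that define FEA-NEB are not reproduced here.

```python
import numpy as np

def neb_forces(images, grad, k=1.0):
    """One nudged-elastic-band force evaluation for interior images.

    Perpendicular component of the true force plus the parallel spring
    force; endpoints are held fixed (no free-end treatment here).
    """
    forces = np.zeros_like(images)
    for i in range(1, len(images) - 1):
        tau = images[i + 1] - images[i - 1]           # simple tangent estimate
        tau /= np.linalg.norm(tau)
        f_true = -grad(images[i])
        f_perp = f_true - np.dot(f_true, tau) * tau   # relaxes band onto the MEP
        f_spring = k * (np.linalg.norm(images[i + 1] - images[i])
                        - np.linalg.norm(images[i] - images[i - 1])) * tau
        forces[i] = f_perp + f_spring                 # spring keeps images spread
    return forces

# Toy example: band across the double well V = (x^2 - 1)^2 + y^2.
grad = lambda p: np.array([4 * p[0] * (p[0]**2 - 1), 2 * p[1]])
band = np.linspace([-1.0, 0.3], [1.0, 0.3], 9)
for _ in range(500):
    band += 0.01 * neb_forces(band, grad)
print("saddle estimate:", band[np.argmax([(p[0]**2 - 1)**2 + p[1]**2 for p in band])])
```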
An improved AE detection method of rail defect based on multi-level ANC with VSS-LMS
NASA Astrophysics Data System (ADS)
Zhang, Xin; Cui, Yiming; Wang, Yan; Sun, Mingjian; Hu, Hengshan
2018-01-01
In order to ensure the safety and reliability of the railway system, the Acoustic Emission (AE) method is employed to investigate rail defect detection. However, little attention has been paid to defect detection at high speed, especially for noise interference suppression. Based on AE technology, this paper presents an improved rail defect detection method using multi-level ANC with VSS-LMS. Multi-level noise cancellation based on SANC and ANC is utilized to eliminate complex noises at high speed, and a tongue-shaped curve with an index adjustment factor is proposed to enhance the performance of the variable step-size algorithm. Defect signals and reference signals are acquired by the rail-wheel test rig. The features of noise signals and defect signals are analyzed for effective detection. The effectiveness of the proposed method is demonstrated by comparison with the previous study, and different filter lengths are investigated to obtain a better noise suppression performance. Meanwhile, the detection ability of the proposed method is verified at the top speed of the test rig. The results clearly illustrate that the proposed method is effective in detecting rail defects at high speed, especially for noise interference suppression.
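A hedged sketch of the VSS-LMS building block: an adaptive noise canceller whose step size grows with the error magnitude for fast tracking and shrinks as the error falls for low misadjustment. The sigmoid-style step-size rule below is a generic stand-in; the paper's tongue-shaped curve with an index adjustment factor, and the multi-level SANC/ANC cascade, are not reproduced.

```python
import numpy as np

def vss_lms_anc(primary, reference, n_taps=32, a=0.9, b=2.0, mu_max=0.05):
    """Adaptive noise cancellation with a variable step-size LMS filter.

    primary:   sensor channel (defect signal + correlated noise)
    reference: noise-only channel
    Returns the error signal, i.e. the de-noised AE signal estimate.
    """
    w = np.zeros(n_taps)
    out = np.zeros(len(primary))
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]             # reference tap vector
        e = primary[n] - w @ x                        # error = signal estimate
        mu = mu_max * a * (1.0 - np.exp(-b * e * e))  # generic variable step size
        w += 2.0 * mu * e * x / (x @ x + 1e-8)        # normalized LMS update
        out[n] = e
    return out
```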
Deeley, MA; Chen, A; Datteri, R; Noble, J; Cmelak, A; Donnelly, EF; Malcolm, A; Moretti, L; Jaboin, J; Niermann, K; Yang, Eddy S; Yu, David S; Dawant, BM
2013-01-01
Image segmentation has become a vital and often rate-limiting step in modern radiotherapy treatment planning. In recent years the pace and scope of algorithm development, and even introduction into the clinic, have far exceeded evaluative studies. In this work we build upon our previous evaluation of a registration-driven segmentation algorithm in the context of 8 expert raters and 20 patients who underwent radiotherapy for large space-occupying tumors in the brain. In this work we tested four hypotheses concerning the impact of manual segmentation editing in a randomized single-blinded study. We tested these hypotheses on the normal structures of the brainstem, optic chiasm, eyes and optic nerves using the Dice similarity coefficient, volume, and signed Euclidean distance error to evaluate the impact of editing on inter-rater variance and accuracy. Accuracy analyses relied on two simulated ground truth estimation methods: STAPLE and a novel implementation of probability maps. The experts were presented with automatic, their own, and their peers’ segmentations from our previous study to edit. We found that, independent of source, editing reduced inter-rater variance while maintaining or improving accuracy, and improved efficiency with at least a 60% reduction in contouring time. In areas where raters performed poorly contouring from scratch, editing of the automatic segmentations reduced the prevalence of total anatomical miss from approximately 16% to 8% of the total slices contained within the ground truth estimations. These findings suggest that contour editing could be useful for consensus building such as in developing delineation standards, and that both automated methods and even perhaps less sophisticated atlases could improve efficiency, inter-rater variance, and accuracy. PMID:23685866
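The Dice similarity coefficient used in the variance and accuracy analyses is straightforward to compute; a minimal sketch with synthetic binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((64, 64), bool); auto[20:40, 20:40] = True   # automatic contour
edit = np.zeros((64, 64), bool); edit[22:40, 20:42] = True   # after rater editing
print(f"Dice = {dice(auto, edit):.3f}")
```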
Increasing patient safety and efficiency in transfusion therapy using formal process definitions.
Henneman, Elizabeth A; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Andrzejewski, Chester; Merrigan, Karen; Cobleigh, Rachel; Frederick, Kimberly; Katz-Bassett, Ethan; Henneman, Philip L
2007-01-01
The administration of blood products is a common, resource-intensive, and potentially problem-prone area that may place patients at elevated risk in the clinical setting. Much of the emphasis in transfusion safety has been targeted toward quality control measures in laboratory settings where blood products are prepared for administration as well as in automation of certain laboratory processes. In contrast, the process of transfusing blood in the clinical setting (ie, at the point of care) has essentially remained unchanged over the past several decades. Many of the currently available methods for improving the quality and safety of blood transfusions in the clinical setting rely on informal process descriptions, such as flow charts and medical algorithms, to describe medical processes. These informal descriptions, although useful in presenting an overview of standard processes, can be ambiguous or incomplete. For example, they often describe only the standard process and leave out how to handle possible failures or exceptions. One alternative to these informal descriptions is to use formal process definitions, which can serve as the basis for a variety of analyses because these formal definitions offer precision in the representation of all possible ways that a process can be carried out in both standard and exceptional situations. Formal process definitions have not previously been used to describe and improve medical processes. The use of such formal definitions to prospectively identify potential error and improve the transfusion process has not previously been reported. The purpose of this article is to introduce the concept of formally defining processes and to describe how formal definitions of blood transfusion processes can be used to detect and correct transfusion process errors in ways not currently possible using existing quality improvement methods.
NASA Astrophysics Data System (ADS)
Zhou, Xiangrong; Takayama, Ryosuke; Wang, Song; Zhou, Xinxin; Hara, Takeshi; Fujita, Hiroshi
2017-02-01
We have proposed an end-to-end learning approach that trained a deep convolutional neural network (CNN) for automatic CT image segmentation, which accomplished voxel-wise multi-class classification to directly map each voxel on 3D CT images to an anatomical label automatically. The novelties of our proposed method were (1) transforming the anatomical structures segmentation on 3D CT images into a majority voting over the results of 2D semantic image segmentation on a number of 2D slices from different image orientations, and (2) using "convolution" and "deconvolution" networks to achieve the conventional "coarse recognition" and "fine extraction" functions, which were integrated into a compact all-in-one deep CNN for CT image segmentation. The advantage compared to previous works was its capability to accomplish real-time image segmentation on 2D slices of arbitrary CT scan range (e.g. body, chest, abdomen) and produce correspondingly sized output. In this paper, we propose an improvement of our approach by adding an organ localization module to limit the CT image range for training and testing the deep CNNs. A database consisting of 240 3D CT scans and a human-annotated ground truth was used for training (228 cases) and testing (the remaining 12 cases). We applied the improved method to segment pancreas and left kidney regions, respectively. The preliminary results showed that the accuracies of the segmentation results were improved significantly (pancreas by 34% and kidney by 8% in Jaccard index relative to our previous results). The effectiveness and usefulness of the proposed improvement for CT image segmentation were confirmed.
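The first novelty, majority voting across 2D segmentations from different orientations, can be sketched as below; the label volumes are assumed to be integer arrays of identical shape produced by running the per-orientation networks slice-by-slice and restacking the results.

```python
import numpy as np

def fuse_by_majority_vote(axial, coronal, sagittal):
    """Fuse per-orientation label volumes by a per-voxel majority vote.

    Each input is an integer label volume of identical shape; the label
    receiving the most votes across the three orientations wins.
    """
    stack = np.stack([axial, coronal, sagittal])                 # (3, z, y, x)
    n_labels = int(stack.max()) + 1
    votes = np.stack([(stack == k).sum(axis=0) for k in range(n_labels)])
    return votes.argmax(axis=0)                                  # winning label

labels = fuse_by_majority_vote(*[np.random.randint(0, 3, (8, 8, 8)) for _ in range(3)])
```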
Attardi, Stefanie M; Barbeau, Michele L; Rogers, Kem A
2018-03-01
An online section of a face-to-face (F2F) undergraduate (bachelor's level) anatomy course with a prosection laboratory was offered in 2013-2014. Lectures for F2F students (353) were broadcast to online students (138) using Blackboard Collaborate (BBC) virtual classroom. Online laboratories were offered using BBC and three-dimensional (3D) anatomical computer models. This iteration of the course was modified from the previous year to improve online student-teacher and student-student interactions. Students were divided into laboratory groups that rotated through virtual breakout rooms, giving them the opportunity to interact with three instructors. The objectives were to assess student performance outcomes, perceptions of student-teacher and student-student interactions, methods of peer interaction, and helpfulness of the 3D computer models. Final grades were statistically identical between the online and F2F groups. There were strong, positive correlations between incoming grade average and final anatomy grade in both groups, suggesting prior academic performance, and not delivery format, predicts anatomy grades. Quantitative student perception surveys (273 F2F; 101 online) revealed that both groups agreed they were engaged by teachers, could interact socially with teachers and peers, and ask them questions in both the lecture and laboratory sessions, though agreement was significantly greater for the F2F students in most comparisons. The most common methods of peer communication were texting, Facebook, and meeting F2F. The perceived helpfulness of the 3D computer models improved from the previous year. While virtual breakout rooms can be used to adequately replace traditional prosection laboratories and improve interactions, they are not equivalent to F2F laboratories. Anat Sci Educ. © 2018 American Association of Anatomists.
Improving informed consent: Stakeholder views.
Anderson, Emily E; Newman, Susan B; Matthews, Alicia K
2017-01-01
Innovation will be required to improve the informed consent process in research. We aimed to obtain input from key stakeholders-research participants and those responsible for obtaining informed consent-to inform potential development of a multimedia informed consent "app." This descriptive study used a mixed-methods approach. Five 90-minute focus groups were conducted with volunteer samples of former research participants and researchers/research staff responsible for obtaining informed consent. Participants also completed a brief survey that measured background information and knowledge and attitudes regarding research and the use of technology. Established qualitative methods were used to conduct the focus groups and data analysis. We conducted five focus groups with 41 total participants: three groups with former research participants (total n = 22), and two groups with researchers and research coordinators (total n = 19). Overall, individuals who had previously participated in research had positive views regarding their experiences. However, further discussion elicited that the informed consent process often did not meet its intended objectives. Findings from both groups are presented according to three primary themes: content of consent forms, experience of the informed consent process, and the potential of technology to improve the informed consent process. A fourth theme, need for lay input on informed consent, emerged from the researcher groups. Our findings add to previous research that suggests that the use of interactive technology has the potential to improve the process of informed consent. However, our focus-group findings provide additional insight that technology cannot replace the human connection that is central to the informed consent process. More research that incorporates the views of key stakeholders is needed to ensure that multimedia consent processes do not repeat the mistakes of paper-based consent forms.
Kutcher, Stan; Wei, Yifeng; Morgan, Catherine
2015-01-01
Objective: To investigate whether the significant and substantive findings from a previous study of youth mental health literacy (MHL) could be replicated using the same methods in another population. Method: We examined the impact of a curriculum resource, the Mental Health and High School Curriculum Guide (The Guide), taught by usual classroom teachers on students’ knowledge and attitudes related to mental health and mental illness in Canadian secondary schools. Survey data were collected before, immediately after, and 2 months after implementation of The Guide by teachers in usual classroom teaching. We conducted paired-sample t tests and calculated the Cohen d value to determine outcomes and impact of the curriculum resource application. Results: One hundred fourteen students were matched for analysis of knowledge data and 112 students were matched for analysis of attitude data at pre-intervention, post-intervention, and 2-month follow-up time periods. Following classroom exposure to the curriculum resource, students’ knowledge scores increased significantly and substantively, compared with baseline (P < 0.001, d = 1.11), and this was maintained at 2-month follow-up (P < 0.001, d = 0.91). Similar findings for attitude improvement were found (P < 0.001, d = 0.66), and this improvement was maintained at 2-month follow-up (P < 0.001, d = 0.52). Conclusions: These findings corroborate those from a previous study conducted in a different location. Taken together these results suggest a simple but effective approach to improving MHL in young people by embedding a classroom resource, delivered by usual classroom teachers in usual school settings. PMID:26720827
Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey
2015-01-01
Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, with age, and between mother and offspring is examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error as compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of these results. © 2015 Wiley Periodicals, Inc.
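A minimal sketch of the two analysis ideas, assuming a simple additive row/column model for the well-position effect (the authors' actual correction model is not specified here) and one plausible reading of the standard-error-in-percent metric:

```python
import numpy as np

def well_position_correction(tl, rows, cols):
    """Remove additive row/column (well-position) effects from qPCR T/S ratios.

    Fits and subtracts mean offsets per thermocycler row and column; a simple
    stand-in, not the authors' fitted model.
    """
    tl = np.asarray(tl, float)
    corrected = tl - tl.mean()
    for group in (np.asarray(rows), np.asarray(cols)):
        for g in np.unique(group):
            corrected[group == g] -= corrected[group == g].mean()
    return corrected + tl.mean()

def standard_error_percent(replicates):
    """Standard error of replicate measurements expressed as % of the mean."""
    r = np.asarray(replicates, float)
    return 100.0 * r.std(ddof=1) / (np.sqrt(len(r)) * r.mean())

print(standard_error_percent([1.02, 0.97, 1.05]))
```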
Enhancing Breast Cancer Recurrence Algorithms Through Selective Use of Medical Record Data
Chubak, Jessica; Johnson, Lisa; Castillo, Adrienne; Weltzien, Erin; Caan, Bette J.
2016-01-01
Abstract Background: The utility of data-based algorithms in research has been questioned because of errors in identification of cancer recurrences. We adapted previously published breast cancer recurrence algorithms, selectively using medical record (MR) data to improve classification. Methods: We evaluated second breast cancer event (SBCE) and recurrence-specific algorithms previously published by Chubak and colleagues in 1535 women from the Life After Cancer Epidemiology (LACE) and 225 women from the Women’s Health Initiative cohorts and compared classification statistics to published values. We also sought to improve classification with minimal MR examination. We selected pairs of algorithms—one with high sensitivity/high positive predictive value (PPV) and another with high specificity/high PPV—using MR information to resolve discrepancies between algorithms, properly classifying events based on review; we called this “triangulation.” Finally, in LACE, we compared associations between breast cancer survival risk factors and recurrence using MR data, single Chubak algorithms, and triangulation. Results: The SBCE algorithms performed well in identifying SBCE and recurrences. Recurrence-specific algorithms performed more poorly than published except for the high-specificity/high-PPV algorithm, which performed well. The triangulation method (sensitivity = 81.3%, specificity = 99.7%, PPV = 98.1%, NPV = 96.5%) improved recurrence classification over two single algorithms (sensitivity = 57.1%, specificity = 95.5%, PPV = 71.3%, NPV = 91.9%; and sensitivity = 74.6%, specificity = 97.3%, PPV = 84.7%, NPV = 95.1%), with 10.6% MR review. Triangulation performed well in survival risk factor analyses vs analyses using MR-identified recurrences. Conclusions: Use of multiple recurrence algorithms in administrative data, in combination with selective examination of MR data, may improve recurrence data quality and reduce research costs. PMID:26582243
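The triangulation logic reduces to a simple rule: accept concordant calls from the two algorithms and reserve chart review for discordant ones, which limits medical record examination to the small discordant fraction (about 10% in this study). A sketch, with hypothetical flag inputs and a review callback:

```python
def triangulate(high_sens_flag, high_spec_flag, review_medical_record):
    """Resolve disagreement between two recurrence algorithms.

    high_sens_flag / high_spec_flag: bool outputs of a high-sensitivity/high-PPV
    algorithm and a high-specificity/high-PPV algorithm run on administrative
    data. review_medical_record: callable invoked only on discordant cases.
    """
    if high_sens_flag == high_spec_flag:
        return high_sens_flag            # concordant: accept without review
    return review_medical_record()       # discordant: gold-standard adjudication
```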
Resist process optimization for further defect reduction
NASA Astrophysics Data System (ADS)
Tanaka, Keiichi; Iseki, Tomohiro; Marumoto, Hiroshi; Takayanagi, Koji; Yoshida, Yuichi; Uemura, Ryouichi; Yoshihara, Kosuke
2012-03-01
Defect reduction has become one of the most important technical challenges in device mass-production. Knowing that resist processing on a clean track strongly impacts defect formation in many cases, we have been trying to improve the track process to enhance customer yield. For example, residual-type defects and pattern collapse are strongly related to process parameters in the developer, and we have reported new develop and rinse methods in previous papers. Also, we have reported an optimization method for filtration conditions to reduce bridge-type defects, which are mainly caused by foreign substances such as gels in resist. Even though we have contributed to resist-caused defect reduction in past studies, defect reduction requirements continue to be very important. In this paper, we will introduce further process improvements in terms of resist defect reduction, including the latest experimental data.
Framing Electronic Medical Records as Polylingual Documents in Query Expansion
Huang, Edward W; Wang, Sheng; Lee, Doris Jung-Lin; Zhang, Runshun; Liu, Baoyan; Zhou, Xuezhong; Zhai, ChengXiang
2017-01-01
We present a study of electronic medical record (EMR) retrieval that emulates situations in which a doctor treats a new patient. Given a query consisting of a new patient’s symptoms, the retrieval system returns the set of most relevant records of previously treated patients. However, due to semantic, functional, and treatment synonyms in medical terminology, queries are often incomplete and thus require enhancement. In this paper, we present a topic model that frames symptoms and treatments as separate languages. Our experimental results show that this method improves retrieval performance over several baselines with statistical significance. These baselines include methods used in prior studies as well as state-of-the-art embedding techniques. Finally, we show that our proposed topic model discovers all three types of synonyms to improve medical record retrieval. PMID:29854161
Sotiropoulou, P; Fountos, G; Martini, N; Koukou, V; Michail, C; Kandarakis, I; Nikiforidis, G
2016-12-01
An X-ray dual-energy (XRDE) method was examined, using polynomial nonlinear approximation of inverse functions for the determination of the bone calcium-to-phosphorus (Ca/P) mass ratio. Inverse fitting functions with least-squares estimation were used to determine calcium and phosphate thicknesses. The method was verified by measuring test bone phantoms with a dedicated dual-energy system and compared with previously published dual-energy data. The accuracy in the determination of the calcium and phosphate thicknesses improved with the polynomial nonlinear inverse function method introduced in this work (ranging from 1.4% to 6.2%), compared to the corresponding linear inverse function method (ranging from 1.4% to 19.5%). Copyright © 2016 Elsevier Ltd. All rights reserved.
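A sketch of the calibration idea, assuming synthetic dual-energy attenuation signals and a generic two-variable polynomial fitted by least squares; the paper's actual beam energies, basis materials, and polynomial orders may differ.

```python
import numpy as np

def fit_inverse_function(m1, m2, thickness, order=2):
    """Least-squares fit of thickness = P(m1, m2), a 2D polynomial in the
    low- and high-energy attenuation measurements (calibration step)."""
    cols = [m1**i * m2**j for i in range(order + 1) for j in range(order + 1 - i)]
    coef, *_ = np.linalg.lstsq(np.column_stack(cols), thickness, rcond=None)
    return coef

def apply_inverse_function(coef, m1, m2, order=2):
    cols = [m1**i * m2**j for i in range(order + 1) for j in range(order + 1 - i)]
    return np.column_stack(cols) @ coef

# Synthetic calibration: attenuation pairs for known calcium thicknesses.
rng = np.random.default_rng(1)
t_ca = rng.uniform(0.1, 2.0, 100)
t_p = rng.uniform(0.1, 2.0, 100)
m1 = 0.8 * t_ca + 0.5 * t_p + rng.normal(0, 0.01, 100)   # low-energy signal
m2 = 0.4 * t_ca + 0.3 * t_p + rng.normal(0, 0.01, 100)   # high-energy signal
coef = fit_inverse_function(m1, m2, t_ca)
print("recovered Ca thickness:", apply_inverse_function(coef, m1[:1], m2[:1])[0])
```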
Treatment of sexual dysfunctions in male-only groups: predicting outcome.
Dekker, J; Dronkers, J; Staffeleu, J
1985-01-01
Forty men complaining of sexual dysfunctions were treated in male-only groups, using RET, masturbation exercises and social skills training. Sexual functioning improved and social anxiety decreased. Combining these data with previously reported data on 21 men, we tried to predict treatment outcome. Sexual functioning of men with a steady partner and men with varying partners improved; in men without partner(s) no effect could be demonstrated, probably due to a methodological artifact. Inhibited sexual desire was associated with a poor outcome. Several other variables (among them type of dysfunction, social anxiety, age, educational level) did not predict improvement of sexual functioning. This method seems to provide adequate treatment for various complaints of men with quite different backgrounds.
NASA Astrophysics Data System (ADS)
Yoon, Jangyeol; Yim, Seongjin; Cho, Wanki; Koo, Bongyeong; Yi, Kyongsu
2010-11-01
This paper describes a unified chassis control (UCC) strategy to prevent vehicle rollover and improve both manoeuvrability and lateral stability. Since previous researches on rollover prevention are only focused on the reduction of lateral acceleration, the manoeuvrability and lateral stability cannot be guaranteed. For this reason, it is necessary to design a UCC controller to prevent rollover and improve lateral stability by integrating electronic stability control, active front steering and continuous damping control. This integration is performed through switching among several control modes and a simulation is performed to validate the proposed method. Simulation results indicate that a significant improvement in rollover prevention, manoeuvrability and lateral stability can be expected from the proposed UCC system.
Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns
Teng, Dongdong; Chen, Dihu; Tan, Hongzhou
2015-01-01
The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only few of these developments deal with the challenges posed by the profile (non-frontal face). In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against any changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of eye-center locations which are candidates for the actual position of the eye center. Among these locations, the estimated locations which minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation for the eye-center locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use the BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been mentioned in previous works. Through extensive experimentation, we show that the proposed method can achieve a significant improvement in accuracy and robustness over state-of-the-art techniques, with our method ranking second in terms of accuracy. According to our implementation on a PC with a Xeon 2.5 GHz CPU, the frame rate of the eye-tracking process can reach 38 Hz. PMID:26426929
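The combination step can be sketched as choosing, among the isophote-curvature candidates, the one closest to the coarse shape-regression estimate; the Euclidean distance here stands in for the reconstruction error described above, and all coordinates are illustrative.

```python
import numpy as np

def select_eye_center(regression_estimate, isophote_candidates):
    """Pick the isophote-curvature candidate closest to the coarse estimate.

    regression_estimate: (x, y) from explicit shape regression (robust, coarse).
    isophote_candidates: array of (x, y) centers from isophote voting
    (accurate, but with spurious maxima).
    """
    cand = np.asarray(isophote_candidates, float)
    err = np.linalg.norm(cand - np.asarray(regression_estimate, float), axis=1)
    return cand[np.argmin(err)]

center = select_eye_center((64.0, 48.0), [(90, 12), (66, 47), (30, 70)])
print(center)  # -> [66. 47.]
```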
Terahertz wave electro-optic measurements with optical spectral filtering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ilyakov, I. E., E-mail: igor-ilyakov@mail.ru; Shishkin, B. V.; Kitaeva, G. Kh.
We propose electro-optic detection techniques based on variations of the laser pulse spectrum induced during pulse co-propagation with terahertz wave radiation in a nonlinear crystal. Quantitative comparison with two other detection methods is made. Substantial improvement of the sensitivity compared to the standard electro-optic detection technique (at high frequencies) and to the previously shown technique based on laser pulse energy changes is demonstrated in experiment.
Improved Characterization of Healthy and Malignant Tissue by NMR Line-Shape Relaxation Correlations
Peemoeller, H.; Shenoy, R.K.; Pintar, M.M.; Kydon, D.W.; Inch, W.R.
1982-01-01
We performed a relaxation-line-shape correlation NMR experiment on muscle, liver, kidney, and spleen tissues of healthy mice and of mouse tumor tissue. In each tissue studied, five spin groups were resolved and characterized by their relaxation parameters. We report a previously uncharacterized semi-solid spin group and discuss briefly the value of this method for the identification of malignant tissues. PMID:7104438
ERIC Educational Resources Information Center
Johnson, Mats; Fransson, Gunnar; Östlund, Sven; Areskoug, Björn; Gillberg, Christopher
2017-01-01
Background: Previous research has shown positive effects of Omega 3/6 fatty acids in children with inattention and reading difficulties. We aimed to investigate if Omega 3/6 improved reading ability in mainstream schoolchildren. Methods: We performed a 3-month parallel, randomized, double-blind, placebo-controlled trial followed by 3-month active…
Existence of topological multi-string solutions in Abelian gauge field theories
NASA Astrophysics Data System (ADS)
Han, Jongmin; Sohn, Juhee
2017-11-01
In this paper, we consider a general form of self-dual equations arising from Abelian gauge field theories coupled with the Einstein equations. By applying the super/subsolution method, we prove that topological multi-string solutions exist for any coupling constant, which improves previously known results. We provide two examples for application: the self-dual Einstein-Maxwell-Higgs model and the gravitational Maxwell gauged O(3) sigma model.
Medical Device Plug-and-Play Interoperability Standards and Technology Leadership
2010-10-01
Philips Medical Systems Impact of ARRA/HITECH on Device Connectivity: Safe? Effective? Say what?! Todd Cooper President Breakthrough Solutions...that could notify the physician when, say, one of the devices comes disconnected in the high-vibration environment of the plane. There is no way at...Electronic record-keeping promises to be an improvement over previous methods (eliminating problems such as illegible handwriting and records
Advanced Methods for Passive Acoustic Detection, Classification, and Localization of Marine Mammals
2012-09-30
floor 1176 Howell St Newport RI 02842 phone: (401) 832-5749 fax: (401) 832-4441 email: David.Moretti@navy.mil Steve W. Martin SPAWAR...multiclass support vector machine (SVM) classifier was previously developed ( Jarvis et al. 2008). This classifier both detects and classifies echolocation...whales. Here Moretti’s group, especially S. Jarvis , will improve the SVM classifier by resolving confusion between species whose clicks overlap in
Advanced Methods for Passive Acoustic Detection, Classification, and Localization of Marine Mammals
2014-09-30
floor 1176 Howell St Newport RI 02842 phone: (401) 832-5749 fax: (401) 832-4441 email: David.Moretti@navy.mil Steve W. Martin SPAWAR...APPROACH Odontocete click detection and classification. A multi-class support vector machine (SVM) classifier was previously developed ( Jarvis ...beaked whales, Risso’s dolphins, short-finned pilot whales, and sperm whales. Here Moretti’s group, particularly S. Jarvis , is improving the SVM
Advanced Methods for Passive Acoustic Detection, Classification, and Localization of Marine Mammals
2011-09-30
Newport RI 02842 phone: (401) 832-5749 fax: (401) 832-4441 email: David.Moretti@navy.mil Steve W. Martin SPAWAR Systems Center Pacific...APPROACH Odontocete click detection and classification. A multiclass support vector machine (SVM) classifier was previously developed ( Jarvis et...beaked whales, Risso’s dolphins, short-finned pilot whales, and sperm whales. Here Moretti’s group, especially S. Jarvis , will improve the SVM classifier
Two-Phase chief complaint mapping to the UMLS metathesaurus in Korean electronic medical records.
Kang, Bo-Yeong; Kim, Dae-Won; Kim, Hong-Gee
2009-01-01
The task of automatically determining the concepts referred to in chief complaint (CC) data from electronic medical records (EMRs) is an essential component of many EMR applications aimed at biosurveillance for disease outbreaks. Previous approaches that have been used for this concept mapping have mainly relied on term-level matching, whereby the medical terms in the raw text and their synonyms are matched with concepts in a terminology database. These previous approaches, however, have shortcomings that limit their efficacy in CC concept mapping, where the concepts for CC data are often represented by associative terms rather than by synonyms. Therefore, herein we propose a concept mapping scheme based on a two-phase matching approach, especially for application to Korean CCs, which uses term-level complete matching in the first phase and concept-level matching based on concept learning in the second phase. The proposed concept-level matching provides a method to learn all the terms (associative terms as well as synonyms) that represent a concept and to predict the most probable concept for a CC based on the learned terms. Experiments on 1204 CCs extracted from 15,618 discharge summaries of Korean EMRs showed that the proposed method gave significantly improved F-measure values compared to the baseline system, with improvements of up to 73.57%.
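A minimal sketch of the two-phase scheme: phase 1 does term-level complete matching against a synonym dictionary, and phase 2 predicts the most probable concept from term-concept co-occurrence statistics learned from annotated CCs. The class name and the simple count-based learner are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter, defaultdict

class TwoPhaseMapper:
    """Phase 1: exact term-level match against a terminology; phase 2: predict
    the most probable concept from learned associative-term statistics."""

    def __init__(self, term_to_concept):
        self.term_to_concept = term_to_concept      # synonym dictionary
        self.assoc = defaultdict(Counter)           # term -> concept counts

    def train(self, annotated_ccs):
        """annotated_ccs: iterable of (cc_text, concept) training pairs."""
        for text, concept in annotated_ccs:
            for term in text.lower().split():
                self.assoc[term][concept] += 1

    def map_cc(self, text):
        text_l = text.lower()
        if text_l in self.term_to_concept:          # phase 1: complete match
            return self.term_to_concept[text_l]
        scores = Counter()                          # phase 2: learned terms
        for term in text_l.split():
            scores.update(self.assoc.get(term, {}))
        return scores.most_common(1)[0][0] if scores else None
```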
NASA Technical Reports Server (NTRS)
Jutte, Christine V.; Ko, William L.; Stephens, Craig A.; Bakalyar, John A.; Richards, W. Lance
2011-01-01
A ground loads test of a full-scale wing (175-ft span) was conducted using a fiber optic strain-sensing system to obtain distributed surface strain data. These data were input into previously developed deformed shape equations to calculate the wing's bending and twist deformation. A photogrammetry system measured actual shape deformation. The wing deflections reached 100 percent of the positive design limit load (equivalent to 3 g) and 97 percent of the negative design limit load (equivalent to -1 g). The calculated wing bending results were in excellent agreement with the actual bending; tip deflections were within +/- 2.7 in. (out of 155-in. max deflection) for 91 percent of the load steps. Experimental testing revealed valuable opportunities for improving the deformed shape equations' robustness to real-world (not perfect) strain data, which previous analytical testing did not detect. These improvements, which include filtering methods developed in this work, minimize errors due to numerical anomalies discovered in the remaining 9 percent of the load steps. As a result, all load steps attained +/- 2.7 in. accuracy. Wing twist results were very sensitive to errors in bending and require further development. A sensitivity analysis and recommendations for fiber implementation practices, along with effective filtering methods, are included.
Improved and Robust Detection of Cell Nuclei from Four Dimensional Fluorescence Images
Bashar, Md. Khayrul; Yamagata, Kazuo; Kobayashi, Tetsuya J.
2014-01-01
Segmentation-free direct methods are quite efficient for automated nuclei extraction from high-dimensional images. A few such methods do exist, but most of them do not ensure algorithmic robustness to parameter and noise variations. In this research, we propose a method based on multiscale adaptive filtering for efficient and robust detection of nuclei centroids from four-dimensional (4D) fluorescence images. A temporal feedback mechanism is employed between the enhancement and the initial detection steps of a typical direct method. We estimate the minimum and maximum nuclei diameters from the previous frame and feed them back as filter lengths for multiscale enhancement of the current frame. A radial intensity-gradient function is optimized at positions of initial centroids to estimate all nuclei diameters. This procedure continues for processing subsequent images in the sequence. The above mechanism thus ensures proper enhancement by automated estimation of major parameters. This brings robustness and safeguards the system against additive noise and the effects of wrong parameters. Later, the method and its single-scale variant are simplified for further reduction of parameters. The proposed method is then extended for nuclei volume segmentation. The same optimization technique is applied to final centroid positions of the enhanced image, and the estimated diameters are projected onto the binary candidate regions to segment nuclei volumes. Our method is finally integrated with a simple sequential tracking approach to establish nuclear trajectories in the 4D space. Experimental evaluations with five image sequences (each having 271 3D sequential images) corresponding to five different mouse embryos show promising performances of our methods in terms of nuclear detection, segmentation, and tracking. A detailed analysis with a sub-sequence of 101 3D images from an embryo reveals that the proposed method can improve the nuclei detection accuracy by 9% over the previous methods, which used inappropriately large-valued parameters. Results also confirm that the proposed method and its variants achieve high detection accuracies (~98% mean F-measure) irrespective of large variations of filter parameters and noise levels. PMID:25020042
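A sketch of the feedback mechanism, assuming a Laplacian-of-Gaussian filter bank whose scales span the diameter range estimated from the previous frame; the diameter re-estimation via the radial intensity-gradient function is omitted, and all thresholds are placeholders.

```python
import numpy as np
from scipy import ndimage

def detect_nuclei(frame, d_min, d_max, n_scales=4):
    """Multiscale blob enhancement with filter lengths fed back from the
    previous frame's estimated nuclei diameters (d_min, d_max, in voxels)."""
    best = np.zeros(frame.shape, float)
    for d in np.linspace(d_min, d_max, n_scales):
        sigma = d / (2.0 * np.sqrt(3.0))            # LoG scale for a 3D blob of diameter d
        resp = -sigma**2 * ndimage.gaussian_laplace(frame.astype(float), sigma)
        best = np.maximum(best, resp)               # keep the strongest scale response
    peaks = best == ndimage.maximum_filter(best, size=max(3, int(d_min)))
    return np.argwhere(peaks & (best > 0.5 * best.max()))

# Feedback loop over the 4D sequence: diameters estimated at frame t seed the
# filter bank at frame t+1 (the diameter estimation step itself is omitted here).
```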
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minelli, Annalisa, E-mail: Annalisa.Minelli@univ-brest.fr; Marchesini, Ivan, E-mail: Ivan.Marchesini@irpi.cnr.it; Taylor, Faith E., E-mail: Faith.Taylor@kcl.ac.uk
Although there are clear economic and environmental incentives for producing energy from solar and wind power, there can be local opposition to their installation due to their impact upon the landscape. To date, no international guidelines exist to guide quantitative visual impact assessment of these facilities, making the planning process somewhat subjective. In this paper we demonstrate the development of a method and an Open Source GIS tool to quantitatively assess the visual impact of these facilities using line-of-sight techniques. The methods here build upon previous studies by (i) more accurately representing the shape of energy-producing facilities, (ii) taking into account the distortion of the perceived shape and size of facilities caused by the location of the observer, (iii) calculating the possible obscuring of facilities caused by terrain morphology and (iv) allowing the combination of various facilities to more accurately represent the landscape. The tool has been applied to real and synthetic case studies and compared to recently published results from other models, and demonstrates an improvement in accuracy of the calculated visual impact of facilities. The tool is named r.wind.sun and is freely available from GRASS GIS AddOns. - Highlights: • We develop a tool to quantify wind turbine and photovoltaic panel visual impact. • The tool is freely available to download and edit as a module of GRASS GIS. • The tool takes into account visual distortion of the shape and size of objects. • The accuracy of calculation of visual impact is improved over previous methods.
High-resolution mapping of vehicle emissions in China in 2008
NASA Astrophysics Data System (ADS)
Zheng, B.; Huo, H.; Zhang, Q.; Yao, Z. L.; Wang, X. T.; Yang, X. F.; Liu, H.; He, K. B.
2014-09-01
This study is the first in a series of papers that aim to develop high-resolution emission databases for different anthropogenic sources in China. Here we focus on on-road transportation. Because of the increasing impact of on-road transportation on regional air quality, developing an accurate and high-resolution vehicle emission inventory is important for both the research community and air quality management. This work proposes a new inventory methodology to improve the spatial and temporal accuracy and resolution of vehicle emissions in China. We calculate, for the first time, the monthly vehicle emissions for 2008 in 2364 counties (an administrative unit one level lower than city) by developing a set of approaches to estimate vehicle stock and monthly emission factors at county-level, and technology distribution at provincial level. We then introduce allocation weights for the vehicle kilometers traveled to assign the county-level emissions onto 0.05° × 0.05° grids based on the China Digital Road-network Map (CDRM). The new methodology overcomes the common shortcomings of previous inventory methods, including neglecting the geographical differences between key parameters and using surrogates that are weakly related to vehicle activities to allocate vehicle emissions. The new method has great advantages over previous methods in depicting the spatial distribution characteristics of vehicle activities and emissions. This work provides a better understanding of the spatial representation of vehicle emissions in China and can benefit both air quality modeling and management with improved spatial accuracy.
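The county-to-grid allocation step can be sketched as a weighted split of each county's total over its grid cells, with weights proportional to a road-network VKT proxy; the data structures and values below are illustrative, not the paper's CDRM-derived weights.

```python
import numpy as np

def grid_county_emissions(county_emissions, county_cells, vkt_weight):
    """Allocate county-level vehicle emissions onto grid cells using vehicle
    kilometers traveled (VKT) weights derived from a digital road network.

    county_emissions: dict county_id -> annual emission total
    county_cells:     dict county_id -> list of (row, col) grid cells
    vkt_weight:       dict (row, col) -> road-network VKT proxy for that cell
    """
    grid = {}
    for county, total in county_emissions.items():
        cells = county_cells[county]
        w = np.array([vkt_weight.get(c, 0.0) for c in cells], float)
        w = w / w.sum() if w.sum() > 0 else np.full(len(cells), 1.0 / len(cells))
        for cell, frac in zip(cells, w):
            grid[cell] = grid.get(cell, 0.0) + total * frac
    return grid

grid = grid_county_emissions({"110101": 120.0},
                             {"110101": [(0, 0), (0, 1)]},
                             {(0, 0): 3.0, (0, 1): 1.0})
print(grid)  # {(0, 0): 90.0, (0, 1): 30.0}
```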
New determination of the gravitational constant G with time-of-swing method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tu Liangcheng; Li Qing; Wang Qinglan
A new determination of the Newtonian gravitational constant G is presented by using a torsion pendulum with the time-of-swing method. Compared with our previous measurement with the same method, several improvements greatly reduced the uncertainties as follows: (i) two stainless steel spheres with more homogeneous density are used as the source masses instead of the cylinders used in the previous experiment, and the offset of the mass center from the geometric center is measured and found to be much smaller than that of the cylinders; (ii) a rectangular glass block is used as the main body of the pendulum, which has fewer vibration modes and hence improves the stability of the period and reduces the uncertainty of the moment of inertia; (iii) both the pendulum and source masses are placed in the same vacuum chamber to reduce the error of measuring the relative positions; (iv) changing the configurations between the "near" and "far" positions is remotely operated by using a stepper motor to lower the environmental disturbances; and (v) the anelastic effect of the torsion fiber is first measured directly by using two disk pendulums with the help of a high-Q quartz fiber. We have performed two independent G measurements, and the two G values differ by only 9 ppm. The combined value of G is (6.673 49 ± 0.000 18) × 10⁻¹¹ m³ kg⁻¹ s⁻² with a relative uncertainty of 26 ppm.
Maliken, Bryan D; Avrin, William F; Nelson, James E; Mooney, Jody; Kumar, Sankaran; Kowdley, Kris V
2012-01-01
There is an ongoing clinical need for novel methods to measure hepatic iron content (HIC) noninvasively. Both magnetic resonance imaging (MRI) and superconducting quantum interference device (SQUID) methods have previously shown promise for estimation of HIC, but these methods can be expensive and are not widely available. Room-temperature susceptometry (RTS) represents an inexpensive alternative and was previously found to be strongly correlated with HIC estimated by SQUID measurements among patients with transfusional iron overload related to thalassemia. The goal of the current study was to examine the relationship between RTS and biochemical HIC measured in liver biopsy specimens in a more varied patient cohort. Susceptometry was performed in a diverse group of patients with hyperferritinemia due to hereditary hemochromatosis (HHC) (n = 2), secondary iron overload (n = 3), nonalcoholic fatty liver disease (NAFLD) (n = 2), and chronic viral hepatitis (n = 3) within one month of liver biopsy in the absence of iron depletion therapy. The correlation coefficient between HIC estimated by susceptometry and by biochemical iron measurement in liver tissue was 0.71 (p = 0.022). Variance between liver iron measurement and susceptometry measurement was primarily related to reliance on the patient's body-mass index (BMI) to estimate the magnetic susceptibility of tissue overlying the liver. We believe RTS holds promise for noninvasive measurement of HIC. Improved measurement techniques, including more accurate overlayer correction, may further improve the accuracy of liver susceptometry in patients with liver disease.
Simplified Method to Isolate Highly Pure Canine Pancreatic Islets
Woolcott, Orison O.; Bergman, Richard N.; Richey, Joyce M.; Kirkman, Erlinda L.; Harrison, L. Nicole; Ionut, Viorica; Lottati, Maya; Zheng, Dan; Hsu, Isabel R.; Stefanovski, Darko; Kabir, Morvarid; Kim, Stella P.; Catalano, Karyn J.; Chiu, Jenny D.; Chow, Robert H.
2015-01-01
Objectives The canine model has been used extensively to improve the human pancreatic islet isolation technique. At the functional level, dog islets show high similarity to human islets and thus can be a helpful tool for islet research. We describe and compare 2 manual isolation methods, M1 (initial) and M2 (modified), and analyze the variables associated with the outcomes, including islet yield, purity, and glucose-stimulated insulin secretion (GSIS). Methods Male mongrel dogs were used in the study. M2 (n = 7) included higher collagenase concentration, shorter digestion time, faster shaking speed, colder purification temperature, and higher differential density gradient than M1 (n = 7). Results Islet yield was similar between methods (3111.0 ± 309.1 and 3155.8 ± 644.5 islets/g, M1 and M2, respectively; P = 0.951). Pancreas weight and purity together were directly associated with the yield (adjusted R2 = 0.61; P = 0.002). Purity was considerably improved with M2 (96.7% ± 1.2% vs 75.0% ± 6.3%; P = 0.006). M2 improved GSIS (P = 0.021). Independently, digestion time was inversely associated with GSIS. Conclusions We describe an isolation method (M2) to obtain a highly pure yield of dog islets with adequate β-cell glucose responsiveness. The isolation variables associated with the outcomes in our canine model confirm previous reports in other species, including humans. PMID:21792087
Liu, Jiaen; Zhang, Xiaotong; Schmitter, Sebastian; Van de Moortele, Pierre-Francois; He, Bin
2014-01-01
Purpose To develop high-resolution electrical properties tomography (EPT) methods and investigate a gradient-based EPT (gEPT) approach which aims to reconstruct the electrical properties (EP), including conductivity and permittivity, of an imaged sample from experimentally measured B1 maps with improved boundary reconstruction and robustness against measurement noise. Theory and Methods Using a multi-channel transmit/receive stripline head coil, with acquired B1 maps for each coil element, and by assuming a negligible Bz component compared to the transverse B1 components, a theory describing the relationship between the B1 field, EP values and their spatial gradient has been proposed. The final EP images were obtained through spatial integration over the reconstructed EP gradient. Numerical simulation, physical phantom and in vivo human experiments at 7 T have been conducted to evaluate the performance of the proposed methods. Results Reconstruction results were compared with target EP values in both simulations and phantom experiments. Human experimental results were compared with EP values in the literature. Satisfactory agreement was observed with improved boundary reconstruction. Importantly, the proposed gEPT method proved to be more robust against noise when compared to previously described non-gradient-based EPT approaches. Conclusion The proposed gEPT approach holds promise to improve EP mapping quality by recovering boundary information and enhancing robustness against noise. PMID:25213371
The value of vital sign trends for detecting clinical deterioration on the wards
Churpek, Matthew M; Adhikari, Richa; Edelson, Dana P
2016-01-01
Aim Early detection of clinical deterioration on the wards may improve outcomes, yet most early warning scores utilize only a patient’s current vital signs. The added value of vital sign trends over time is poorly characterized. We investigated whether adding trends improves accuracy and which methods are optimal for modelling trends. Methods Patients admitted to five hospitals over a five-year period were included in this observational cohort study, with 60% of the data used for model derivation and 40% for validation. Vital signs were utilized to predict the combined outcome of cardiac arrest, intensive care unit transfer, and death. The accuracy of models utilizing both the current value and different trend methods was compared using the area under the receiver operating characteristic curve (AUC). Results A total of 269,999 patient admissions were included, which resulted in 16,452 outcomes. Overall, trends increased accuracy compared to a model containing only current vital signs (AUC 0.78 vs. 0.74; p<0.001). The methods that resulted in the greatest average increase in accuracy were the vital sign slope (AUC improvement 0.013) and minimum value (AUC improvement 0.012), while the change from the previous value resulted in an average worsening of the AUC (change in AUC −0.002). The AUC increased most for systolic blood pressure when trends were added (AUC improvement 0.05). Conclusion Vital sign trends increased the accuracy of models designed to detect critical illness on the wards. Our findings have important implications for clinicians at the bedside and for the development of early warning scores. PMID:26898412
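As a schematic of how trend features such as the slope and window minimum can be added to a current-value model and compared by AUC, here is a minimal Python sketch; the toy data, look-back window, and logistic model are assumptions, not the study's derivation pipeline.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n, t = 5000, 6                      # patients, hourly observations each
    sbp = rng.normal(120, 15, (n, t))   # toy systolic blood pressure series
    y = (sbp[:, -1] + rng.normal(0, 5, n) < 95).astype(int)  # toy outcome

    current = sbp[:, -1:]                                    # current value only
    slope = np.polyfit(np.arange(t), sbp.T, 1)[0][:, None]   # per-patient slope
    minimum = sbp.min(axis=1, keepdims=True)                 # window minimum
    with_trends = np.hstack([current, slope, minimum])

    for name, X in [("current only", current), ("with trends", with_trends)]:
        Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
        p = LogisticRegression().fit(Xtr, ytr).predict_proba(Xte)[:, 1]
        print(name, "AUC = %.3f" % roc_auc_score(yte, p))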
Lee, Jack; Zee, Benny Chung Ying; Li, Qing
2013-01-01
Diabetic retinopathy is a major cause of blindness. Proliferative diabetic retinopathy results from severe vascular complications and is visible as neovascularization of the retina. Automatic detection of such new vessels would be useful for severity grading of diabetic retinopathy, and it is an important part of the screening process to identify those who may require immediate treatment. We propose a novel new-vessel detection method that combines statistical texture analysis (STA), high-order spectrum analysis (HOS), and fractal analysis (FA); most importantly, we show that incorporating their associated interactions can greatly improve detection accuracy. To assess performance, the sensitivity, specificity, and accuracy are obtained: 96.3%, 99.1%, and 98.5%, respectively, with an AUC of 99.3%. The proposed method improves the accuracy of new-vessel detection significantly over previous methods. The algorithm can be automated and is valuable for detecting relatively severe cases of diabetic retinopathy among diabetes patients.
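To make the interaction idea concrete, a hedged Python sketch: three per-image scores standing in for the STA, HOS, and FA features are combined with their pairwise products in a single classifier. The toy data and logistic model are illustrative assumptions, not the paper's pipeline.

    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LogisticRegression

    X = np.random.default_rng(2).normal(size=(300, 3))   # [STA, HOS, FA] scores
    y = (X @ [1.0, 0.8, 0.6] + 0.9 * X[:, 0] * X[:, 1] > 0).astype(int)

    # degree-2 interaction terms add products such as STA*HOS to the inputs
    Xi = PolynomialFeatures(degree=2, interaction_only=True,
                            include_bias=False).fit_transform(X)
    print(LogisticRegression().fit(Xi, y).score(Xi, y))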
Self-assembled monolayer and method of making
Fryxell, Glen E [Kennewick, WA; Zemanian, Thomas S [Richland, WA; Liu, Jun [West Richland, WA; Shin, Yongsoon [Richland, WA
2003-03-11
According to the present invention, the previously known functional material, a self-assembled monolayer on a substrate, has a plurality of assembly molecules, each with an assembly atom having a plurality of bonding sites (four sites when silicon is the assembly atom). The bonding fraction of fully bonded assembly atoms (those with every bonding site bonded to an oxygen atom) has a maximum when the material is made by liquid solution deposition, for example a maximum of 40% when silicon is the assembly atom, with a maximum surface density of 5 silanes per square nanometer. Note that bonding fraction and surface population are independent parameters. The method of the present invention is an improvement to the known method for making a siloxane layer on a substrate: instead of liquid-phase solution chemistry, the improvement uses supercritical-phase chemistry. The present invention has the advantages of a greater fraction of oxygen bonds, a greater surface density of assembly molecules, and a reduced reaction time of about 5 minutes to about 24 hours.
Yu, Hualong; Ni, Jun
2014-01-01
Training classifiers on skewed data is technically challenging, and the task becomes even more difficult when the data are also high-dimensional. Skewed data of this kind appear frequently in the biomedical field. In this study, we address the problem by combining the asymmetric bagging ensemble classifier (asBagging) presented in previous work with an improved random subspace (RS) generation strategy called feature subspace (FSS). Specifically, FSS is a novel method to promote the balance between accuracy and diversity of the base classifiers in asBagging. In view of the strong generalization capability of the support vector machine (SVM), we adopt it as the base classifier. Extensive experiments on four benchmark biomedical data sets indicate that the proposed ensemble learning method outperforms many baseline approaches in terms of the Accuracy, F-measure, G-mean, and AUC evaluation criteria, so it can be regarded as an effective and efficient tool for dealing with high-dimensional and imbalanced biomedical data.
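A minimal Python sketch of the ensemble's two ingredients, asymmetric bagging plus random feature subspaces with SVM base classifiers; the paper's FSS strategy is more elaborate than the plain random subspace sampling assumed here.

    import numpy as np
    from sklearn.svm import SVC

    def as_bagging_rs(X, y, n_estimators=25, subspace=0.2, seed=0):
        rng = np.random.default_rng(seed)
        minority = np.flatnonzero(y == 1)
        majority = np.flatnonzero(y == 0)
        models = []
        for _ in range(n_estimators):
            # asymmetric bagging: keep every minority sample, bootstrap an
            # equally sized subset of the majority class
            maj = rng.choice(majority, size=minority.size, replace=True)
            idx = np.concatenate([minority, maj])
            # random subspace: train each base SVM on a fraction of features
            k = max(1, int(subspace * X.shape[1]))
            feats = rng.choice(X.shape[1], size=k, replace=False)
            clf = SVC(kernel="linear").fit(X[np.ix_(idx, feats)], y[idx])
            models.append((clf, feats))
        return models

    def predict(models, X):
        votes = np.mean([m.predict(X[:, f]) for m, f in models], axis=0)
        return (votes >= 0.5).astype(int)    # majority vote of base SVMs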
Learning and tuning fuzzy logic controllers through reinforcements
NASA Technical Reports Server (NTRS)
Berenji, Hamid R.; Khedkar, Pratap
1992-01-01
A new method for learning and tuning a fuzzy logic controller based on reinforcements from a dynamic system is presented. In particular, our Generalized Approximate Reasoning-based Intelligent Control (GARIC) architecture: (1) learns and tunes a fuzzy logic controller even when only a weak reinforcement, such as a binary failure signal, is available; (2) introduces a new conjunction operator for computing the rule strengths of fuzzy control rules; (3) introduces a new localized mean of maximum (LMOM) method for combining the conclusions of several firing control rules; and (4) learns to produce real-valued control actions. Learning is achieved by integrating fuzzy inference into a feedforward network, which can then adaptively improve performance by using gradient descent methods. We extend the AHC algorithm of Barto, Sutton, and Anderson to include the prior control knowledge of human operators. The GARIC architecture is applied to a cart-pole balancing system and has demonstrated significant improvements over previous schemes for cart-pole balancing, in terms of both the speed of learning and robustness to changes in the dynamic system's parameters.
Berger, Jason; Upton, Colin; Springer, Elyah
2018-04-23
Visualization of nitrite residues is essential in gunshot distance determination. Current protocols for the detection of nitrites include, among other tests, the Modified Griess Test (MGT). This method is limited because nitrite residues are unstable in the environment and restricted to partially burned gunpowder. Previous research demonstrated the ability of alkaline hydrolysis to convert nitrates to nitrites, allowing visualization of unburned gunpowder particles using the MGT. This is referred to as Total Nitrite Pattern Visualization (TNV). TNV techniques were modified, and a study was conducted to streamline the procedure outlined in the literature, to maximize the efficacy of the TNV in casework while reducing the required time from 1 h to 5 min and enhancing effectiveness on blood-soiled samples. The TNV method was found to provide a significant improvement in the ability to detect nitrite residues, without sacrificing efficiency, allowing determination of the muzzle-to-target distance. © 2018 American Academy of Forensic Sciences.
Crystalline phases by an improved gradient expansion technique
NASA Astrophysics Data System (ADS)
Carignano, S.; Mannarelli, M.; Anzuini, F.; Benhar, O.
2018-02-01
We develop an innovative technique for studying inhomogeneous phases with a spontaneous broken symmetry. The method relies on the knowledge of the exact form of the free energy in the homogeneous phase and on a specific gradient expansion of the order parameter. We apply this method to quark matter at vanishing temperature and large chemical potential, which is expected to be relevant for astrophysical considerations. The method is remarkably reliable and fast as compared to performing the full numerical diagonalization of the quark Hamiltonian in momentum space and is designed to improve the standard Ginzburg-Landau expansion close to the phase transition points. For definiteness, we focus on inhomogeneous chiral symmetry breaking, accurately reproducing known results for one-dimensional and two-dimensional modulations and examining novel crystalline structures, as well. Consistently with previous results, we find that the energetically favored modulation is the so-called one-dimensional real-kink crystal. We propose a qualitative description of the pairing mechanism to motivate this result.
NASA Astrophysics Data System (ADS)
Shi, Xiaoyu; Shang, Ming-Sheng; Luo, Xin; Khushnood, Abbas; Li, Jian
2017-02-01
With the explosive growth of the Internet economy, recommender systems have become an important technology for addressing information overload. However, recommenders are not one-size-fits-all: different recommenders have different virtues, making them suitable for different users. In this paper, we propose a novel personalized recommender framework based on user preferences, which allows multiple recommenders to coexist in an E-commerce system. We find that the output presented to a given user differs considerably across recommenders, and that recommendation accuracy can be significantly improved if each user is assigned his/her optimal personalized recommender. Furthermore, unlike previous works that focus on short-term effects, we also evaluate the long-term effect of the proposed method by modeling the evolution of the mutual feedback between users and the online system. Finally, compared with a single recommender running on the online system, the proposed method improves recommendation accuracy significantly and achieves better trade-offs between the short- and long-term performance of recommendation.
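The core assignment step can be phrased in a few lines of Python; the interfaces below (a per-user accuracy callback over held-out interactions) are hypothetical, and the paper additionally models the long-term user-system feedback loop.

    def assign_recommenders(users, recommenders, heldout, accuracy):
        # accuracy(rec, user, heldout) -> score of recommender rec for user,
        # measured on that user's held-out interactions
        return {u: max(recommenders, key=lambda r: accuracy(r, u, heldout))
                for u in users}    # user -> personally optimal recommender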
Antarctic ice shelf thickness from CryoSat-2 radar altimetry
NASA Astrophysics Data System (ADS)
Chuter, Stephen; Bamber, Jonathan
2016-04-01
The Antarctic ice shelves provide buttressing to the inland grounded ice sheet and therefore play a controlling role in regulating ice dynamics and mass imbalance. Accurate knowledge of ice shelf thickness is essential for input-output method mass balance calculations, sub-ice shelf ocean models and buttressing parameterisations in ice sheet models. Ice shelf thickness has previously been inferred from satellite altimetry elevation measurements using the assumption of hydrostatic equilibrium, as direct measurements of ice thickness do not provide the spatial coverage necessary for these applications. The sensor limitations of previous radar altimeters have led to poor data coverage and a lack of accuracy, particularly in the grounding zone where a break in slope exists. We present a new ice shelf thickness dataset using four years (2011-2014) of CryoSat-2 elevation measurements, with its SARIn dual-antenna mode of operation alleviating the issues affecting previous sensors. These improvements, together with the dense across-track spacing of the satellite, have resulted in ~92% coverage of the ice shelves, with substantial improvements, for example, of over 50% across the Venable and Totten Ice Shelves in comparison to the previous dataset. Significant improvements in coverage and accuracy are also seen south of 81.5° for the Ross and Filchner-Ronne Ice Shelves. Validation of the surface elevation measurements, used to derive ice thickness, against NASA ICESat laser altimetry data shows a mean bias of less than 1 m (equivalent to less than 9 m in ice thickness) and a fourfold decrease in standard deviation in comparison to the previous continental dataset. Importantly, the most substantial improvements are found in the grounding zone. Validation of the derived thickness data has been carried out using multiple Radio Echo Sounding (RES) campaigns across the continent. Over the Amery Ice Shelf, where extensive RES measurements exist, the mean difference between the datasets is 3.3% across the whole shelf and 4.7% within 10 km of the grounding line. These values represent a two- to threefold improvement in accuracy compared to the previous data product. The impact of these improvements on input-output estimates of mass balance is illustrated for the Abbot Ice Shelf. Our new product shows a mean reduction of 29% in thickness at the grounding line when compared to the previous dataset, as well as the elimination of non-physical 'data spikes' that were prevalent in the previous product in areas of complex terrain. The reduction in grounding line thickness equates to a change in mass balance for this area from -14±9 Gt yr^-1 to -4±9 Gt yr^-1; the updated estimate is more consistent with the positive surface elevation rate observed in this region from satellite altimetry. We also show examples from other sectors, including the Getz and George VI Ice Shelves. The new thickness dataset will greatly reduce the uncertainty in input-output estimates of mass balance for the ~30% of the grounding line of Antarctica where direct ice thickness measurements do not exist.
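The hydrostatic-equilibrium inversion at the heart of such thickness retrievals can be sketched in a few lines of Python; the density values and fixed firn-air correction below are typical assumed numbers, not the calibrated quantities used for this dataset.

    def shelf_thickness(elevation_m, firn_air_m=15.0,
                        rho_ice=917.0, rho_sea=1027.0):
        # Hydrostatic equilibrium: H = (e - d_firn) * rho_sea / (rho_sea - rho_ice),
        # after removing firn air so the column behaves as solid ice
        return (elevation_m - firn_air_m) * rho_sea / (rho_sea - rho_ice)

    print(shelf_thickness(60.0))   # about 420 m of ice for 60 m of freeboard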
How blockchain-timestamped protocols could improve the trustworthiness of medical science
Irving, Greg; Holden, John
2017-01-01
Trust in scientific research is diminished by evidence that data are being manipulated. Outcome switching, data dredging and selective publication are some of the problems that undermine the integrity of published research. Methods for using blockchain to provide proof of pre-specified endpoints in clinical trial protocols were first reported by Carlisle. We wished to empirically test such an approach using a clinical trial protocol where outcome switching has previously been reported. Here we confirm the use of blockchain as a low cost, independently verifiable method to audit and confirm the reliability of scientific studies. PMID:27239273
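The mechanism being tested reduces to committing a cryptographic digest of the protocol before the trial and re-deriving it afterwards; a minimal Python sketch follows (the file name is hypothetical, and the on-chain transaction that timestamps the digest is outside the sketch).

    import hashlib

    def protocol_digest(path):
        # SHA-256 digest of the exact protocol document, byte for byte
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    # Before the trial: publish protocol_digest("protocol_v1.pdf") in a
    # blockchain transaction. Afterwards: recompute the digest of the
    # archived protocol and verify it equals the timestamped value.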
Measuring Distance of Fuzzy Numbers by Trapezoidal Fuzzy Numbers
NASA Astrophysics Data System (ADS)
Hajjari, Tayebeh
2010-11-01
Fuzzy numbers, and more generally linguistic values, are approximate assessments given by experts and accepted by decision-makers when obtaining a more accurate value is impossible or unnecessary. The distance between two fuzzy numbers plays an important role in linguistic decision-making, and it is reasonable to define a fuzzy distance between fuzzy objects. To achieve this aim, the researcher presents a new distance measure for fuzzy numbers by means of an improved centroid distance method. The metric properties are also studied. The advantage is that the calculation of the proposed method is far simpler than that of previous approaches.
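For a trapezoidal fuzzy number (a, b, c, d) with support [a, d] and core [b, c], one standard centroid construction gives the distance sketched below in Python; this illustrates the general centroid-distance idea only, since the paper's improved method refines this construction.

    def centroid_x(a, b, c, d):
        if a == b == c == d:               # degenerate (crisp) number
            return float(a)
        return (d*d + c*c + c*d - a*a - b*b - a*b) / (3.0 * (c + d - a - b))

    def centroid_distance(t1, t2):
        return abs(centroid_x(*t1) - centroid_x(*t2))

    print(centroid_distance((1, 2, 3, 4), (2, 3, 4, 5)))   # 1.0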
Accurate proteome-wide protein quantification from high-resolution 15N mass spectra
2011-01-01
In quantitative mass spectrometry-based proteomics, the metabolic incorporation of a single source of 15N-labeled nitrogen has many advantages over using stable isotope-labeled amino acids. However, the lack of a robust computational framework for analyzing the resulting spectra has impeded wide use of this approach. We have addressed this challenge by introducing a new computational methodology for analyzing 15N spectra in which quantification is integrated with identification. Application of this method to an Escherichia coli growth transition reveals significant improvement in quantification accuracy over previous methods. PMID:22182234
Differential equations as a tool for community identification.
Krawczyk, Małgorzata J
2008-06-01
We consider the task of identifying cluster structure in random networks. The results of two methods are presented: (i) the Newman algorithm [M. E. J. Newman and M. Girvan, Phys. Rev. E 69, 026113 (2004)]; and (ii) our method based on differential equations. A series of computer experiments is performed to check whether these methods allow us to determine the structure of the network. The trial networks initially consist of well-defined clusters and are disturbed by introducing noise into their connectivity matrices. Further, we show that the previous version of our method can be improved by an appropriate choice of the threshold parameter beta. With this change, the results obtained by the two methods are similar, and our method works better in all the computer experiments we have done.
Resolved-particle simulation by the Physalis method: Enhancements and new capabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sierakowski, Adam J., E-mail: sierakowski@jhu.edu; Prosperetti, Andrea; Faculty of Science and Technology and J.M. Burgers Centre for Fluid Dynamics, University of Twente, P.O. Box 217, 7500 AE Enschede
2016-03-15
We present enhancements and new capabilities of the Physalis method for simulating disperse multiphase flows using particle-resolved simulation. The current work enhances the previous method by incorporating a new type of pressure-Poisson solver that couples with a new Physalis particle pressure boundary condition scheme and a new particle interior treatment to significantly improve overall numerical efficiency. Further, we implement a more efficient method of calculating the Physalis scalar products and incorporate short-range particle interaction models. We provide validation and benchmarking for the Physalis method against experiments of a sedimenting particle and of normal wall collisions. We conclude with an illustrative simulation of 2048 particles sedimenting in a duct. In the appendix, we present a complete and self-consistent description of the analytical development and numerical methods.
Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers
García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta
2016-01-01
The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine. PMID:28773653
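A compressed Python sketch of the classification idea: learn the stable/unstable regions of the (rock mass rating, span) plane from case histories with an SVM. The toy stability rule and data are assumptions standing in for the case-history database; the paper also evaluates an extreme learning machine.

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    rmr = rng.uniform(20, 90, 200)            # rock mass rating of each case
    span = rng.uniform(2, 40, 200)            # excavation span in metres
    stable = (span < 0.5 * rmr).astype(int)   # toy stand-in stability rule

    clf = SVC(kernel="rbf").fit(np.column_stack([rmr, span]), stable)
    print(clf.predict([[60.0, 25.0]]))        # classify a new design point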
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground truth. When the available ground truth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived-similarity ground truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-point Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived-similarity ground truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
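A simplified Python stand-in for the margin-based weight learning: each visual word gets a nonnegative weight, updated whenever a similar pair is too far apart or a dissimilar pair too close. The hinge form and learning rate are assumptions made for the sketch.

    import numpy as np

    def learn_weights(sig, pairs, labels, lr=0.01, margin=1.0, epochs=100):
        # sig: (n_videos, n_words) signatures; pairs: (i, j) index tuples;
        # labels: +1 for perceived-similar pairs, -1 for dissimilar pairs
        w = np.ones(sig.shape[1])
        for _ in range(epochs):
            for (i, j), y in zip(pairs, labels):
                diff2 = (sig[i] - sig[j]) ** 2
                d = w @ diff2                     # weighted squared distance
                if y * (d - margin) > 0:          # margin violated
                    w -= lr * y * diff2
                    np.clip(w, 0.0, None, out=w)  # keep weights nonnegative
        return w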
Measuring global monopole velocities, one by one
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lopez-Eiguren, Asier; Urrestilla, Jon; Achúcarro, Ana, E-mail: asier.lopez@ehu.eus, E-mail: jon.urrestilla@ehu.eus, E-mail: achucar@lorentz.leidenuniv.nl
We present an estimation of the average velocity of a network of global monopoles in a cosmological setting using large numerical simulations. In order to obtain the value of the velocity, we improve some already known methods and present a new one. This new method estimates individual global monopole velocities in a network by detecting each monopole's position in the lattice and following the path described by each one of them. Using our new estimate we can settle an open question previously posed in the literature: velocity-dependent one-scale (VOS) models for global monopoles predict two branches of scaling solutions, one with monopoles moving at subluminal speeds and one with monopoles moving at luminal speeds. Previous attempts to estimate monopole velocities had large uncertainties and were not able to settle that question. Our simulations find no evidence of a luminal branch. We also estimate the values of the parameters of the VOS model. With our new method we can also study the microphysics of the complicated dynamics of individual monopoles. Finally we use our large simulation volume to compare the results from the different estimator methods, as well as to assess the validity of the numerical approximations made.
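The essence of the per-monopole estimator can be sketched in Python: detect monopole positions at consecutive timesteps, match each monopole to its nearest successor, and difference the positions. Periodic boundaries and the actual lattice detection are omitted as simplifying assumptions of the sketch.

    import numpy as np

    def monopole_velocities(pos_t0, pos_t1, dt):
        # pos_t0, pos_t1: (n, 3) monopole coordinates at consecutive times
        v = []
        for p in pos_t0:
            j = np.argmin(np.linalg.norm(pos_t1 - p, axis=1))  # nearest match
            v.append((pos_t1[j] - p) / dt)
        return np.array(v)        # one velocity vector per tracked monopole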
NASA Astrophysics Data System (ADS)
Dai, Mingzhi; Khan, Karim; Zhang, Shengnan; Jiang, Kemin; Zhang, Xingye; Wang, Weiliang; Liang, Lingyan; Cao, Hongtao; Wang, Pengjun; Wang, Peng; Miao, Lijing; Qin, Haiming; Jiang, Jun; Xue, Lixin; Chu, Junhao
2016-06-01
Sub-gap density of states (DOS) is a key parameter affecting the electrical characteristics of semiconductor-material-based transistors in integrated circuits. Previous spectroscopic methodologies for DOS extraction include static methods, temperature-dependent spectroscopy and photonic spectroscopy. However, these can involve many assumptions and calculations, and temperature or optical effects can perturb the intrinsic distribution of the DOS along the bandgap of the material. Here, a direct and simpler method is developed to extract the DOS distribution of amorphous oxide-based thin-film transistors (TFTs) from dual gate pulse spectroscopy (GPS), introducing fewer extrinsic factors such as temperature and less laborious numerical analysis than conventional methods. From this direct measurement, the sub-gap DOS distribution shows a peak value at the band-gap edge, on the order of 10^17-10^21/(cm^3·eV), which is consistent with previous results. The results can be described with a model involving both Gaussian and exponential components. This tool is useful as a diagnostic for the electrical properties of oxide materials, and this study will benefit their modeling and the improvement of their electrical properties, thus broadening their applications.
Neville, David C A; Coquard, Virginie; Priestman, David A; te Vruchte, Danielle J M; Sillence, Daniel J; Dwek, Raymond A; Platt, Frances M; Butters, Terry D
2004-08-15
Interest in cellular glycosphingolipid (GSL) function has necessitated the development of a rapid and sensitive method to both analyze and characterize the full complement of structures present in various cells and tissues. An optimized method to characterize oligosaccharides released from glycosphingolipids following ceramide glycanase digestion has been developed. The procedure uses the fluorescent compound anthranilic acid (2-aminobenzoic acid; 2-AA) to label oligosaccharides prior to analysis using normal-phase high-performance liquid chromatography. The labeling procedure is rapid, selective, and easy to perform and is based on the published method of Anumula and Dhume [Glycobiology 8 (1998) 685], originally used to analyze N-linked oligosaccharides. It is less time consuming than a previously published 2-aminobenzamide labeling method [Anal. Biochem. 298 (2001) 207] for analyzing GSL-derived oligosaccharides, as the fluorescent labeling is performed on the enzyme reaction mixture. The purification of 2-AA-labeled products has been improved to ensure recovery of oligosaccharides containing one to four monosaccharide units, which was not previously possible using the Anumula and Dhume post-derivatization purification procedure. This new approach may also be used to analyze both N- and O-linked oligosaccharides.
PolyaPeak: Detecting Transcription Factor Binding Sites from ChIP-seq Using Peak Shape Information
Wu, Hao; Ji, Hongkai
2014-01-01
ChIP-seq is a powerful technology for detecting genomic regions where a protein of interest interacts with DNA. ChIP-seq data for mapping transcription factor binding sites (TFBSs) have a characteristic pattern: around each binding site, sequence reads aligned to the forward and reverse strands of the reference genome form two separate peaks shifted away from each other, and the true binding site is located between these two peaks. While it has been shown previously that the accuracy and resolution of binding site detection can be improved by modeling this pattern, efficient methods are unavailable to fully utilize that information in the TFBS detection procedure. We present PolyaPeak, a new method that improves TFBS detection by incorporating peak shape information. PolyaPeak describes peak shapes using a flexible Pólya model. The shapes are automatically learnt from the data using a Minorization-Maximization (MM) algorithm, then integrated with the read count information via a hierarchical model to distinguish true binding sites from background noise. Extensive real data analyses show that PolyaPeak is capable of robustly improving TFBS detection compared with existing methods. An R package is freely available. PMID:24608116
Molecular Dynamics Information Improves cis-Peptide-Based Function Annotation of Proteins.
Das, Sreetama; Bhadra, Pratiti; Ramakumar, Suryanarayanarao; Pal, Debnath
2017-08-04
cis-Peptide bonds, whose occurrence in proteins is rare but evolutionarily conserved, are implicated to play an important role in protein function. This has led to their previous use in a homology-independent, fragment-match-based protein function annotation method. However, proteins are not static molecules; dynamics is integral to their activity. This is nicely epitomized by the geometric isomerization of cis-peptide to trans form for molecular activity. Hence we have incorporated both static (cis-peptide) and dynamics information to improve the prediction of protein molecular function. Our results show that cis-peptide information alone cannot detect functional matches in cases where cis-trans isomerization exists but 3D coordinates have been obtained for only the trans isomer or when the cis-peptide bond is incorrectly assigned as trans. On the contrary, use of dynamics information alone includes false-positive matches for cases where fragments with similar secondary structure show similar dynamics, but the proteins do not share a common function. Combining the two methods reduces errors while detecting the true matches, thereby enhancing the utility of our method in function annotation. A combined approach, therefore, opens up new avenues of improving existing automated function annotation methodologies.
Model-based ultrasound temperature visualization during and following HIFU exposure.
Ye, Guoliang; Smith, Penny Probert; Noble, J Alison
2010-02-01
This paper describes the application of signal processing techniques to improve the robustness of ultrasound feedback for displaying changes in temperature distribution during treatment with high-intensity focused ultrasound (HIFU), especially at the low signal-to-noise ratios that might be expected in in vivo abdominal treatment. Temperature estimation is based on the local displacements in ultrasound images taken during HIFU treatment, and a method to improve robustness to outliers is introduced. The main contribution of the paper is the application of a Kalman filter, a statistical signal processing technique, which uses a simple analytical temperature model of heat dispersion to improve the temperature estimation from the ultrasound measurements during and after HIFU exposure. To reduce the sensitivity of the method to prior assumptions about material homogeneity and signal-to-noise ratio, an adaptive form is introduced. The method is illustrated using data from HIFU exposure of ex vivo bovine liver. A particular advantage of the stability it introduces is that the temperature can be visualized not only in the intervals between HIFU exposures but also, for some configurations, during the exposure itself. © 2010 World Federation for Ultrasound in Medicine & Biology. Published by Elsevier Inc. All rights reserved.
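The filtering step lends itself to a compact Python sketch: a scalar Kalman filter per voxel whose prediction relaxes the temperature toward ambient (a crude heat-dispersion stand-in) and whose update blends in the displacement-derived measurement. All constants are illustrative assumptions, not the paper's tuned values.

    def kalman_temperature(measurements, T0=37.0, T_amb=37.0,
                           decay=0.05, q=0.5, r=4.0):
        T, P = T0, 1.0                      # state estimate and its variance
        out = []
        for z in measurements:
            T = T + decay * (T_amb - T)     # predict: relax toward ambient
            P = (1 - decay) ** 2 * P + q    # inflate variance by process noise q
            K = P / (P + r)                 # Kalman gain, measurement noise r
            T = T + K * (z - T)             # correct with ultrasound estimate z
            P = (1 - K) * P
            out.append(T)
        return out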
Matthews, Jennifer L; Murphy, Joy M; Carmichael, Carrie; Yang, Huiping; Tiersch, Terrence; Westerfield, Monte; Varga, Zoltan M
2018-01-25
Sperm cryopreservation is a highly efficient method for preserving genetic resources. It extends the reproductive period of males and significantly reduces costs normally associated with maintenance of live animal colonies. However, previous zebrafish (Danio rerio) cryopreservation methods have produced variable outcomes and low post-thaw fertilization rates. To improve post-thaw fertilization rates after cryopreservation, we developed a new extender and cryoprotective medium (CPM), introduced quality assessment (QA), determined the optimal cooling rate, and improved the post-thaw in vitro fertilization process. We found that the hypertonic extender E400 preserved motility of sperm held on ice for at least 6 h. We implemented QA by measuring sperm cell densities with a NanoDrop spectrophotometer and sperm motility with computer-assisted sperm analysis (CASA). We developed a CPM, RMMB, which contains raffinose, skim milk, methanol, and bicine buffer. Post-thaw motility indicated that the optimal cooling rate in two types of cryogenic vials was between 10 and 15°C/min. Test thaws from this method produced average motility of 20% ± 13% and an average post-thaw fertilization rate of 68% ± 16%.
Trakoolwilaiwan, Thanawin; Behboodi, Bahareh; Lee, Jaeseok; Kim, Kyungsoo; Choi, Ji-Woong
2018-01-01
The aim of this work is to develop an effective brain-computer interface (BCI) method based on functional near-infrared spectroscopy (fNIRS). Improving the accuracy of a BCI system requires both the ability to discriminate features from the input signals and proper classification. Previous studies have mainly extracted features from the signal manually, but proper features need to be selected carefully. To avoid the performance degradation caused by manual feature selection, we applied convolutional neural networks (CNNs) as the automatic feature extractor and classifier for fNIRS-based BCI. In this study, the hemodynamic responses evoked by performing rest, right-, and left-hand motor execution tasks were measured in eight healthy subjects to compare performances. Our CNN-based method provided improvements in classification accuracy over conventional methods employing the most commonly used features of mean, peak, slope, variance, kurtosis, and skewness, classified by support vector machine (SVM) and artificial neural network (ANN). Specifically, up to 6.49% and 3.33% improvements in classification accuracy were achieved by CNN compared with SVM and ANN, respectively.
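A minimal PyTorch sketch of a 1-D CNN of this kind (layer sizes, channel count, and sequence length are illustrative assumptions, not the study's architecture):

    import torch
    import torch.nn as nn

    class FnirsCNN(nn.Module):
        def __init__(self, n_channels=20, n_classes=3):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=5), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5), nn.ReLU(),
                nn.AdaptiveAvgPool1d(1), nn.Flatten(),
                nn.Linear(64, n_classes),   # rest / right-hand / left-hand
            )

        def forward(self, x):               # x: (batch, channels, samples)
            return self.net(x)

    logits = FnirsCNN()(torch.randn(8, 20, 100))   # 8 trials of 100 samples
    print(logits.shape)                            # torch.Size([8, 3])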
Statistical Coupling Analysis-Guided Library Design for the Discovery of Mutant Luciferases.
Liu, Mira D; Warner, Elliot A; Morrissey, Charlotte E; Fick, Caitlyn W; Wu, Taia S; Ornelas, Marya Y; Ochoa, Gabriela V; Zhang, Brendan S; Rathbun, Colin M; Porterfield, William B; Prescher, Jennifer A; Leconte, Aaron M
2018-02-06
Directed evolution has proven to be an invaluable tool for protein engineering; however, there is still a need for developing new approaches to continue to improve the efficiency and efficacy of these methods. Here, we demonstrate a new method for library design that applies a previously developed bioinformatic method, Statistical Coupling Analysis (SCA). SCA uses homologous enzymes to identify amino acid positions that are mutable and functionally important and engage in synergistic interactions between amino acids. We use SCA to guide a library of the protein luciferase and demonstrate that, in a single round of selection, we can identify luciferase mutants with several valuable properties. Specifically, we identify luciferase mutants that possess both red-shifted emission spectra and improved stability relative to those of the wild-type enzyme. We also identify luciferase mutants that possess a >50-fold change in specificity for modified luciferins. To understand the mutational origin of these improved mutants, we demonstrate the role of mutations at N229, S239, and G246 in altered function. These studies show that SCA can be used to guide library design and rapidly identify synergistic amino acid mutations from a small library.
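A heavily simplified Python stand-in for coupling analysis on an alignment of homologs; real SCA applies conservation weighting and spectral decomposition, so this covariance sketch only conveys the flavor of how coupled position pairs are scored.

    import numpy as np

    def coupling_matrix(msa):
        # msa: (n_sequences, n_positions) integer-encoded residues in 0..20
        n_seq, n_pos = msa.shape
        onehot = (msa[:, :, None] == np.arange(21)).astype(float)
        cov = np.cov(onehot.reshape(n_seq, -1), rowvar=False)
        cov = cov.reshape(n_pos, 21, n_pos, 21)
        return np.linalg.norm(cov, axis=(1, 3))   # coupling score per position pair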
Li, Xueqi; Woodman, Michael; Wang, Selina C
2015-08-01
Pheophytins and pyropheophytin are degradation products of chlorophyll pigments, and their ratios can be used as a sensitive indicator of stress during the manufacturing and storage of olive oil. They increase over time depending on the storage conditions and when the oil is exposed to heat treatments during the refining process. The traditional analysis method includes solvent- and time-consuming steps of solid-phase extraction followed by analysis by high-performance liquid chromatography with ultraviolet detection. We developed an improved dilute/fluorescence method in which the multi-step sample preparation is replaced by a simple isopropanol dilution before the high-performance liquid chromatography injection. A quaternary solvent gradient method was used to include a fourth, strong-solvent wash on a quaternary gradient pump, which avoided the need to premix any solvents and greatly reduced the oil residues left on the column from previous analyses. This new method not only reduces analysis cost and time but also shows reliability, repeatability, and improved sensitivity, which is especially important for low-level samples. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.