A simple three dimensional wide-angle beam propagation method
NASA Astrophysics Data System (ADS)
Ma, Changbao; van Keuren, Edward
2006-05-01
The development of three dimensional (3-D) waveguide structures for chip scale planar lightwave circuits (PLCs) is hampered by the lack of effective 3-D wide-angle (WA) beam propagation methods (BPMs). We present a simple 3-D wide-angle beam propagation method (WA-BPM) using Hoekstra’s scheme along with a new 3-D wave equation splitting method. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation and comparing them with analytical solutions.
Applying the Multiple Signal Classification Method to Silent Object Detection Using Ambient Noise
NASA Astrophysics Data System (ADS)
Mori, Kazuyoshi; Yokoyama, Tomoki; Hasegawa, Akio; Matsuda, Minoru
2004-05-01
The revolutionary concept of actively exploiting ocean ambient noise to detect objects, called acoustic daylight imaging, has attracted much attention. The authors attempted the detection of a silent target object using ambient noise and a wide-band beam former consisting of an array of receivers. In experiments in air using the wide-band beam former, we successfully applied the delay-sum array (DSA) method to detect a silent target object in an acoustic noise field generated by a large number of transducers. This paper reports experimental results obtained by applying the multiple signal classification (MUSIC) method to a wide-band beam former to detect silent targets. The ocean ambient noise was simulated by transducers distributed over many points in air. Both MUSIC and DSA detected a spherical target object in the noise field. The relative power levels near the target obtained with MUSIC were compared with those obtained by DSA, and the effectiveness of the MUSIC method was evaluated according to the rate of increase in the maximum and minimum relative power levels.
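The MUSIC step described above can be sketched in a few lines: form the array covariance, split off the noise subspace, and scan a steering vector for pseudospectrum peaks. This is a minimal narrowband illustration with a simulated uniform linear array, not the authors' wide-band experimental setup; the array geometry, source angle, and noise level are all assumptions.

```python
import numpy as np

# Narrowband MUSIC sketch: one source, uniform linear array,
# half-wavelength element spacing (all values illustrative).
rng = np.random.default_rng(0)
M, snapshots, theta_true = 8, 200, 20.0   # sensors, samples, source angle (deg)

def steering(theta_deg, M):
    # Steering vector for half-wavelength spacing
    m = np.arange(M)
    return np.exp(1j * np.pi * m * np.sin(np.radians(theta_deg)))

a = steering(theta_true, M)
s = rng.standard_normal(snapshots) + 1j * rng.standard_normal(snapshots)
noise = 0.1 * (rng.standard_normal((M, snapshots))
               + 1j * rng.standard_normal((M, snapshots)))
X = np.outer(a, s) + noise                # received data, shape (M, snapshots)
R = X @ X.conj().T / snapshots            # sample covariance
w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, :-1]                            # noise subspace (one source assumed)

angles = np.arange(-90, 90.5, 0.5)
P = [1.0 / np.linalg.norm(En.conj().T @ steering(t, M)) ** 2 for t in angles]
est = angles[int(np.argmax(P))]           # pseudospectrum peak = arrival angle
print(est)
```

The pseudospectrum peaks where the steering vector is orthogonal to the noise subspace, which is the core of the MUSIC idea the paper applies.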
A Design Method for a State Feedback Microcomputer Controller of a Wide Bandwidth Analog Plant.
1983-12-01
Naval Postgraduate School, Monterey, California. Thesis. A design method for a state feedback microcomputer controller of a wide bandwidth...of a microcomputer regulator, continuous or discrete methods can be applied. The objective of this thesis is to provide a continuous controller ...estimation and control type problem. In this thesis, a wide bandwidth analog computer system is chosen as the plant so that the effect of transport
Flotation of Mineral and Dyes: A Laboratory Experiment for Separation Method Molecular Hitchhikers
ERIC Educational Resources Information Center
Rappon, Tim; Sylvestre, Jarrett A.; Rappon, Manit
2016-01-01
Flotation as a method of separation is widely researched and is applied in many industries. It has been used to address a wide range of environmental issues including treatment of wastewater, recovery of heavy metals for recycling, extraction of minerals in mining, and so forth. This laboratory attempts to show how such a simple method can be used…
Acoustic agglomeration methods and apparatus
NASA Technical Reports Server (NTRS)
Barmatz, M. B. (Inventor)
1984-01-01
Methods are described for using acoustic energy to agglomerate fine particles, on the order of one micron in diameter, that are suspended in a gas, producing agglomerates large enough for efficient removal by other techniques. The gas with suspended particles is passed through the length of a chamber while acoustic energy at a resonant chamber mode is applied to set up one or more acoustic standing wave patterns that vibrate the suspended particles and bring them together so they agglomerate. Several widely different frequencies can be applied to efficiently vibrate particles of widely differing sizes. The standing wave pattern can be applied along directions transverse to the flow of the gas. The particles can be made to move in circles by applying acoustic energy in perpendicular directions, with the energy in both directions being of the same wavelength but 90 deg out of phase.
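The resonant chamber modes the patent relies on follow the standard rectangular-cavity formula f = (c/2)·sqrt((nx/Lx)² + (ny/Ly)² + (nz/Lz)²). A small sketch, with chamber dimensions and sound speed chosen purely for illustration:

```python
import math

# Resonant mode frequencies of a rectangular chamber.  The dimensions
# and sound speed below are assumptions for the example, not the patent's.
c = 343.0                      # speed of sound in air, m/s
Lx, Ly, Lz = 0.5, 0.1, 0.1     # chamber dimensions, m (hypothetical)

def mode_freq(nx, ny, nz):
    return (c / 2.0) * math.sqrt((nx / Lx) ** 2 + (ny / Ly) ** 2 + (nz / Lz) ** 2)

# Two widely separated transverse modes could, as the abstract suggests,
# target widely differing particle sizes at the same time.
print(mode_freq(0, 1, 0), mode_freq(0, 3, 0))
```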
ERIC Educational Resources Information Center
Jiang, Yong
2017-01-01
Traditional mathematical methods built around exactitude have limitations when applied to the processing of educational information, because such information is uncertain and imperfect. Alternative mathematical methods, such as grey system theory, have been widely applied in processing incomplete information systems and have proven effective in a number of…
Enhanced Molecular Dynamics Methods Applied to Drug Design Projects.
Ziada, Sonia; Braka, Abdennour; Diharce, Julien; Aci-Sèche, Samia; Bonnet, Pascal
2018-01-01
Nobel Laureate Richard P. Feynman stated: "[…] everything that living things do can be understood in terms of jiggling and wiggling of atoms […]." The importance of computer simulations of macromolecules, which use classical mechanics principles to describe atom behavior, is widely acknowledged, and nowadays they are applied in many fields such as material sciences and drug discovery. With the increase of computing power, molecular dynamics simulations can be applied to understand biological mechanisms at realistic timescales. In this chapter, we share our computational experience, providing a global view of two widely used enhanced molecular dynamics methods for studying protein structure and dynamics through a description of their characteristics and limits, and we give some examples of their applications in drug design. We also discuss the appropriate choice of software and hardware. In a detailed practical procedure, we describe how to set up, run, and analyze two main molecular dynamics methods, the umbrella sampling (US) and the accelerated molecular dynamics (aMD) methods.
Explosion yield estimation from pressure wave template matching
Arrowsmith, Stephen; Bowman, Daniel
2017-01-01
A method for estimating the yield of explosions from shock-wave and acoustic-wave measurements is presented. The method exploits full waveforms by comparing pressure measurements against an empirical stack of prior observations using scaling laws. The approach can be applied to measurements across a wide range of source-to-receiver distances. The method is applied to data from two explosion experiments in different regions, leading to mean relative errors in yield estimates of 0.13 using prior data from the same region, and 0.2 when applied to a new region. PMID:28618805
Palladium-Catalyzed Direct C–H Arylation of Cyclic Enaminones with Aryl Iodides
Yu, Yi-Yun; Bi, Lei
2013-01-01
A ligand-free method for the Pd-catalyzed direct arylation of cyclic enaminones using aryl iodides was developed. This method can be applied to a wide range of cyclic enaminones and aryl iodides with excellent C5-regioselectivity. Using widely available aryl iodides, the generality of this transformation provides easy access to a variety of 3-arylpiperidine structural motifs. PMID:23750615
USDA-ARS?s Scientific Manuscript database
Scab (caused by Venturia effusa) is the most destructive disease of pecan in the southeastern USA. The most widely used method of applying fungicide is with air-blast (AB) sprayers. Aerially (A) applied sprays are also used, but the disease distribution and spray coverage of these two methods has not been c...
NASA Astrophysics Data System (ADS)
Yamamoto, Kazuya; Takaoka, Toshimitsu; Fukui, Hidetoshi; Haruta, Yasuyuki; Yamashita, Tomoya; Kitagawa, Seiichiro
2016-03-01
Thin-film coatings are widely applied to optical lens surfaces to provide an anti-reflection function. In the usual production process, the lens is first manufactured by molding, and the anti-reflection property is then added by thin-film coating. In recent years, as an alternative to thin-film coating, sub-wavelength structures added to the surface of the molding die have been widely studied and developed to maintain anti-reflection performance. Applying a sub-wavelength structure makes the coating process unnecessary and reduces labor costs. Beyond the cost merit, there are technical advantages: the adhesion of a coating depends on the plastic material, so an anti-reflection coating cannot be applied to an arbitrary surface, whereas a sub-wavelength structure avoids both problems. Manufacturing methods for anti-reflection structures fall mainly into two types: one relies on resist patterning, and the other is a mask-less method that requires no patterning. We have developed a new mask-less method that needs no resist patterning, can impart an anti-reflection structure to large-area and curved lens surfaces, and can be expected to find application in various market segments. We report the developed technique and the characteristics of production lenses.
The research of network database security technology based on web service
NASA Astrophysics Data System (ADS)
Meng, Fanxing; Wen, Xiumei; Gao, Liting; Pang, Hui; Wang, Qinglin
2013-03-01
Database technology is one of the most widely applied computer technologies, and its security is becoming more and more important. This paper introduces database security and network database security levels, studies network database security technology with an emphasis on a sub-key encryption algorithm, and applies this algorithm successfully to a campus one-card system. The realization process of the encryption algorithm is discussed; the method can serve as a reference in many fields, particularly in management information system security and e-commerce.
Applying an analytical method to study neutron behavior for dosimetry
NASA Astrophysics Data System (ADS)
Shirazi, S. A. Mousavi
2016-12-01
In this investigation, a new dosimetry process is studied by applying an analytical method. This novel process is associated with human liver tissue, which comprises compounds including water and glycogen. In this study, the organic compounds of the liver are decomposed into their constituent elements based upon the mass percentage and density of each element. The absorbed doses are computed by the analytical method in all constituent elements of the liver tissue. The analytical method is introduced through mathematical equations based on neutron behavior and neutron collision rules. The results show that the absorbed doses converge for neutron energies below 15 MeV. This method can be applied to study the interaction of neutrons in other tissues and to estimate the absorbed dose for a wide range of neutron energies.
Applied Cognitive Task Analysis (ACTA) Methodology
1997-11-01
experience-based cognitive skills. The primary goal of this project was to develop streamlined methods of Cognitive Task Analysis that would fill this need...We have made important progress in this direction. We have developed streamlined methods of Cognitive Task Analysis. Our evaluation study indicates...developed a CD-based stand-alone instructional package, which will make the Applied Cognitive Task Analysis (ACTA) tools widely accessible. A survey of the
Why conventional detection methods fail in identifying the existence of contamination events.
Liu, Shuming; Li, Ruonan; Smith, Kate; Che, Han
2016-04-15
Early warning systems are widely used to safeguard water security, but their effectiveness has raised many questions. To understand why conventional detection methods fail to identify contamination events, this study evaluates the performance of three contamination detection methods using data from a real contamination accident and two artificial datasets constructed using a widely applied contamination data construction approach. Results show that the Pearson correlation Euclidean distance (PE) based detection method performs better for real contamination incidents, while the Euclidean distance method (MED) and linear prediction filter (LPF) method are more suitable for detecting sudden spike-like variations. This analysis revealed why the conventional MED and LPF methods failed to identify the existence of contamination events. The analysis also revealed that the widely used contamination data construction approach is misleading. Copyright © 2016 Elsevier Ltd. All rights reserved.
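A minimal sketch of a Euclidean-distance detector of the kind evaluated here: characterize normal operation by a baseline centroid, then flag samples whose distance from it exceeds a baseline-derived threshold. The data, the 99th-percentile threshold, and the five-parameter sample format are assumptions, not the paper's configuration.

```python
import numpy as np

# Toy Euclidean-distance event detector over multi-parameter
# water-quality samples (all data simulated).
rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, size=(200, 5))   # normal-operation samples
centroid = baseline.mean(axis=0)

# Threshold: 99th percentile of baseline distances to the centroid.
d0 = np.linalg.norm(baseline - centroid, axis=1)
thresh = np.percentile(d0, 99)

def is_event(sample):
    # Flag a sample that deviates more than the baseline ever did.
    return bool(np.linalg.norm(np.asarray(sample) - centroid) > thresh)

print(is_event(centroid), is_event(centroid + 10.0))
```

A spike-like anomaly trips this detector easily, which matches the paper's observation that MED suits sudden spike-like variations rather than gradual events.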
Differential electrophoretic separation of cells and its effect on cell viability
NASA Technical Reports Server (NTRS)
Leise, E. M.; Lesane, F.
1974-01-01
An electrophoretic separation method was applied to the separation of cells. To determine the efficiency of the separation, it was necessary to apply existing methodology and develop new methods to assess the characteristics and functions of the separated subpopulations. Through appropriate application of the widely used isoelectric focusing procedure, a reproducible separation method was developed. Cells accumulated at defined pH and 70-80% remained viable. The cells were suitable for further biologic, biochemical and immunologic studies.
USDA-ARS?s Scientific Manuscript database
A wide range of analytical techniques are available for the detection, quantitation, and evaluation of vitamin K in foods. The methods vary from simple to complex depending on extraction, separation, identification and detection of the analyte. Among the extraction methods applied for vitamin K anal...
A three-dimensional wide-angle BPM for optical waveguide structures.
Ma, Changbao; Van Keuren, Edward
2007-01-22
Algorithms for effective modeling of optical propagation in three- dimensional waveguide structures are critical for the design of photonic devices. We present a three-dimensional (3-D) wide-angle beam propagation method (WA-BPM) using Hoekstra's scheme. A sparse matrix algebraic equation is formed and solved using iterative methods. The applicability, accuracy and effectiveness of our method are demonstrated by applying it to simulations of wide-angle beam propagation, along with a technique for shifting the simulation window to reduce the dimension of the numerical equation and a threshold technique to further ensure its convergence. These techniques can ensure the implementation of iterative methods for waveguide structures by relaxing the convergence problem, which will further enable us to develop higher-order 3-D WA-BPMs based on Padé approximant operators.
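The computational pattern described, a large sparse algebraic system solved with iterative methods, can be illustrated with a toy complex tridiagonal system and a Jacobi iteration. This is not Hoekstra's scheme or the authors' solver; the matrix and shift below are invented.

```python
import numpy as np

# Toy "sparse system solved iteratively" pattern: a complex tridiagonal
# operator stored by its diagonals only, solved with Jacobi iteration.
n = 50
main = np.full(n, -4.0 + 5j)         # complex shift keeps it diagonally dominant
off = np.full(n - 1, 1.0 + 0j)
b = np.ones(n, dtype=complex)

def apply_A(x):
    # Matrix-vector product without ever forming the dense matrix.
    y = main * x
    y[:-1] += off * x[1:]
    y[1:] += off * x[:-1]
    return y

x = np.zeros(n, dtype=complex)
for _ in range(200):                 # Jacobi: x <- x + D^{-1} (b - A x)
    r = b - apply_A(x)
    x = x + r / main

residual = np.linalg.norm(b - apply_A(x))
print(residual < 1e-8)
```

Diagonal dominance guarantees Jacobi converges here; production BPM codes use stronger Krylov solvers, which is where the paper's threshold and window-shifting techniques come in.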
Rapid field method for determining the polish susceptibility of carbonate aggregates.
DOT National Transportation Integrated Search
1974-08-01
A quick and simple method by which limestones and related carbonate paving aggregates can be rated as to their relative susceptibility to polishing has been successfully applied to a wide range of aggregate sources used on Texas Highway projec...
Yu, Fei; Ji, Zhanglong
2014-01-01
In response to the growing interest in genome-wide association study (GWAS) data privacy, the Integrating Data for Analysis, Anonymization and SHaring (iDASH) center organized the iDASH Healthcare Privacy Protection Challenge, with the aim of investigating the effectiveness of applying privacy-preserving methodologies to human genetic data. This paper is based on a submission to the iDASH Healthcare Privacy Protection Challenge. We apply privacy-preserving methods that are adapted from Uhler et al. 2013 and Yu et al. 2014 to the challenge's data and analyze the data utility after the data are perturbed by the privacy-preserving methods. Major contributions of this paper include new interpretation of the χ2 statistic in a GWAS setting and new results about the Hamming distance score, a key component for one of the privacy-preserving methods.
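For readers unfamiliar with the χ² statistic in a GWAS setting, the standard allelic test compares allele counts between cases and controls in a 2×2 table; the counts below are invented for illustration, and this is not the paper's new interpretation itself.

```python
# Standard allelic chi-squared test used in GWAS.
# 2x2 table: rows = cases/controls, columns = counts of allele A / allele a.
table = [[120, 80],    # cases
         [90, 110]]    # controls

def chi2_allelic(t):
    (a, b), (c, d) = t
    n = a + b + c + d
    # Pearson chi-squared for a 2x2 table (1 degree of freedom)
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(chi2_allelic(table))
```

Privacy-preserving GWAS release methods perturb such per-SNP statistics, which is why their effect on data utility has to be quantified as the paper does.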
Reliability of tanoak volume equations when applied to different areas
Norman H. Pillsbury; Philip M. McDonald; Victor Simon
1995-01-01
Tree volume equations for tanoak (Lithocarpus densiflorus) were developed for seven stands throughout its natural range and compared by a volume prediction and a parameter difference method. The objective was to test if volume estimates from a species growing in a local, relatively uniform habitat could be applied more widely. Results indicated...
Applications of DNA-Stable Isotope Probing in Bioremediation Studies
NASA Astrophysics Data System (ADS)
Chen, Yin; Vohra, Jyotsna; Murrell, J. Colin
DNA-stable isotope probing, a method to identify active microorganisms without the prerequisite of cultivation, has been widely applied in the study of microorganisms involved in the degradation of environmental pollutants. Recent advances and technique considerations in applying DNA-SIP in bioremediation are highlighted. A detailed protocol of a DNA-SIP experiment is provided.
Improved numerical methods for turbulent viscous recirculating flows
NASA Technical Reports Server (NTRS)
Turan, A.; Vandoormaal, J. P.
1988-01-01
The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This report evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined, and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, which has been widely applied to combustor flows, illustrates the substantial gains to be achieved.
Processes for manufacturing multifocal diffractive-refractive intraocular lenses
NASA Astrophysics Data System (ADS)
Iskakov, I. A.
2017-09-01
Manufacturing methods and design features of modern diffractive-refractive intraocular lenses are discussed. The implantation of multifocal intraocular lenses is the optimal method of restoring the accommodative ability of the eye after removal of the natural lens. Diffractive-refractive intraocular lenses are the most widely used implantable multifocal lenses worldwide. Existing methods for manufacturing such lenses implement various design solutions to provide the best vision function after surgery. The wide variety of available diffractive-refractive intraocular lens designs reflects the demand for this method of vision correction in clinical practice and the importance of further applied research and development of new technologies for designing improved lens models.
ERIC Educational Resources Information Center
Marsh, Herbert W.; Hocevar, Dennis
The advantages of applying confirmatory factor analysis (CFA) to multitrait-multimethod (MTMM) data are widely recognized. However, because CFA as traditionally applied to MTMM data incorporates single indicators of each scale (i.e., each trait/method combination), important weaknesses are the failure to: (1) correct appropriately for measurement…
Brosch, Tom; Tang, Lisa Y W; Youngjin Yoo; Li, David K B; Traboulsee, Anthony; Tam, Roger
2016-05-01
We propose a novel segmentation approach based on deep 3D convolutional encoder networks with shortcut connections and apply it to the segmentation of multiple sclerosis (MS) lesions in magnetic resonance images. Our model is a neural network that consists of two interconnected pathways, a convolutional pathway, which learns increasingly more abstract and higher-level image features, and a deconvolutional pathway, which predicts the final segmentation at the voxel level. The joint training of the feature extraction and prediction pathways allows for the automatic learning of features at different scales that are optimized for accuracy for any given combination of image types and segmentation task. In addition, shortcut connections between the two pathways allow high- and low-level features to be integrated, which enables the segmentation of lesions across a wide range of sizes. We have evaluated our method on two publicly available data sets (MICCAI 2008 and ISBI 2015 challenges) with the results showing that our method performs comparably to the top-ranked state-of-the-art methods, even when only relatively small data sets are available for training. In addition, we have compared our method with five freely available and widely used MS lesion segmentation methods (EMS, LST-LPA, LST-LGA, Lesion-TOADS, and SLS) on a large data set from an MS clinical trial. The results show that our method consistently outperforms these other methods across a wide range of lesion sizes.
Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine
2017-01-01
Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.
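The SPA step can be sketched directly from its description: repeatedly select the column of largest norm, then project all columns onto the orthogonal complement of the selection. A toy near-separable example follows (not the MRI data of the study):

```python
import numpy as np

# Successive projection algorithm (SPA) sketch for near-separable NMF:
# pick the max-norm column, project it out, repeat r times.
def spa(X, r):
    R = np.array(X, dtype=float)
    idx = []
    for _ in range(r):
        j = int(np.argmax(np.linalg.norm(R, axis=0)))
        idx.append(j)
        u = R[:, j] / np.linalg.norm(R[:, j])
        R = R - np.outer(u, u @ R)     # orthogonal projection step
    return idx

# Toy near-separable data: columns 0 and 3 of X are the pure endmembers.
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
H = np.array([[1.0, 0.7, 0.3, 0.0], [0.0, 0.3, 0.7, 1.0]])
X = W @ H
print(sorted(spa(X, 2)))               # recovers the endmember columns
```

The recovered columns (or their indices) would then seed the factor matrices of an NMF solver, which is the initialization role SPA plays in the paper.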
Electrostatics of crossed arrays of strips.
Danicki, Eugene
2010-07-01
The BIS-expansion method is widely applied in analysis of SAW devices. Its generalization is presented for two planar periodic systems of perfectly conducting strips arranged perpendicularly on both sides of a dielectric layer. The generalized method can be applied in the evaluation of capacitances of strips on printed circuit boards and certain microwave devices, but primarily it may help in evaluation of 2-D piezoelectric sensors and actuators, with row and column addressing of their elements, and also piezoelectric bulk wave resonators.
A two-fluid model of the solar wind
NASA Technical Reports Server (NTRS)
Sandbaek, O.; Leer, E.; Holzer, T. E.
1992-01-01
A method is presented for the integration of the two-fluid solar-wind equations which is applicable to a wide variety of coronal base densities and temperatures. The method involves proton heat conduction, and may be applied to coronal base conditions for which subsonic-supersonic solar wind solutions exist.
Gene flow analysis method, the D-statistic, is robust in a wide parameter space.
Zheng, Yichen; Janke, Axel
2018-01-08
We evaluated the sensitivity of the D-statistic, a parsimony-like method widely used to detect gene flow between closely related species. This method has been applied to a variety of taxa with a wide range of divergence times. However, its parameter space and thus its applicability to a wide taxonomic range has not been systematically studied. Divergence time, population size, time of gene flow, distance of outgroup and number of loci were examined in a sensitivity analysis. The sensitivity study shows that the primary determinant of the D-statistic is the relative population size, i.e. the population size scaled by the number of generations since divergence. This is consistent with the fact that the main confounding factor in gene flow detection is incomplete lineage sorting diluting the signal. The sensitivity of the D-statistic is also affected by the direction of gene flow, size and number of loci. In addition, we examined the ability of the f-statistics, [Formula: see text] and [Formula: see text], to estimate the fraction of a genome affected by gene flow; while these statistics are difficult to apply to practical questions in biology owing to a lack of knowledge of when the gene flow happened, they can be used to compare datasets with identical or similar demographic background. The D-statistic, as a method to detect gene flow, is robust against a wide range of genetic distances (divergence times) but it is sensitive to population size. The D-statistic should only be applied with critical reservation to taxa where population sizes are large relative to branch lengths in generations.
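The D-statistic itself is the normalized difference of ABBA and BABA site-pattern counts across loci for four taxa (P1, P2, P3, outgroup). A minimal sketch with invented site patterns:

```python
# ABBA-BABA D-statistic sketch for four taxa (P1, P2, P3, Outgroup).
# Each site is a 4-tuple of alleles with the outgroup last; the site
# patterns below are invented for illustration.
def d_statistic(patterns):
    abba = sum(1 for p1, p2, p3, o in patterns
               if p1 == o and p2 == p3 and p2 != o)   # ABBA pattern
    baba = sum(1 for p1, p2, p3, o in patterns
               if p2 == o and p1 == p3 and p1 != o)   # BABA pattern
    return (abba - baba) / (abba + baba)

sites = [('A', 'B', 'B', 'A')] * 30 + [('B', 'A', 'B', 'A')] * 10
print(d_statistic(sites))   # (30 - 10) / (30 + 10) = 0.5
```

Under incomplete lineage sorting alone, ABBA and BABA counts are expected to balance (D ≈ 0); an excess of one pattern, as here, signals gene flow.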
Application of genomic selection in farm animal breeding.
Tan, Cheng; Bian, Cheng; Yang, Da; Li, Ning; Wu, Zhen-Fang; Hu, Xiao-Xiang
2017-11-20
Genomic selection (GS) has become a widely accepted method in animal breeding to genetically improve economic traits. With the declining costs of high-density SNP chips and next-generation sequencing, GS has been applied in dairy cattle, swine, poultry and other animals and gained varying degrees of success. Currently, major challenges in GS studies include further reducing the cost of genome-wide SNP genotyping and improving the predictive accuracy of genomic estimated breeding value (GEBV). In this review, we summarize various methods for genome-wide SNP genotyping and GEBV prediction, and give a brief introduction of GS in livestock and poultry breeding. This review will provide a reference for further implementation of GS in farm animal breeding.
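One common GEBV predictor among the methods such a review surveys is SNP-BLUP, i.e. ridge regression of phenotypes on genotype codes. The sketch below uses simulated genotypes and an arbitrary shrinkage parameter; it is an illustration of the idea, not a specific method from the review.

```python
import numpy as np

# SNP-BLUP / ridge-regression sketch of GEBV prediction on simulated data.
rng = np.random.default_rng(2)
n, p = 80, 200                          # animals, SNP markers
Z = rng.integers(0, 3, size=(n, p)).astype(float)   # genotype codes 0/1/2
true_u = rng.normal(0.0, 0.1, p)        # simulated marker effects
y = Z @ true_u + rng.normal(0.0, 0.5, n)            # phenotypes

lam = 50.0                              # shrinkage; tied to heritability in practice
u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(p), Z.T @ y)
gebv = Z @ u_hat                        # genomic estimated breeding values

corr = np.corrcoef(gebv, Z @ true_u)[0, 1]
print(corr > 0.5)                       # GEBV tracks the true genetic values
```

The predictive accuracy of GEBV, which this correlation crudely stands in for, is exactly the quantity the review identifies as a major challenge to improve.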
Li, Jiangeng; Su, Lei; Pang, Zenan
2015-12-01
Feature selection techniques have been widely applied to tumor gene expression data analysis in recent years. A filter feature selection method named the marginal Fisher analysis score (MFA score), based on graph embedding, has been proposed; it has been widely used mainly because it is superior to the Fisher score. Considering the heavy redundancy in gene expression data, we propose a new filter feature selection technique in this paper, named MFA score+, which is based on the MFA score with redundancy exclusion. We applied it to an artificial dataset and eight tumor gene expression datasets to select important features, then used a support vector machine as the classifier to classify the samples. Compared with the MFA score, t-test and Fisher score, it achieved higher classification accuracy.
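The Fisher score baseline that the MFA score is compared against ranks each feature by between-class scatter over within-class scatter. A minimal sketch on simulated data follows (MFA score+ itself is not reproduced here):

```python
import numpy as np

# Fisher score sketch: per-feature ratio of between-class to
# within-class variance; higher = more discriminative.
def fisher_score(X, y):
    classes = np.unique(y)
    mu = X.mean(axis=0)
    num = np.zeros(X.shape[1])
    den = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        num += len(Xc) * (Xc.mean(axis=0) - mu) ** 2   # between-class scatter
        den += len(Xc) * Xc.var(axis=0)                # within-class scatter
    return num / den

rng = np.random.default_rng(0)
y = np.array([0] * 50 + [1] * 50)
X = rng.normal(size=(100, 3))
X[y == 1, 0] += 3.0                  # only feature 0 separates the classes
print(np.argmax(fisher_score(X, y)))
```

A filter method like MFA score+ refines this kind of per-feature ranking by also penalizing features that are redundant with ones already selected.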
López-Hernández, Y; Patiño-Rodríguez, O; García-Orta, S T; Pinos-Rodríguez, J M
2016-12-01
An adequate and effective tuberculosis (TB) diagnosis system has been identified by the World Health Organization as a priority in the fight against this disease. Over the years, several methods have been developed to identify the bacillus, but bacterial culture remains one of the most affordable methods for most countries. For rapid and accurate identification, however, it is more feasible to implement molecular techniques, taking advantage of the availability of public databases containing protein sequences. Mass spectrometry (MS) has become an interesting technique for the identification of TB. Here, we review some of the most widely employed methods for identifying Mycobacterium tuberculosis and present an update on MS applied for the identification of mycobacterial species. © 2016 The Society for Applied Microbiology.
Determination of Peukert's Constant Using Impedance Spectroscopy: Application to Supercapacitors.
Mills, Edmund Martin; Kim, Sangtae
2016-12-15
Peukert's equation is widely used to model the rate dependence of battery capacity, and has recently attracted attention for application to supercapacitors. Here we present a newly developed method to readily determine Peukert's constant using impedance spectroscopy. Impedance spectroscopy is ideal for this purpose as it has the capability of probing electrical performance of a device over a wide range of time-scales within a single measurement. We demonstrate that the new method yields consistent results with conventional galvanostatic measurements through applying it to commercially available supercapacitors. Additionally, the novel method is much simpler and more precise, making it an attractive alternative for the determination of Peukert's constant.
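Peukert's law, C_p = I^k · t, implies that two constant-current discharge runs suffice to determine the constant k, which is the quantity the impedance method above targets. A sketch with hypothetical measurements (not the paper's impedance-based procedure):

```python
import math

# Peukert's law C_p = I^k * t: two discharge runs (I1, t1), (I2, t2)
# give  k = log(t2 / t1) / log(I1 / I2).
def peukert_k(I1, t1, I2, t2):
    return math.log(t2 / t1) / math.log(I1 / I2)

# Hypothetical measurements: 10 A for 1.00 h vs. 5 A for 2.20 h.
k = peukert_k(10.0, 1.00, 5.0, 2.20)
print(round(k, 3))
```

An ideal capacity-limited cell has k = 1 (halving the current doubles the runtime); k > 1, as here, quantifies how capacity degrades at higher rates.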
Boric Acid in Kjeldahl Analysis
ERIC Educational Resources Information Center
Cruz, Gregorio
2013-01-01
The use of boric acid in the Kjeldahl determination of nitrogen is a variant of the original method widely applied in many laboratories all over the world. Its use is recommended by control organizations such as ISO, IDF, and EPA because it yields reliable and accurate results. However, the chemical principles the method is based on are not…
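In the boric acid variant, the ammonia distilled into boric acid is titrated with standard HCl, and percent nitrogen follows directly from the titrant volume. A sketch of that back-end calculation, with invented values and no blank correction:

```python
# Kjeldahl back-end calculation (boric acid variant): 1 mol of titrant
# HCl corresponds to 1 mol of captured NH3, i.e. 14.007 g of nitrogen.
def percent_nitrogen(v_hcl_ml, molarity_hcl, sample_mass_g):
    mol_n = v_hcl_ml / 1000.0 * molarity_hcl      # mol HCl = mol N
    return mol_n * 14.007 / sample_mass_g * 100.0

# Invented titration: 15.0 mL of 0.1 M HCl for a 1.0 g sample.
print(round(percent_nitrogen(15.0, 0.1, 1.0), 3))
```

A real determination subtracts a reagent blank and, for protein work, multiplies %N by a food-specific conversion factor.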
Using FTIR-ATR Spectroscopy to Teach the Internal Standard Method
ERIC Educational Resources Information Center
Bellamy, Michael K.
2010-01-01
The internal standard method is widely applied in quantitative analyses. However, most analytical chemistry textbooks either omit this topic or only provide examples of a single-point internal standardization. An experiment designed to teach students how to prepare an internal standard calibration curve is described. The experiment is a modified…
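A sketch of the multi-point internal-standard calibration the experiment teaches: fit the response ratio (analyte signal over internal-standard signal) against concentration, then invert the fit for an unknown. All signal values below are invented:

```python
import numpy as np

# Internal-standard calibration curve: response ratio vs. concentration.
conc = np.array([1.0, 2.0, 4.0, 8.0])           # analyte standards
a_analyte = np.array([0.21, 0.39, 0.82, 1.58])  # analyte signal (invented)
a_is = np.array([1.00, 0.98, 1.02, 0.99])       # internal-standard signal

ratio = a_analyte / a_is
slope, intercept = np.polyfit(conc, ratio, 1)   # linear calibration

# Quantify an unknown from its measured response ratio:
unknown_ratio = 0.60
c_unknown = (unknown_ratio - intercept) / slope
print(round(float(c_unknown), 2))
```

Because the ratio cancels fluctuations that affect analyte and internal standard alike, the fitted line is insensitive to drifts in instrument response, which is the point of the method.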
Wide Binaries in TGAS: Search Method and First Results
NASA Astrophysics Data System (ADS)
Andrews, Jeff J.; Chanamé, Julio; Agüeros, Marcel A.
2018-04-01
Half of all stars reside in binary systems, many of which have orbital separations in excess of 1000 AU. Such binaries are typically identified in astrometric catalogs by matching the proper motion vectors of close stellar pairs. We present a fully Bayesian method that properly takes into account positions, proper motions, parallaxes, and their correlated uncertainties to identify widely separated stellar binaries. After applying our method to the >2 × 10^6 stars in the Tycho-Gaia astrometric solution from Gaia DR1, we identify over 6000 candidate wide binaries. For those pairs with separations less than 40,000 AU, we determine the contamination rate to be ~5%. This sample has an orbital separation (a) distribution that is roughly flat in log space for separations less than ~5000 AU and follows a power law of a^(-1.6) at larger separations.
Force Enhancement Packages for Countering Nuclear Threats in the 2022-2027 Time Frame
2015-09-01
characterization methods. • Apply proper radioisotope identification techniques. c. A one-week CNT operations exercise at Fort Belvoir, Virginia. Team members...on experiments to seek better methods, holding active teaching until later. The team expects that better methods would involve collection using...conduct more effective wide-area searches than those commonly employed by civil law enforcement agencies. The IDA team suggests that better methods
SvABA: genome-wide detection of structural variants and indels by local assembly.
Wala, Jeremiah A; Bandopadhayay, Pratiti; Greenwald, Noah F; O'Rourke, Ryan; Sharpe, Ted; Stewart, Chip; Schumacher, Steve; Li, Yilong; Weischenfeldt, Joachim; Yao, Xiaotong; Nusbaum, Chad; Campbell, Peter; Getz, Gad; Meyerson, Matthew; Zhang, Cheng-Zhong; Imielinski, Marcin; Beroukhim, Rameen
2018-04-01
Structural variants (SVs), including small insertion and deletion variants (indels), are challenging to detect through standard alignment-based variant calling methods. Sequence assembly offers a powerful approach to identifying SVs, but is difficult to apply at scale genome-wide for SV detection due to its computational complexity and the difficulty of extracting SVs from assembly contigs. We describe SvABA, an efficient and accurate method for detecting SVs from short-read sequencing data using genome-wide local assembly with low memory and computing requirements. We evaluated SvABA's performance on the NA12878 human genome and in simulated and real cancer genomes. SvABA demonstrates superior sensitivity and specificity across a large spectrum of SVs and substantially improves detection performance for variants in the 20-300 bp range, compared with existing methods. SvABA also identifies complex somatic rearrangements with chains of short (<1000 bp) templated-sequence insertions copied from distant genomic regions. We applied SvABA to 344 cancer genomes from 11 cancer types and found that short templated-sequence insertions occur in ∼4% of all somatic rearrangements. Finally, we demonstrate that SvABA can identify sites of viral integration and cancer driver alterations containing medium-sized (50-300 bp) SVs. © 2018 Wala et al.; Published by Cold Spring Harbor Laboratory Press.
The Measurement of Unsteady Surface Pressure Using a Remote Microphone Probe.
Guan, Yaoyi; Berntsen, Carl R; Bilka, Michael J; Morris, Scott C
2016-12-03
Microphones are widely applied to measure pressure fluctuations at the walls of solid bodies immersed in turbulent flows. Turbulent motions with various characteristic length scales can result in pressure fluctuations over a wide frequency range. This property of turbulence requires sensing devices to have sufficient sensitivity over a wide range of frequencies. Furthermore, the small characteristic length scales of turbulent structures require small sensing areas and the ability to place the sensors in very close proximity to each other. The complex geometries of the solid bodies, often including large surface curvatures or discontinuities, require that the probe can be set up in very limited spaces. The development of a remote microphone probe (RMP), which is inexpensive, consistent, and repeatable, is described in the present communication. It allows for the measurement of pressure fluctuations with high spatial resolution and dynamic response over a wide range of frequencies. The probe is small enough to be placed within the interior of typical wind tunnel models. The remote microphone probe includes a small, rigid, and hollow tube that penetrates the model surface to form the sensing area. This tube is connected to a standard microphone, at some distance away from the surface, using a "T" junction. An experimental method is introduced to determine the dynamic response of the remote microphone probe. In addition, an analytical method for determining the dynamic response is described. The analytical method can be applied in the design stage to determine the dimensions and properties of the RMP components.
NASA Astrophysics Data System (ADS)
Styk, Adam
2014-07-01
Classical time-averaging and stroboscopic interferometry are widely used for MEMS/MOEMS dynamic behavior investigations. Unfortunately, both methods require a vibration amplitude of at least 0.19λ to detect the resonant frequency of the object, and the precision of the measurement is limited. This puts strong constraints on the type of element that can be tested. In this paper, a comparison of two methods of microobject vibration measurement that overcome the aforementioned problems is presented. Both methods maintain high measurement speed and extend the range of measurable amplitudes (below 0.19λ); moreover, they can easily be applied to MEMS/MOEMS dynamic parameter measurements.
A new technique to expose the hypopharyngeal space: The modified Killian's method.
Sakai, Akihiro; Okami, Kenji; Sugimoto, Ryousuke; Ebisumoto, Koji; Yamamoto, Hikaru; Maki, Daisuke; Saito, Kosuke; Iida, Masahiro
2014-04-01
Recent remarkable progress in endoscopic technology has enabled the detection of superficial cancers that were undetectable in the past. However, even though advanced endoscopic technology can detect early lesions, it is useless unless it can provide wide exposure of an area. By modifying the Killian position, it is possible to observe a wider range of the hypopharyngeal space than is possible with conventional head positions. We report a revolutionary method that uses a new head position to widely open the hypopharynx. The technique is named "the Modified Killian's method." The patient is initially placed in the Killian position and then bent further forward from the original position (i.e., the modified Killian position). While in this position, the patient's head is turned and the Valsalva maneuver is applied. These additional maneuvers constitute the Modified Killian's method and widely expand the hypopharyngeal space. The conventional head position cannot open the hypopharyngeal space sufficiently; however, the Modified Killian's method opens the hypopharyngeal space very widely. The Modified Killian's method enables observation of the entire circumference of the hypopharyngeal space and the cervical esophageal entry. The Modified Killian's method may become an indispensable technique for observing the hypopharynx and detecting hypopharyngeal cancers. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Transonic Unsteady Aerodynamics and Aeroelasticity 1987, part 1
NASA Technical Reports Server (NTRS)
Bland, Samuel R. (Compiler)
1989-01-01
Computational fluid dynamics methods have been widely accepted for transonic aeroelastic analysis. Previously, calculations with the TSD methods were used for 2-D airfoils, but now the TSD methods are applied to the aeroelastic analysis of the complete aircraft. The Symposium papers are grouped into five subject areas, two of which are covered in this part: (1) Transonic Small Disturbance (TSD) theory for complete aircraft configurations; and (2) Full potential and Euler equation methods.
Detecting long-term growth trends using tree rings: a critical evaluation of methods.
Peters, Richard L; Groenendijk, Peter; Vlam, Mart; Zuidema, Pieter A
2015-05-01
Tree-ring analysis is often used to assess long-term trends in tree growth. A variety of growth-trend detection methods (GDMs) exist to disentangle age/size trends in growth from long-term growth changes. However, these detrending methods strongly differ in approach, with possible implications for their output. Here, we critically evaluate the consistency, sensitivity, reliability and accuracy of the four most widely used GDMs: conservative detrending (CD) applies mathematical functions to correct for decreasing ring widths with age; basal area correction (BAC) transforms diameter into basal area growth; regional curve standardization (RCS) detrends individual tree-ring series using average age/size trends; and size class isolation (SCI) calculates growth trends within separate size classes. First, we evaluated whether these GDMs produce consistent results when applied to an empirical tree-ring data set of Melia azedarach, a tropical tree species from Thailand. Three GDMs yielded similar results - a growth decline over time - but the widely used CD method did not detect any change. Second, we assessed the sensitivity (probability of correct growth-trend detection), reliability (100% minus probability of detecting false trends) and accuracy (whether the strength of imposed trends is correctly detected) of these GDMs by applying them to simulated growth trajectories with different imposed trends: no trend, strong trends (-6% and +6% change per decade) and weak trends (-2%, +2%). All methods except CD showed high sensitivity, reliability and accuracy in detecting strong imposed trends. However, these were considerably lower in the weak or no-trend scenarios. BAC showed good sensitivity and accuracy, but low reliability, indicating uncertainty of trend detection using this method. Our study reveals that the choice of GDM influences the results of growth-trend studies.
We recommend applying multiple methods when analysing trends and encourage performing sensitivity and reliability analysis. Finally, we recommend SCI and RCS, as these methods showed highest reliability to detect long-term growth trends. © 2014 John Wiley & Sons Ltd.
Scientific use of the finite element method in Orthodontics
Knop, Luegya; Gandini, Luiz Gonzaga; Shintcovsk, Ricardo Lima; Gandini, Marcia Regina Elisa Aparecida Schiavon
2015-01-01
INTRODUCTION: The finite element method (FEM) is an engineering resource applied to calculate the stress and deformation of complex structures, and has been widely used in orthodontic research. With the advantage of being a non-invasive and accurate method that provides quantitative and detailed data on the physiological reactions that may occur in tissues, the FEM allows these tissue responses to be anticipated through the observation of areas of stress created by the applied orthodontic mechanics. OBJECTIVE: This article aims at reviewing and discussing the stages of finite element method application and its applicability in Orthodontics. RESULTS: FEM is able to evaluate the stress distribution at the interface between the periodontal ligament and alveolar bone, and the displacement trend in various types of tooth movement when using different types of orthodontic devices; using it, however, requires familiarity with specialized software. CONCLUSIONS: FEM is an important experimental method to answer questions about tooth movement, overcoming the disadvantages of other experimental methods. PMID:25992996
Electrodeposition of metals from supercritical fluids
Ke, Jie; Su, Wenta; Howdle, Steven M.; George, Michael W.; Cook, David; Perdjon-Abel, Magda; Bartlett, Philip N.; Zhang, Wenjian; Cheng, Fei; Levason, William; Reid, Gillian; Hyde, Jason; Wilson, James; Smith, David C.; Mallik, Kanad; Sazio, Pier
2009-01-01
Electrodeposition is a widely used materials-deposition technology with a number of unique features, in particular the efficient use of starting materials and conformal, directed coating. The properties of the solvent medium for electrodeposition are critical to the technique's applicability. Supercritical fluids are unique solvents which give a wide range of advantages for chemistry in general, and materials processing in particular. However, a widely applicable approach to electrodeposition from supercritical fluids has not yet been developed. We present here a method that allows electrodeposition of a range of metals from supercritical carbon dioxide (using acetonitrile as a co-solvent) and from supercritical difluoromethane. This method is based on a careful selection of reagent and supporting electrolyte. There are no obvious barriers preventing this method being applied to deposit a range of materials from many different supercritical fluids. We present the deposition of 3-nm diameter nanowires in mesoporous silica templates using this methodology. PMID:19706479
Short-focus and ultra-wide-angle lens design in wavefront coding
NASA Astrophysics Data System (ADS)
Zhang, Jiyan; Huang, Yuanqing; Xiong, Feibing
2016-10-01
Wavefront coding (WFC) is a hybrid technology designed to increase the depth of field of conventional optics. The goal of our research is to apply this technology to the short-focus, ultra-wide-angle lens, which suffers from aberrations related to its large field of view (FOV), such as coma and astigmatism. WFC can also be used to compensate for other aberrations that are sensitive to the FOV. An ultra-wide-angle lens has a small depth of focus because of its small F-number and short focal length. We design a hybrid lens combining WFC with the ultra-wide-angle lens. The full FOV and relative aperture of the final design are up to 170° and 1/1.8, respectively; the focal length is 2 mm. We adopt a cubic phase mask (CPM) in the design. A conventional design would have a point spread function (PSF) that varies widely across the FOV and is very sensitive to changes in the FOV. In the new design, the PSF we obtain is nearly invariant over the whole FOV, although the result also shows a slight difference between the horizontal and vertical extents of the PSF. We attribute this to the CPM being a non-symmetric phase mask combined with the very large FOV, which generates variation in the final image quality. For that reason, we apply a new method to avoid this: we make the rays incident on the CPM at small angles, which decreases the deformation of the PSF. The experimental results show that the new method of optimizing the CPM is suitable for the ultra-wide-angle lens. This research should be a helpful guide for designing ultra-wide-angle lenses with WFC.
A modified conjugate gradient method based on the Tikhonov system for computerized tomography (CT).
Wang, Qi; Wang, Huaxiang
2011-04-01
During the past few decades, computerized tomography (CT) has been widely used for non-destructive testing (NDT) and non-destructive examination (NDE) in the industrial area because of its non-invasiveness and visibility. Recently, CT technology has been applied to multi-phase flow measurement. Using the principle of radiation attenuation measurements along different directions through the investigated object, together with a special reconstruction algorithm, cross-sectional information of the scanned object can be worked out. It is a typical inverse problem and has always been a challenge because of its nonlinearity and ill-conditioning. The Tikhonov regularization method is widely used for similar ill-posed problems. However, the conventional Tikhonov method does not provide reconstructions of sufficient quality; the relative errors between the reconstructed images and the real distribution should be further reduced. In this paper, a modified conjugate gradient (CG) method is applied to a Tikhonov system (MCGT method) for reconstructing CT images. The computational load is dominated by the number of independent measurements m, and a preconditioner is introduced to lower the condition number of the Tikhonov system. Both simulation and experiment results indicate that the proposed method can reduce the computational time and improve the quality of image reconstruction. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
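The core idea, conjugate gradients applied to the Tikhonov-regularized normal equations, can be sketched as follows. This is a generic small dense-system implementation, not the authors' preconditioned CT code; `A`, `b`, and `lam` are illustrative placeholders:

```python
import numpy as np

def tikhonov_cg(A, b, lam, n_iter=50):
    """Solve the Tikhonov-regularized system (A^T A + lam*I) x = A^T b
    with the conjugate gradient method. The operator is symmetric
    positive definite for lam > 0, which is what CG requires."""
    n = A.shape[1]
    M = A.T @ A + lam * np.eye(n)   # regularized normal-equation operator
    rhs = A.T @ b
    x = np.zeros(n)
    r = rhs - M @ x                 # initial residual
    p = r.copy()                    # initial search direction
    rs = r @ r
    for _ in range(n_iter):
        Mp = M @ p
        alpha = rs / (p @ Mp)       # step length along p
        x += alpha * p
        r -= alpha * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < 1e-12: # converged
            break
        p = r + (rs_new / rs) * p   # conjugate direction update
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 10))   # stand-in for a CT projection matrix
b = rng.standard_normal(20)         # stand-in for attenuation measurements
x = tikhonov_cg(A, b, lam=0.1)
```

In a real CT setting `A` is large and sparse, so `M` would never be formed explicitly; the products `A.T @ (A @ p)` would be applied operator-style, and a preconditioner (as in the paper) would accelerate convergence.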
Nonlinear Processing of Auditory Brainstem Response
2001-10-25
Kraków, Poland. Auditory brainstem response potentials (ABR) are signals calculated from the EEG signals registered as responses to an...acoustic activation of the auditory system. The ABR signals provide an objective, diagnostic method, widely applied in examinations of hearing organs
NASA Astrophysics Data System (ADS)
Imaki, Masaharu; Kameyama, Shumpei; Ishimura, Eitaro; Nakaji, Masaharu; Yoshinaga, Hideo; Hirano, Yoshihito
2017-03-01
We developed a line-scanning time-of-flight (TOF) laser sensor for an intelligent transport system (ITS), which combines wide field-of-view (FOV) receiving optics of 30 deg and a high-speed microelectromechanical-system (MEMS) scanner of 0.9 ms/line with a simple sensor configuration. The newly developed high-aspect-ratio photodiode realizes a scanless, wide-FOV receiver. The sinusoidal-wave intensity modulation method is used for the TOF measurement. This enables noise reduction in the trans-impedance amplifier by applying the LC-resonant method. Vehicle detection and axle counting, which are important functions in ITS, are also demonstrated.
A comparison of heuristic and model-based clustering methods for dietary pattern analysis.
Greve, Benjamin; Pigeot, Iris; Huybrechts, Inge; Pala, Valeria; Börnhorst, Claudia
2016-02-01
Cluster analysis is widely applied to identify dietary patterns. A new method based on Gaussian mixture models (GMM) seems to be more flexible than the commonly applied k-means and Ward's methods. In the present paper, these clustering approaches are compared to find the most appropriate one for clustering dietary data. The clustering methods were applied to simulated data sets with different cluster structures to compare their performance when the true cluster membership of observations is known. Furthermore, the three methods were applied to FFQ data assessed in 1791 children participating in the IDEFICS (Identification and Prevention of Dietary- and Lifestyle-Induced Health Effects in Children and Infants) Study to explore their performance in practice. The GMM outperformed the other methods in the simulation study in 72-100 % of cases, depending on the simulated cluster structure. Comparing the computationally less complex k-means and Ward's methods, the performance of k-means was better in 64-100 % of cases. Applied to real data, all methods identified three similar dietary patterns, which may be roughly characterized as a 'non-processed' cluster with a high consumption of fruits, vegetables and wholemeal bread, a 'balanced' cluster with only slight preferences for single foods, and a 'junk food' cluster. The simulation study suggests that clustering via GMM should be preferred due to its higher flexibility regarding cluster volume, shape and orientation. The k-means method seems to be a good alternative, being easier to use while giving similar results when applied to real data.
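A minimal sketch of the comparison described above, assuming scikit-learn is available: GMM and k-means are each fitted to synthetic, well-separated clusters (stand-ins for the FFQ data) and scored against the known partition with the adjusted Rand index, which is invariant to label permutation:

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score
from sklearn.mixture import GaussianMixture

# Well-separated synthetic clusters standing in for dietary-pattern data
X, y = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=42)

# Fit both clustering models with the true number of clusters
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
gmm = GaussianMixture(n_components=3, random_state=0).fit(X).predict(X)

# Agreement with the true partition (1.0 = perfect recovery)
print(adjusted_rand_score(y, km) > 0.9, adjusted_rand_score(y, gmm) > 0.9)
```

On spherical, equal-sized clusters like these the two methods perform similarly; the paper's point is that GMM's per-cluster covariances pay off when clusters differ in volume, shape, or orientation.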
Evaluation of new techniques for the calculation of internal recirculating flows
NASA Technical Reports Server (NTRS)
Van Doormaal, J. P.; Turan, A.; Raithby, G. D.
1987-01-01
The performance of discrete methods for the prediction of fluid flows can be enhanced by improving the convergence rate of solvers and by increasing the accuracy of the discrete representation of the equations of motion. This paper evaluates the gains in solver performance that are available when various acceleration methods are applied. Various discretizations are also examined and two are recommended because of their accuracy and robustness. Insertion of the improved discretization and solver accelerator into a TEACH code, which has been widely applied to combustor flows, illustrates the substantial gains that can be achieved.
Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.
Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si
2017-07-01
Water quality assessment is crucial for assessment of marine eutrophication, prediction of harmful algal blooms, and environment protection. Previous studies have developed many numeric modeling methods and data-driven approaches for water quality assessment. Cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have always been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment: hierarchical cluster analysis based on Mahalanobis distance. Further, we cluster water quality data collected from coastal waters of the Bohai Sea and North Yellow Sea of China, and apply the clustering results to evaluate water quality. To evaluate the validity, we also cluster the water quality data with cluster analysis based on Euclidean distance, which is widely adopted in previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, this is the first attempt to apply Mahalanobis distance to coastal water quality assessment.
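The core computation, hierarchical clustering on Mahalanobis rather than Euclidean distances, can be sketched with NumPy and SciPy. The toy data below merely stand in for the coastal measurements; the key step is supplying the inverse covariance matrix `VI` to the distance calculation:

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(1)
# Toy "water quality" table: rows = sampling stations, columns = variables
X = rng.standard_normal((30, 4))
X[:, 1] += 0.8 * X[:, 0]   # induce correlation between two variables

# The inverse covariance matrix makes the distance account for the
# correlation structure that plain Euclidean distance ignores
VI = np.linalg.inv(np.cov(X, rowvar=False))
D = pdist(X, metric="mahalanobis", VI=VI)

Z = linkage(D, method="average")               # hierarchical clustering
labels = fcluster(Z, t=3, criterion="maxclust")

# Sanity check: Mahalanobis distance for the first pair, computed by hand
diff = X[0] - X[1]
d01 = np.sqrt(diff @ VI @ diff)
print(np.isclose(squareform(D)[0, 1], d01))   # → True
```

With `VI` set to the identity matrix, the same call reduces to Euclidean distance, which makes the comparison the paper performs straightforward to reproduce.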
Method of testing gear wheels in impact bending
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tikhonov, A.K.; Palagin, Y.M.
1995-05-01
Chemicothermal treatment processes are widely used in engineering to improve the working lives of important components; the most common such process is nitrocementation. That process has been applied at the Volga Automobile Plant mainly to sprockets in gear transmissions, which need high hardness and wear resistance in the surfaces with relatively ductile cores. Although various forms of chemicothermal treatment are widely used, there has been no universal method of evaluating the strengths of gear wheels. Standard methods of estimating strength (σ_u, σ_t, σ_b, and hardness) have a major shortcoming: they can determine only the characteristics of the cores for case-hardened materials. Here we consider an impact bending test method, which enables one to evaluate the actual strength of gear teeth.
ERIC Educational Resources Information Center
Grados, Marco A.
2010-01-01
Objective: To provide a contemporary perspective on genetic discovery methods applied to obsessive-compulsive disorder (OCD) and Tourette syndrome (TS). Method: A review of research trends in genetics research in OCD and TS is conducted, with emphasis on novel approaches. Results: Genome-wide association studies (GWAS) are now in progress in OCD…
An Adaptive Niching Genetic Algorithm using a niche size equalization mechanism
NASA Astrophysics Data System (ADS)
Nagata, Yuichi
Niching genetic algorithms (GAs) have been widely investigated as a way of applying GAs to multimodal function optimization problems. In this paper, we suggest a new niching GA that attempts to form niches, each consisting of an equal number of individuals. The proposed GA can also be applied to combinatorial optimization problems by defining a distance metric in the search space. We apply the proposed GA to the job-shop scheduling problem (JSP) and demonstrate that the proposed niching method enhances the ability to maintain niches and improves the performance of GAs.
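The paper's niche-size equalization mechanism is not detailed in the abstract; the sketch below instead shows fitness sharing, the classic niching mechanism that such methods build on, for one-dimensional individuals. Crowded niches are penalized by dividing each individual's raw fitness by its niche count:

```python
import numpy as np

def shared_fitness(fitness, positions, sigma_share=1.0):
    """Fitness sharing: divide each individual's fitness by its niche
    count, where sh(d) = 1 - d/sigma_share for d < sigma_share, else 0.
    Individuals in crowded regions are penalized, spreading the
    population across multiple optima."""
    d = np.abs(positions[:, None] - positions[None, :])   # pairwise distances
    sh = np.where(d < sigma_share, 1.0 - d / sigma_share, 0.0)
    niche_count = sh.sum(axis=1)   # includes self-contribution sh(0) = 1
    return fitness / niche_count

# Two individuals crowded near x=0 and one isolated at x=5, equal raw fitness
pos = np.array([0.0, 0.1, 5.0])
f = np.array([1.0, 1.0, 1.0])
fs = shared_fitness(f, pos)
print(fs[2] > fs[0])   # → True: the isolated individual keeps more fitness
```

Selection then operates on the shared fitness, so isolated niches are protected; an equalization scheme like the paper's would additionally drive the niche counts toward a common value.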
Controlled dehydration improves the diffraction quality of two RNA crystals.
Park, HaJeung; Tran, Tuan; Lee, Jun Hyuck; Park, Hyun; Disney, Matthew D
2016-11-03
Post-crystallization dehydration methods, applying either vapor diffusion or humidity control devices, have been widely used to improve the diffraction quality of protein crystals. Despite the fact that RNA crystals tend to diffract poorly, there is a dearth of reports on the application of dehydration methods to improve the diffraction quality of RNA crystals. We use dehydration techniques with a Free Mounting System (FMS, a humidity control device) to improve the initially poor diffraction quality of RNA crystals. These approaches were applied to RNA constructs that model various RNA-mediated repeat expansion disorders. The method we describe herein could serve as a general tool to improve the diffraction quality of RNA crystals and thereby facilitate structure determinations.
NASA Astrophysics Data System (ADS)
Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel
In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. Such a numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to efficiently deal with a wide range of differential systems. The generality underlying that electrical equivalence allows circuit theory to be applied to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes and the calculations have been carried out in PSpice, an electrical circuit simulation package. Overall, this numerical approach makes it possible to solve Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper.
Fertigation uniformity under sprinkler irrigation: evaluation and analysis
USDA-ARS?s Scientific Manuscript database
In modern farming systems, fertigation is widely practiced as a cost-effective and convenient method for applying soluble fertilizers to crops. Along with efficiency and adequacy, uniformity is an important fertigation performance evaluation criterion. Fertigation uniformity is defined here as a comp...
Status and updates to the rangeland health model
USDA-ARS?s Scientific Manuscript database
Development of the widely applied rangeland health protocol, “Interpreting Indicators of Rangeland Health” (IIRH) was stimulated by the publication of the National Research Council’s 1994 publication, Rangeland Health: New Methods to Classify, Inventory, and Monitor Rangelands. In a parallel effort,...
Recent archaeomagnetic studies in Slovakia: Comparison of methodological approaches
NASA Astrophysics Data System (ADS)
Kubišová, Lenka
2016-03-01
We review the recent archaeomagnetic studies carried out on the territory of Slovakia, focusing on the comparison of methodological approaches and discussing the pros and cons of the individual applied methods from the perspective of our experience. The most widely used methods for determining the intensity and direction of the archaeomagnetic field by demagnetisation of the sample material are alternating field (AF) demagnetisation and the Thellier double heating method. These methods are used not only for archaeomagnetic studies but also help to solve some geological problems. The two methods were applied to samples collected recently at several sites in Slovakia, where archaeological prospection prompted by earthwork or reconstruction work of development projects demanded archaeomagnetic dating. We then discuss the advantages and weaknesses of the investigated methods from different perspectives, based on several examples and our recent experience.
Resistive method for measuring the disintegration speed of Prince Rupert's drops
NASA Astrophysics Data System (ADS)
Bochkov, Mark; Gusenkova, Daria; Glushkov, Evgenii; Zotova, Julia; Zhabin, S. N.
2016-09-01
We have successfully applied the resistance grid technique to measure the disintegration speed in a special type of glass objects, widely known as Prince Rupert's drops. We use a fast digital oscilloscope and a simple electrical circuit, glued to the surface of the drops, to detect the voltage changes, corresponding to the breaks in the specific parts of the drops. The results obtained using this method are in good qualitative and quantitative agreement with theoretical predictions and previously published data. Moreover, the proposed experimental setup does not include any expensive equipment (such as a high-speed camera) and can therefore be widely used in high schools and universities.
Kumar, Vineet
2011-12-01
The grain size statistics, commonly derived from the grain map of a material sample, are important microstructure characteristics that greatly influence its properties. The grain map for nanomaterials is usually obtained manually by visual inspection of transmission electron microscope (TEM) micrographs because automated methods do not perform satisfactorily. While the visual inspection method provides reliable results, it is a labor-intensive process and is often prone to human errors. In this article, an automated grain mapping method is developed using TEM diffraction patterns. The presented method uses wide-angle convergent beam diffraction in the TEM. The automated technique was applied to a platinum thin film sample to obtain the grain map and subsequently derive grain size statistics from it. The grain size statistics obtained with the automated method were found to be in good agreement with those from the visual inspection method.
The MEM of spectral analysis applied to L.O.D.
NASA Astrophysics Data System (ADS)
Fernandez, L. I.; Arias, E. F.
The maximum entropy method (MEM) has been widely applied in polar motion studies, taking advantage of its performance in the management of complex time series. The authors used the MEM algorithm to estimate the cross-spectral function in order to compare interannual length-of-day (LOD) time series with Southern Oscillation Index (SOI) and Sea Surface Temperature (SST) series, which are closely related to El Niño-Southern Oscillation (ENSO) events.
Investigation of High-Angle-of-Attack Maneuver-Limiting Factors. Part 1. Analysis and Simulation
1980-12-01
useful, are not so satisfying or instructive as the more positive identification of causal factors offered by the methods developed in Reference 5...same methods be applied to additional high-performance fighter aircraft having widely differing high AOA handling characteristics to see if further...predictions and the nonlinear model results were resolved. The second task involved development of methods, criteria, and an associated pilot rating scale, for
Wide-Field Fluorescence Microscopy of Real-Time Bioconjugation Sensing
Szalkowski, Marcin; Sulowska, Karolina; Grzelak, Justyna; Niedziółka-Jönsson, Joanna; Roźniecka, Ewa
2018-01-01
We apply wide-field fluorescence microscopy to measure real-time attachment of photosynthetic proteins to plasmonically active silver nanowires. The observation of this effect is enabled, on the one hand, by sensitive detection of fluorescence and, on the other hand, by plasmonic enhancement of protein fluorescence. We examined two sample configurations with substrates being a bare glass coverslip and a coverslip functionalized with a monolayer of streptavidin. The different preparation of the substrate changes the observed behavior as far as attachment of the protein is concerned as well as its subsequent photobleaching. For the latter substrate the conjugation process is measurably slower. The described method can be universally applied in studying protein-nanostructure interactions for real-time fluorescence-based sensing. PMID:29351211
Garten, Justin; Hoover, Joe; Johnson, Kate M; Boghrati, Reihane; Iskiwitch, Carol; Dehghani, Morteza
2018-02-01
Theory-driven text analysis has made extensive use of psychological concept dictionaries, leading to a wide range of important results. These dictionaries have generally been applied through word count methods which have proven to be both simple and effective. In this paper, we introduce Distributed Dictionary Representations (DDR), a method that applies psychological dictionaries using semantic similarity rather than word counts. This allows for the measurement of the similarity between dictionaries and spans of text ranging from complete documents to individual words. We show how DDR enables dictionary authors to place greater emphasis on construct validity without sacrificing linguistic coverage. We further demonstrate the benefits of DDR on two real-world tasks and finally conduct an extensive study of the interaction between dictionary size and task performance. These studies allow us to examine how DDR and word count methods complement one another as tools for applying concept dictionaries and where each is best applied. Finally, we provide references to tools and resources to make this method both available and accessible to a broad psychological audience.
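The core of DDR can be sketched in a few lines of linear algebra: represent the dictionary and a span of text each by the mean of their word embeddings and compare them by cosine similarity. The toy 2-D embeddings below are invented for illustration; real DDR uses pretrained distributed word representations.

```python
import numpy as np

def centroid(words, emb):
    """Mean of the L2-normalized embedding vectors of the given words."""
    vecs = np.array([emb[w] / np.linalg.norm(emb[w]) for w in words if w in emb])
    return vecs.mean(axis=0)

def ddr_similarity(dictionary, tokens, emb):
    """Cosine similarity between the dictionary centroid and the text centroid."""
    d, t = centroid(dictionary, emb), centroid(tokens, emb)
    return float(d @ t / (np.linalg.norm(d) * np.linalg.norm(t)))

# toy 2-D embeddings, invented for illustration only
emb = {
    "happy": np.array([1.0, 0.1]), "joy":  np.array([0.9, 0.2]),
    "sad":   np.array([-1.0, 0.1]), "grim": np.array([-0.9, 0.2]),
}
pos_dict = ["happy", "joy"]
print(ddr_similarity(pos_dict, ["joy", "happy"], emb) >
      ddr_similarity(pos_dict, ["sad", "grim"], emb))  # True
```

Unlike a word count, this score is nonzero even when a text contains no dictionary word verbatim, which is what lets small dictionaries retain linguistic coverage.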
Solliec, Morgan; Roy-Lachapelle, Audrey; Sauvé, Sébastien
2015-12-30
Swine manure can contain a wide range of veterinary antibiotics, which could enter the environment via manure spreading on agricultural fields. A suspect and non-target screening method was applied to swine manure samples to attempt to identify veterinary antibiotics and pharmaceutical compounds for a future targeted analysis method. A combination of suspect and non-target screening methods was developed to identify various veterinary antibiotic families using liquid chromatography coupled with high-resolution mass spectrometry (LC/HRMS). The sample preparation was based on the physicochemical parameters of antibiotics for the wide-scope extraction of polar compounds prior to LC/HRMS analysis. The amount of data produced was processed by applying restrictive thresholds and filters to significantly reduce the number of compounds found and eliminate matrix components. The suspect and non-target screening was applied to swine manure samples and revealed the presence of seven common veterinary antibiotics and some of their related metabolites, including tetracyclines, β-lactams, sulfonamides and lincosamides. However, one steroid and one analgesic were also identified. The occurrence of the identified compounds was validated by comparing their retention times, isotopic abundance patterns and fragmentation patterns with certified standards. This identification method could be very useful as an initial step to screen for and identify emerging contaminants such as veterinary antibiotics and pharmaceuticals in environmental and biological matrices prior to quantification. Copyright © 2015 John Wiley & Sons, Ltd.
Zhang, Yiwei; Xu, Zhiyuan; Shen, Xiaotong; Pan, Wei
2014-08-01
There is an increasing need to develop and apply powerful statistical tests to detect associations between multiple traits and a single locus, as arising from neuroimaging genetics and other studies. For example, in the Alzheimer's Disease Neuroimaging Initiative (ADNI), in addition to genome-wide single nucleotide polymorphisms (SNPs), thousands of neuroimaging and neuropsychological phenotypes, as intermediate phenotypes for Alzheimer's disease, have been collected. Although some classic methods like MANOVA and newly proposed methods may be applied, they have their own limitations. For example, MANOVA cannot be applied to binary and other discrete traits. In addition, the relationships among these methods are not well understood. Importantly, since these tests are not data-adaptive, depending on the unknown association patterns among multiple traits and between multiple traits and a locus, these tests may or may not be powerful. In this paper we propose a class of data-adaptive weights and the corresponding weighted tests in the general framework of generalized estimating equations (GEE). A highly adaptive test is proposed to select the most powerful one from this class of weighted tests so that it can maintain high power across a wide range of situations. Our proposed tests are applicable to various types of traits with or without covariates. Importantly, we also analytically show relationships among some existing and our proposed tests, indicating that many existing tests are special cases of our proposed tests. Extensive simulation studies were conducted to compare and contrast the power properties of various existing and our new methods. Finally, we applied the methods to an ADNI dataset to illustrate the performance of the methods. We conclude with the recommendation for the use of the GEE-based Score test and our proposed adaptive test for their high and complementary performance. Copyright © 2014 Elsevier Inc. All rights reserved.
Borycki, E; Kushniruk, A; Nohr, C; Takeda, H; Kuwata, S; Carvalho, C; Bainbridge, M; Kannry, J
2013-01-01
Issues related to lack of system usability and potential safety hazards continue to be reported in the health information technology (HIT) literature. Usability engineering methods are increasingly used to ensure improved system usability and they are also beginning to be applied more widely for ensuring the safety of HIT applications. These methods are being used in the design and implementation of many HIT systems. In this paper we describe evidence-based approaches to applying usability engineering methods. A multi-phased approach to ensuring system usability and safety in healthcare is described. Usability inspection methods are first described including the development of evidence-based safety heuristics for HIT. Laboratory-based usability testing is then conducted under artificial conditions to test if a system has any base level usability problems that need to be corrected. Usability problems that are detected are corrected and then a new phase is initiated where the system is tested under more realistic conditions using clinical simulations. This phase may involve testing the system with simulated patients. Finally, an additional phase may be conducted, involving a naturalistic study of system use under real-world clinical conditions. The methods described have been employed in the analysis of the usability and safety of a wide range of HIT applications, including electronic health record systems, decision support systems and consumer health applications. It has been found that at least usability inspection and usability testing should be applied prior to the widespread release of HIT. However, wherever possible, additional layers of testing involving clinical simulations and a naturalistic evaluation will likely detect usability and safety issues that may not otherwise be detected prior to widespread system release. The framework presented in the paper can be applied in order to develop more usable and safer HIT, based on multiple layers of evidence.
Creative Reflections on Brainstorming
ERIC Educational Resources Information Center
Byron, Kevin
2012-01-01
Brainstorming is the default method of idea-generation in organisations, and is widely applied in higher education by students, academics and support staff. Its popularity is mainly attributable to an illusory belief that groups working together are more productive than individuals working apart. Shared responsibility, the need for collaboration…
Factors Affecting Performance of Soil Termiticides
USDA-ARS?s Scientific Manuscript database
Applying liquid insecticide to soil under and around structures is one of the most widely used methods of subterranean termite prevention and control. Failure of soil termiticide treatments is often related to factors other than the active ingredient. Efficacy and longevity of soil treatments vary g...
Cordero, Eliana; Korinth, Florian; Stiebing, Clara; Krafft, Christoph; Schie, Iwan W; Popp, Jürgen
2017-07-27
Raman spectroscopy provides label-free biochemical information from tissue samples without complicated sample preparation. The clinical capability of Raman spectroscopy has been demonstrated in a wide range of in vitro and in vivo applications. However, a challenge for in vivo applications is the simultaneous excitation of auto-fluorescence in the majority of tissues of interest, such as liver, bladder, brain, and others. Raman bands are then superimposed on a fluorescence background, which can be several orders of magnitude larger than the Raman signal. To eliminate the disturbing fluorescence background, several approaches are available. Among instrumentational methods shifted excitation Raman difference spectroscopy (SERDS) has been widely applied and studied. Similarly, computational techniques, for instance extended multiplicative scatter correction (EMSC), have also been employed to remove undesired background contributions. Here, we present a theoretical and experimental evaluation and comparison of fluorescence background removal approaches for Raman spectra based on SERDS and EMSC.
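The principle behind SERDS can be demonstrated on synthetic spectra: shifting the excitation wavelength shifts the Raman bands but leaves the broad fluorescence background essentially unchanged, so the difference of the two spectra cancels the background. The peak positions, widths, and the 10 cm⁻¹ shift below are illustrative assumptions, not values from the paper.

```python
import numpy as np

wn = np.linspace(400, 1800, 1400)            # Raman shift axis (cm^-1)

def raman_peak(center, width=8.0):
    """Gaussian stand-in for a Raman band."""
    return np.exp(-0.5 * ((wn - center) / width) ** 2)

fluor = 50.0 + 0.01 * wn                     # broad fluorescence background
shift = 10.0                                 # assumed excitation shift (cm^-1)

s1 = fluor + raman_peak(1000) + raman_peak(1450)                  # first excitation
s2 = fluor + raman_peak(1000 + shift) + raman_peak(1450 + shift)  # shifted excitation

diff = s1 - s2   # background cancels; derivative-like Raman features remain
print(bool(np.abs(diff).max() > 0.4 and np.abs(diff[:50]).max() < 1e-6))  # True
```

The difference spectrum is flat wherever only fluorescence was present and derivative-shaped at each Raman band, which is why the Raman information can be recovered from it by reconstruction.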
Zou, Y; Wang, X; Fan, G
2015-04-01
To understand the habits of Chinese women applying leave-on skincare products (LOSCP) and to improve the facial evenness of anti-ageing cosmetics by modifying the way facial skincare products are applied. A questionnaire on the method of applying LOSCP was distributed to 60 women with the habit of using LOSCP. Their facial images before and after applying LOSCP were taken, and positioning and grey values were used to analyse the effects of different application methods on the uniformity of facial LOSCP. LOSCP, including anti-ageing cosmetics, have been widely used among Chinese women for a long time. However, some women do not pay attention to how to properly apply LOSCP. In our survey, the main focal points of the face are the forehead, malar region, cheek, mouth corners and chin when looking into the mirror, and the mouth corners and inner canthus are often overlooked when applying cosmetic products. The image analysis found that after applying the LOSCP, the greyscale of the forehead, glabella, malar region, upper lip region and jaw changed significantly, whereas that of the canthus, mouth corners and lateral cheek region was not significantly different. Applying an improved application method (11-point method) could significantly increase the grey values of various facial areas. The way Chinese women apply LOSCP may result in uneven facial coverage of skincare products. By improving the facial application method, one can distribute the products evenly over all facial areas, thereby ensuring the efficacy of anti-ageing cosmetics. Thus, further improvement and education regarding skincare is required. © 2014 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
NASA Astrophysics Data System (ADS)
Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei
2018-06-01
In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model demonstrates the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of fractional derivatives are described in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are successfully applied to solve this equation. The convergence analysis is also presented for both methods. Numerical results are given and then compared with the exact solutions. Comparison of the results reveals that both methods are competitive, powerful, reliable, simple to use and ready to apply to a wide range of fractional partial differential equations.
NASA Astrophysics Data System (ADS)
Wang, Liwei; Liu, Xinggao; Zhang, Zeyin
2017-02-01
An efficient primal-dual interior-point algorithm using a new non-monotone line search filter method is presented for nonlinear constrained programming, which is widely applied in engineering optimization. The new non-monotone line search technique is introduced to give relaxed step acceptance conditions and improved convergence performance. It also avoids the choice of an upper bound on the memory, which brings obvious disadvantages to traditional techniques. Under mild assumptions, the global convergence of the new non-monotone line search filter method is analysed, and fast local convergence is ensured by second-order corrections. The proposed algorithm is applied to the classical alkylation process optimization problem and the results illustrate its effectiveness. Some comprehensive comparisons to existing methods are also presented.
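For context, the sketch below implements the classical monotone Armijo backtracking line search, the baseline whose acceptance condition non-monotone variants relax (a non-monotone method compares against a reference value over past iterates rather than against f at the current point alone). It is not the paper's filter method.

```python
import numpy as np

def backtracking_armijo(f, grad, x, d, alpha0=1.0, rho=0.5, c=1e-4):
    """Monotone Armijo backtracking: shrink the step until the sufficient-
    decrease condition f(x + a*d) <= f(x) + c*a*grad(x)@d holds."""
    fx, g = f(x), grad(x)
    alpha = alpha0
    while f(x + alpha * d) > fx + c * alpha * (g @ d):
        alpha *= rho
    return alpha

f = lambda x: float(x @ x)            # simple strongly convex test problem
grad = lambda x: 2 * x
x = np.array([3.0, -4.0])
d = -grad(x)                          # steepest descent direction
alpha = backtracking_armijo(f, grad, x, d)
print(f(x + alpha * d) < f(x))        # sufficient decrease achieved: True
```

Requiring monotone decrease at every step can force very small steps near narrow valleys; relaxing the right-hand side, as the non-monotone technique does, permits occasional increases while preserving global convergence.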
Contact angle adjustment in equation-of-state-based pseudopotential model.
Hu, Anjie; Li, Longjian; Uddin, Rizwan; Liu, Dong
2016-05-01
The single component pseudopotential lattice Boltzmann model has been widely applied in multiphase simulation due to its simplicity and stability. In many studies, it has been claimed that this model can be stable for density ratios larger than 1000. However, the application of the model is still limited to small density ratios when the contact angle is considered. The reason is that the original contact angle adjustment method influences the stability of the model. Moreover, simulation results in the present work show that, by applying the original contact angle adjustment method, the density distribution near the wall is artificially changed, and the contact angle is dependent on the surface tension. Hence, it is very inconvenient to apply this method with a fixed contact angle, and the accuracy of the model cannot be guaranteed. To solve these problems, a contact angle adjustment method based on the geometry analysis is proposed and numerically compared with the original method. Simulation results show that, with our contact angle adjustment method, the stability of the model is highly improved when the density ratio is relatively large, and it is independent of the surface tension.
Interferometric imaging of crustal structure from wide-angle multicomponent OBS-airgun data
NASA Astrophysics Data System (ADS)
Shiraishi, K.; Fujie, G.; Sato, T.; Abe, S.; Asakawa, E.; Kodaira, S.
2015-12-01
In wide-angle seismic surveys with ocean bottom seismographs (OBSs) and airguns, surface-related multiple reflections and upgoing P-to-S conversions are frequently observed. We applied two interferometric imaging methods to the multicomponent OBS data in order to fully utilize seismic signals for subsurface imaging. First, seismic interferometry (SI) is applied to the vertical component in order to obtain a reflection profile with multiple reflections. By correlating seismic traces on common receiver records, pseudo seismic data are generated with virtual sources and receivers located on all original shot positions. We adopt deconvolution SI because source and receiver spectra can be canceled by spectral division. Consequently, gapless reflection images from just below the seafloor to deeper levels are obtained. Second, receiver function (RF) imaging is applied to the multicomponent OBS data in order to image P-to-S conversion boundaries. Though RF is commonly applied to teleseismic data, our purpose is to extract upgoing PS converted waves from wide-angle OBS data. The RF traces are synthesized by deconvolution of the radial and vertical components at the same OBS location for each shot. The final section, obtained by stacking RF traces, shows the PS conversion boundaries beneath the OBSs. Then, the Vp/Vs ratio can be estimated by comparing the one-way traveltime delay with the two-way traveltime of P wave reflections. We applied these methods to field data sets: (a) a 175 km survey in the Nankai trough subduction zone using 71 OBSs with intervals from 1 km to 10 km and 878 shots at a 200 m interval, and (b) a 237 km survey in the northwest Pacific Ocean, with almost flat layers before subduction, using 25 OBSs at a 6 km interval and 1188 shots at a 200 m interval. In our study, SI imaging with multiple reflections is highly applicable to OBS data even in a complex geological setting, and the PS conversion boundary is well imaged by RF imaging, with the Vp/Vs ratio distribution in the sediment estimated in the case of simple structure.
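The deconvolution form of seismic interferometry mentioned above amounts to a spectral division between two receiver recordings, which cancels the common source spectrum. A minimal synthetic sketch follows; the toy wavelet, delay, and the water-level stabilization constant eps are arbitrary illustrations, not survey parameters.

```python
import numpy as np

def deconvolution_interferometry(u_a, u_b, eps=1e-3):
    """Virtual trace between receivers a and b via stabilized spectral division:
    G_ab(f) ~ U_a(f) * conj(U_b(f)) / (|U_b(f)|^2 + eps).
    Unlike plain cross-correlation, the division cancels the source spectrum."""
    Ua, Ub = np.fft.rfft(u_a), np.fft.rfft(u_b)
    G = Ua * np.conj(Ub) / (np.abs(Ub) ** 2 + eps)
    return np.fft.irfft(G, n=len(u_a))

# toy test: the same wavelet recorded at b, and at a with a 30-sample delay
n = 512
wavelet = np.zeros(n)
wavelet[:16] = np.hanning(16)
u_b = wavelet
u_a = np.roll(wavelet, 30)
trace = deconvolution_interferometry(u_a, u_b)
print(int(np.argmax(trace)))  # peak at the 30-sample inter-receiver delay
```

The recovered trace peaks at the traveltime difference between the two receivers, which is the basis for turning common-receiver records into virtual shot gathers.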
Kailasa, Suresh Kumar; Wu, Hui-Fen
2013-07-01
Recently, mass spectrometric related techniques have been widely applied for the identification and quantification of neurochemicals and their metabolites in biofluids. This article presents an overview of mass spectrometric techniques applied in the detection of neurological substances and their metabolites from biological samples. In addition, the advances of chromatographic methods (LC, GC and CE) coupled with mass spectrometric techniques for analysis of neurochemicals in pharmaceutical and biological samples are also discussed.
Application of Higuchi's fractal dimension from basic to clinical neurophysiology: A review.
Kesić, Srdjan; Spasić, Sladjana Z
2016-09-01
For more than 20 years, Higuchi's fractal dimension (HFD), as a nonlinear method, has occupied an important place in the analysis of biological signals. The use of HFD has evolved from EEG and single neuron activity analysis to the most recent application in automated assessments of different clinical conditions. Our objective is to provide an updated review of the HFD method applied in basic and clinical neurophysiological research. This article summarizes and critically reviews a broad literature and major findings concerning the applications of HFD for measuring the complexity of neuronal activity during different neurophysiological conditions. The source of information used in this review comes from the PubMed, Scopus, Google Scholar and IEEE Xplore Digital Library databases. The review process substantiated the significance, advantages and shortcomings of HFD application within all key areas of basic and clinical neurophysiology. Therefore, the paper discusses HFD application alone, combined with other linear or nonlinear measures, or as a part of automated methods for analyzing neurophysiological signals. The speed, accuracy and cost of applying the HFD method for research and medical diagnosis make it stand out from the widely used linear methods. However, only a combination of HFD with other nonlinear methods ensures reliable and accurate analysis of a wide range of neurophysiological signals. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
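Since HFD is central to the reviewed literature, a compact reference implementation of Higuchi's algorithm is sketched below; the choice k_max = 8 is a common but arbitrary default, and real analyses tune it to the signal length.

```python
import numpy as np

def higuchi_fd(x, k_max=8):
    """Higuchi's fractal dimension of a 1-D signal (standard algorithm)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    log_inv_k, log_L = [], []
    for k in range(1, k_max + 1):
        lengths = []
        for m in range(k):                          # k subsampled series x[m::k]
            idx = np.arange(m, n, k)
            if len(idx) < 2:
                continue
            curve = np.abs(np.diff(x[idx])).sum()
            norm = (n - 1) / ((len(idx) - 1) * k)   # Higuchi's normalization
            lengths.append(curve * norm / k)
        log_inv_k.append(np.log(1.0 / k))
        log_L.append(np.log(np.mean(lengths)))
    slope, _ = np.polyfit(log_inv_k, log_L, 1)      # L(k) ~ k^(-D): slope is D
    return float(slope)

rng = np.random.default_rng(1)
fd_line = higuchi_fd(np.linspace(0.0, 1.0, 1000))   # smooth signal: D ≈ 1
fd_noise = higuchi_fd(rng.standard_normal(1000))    # white noise: D near 2
print(round(fd_line, 2))  # 1.0
```

Values between 1 (smooth, regular activity) and 2 (noise-like activity) are what make HFD a useful complexity index for EEG and related neurophysiological signals.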
A new method for weakening the combined effect of residual errors on multibeam bathymetric data
NASA Astrophysics Data System (ADS)
Zhao, Jianhu; Yan, Jun; Zhang, Hongmei; Zhang, Yuqing; Wang, Aixue
2014-12-01
The multibeam bathymetric system (MBS) has been widely applied in marine surveying for providing high-resolution seabed topography. However, some factors degrade the precision of bathymetry, including the sound velocity, the vessel attitude, the misalignment angle of the transducer and so on. Although these factors are corrected strictly in bathymetric data processing, the final bathymetric result is still affected by their residual errors. In deep water, the result usually cannot meet the requirements of high-precision seabed topography. The combined effect of these residual errors is systematic, and it is difficult to separate and weaken the effect using traditional single-error correction methods. Therefore, this paper puts forward a new method for weakening the effect of residual errors based on the frequency-spectrum characteristics of seabed topography and multibeam bathymetric data. Four steps are involved in the method: the separation of the low-frequency and high-frequency parts of the bathymetric data, the reconstruction of the trend of the actual seabed topography, the merging of the actual trend and the extracted microtopography, and the accuracy evaluation. Experimental results prove that the proposed method can weaken the combined effect of residual errors on multibeam bathymetric data and efficiently improve the accuracy of the final post-processing results. We suggest that the method be widely applied to MBS data processing in deep water.
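The first step of the method, separating the low-frequency trend from the high-frequency microtopography, can be sketched with a simple Gaussian low-pass filter on a synthetic grid. The paper's actual filter design and cutoff are not specified in the abstract, so the grid, noise level, and sigma below are arbitrary illustrations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
x, y = np.meshgrid(np.linspace(0, 1, 200), np.linspace(0, 1, 200))
trend = -1000 + 50 * x + 30 * y                 # smooth seabed trend (m)
micro = 0.5 * rng.standard_normal((200, 200))   # fine microtopography
depth = trend + micro                           # synthetic sounding grid

sigma = 10                                      # arbitrary cutoff for the sketch
low = gaussian_filter(depth, sigma)             # low-frequency part (trend estimate)
high = depth - low                              # high-frequency part (microtopography)

# away from the borders the filtered surface tracks the true trend closely
interior_err = np.abs(low - trend)[45:-45, 45:-45].max()
print(bool(interior_err < 0.5))  # True
```

In the method's remaining steps, the low-frequency part, which carries the systematic residual-error signature, would be replaced by a reconstructed trend before merging the extracted microtopography back in.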
Mixture Modeling: Applications in Educational Psychology
ERIC Educational Resources Information Center
Harring, Jeffrey R.; Hodis, Flaviu A.
2016-01-01
Model-based clustering methods, commonly referred to as finite mixture modeling, have been applied to a wide variety of cross-sectional and longitudinal data to account for heterogeneity in population characteristics. In this article, we elucidate 2 such approaches: growth mixture modeling and latent profile analysis. Both techniques are…
NASA Astrophysics Data System (ADS)
Anderson, R. B.; Finch, N.; Clegg, S. M.; Graff, T.; Morris, R. V.; Laura, J.
2018-04-01
The PySAT point spectra tool provides a flexible graphical interface, enabling scientists to apply a wide variety of preprocessing and machine learning methods to point spectral data, with an emphasis on multivariate regression.
On the application of the Germano identity to subgrid-scale modeling
NASA Technical Reports Server (NTRS)
Ronchi, C.; Ypma, M.; Canuto, V. M.
1992-01-01
An identity proposed by Germano (1992) has been widely applied to several turbulent flows to dynamically compute rather than adjust the Smagorinsky coefficient. The assumptions under which the method has been used are discussed, and some conceptual difficulties in its current implementation are examined.
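Concretely, the dynamic procedure referred to here combines the Germano identity with the Smagorinsky closure at the grid and test-filter levels; in the widely used least-squares formulation (signs and factor conventions vary across papers, so the following is the common textbook form, not necessarily the article's notation):

```latex
% Germano identity: the resolved stress L_ij relates test-level and
% grid-level subgrid stresses and is computable from the resolved field
L_{ij} = T_{ij} - \widehat{\tau}_{ij}
       = \widehat{\overline{u}_i\,\overline{u}_j}
         - \widehat{\overline{u}}_i\,\widehat{\overline{u}}_j ,
% with the Smagorinsky closure applied on both filter levels,
M_{ij} = 2\Delta^2\!\left( \widehat{|\overline{S}|\,\overline{S}_{ij}}
         - \alpha^2\,|\widehat{\overline{S}}|\,\widehat{\overline{S}}_{ij} \right),
\qquad \alpha = \widehat{\Delta}/\Delta ,
% the coefficient then follows by least squares (Lilly, 1992):
C_s^2 = \frac{\langle L_{ij} M_{ij} \rangle}{\langle M_{ij} M_{ij} \rangle} .
```

The coefficient is thus computed from the resolved field at every step rather than tuned a priori, which is the "dynamically compute rather than adjust" contrast the abstract draws.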
Honeynet Learning: Discovering IT Security
ERIC Educational Resources Information Center
del Moral Talabis, Mark Ryan
2007-01-01
Learning IT Security in a classroom setting has often been a frustrating endeavor for both instructors and students alike. From our experience, traditional instructional methods like direct instruction and lectures, though widely used and effective in most other areas, have significant shortcomings when applied in IT security learning. In this…
Riparian Sediment Delivery Ratio: Stiff Diagrams and Artificial Neural Networks
Various methods are used to estimate sediment transport through riparian buffers and grass filters, with the sediment delivery ratio having been the most widely applied. The U.S. Forest Service developed a sediment delivery ratio using the stiff diagram and a logistic curve to int...
Domain specific languages for modeling and simulation: use case OMS3
USDA-ARS?s Scientific Manuscript database
A domain-specific language (DSL) is usually a concise, declarative language that strongly emphasizes a particular problem domain. DSL methods and implementations in general are widely prototyped and applied in academia for creating elegant ways to express properties, relationships, and behavior of r...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sawicki, K.; Malinowski, F. K.; Gałkowski, K.
2015-01-05
A simple, single-color method for permanent marking of the position of individual self-assembled semiconductor quantum dots (QDs) at cryogenic temperatures is reported. The method combines in situ photolithography with standard micro-photoluminescence spectroscopy. Its utility is proven by a systematic magneto-optical study of a single CdTe/ZnTe QD containing a Mn²⁺ ion, where a magnetic field of up to 10 T in two orthogonal, Faraday and Voigt, configurations is applied to the same QD. The presented approach can be applied to a wide range of solid-state nanoemitters.
Evolution of statistical averages: An interdisciplinary proposal using the Chapman-Enskog method
NASA Astrophysics Data System (ADS)
Mariscal-Sanchez, A.; Sandoval-Villalbazo, A.
2017-08-01
This work examines the idea of applying the Chapman-Enskog (CE) method for approximating the solution of the Boltzmann equation beyond the realm of physics, using an information theory approach. Equations describing the evolution of averages and their fluctuations in a generalized phase space are established up to first-order in the Knudsen parameter which is defined as the ratio of the time between interactions (mean free time) and a characteristic macroscopic time. Although the general equations here obtained may be applied in a wide range of disciplines, in this paper, only a particular case related to the evolution of averages in speculative markets is examined.
Application of the superposition principle to solar-cell analysis
NASA Technical Reports Server (NTRS)
Lindholm, F. A.; Fossum, J. G.; Burgess, E. L.
1979-01-01
The superposition principle of differential-equation theory - which applies if and only if the relevant boundary-value problems are linear - is used to derive the widely used shifting approximation that the current-voltage characteristic of an illuminated solar cell is the dark current-voltage characteristic shifted by the short-circuit photocurrent. Analytical methods are presented to treat cases where shifting is not strictly valid. Well-defined conditions necessary for superposition to apply are established. For high injection in the base region, the method of analysis accurately yields the dependence of the open-circuit voltage on the short-circuit current (or the illumination level).
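The shifting approximation itself is a one-liner once the dark characteristic is known: the illuminated current is the dark diode current minus the short-circuit photocurrent, which also yields the open-circuit voltage in closed form. The parameter values below are illustrative, not from the paper, and the ideal-diode form assumes the low-injection regime where superposition holds.

```python
import numpy as np

VT = 0.02585                     # thermal voltage kT/q at ~300 K (V)
I0 = 1e-12                       # dark saturation current (A), illustrative
Isc = 0.035                      # short-circuit photocurrent (A), illustrative

def dark_current(v):
    """Ideal-diode dark I-V characteristic."""
    return I0 * (np.exp(v / VT) - 1.0)

def illuminated_current(v):
    """Shifting approximation: illuminated I-V is the dark I-V shifted by Isc."""
    return dark_current(v) - Isc

# open-circuit voltage from I(Voc) = 0  =>  Voc = VT * ln(Isc/I0 + 1)
voc = float(VT * np.log(Isc / I0 + 1.0))
print(round(voc, 3))  # 0.628
```

This closed-form Voc-Isc relation is exactly what breaks down under high injection in the base, the case the paper treats with a more careful analysis.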
NASA Astrophysics Data System (ADS)
Suzuki, Mototsugu; Akiba, Norimitsu; Kurosawa, Kenji; Kuroki, Kenro; Akao, Yoshinori; Higashikawa, Yoshiyasu
2016-01-01
We applied a wide-field time-resolved luminescence (TRL) method with a pulsed laser and a gated intensified charge coupled device (ICCD) for deciphering obliterated documents for use in forensic science. The TRL method can nondestructively measure the dynamics of luminescence, including fluorescence and phosphorescence lifetimes, which prove to be useful parameters for image detection. First, we measured the TRL spectra of four brands of black porous-tip pen inks on paper to estimate their luminescence lifetimes. Next, we acquired the TRL images of 12 obliterated documents at various delay times and gate times of the ICCD. The obliterated contents were revealed in the TRL images because of the difference in the luminescence lifetimes of the inks. This method requires no pretreatment, is nondestructive, and has the advantage of wide-field imaging, which makes it is easy to control the gate timing. This demonstration proves that TRL imaging and spectroscopy are powerful tools for forensic document examination.
BLISS is a versatile and quantitative method for genome-wide profiling of DNA double-strand breaks.
Yan, Winston X; Mirzazadeh, Reza; Garnerone, Silvano; Scott, David; Schneider, Martin W; Kallas, Tomasz; Custodio, Joaquin; Wernersson, Erik; Li, Yinqing; Gao, Linyi; Federova, Yana; Zetsche, Bernd; Zhang, Feng; Bienko, Magda; Crosetto, Nicola
2017-05-12
Precisely measuring the location and frequency of DNA double-strand breaks (DSBs) along the genome is instrumental to understanding genomic fragility, but current methods are limited in versatility, sensitivity or practicality. Here we present Breaks Labeling In Situ and Sequencing (BLISS), featuring the following: (1) direct labelling of DSBs in fixed cells or tissue sections on a solid surface; (2) low-input requirement by linear amplification of tagged DSBs by in vitro transcription; (3) quantification of DSBs through unique molecular identifiers; and (4) easy scalability and multiplexing. We apply BLISS to profile endogenous and exogenous DSBs in low-input samples of cancer cells, embryonic stem cells and liver tissue. We demonstrate the sensitivity of BLISS by assessing the genome-wide off-target activity of two CRISPR-associated RNA-guided endonucleases, Cas9 and Cpf1, observing that Cpf1 has higher specificity than Cas9. Our results establish BLISS as a versatile, sensitive and efficient method for genome-wide DSB mapping in many applications.
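Step (3), quantification through unique molecular identifiers, reduces to counting distinct UMIs per genomic position so that PCR duplicates of the same tagged break are not double-counted. A minimal sketch with invented reads; the positions and UMI sequences are hypothetical, and real BLISS processing involves alignment and error-tolerant UMI collapsing.

```python
from collections import defaultdict

# hypothetical (position, UMI) read pairs after alignment
reads = [
    ("chr1:1000", "ACGT"), ("chr1:1000", "ACGT"),  # PCR duplicates of one DSB end
    ("chr1:1000", "TTAG"),                          # distinct molecule, same site
    ("chr2:5000", "GGCA"),
]

umis = defaultdict(set)
for pos, umi in reads:
    umis[pos].add(umi)                              # deduplicate by UMI

dsb_counts = {pos: len(u) for pos, u in umis.items()}
print(dsb_counts)  # {'chr1:1000': 2, 'chr2:5000': 1}
```

Counting unique UMIs rather than raw reads is what makes the DSB quantification robust to the linear amplification and PCR steps that precede sequencing.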
A novel approach to describing and detecting performance anti-patterns
NASA Astrophysics Data System (ADS)
Sheng, Jinfang; Wang, Yihan; Hu, Peipei; Wang, Bin
2017-08-01
An anti-pattern, as an extension of a pattern, describes a widely used poor solution that can have a negative influence on application systems. Aiming at the shortcomings of existing anti-pattern descriptions, an anti-pattern description method based on first-order predicates is proposed. This method synthesizes anti-pattern forms and symptoms, which makes the description more accurate and gives it good scalability and versatility as well. In order to improve the accuracy of anti-pattern detection, a Bayesian classification method is applied to validate the detection results, which can reduce false negatives and false positives of anti-pattern detection. Finally, the proposed approach is applied to a small e-commerce system; the feasibility and effectiveness of the approach are demonstrated further through experiments.
Clinical study of cultured epithelial autografts in liquid suspension in severe burn patients.
Yim, Haejun; Yang, Hyeong Tae; Cho, Yong Suk; Seo, Cheong Hoon; Lee, Boung Chul; Ko, Jang Hyu; Kwak, In Suk; Kim, Dohern; Hur, Jun; Kim, Jong Hyun; Chun, Wook
2011-09-01
We address the clinical application of suspension-type cultured epithelial autografts (CEAs), Keraheal™ (MCTT, Seoul, Korea), along with their effects, application method, merits and demerits. From February 2007 to June 2010, 29 patients with extensive burns participated in the clinical test of the suspension-type CEA. A widely meshed autograft (1:4-6 ratio) was applied to the wound bed and the suspension-type CEA was sprayed with a Tissomat cell sprayer, followed by a spray of Tissucol, a fibrin sealant. The patients' (men/women=26/3) median (interquartile range) age was 42 (30-49) years, the burned TBSA was 55 (44-60)%, and the full-thickness burn area was 40 (30-46.5)%. The area of Keraheal™ applied was 800 (400-1200) cm². The take rate was 96 (90.5-99)% and 100 (98.5-100)% at 2 and 4 weeks after treatment with Keraheal™, respectively. The Vancouver burn scar scale was 5 (4-6.5), 4 (3-6), and 3 (2-4) at 8, 12 and 24 weeks after the Keraheal™ application. Widely meshed autografts must be applied in massive burns, but their take rate is greatly reduced. The CEAs enhance the take rate of a widely meshed autograft in massive burns and allow for grafting widely meshed autografts together with acellular dermal matrix in some cases. Copyright © 2011 Elsevier Ltd and ISBI. All rights reserved.
Variability in the measurement of hospital-wide mortality rates.
Shahian, David M; Wolf, Robert E; Iezzoni, Lisa I; Kirle, Leslie; Normand, Sharon-Lise T
2010-12-23
Several countries use hospital-wide mortality rates to evaluate the quality of hospital care, although the usefulness of this metric has been questioned. Massachusetts policymakers recently requested an assessment of methods to calculate this aggregate mortality metric for use as a measure of hospital quality. The Massachusetts Division of Health Care Finance and Policy provided four vendors with identical information on 2,528,624 discharges from Massachusetts acute care hospitals from October 1, 2004, through September 30, 2007. Vendors applied their risk-adjustment algorithms and provided predicted probabilities of in-hospital death for each discharge, as well as hospital-level observed and expected mortality rates. We compared the numbers and characteristics of discharges and hospitals included by each of the four methods. We also compared hospitals' standardized mortality ratios and classification of hospitals with mortality rates that were higher or lower than expected, according to each method. The proportions of discharges that were included by each method ranged from 28% to 95%, and the severity of patients' diagnoses varied widely. Because of their discharge-selection criteria, two methods calculated in-hospital mortality rates (4.0% and 5.9%) that were twice the state average (2.1%). Pairwise associations (Pearson correlation coefficients) of discharge-level predicted mortality probabilities ranged from 0.46 to 0.70. Hospital-performance categorizations varied substantially and were sometimes completely discordant. In 2006, a total of 12 of 28 hospitals that had higher-than-expected hospital-wide mortality when classified by one method had lower-than-expected mortality when classified by one or more of the other methods. Four common methods for calculating hospital-wide mortality produced substantially different results. 
This may have resulted from a lack of standardized national eligibility and exclusion criteria, different statistical methods, or fundamental flaws in the hypothesized association between hospital-wide mortality and quality of care. (Funded by the Massachusetts Division of Health Care Finance and Policy.).
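The hospital-level comparison behind these classifications rests on the standardized mortality ratio; a minimal sketch with invented figures (not any vendor's actual risk model):

```python
def standardized_mortality_ratio(observed_deaths, predicted_probs):
    """SMR = observed deaths / expected deaths, where expected deaths
    are the sum of the discharge-level predicted death probabilities
    produced by a risk-adjustment model."""
    expected = sum(predicted_probs)
    return observed_deaths / expected

# Hypothetical hospital: 3 observed deaths among 6 discharges,
# with risk-model probabilities summing to 2.0 expected deaths
probs = [0.01, 0.05, 0.40, 0.30, 0.60, 0.64]
smr = standardized_mortality_ratio(3, probs)
print(round(smr, 2))  # 1.5 -> higher-than-expected mortality
```

Because each vendor selects different discharges and fits a different risk model, both the numerator and the denominator of this ratio differ across methods, which is how discordant classifications arise.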
NASA Technical Reports Server (NTRS)
Ganguly, Jibamitra
1989-01-01
Results of preliminary calculations of volatile abundances in carbonaceous chondrites are discussed. The method (Ganguly 1982) was refined for the calculation of cooling rate on the basis of cation ordering in orthopyroxenes, and it was applied to the derivation of cooling rates of some stony meteorites. Evaluation of cooling rate is important to the analysis of condensation, accretion, and post-accretionary metamorphic histories of meteorites. The method of orthopyroxene speedometry is widely applicable to meteorites and would be very useful in the understanding of the evolutionary histories of carbonaceous chondrites, especially since the conventional metallographic and fission track methods yield widely different results in many cases. Abstracts are given which summarize the major conclusions of the volatile abundance and cooling rate calculations.
Intelligent methods for the process parameter determination of plastic injection molding
NASA Astrophysics Data System (ADS)
Gao, Huang; Zhang, Yun; Zhou, Xundao; Li, Dequn
2018-03-01
Injection molding is one of the most widely used material processing methods in producing plastic products with complex geometries and high precision. The determination of process parameters is important in obtaining qualified products and maintaining product quality. This article reviews the recent studies and developments of the intelligent methods applied in the process parameter determination of injection molding. These intelligent methods are classified into three categories: case-based reasoning methods, expert system-based methods, and data fitting and optimization methods. A framework of process parameter determination is proposed after comprehensive discussions. Finally, the conclusions and future research topics are discussed.
[Application of Delphi method in traditional Chinese medicine clinical research].
Bi, Ying-fei; Mao, Jing-yuan
2012-03-01
In recent years, the Delphi method has been widely applied in traditional Chinese medicine (TCM) clinical research. This article analyzes the current application of the Delphi method in TCM clinical research and discusses some problems in the choice of evaluation method, the classification of observation indexes and the selection of survey items. On the basis of its present application, the authors analyze the method with respect to questionnaire design, the selection of experts, the evaluation of observation indexes and the selection of survey items. Furthermore, the authors summarize the steps for applying the Delphi method in TCM clinical research.
Lopez-Sangil, Luis; George, Charles; Medina-Barcenas, Eduardo; Birkett, Ali J; Baxendale, Catherine; Bréchet, Laëtitia M; Estradera-Gumbau, Eduard; Sayer, Emma J
2017-09-01
Root exudation is a key component of nutrient and carbon dynamics in terrestrial ecosystems. Exudation rates vary widely by plant species and environmental conditions, but our understanding of how root exudates affect soil functioning is incomplete, in part because there are few viable methods to manipulate root exudates in situ. To address this, we devised the Automated Root Exudate System (ARES), which simulates increased root exudation by applying small amounts of labile solutes at regular intervals in the field. The ARES is a gravity-fed drip irrigation system comprising a reservoir bottle connected via a timer to a micro-hose irrigation grid covering c. 1 m²; 24 drip-tips are inserted into the soil to 4-cm depth to apply solutions into the rooting zone. We installed two ARES subplots within existing litter removal and control plots in a temperate deciduous woodland. We applied either an artificial root exudate solution (RE) or a procedural control solution (CP) to each subplot for 1 min day⁻¹ during two growing seasons. To investigate the influence of root exudation on soil carbon dynamics, we measured soil respiration monthly and soil microbial biomass at the end of each growing season. The ARES applied the solutions at a rate of c. 2 L m⁻² week⁻¹ without significantly increasing soil water content. The application of RE solution had a clear effect on soil carbon dynamics, but the response varied by litter treatment. Across two growing seasons, soil respiration was 25% higher in RE compared to CP subplots in the litter removal treatment, but not in the control plots. By contrast, we observed a significant increase in microbial biomass carbon (33%) and nitrogen (26%) in RE subplots in the control litter treatment. The ARES is an effective, low-cost method to apply experimental solutions directly into the rooting zone in the field. The installation of the systems entails minimal disturbance to the soil and little maintenance is required. 
Although we used ARES to apply root exudate solution, the method can be used to apply many other treatments involving solute inputs at regular intervals in a wide range of ecosystems.
Characterization of rock thermal conductivity by high-resolution optical scanning
Popov, Y.A.; Pribnow, D.F.C.; Sass, J.H.; Williams, C.F.; Burkhardt, H.
1999-01-01
We compared three laboratory methods for thermal conductivity measurements: divided-bar, line-source and optical scanning. These methods are widely used in geothermal and petrophysical studies, particularly as applied to research on cores from deep scientific boreholes. The relatively new optical scanning method has recently been perfected and applied to geophysical problems. A comparison among these methods for determining the thermal conductivity tensor for anisotropic rocks is based on a representative collection of 80 crystalline rock samples from the KTB continental deep borehole (Germany). Despite substantial inhomogeneity of rock thermal conductivity (up to 40-50% variation) and high anisotropy (with ratios of principal values reaching 2 and more), the results of measurements agree very well among the different methods. The discrepancy for measurements along the foliation is negligible (<1%). The component of thermal conductivity normal to the foliation reveals somewhat larger differences (3-4%). Optical scanning allowed us to characterize the thermal inhomogeneity of rocks and to identify a three-dimensional anisotropy in thermal conductivity of some gneiss samples. The merits of optical scanning include minor random errors (1.6%), the ability to record the variation of thermal conductivity along the sample, the ability to sample deeply using a slow scanning rate, freedom from constraints on sample size, shape and quality of mechanical treatment of the sample surface, a contactless mode of measurement, high speed of operation, and the ability to measure on a cylindrical sample surface. More traditional methods remain superior for characterizing bulk conductivity at elevated temperature.
ERIC Educational Resources Information Center
Burns, Barbara A.; Jordan, Thomas M.
2006-01-01
Business managers are faced with complex decisions involving a wide range of issues--technical, social, environmental, and financial--and their interaction. Our education system focuses heavily on presenting structured problems and teaching students to apply a set of tools or methods to solve these problems. Yet the most difficult thing to teach…
What Should We Teach? A Consensus Method to Determine Curriculum Content
ERIC Educational Resources Information Center
McCarthy, W. H.; And Others
1977-01-01
A technique is described that uses the perception of a wide variety of doctors, including specialists in the field, other specialists, and general practitioners, to determine what should be taught to undergraduate medical students. The Spivey technique is applied to curriculum needs in ophthalmology. (LBH)
USDA-ARS?s Scientific Manuscript database
Genotyping-by-Sequencing (GBS) is a low-cost, high-throughput, method for genome-wide polymorphism discovery and genotyping adjacent to restriction sites. Since 2010, GBS has been applied for the genotyping of over 12,000 grape breeding lines, with a primary focus on identifying markers predictive ...
Transcriptome analysis of Pseudomonas syringae identifies new genes, ncRNAs, and antisense activity
USDA-ARS?s Scientific Manuscript database
To fully understand how bacteria respond to their environment, it is essential to assess genome-wide transcriptional activity. New high throughput sequencing technologies make it possible to query the transcriptome of an organism in an efficient unbiased manner. We applied a strand-specific method t...
Socioeconomic Status and Child Development: A Meta-Analysis
ERIC Educational Resources Information Center
Letourneau, Nicole Lyn; Duffett-Leger, Linda; Levac, Leah; Watson, Barry; Young-Morris, Catherine
2013-01-01
Lower socioeconomic status (SES) is widely accepted to have deleterious effects on the well-being and development of children and adolescents. However, rigorous meta-analytic methods have not been applied to determine the degree to which SES supports or limits children's and adolescents' behavioural, cognitive and language development. While…
A wide variety of in situ subsurface remediation strategies have been developed to mitigate contamination by chlorinated solvent dense non-aqueous phase liquids (DNAPLS) and metals. Geochemical methods include: zerovalent iron emplacement, various electrolytic applications, elec...
Synthesis of aza-fused polycyclic quinolines through copper-catalyzed cascade reactions.
Cai, Qian; Li, Zhengqiu; Wei, Jiajia; Fu, Liangbin; Ha, Chengyong; Pei, Duanqing; Ding, Ke
2010-04-02
A new and efficient method for the synthesis of aza-fused polycyclic quinolines (e.g., benzimidazo[1,2-a]quinolines) is described. This protocol includes an intermolecular condensation followed by a copper-catalyzed intramolecular C-N coupling reaction. The method is applied to a wide range of 2-iodo, 2-bromo, and 2-chloro aryl aldehyde substrates to yield the aza-fused polycyclic quinolines in good yields.
Thermodynamic and Transport Properties of Real Air Plasma in Wide Range of Temperature and Pressure
NASA Astrophysics Data System (ADS)
Wang, Chunlin; Wu, Yi; Chen, Zhexin; Yang, Fei; Feng, Ying; Rong, Mingzhe; Zhang, Hantian
2016-07-01
Air plasma has been widely applied in industrial manufacturing. In this paper, the thermodynamic and transport properties of both dry and humid air plasmas are calculated over the temperature range 300-100,000 K and the pressure range 0.1-100 atm. To build a more precise model of real air plasma, over 70 species are considered in the composition. Two different methods, the Gibbs free energy minimization method and the mass action law method, are used to determine the composition of the air plasma in different temperature ranges. For the transport coefficients, the simplified Chapman-Enskog method developed by Devoto has been applied using the most recent collision integrals. It is found that the presence of CO2 has almost no effect on the properties of air plasma. The influence of H2O can be ignored except in low-pressure air plasma, in which the saturated vapor pressure is relatively high. The results will serve as credible inputs for computational simulation of air plasma. Supported by the National Key Basic Research Program of China (973 Program) (No. 2015CB251002), the National Natural Science Foundation of China (Nos. 51521065, 51577145), the Science and Technology Project Funds of the Grid State Corporation (SGTYHT/13-JS-177), the Fundamental Research Funds for the Central Universities, and State Grid Corporation Project (GY71-14-004)
NASA Astrophysics Data System (ADS)
Liu, Jianyu; Zhang, Qiang; Zhang, Yongqiang; Chen, Xi; Li, Jianfeng; Aryal, Santosh K.
2017-10-01
Climatic elasticity has been widely applied to assess streamflow responses to climate changes. To fully assess the impacts of climate change under global warming on streamflow and reduce the error and uncertainty from various control variables, we develop a four-parameter (precipitation, catchment characteristics n, and maximum and minimum temperatures) climatic elasticity method named PnT, based on the widely used Budyko framework and simplified Makkink equation. We use this method to carry out the first comprehensive evaluation of the streamflow response to potential climate change for 372 widely spread catchments in China. The PnT climatic elasticity was first evaluated for the period 1980-2000, and then used to evaluate streamflow change response to climate change based on 12 global climate models under Representative Concentration Pathway 2.6 (RCP2.6) and RCP8.5 emission scenarios. The results show that (1) the PnT climatic elasticity method is reliable; (2) projected increasing streamflow takes place in more than 60% of the selected catchments, with mean increments of 9% and 15.4% under RCP2.6 and RCP8.5, respectively; and (3) uncertainties in the projected streamflow are considerable in several regions, such as the Pearl River and Yellow River, with more than 40% of the selected catchments showing inconsistent change directions. Our results can help Chinese policy makers to manage and plan water resources more effectively, and the PnT climatic elasticity should be applied to other parts of the world.
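A Budyko-type precipitation elasticity can be sketched numerically as follows. This uses the common Choudhury-Yang form of the Budyko curve with catchment parameter n and invented inputs; it is a generic illustration, not the authors' four-parameter PnT formulation, which additionally carries temperature terms through the Makkink equation.

```python
def runoff(P, E0, n):
    """Budyko-type water balance (Choudhury-Yang form):
    mean annual runoff Q = P - E, with evaporation
    E = P * E0 / (P**n + E0**n)**(1/n)."""
    E = P * E0 / (P**n + E0**n) ** (1.0 / n)
    return P - E

def precipitation_elasticity(P, E0, n, h=1e-4):
    """Numerical elasticity: relative change in runoff Q per
    relative change in precipitation P (central difference)."""
    Q = runoff(P, E0, n)
    dQ_dP = (runoff(P * (1 + h), E0, n) - runoff(P * (1 - h), E0, n)) / (2 * P * h)
    return dQ_dP * P / Q

# Hypothetical catchment: P = 800 mm/yr, potential ET = 1000 mm/yr, n = 2
eps = precipitation_elasticity(800.0, 1000.0, 2.0)
print(round(eps, 2))  # ≈ 2.4: a 1% precipitation change gives ≈ 2.4% runoff change
```

Elasticities above 1, as here, are why modest precipitation changes in the GCM scenarios can translate into the larger streamflow increments reported.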
Conceptual Design Study on Bolts for Self-Loosing Preventable Threaded Fasteners
NASA Astrophysics Data System (ADS)
Noma, Atsushi; He, Jianmei
2017-11-01
Threaded fasteners using bolts are widely applied in industry as well as in various other fields. However, bolted joints are prone to self-loosening, which causes many accidents. The purpose of this study is to obtain self-loosening-resistant threaded fasteners by applying spring characteristics to bolt structures. Bolt structures with helical cutting are introduced through three-dimensional (3D) CAD modeling tools. Analytical evaluations of the spring characteristics of the helical-cut bolt structures and of the anti-loosening performance of the threaded fasteners were performed using the finite element method, and the results are reported. Comparison of slackness test results with the analytical results and more detailed evaluation of mechanical properties will be carried out in a future study.
Sequential neural text compression.
Schmidhuber, J; Heil, S
1996-01-01
The purpose of this paper is to show that neural networks may be promising tools for data compression without loss of information. We combine predictive neural nets and statistical coding techniques to compress text files. We apply our methods to certain short newspaper articles and obtain compression ratios exceeding those of the widely used Lempel-Ziv algorithms (which build the basis of the UNIX functions "compress" and "gzip"). The main disadvantage of our methods is that they are about three orders of magnitude slower than standard methods.
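The predict-then-encode principle can be sketched without a neural network: an adaptive order-0 frequency model stands in for the predictive net, its probability estimates give the ideal (arithmetic-coding) code length, and zlib, a DEFLATE/Lempel-Ziv implementation, provides the comparison point. The text is invented, and a simple order-0 model will not reproduce the paper's neural results; the sketch only shows the coding loop.

```python
import math
import zlib

def adaptive_model_bits(text):
    """Bits needed by an adaptive order-0 predictive model: each symbol
    is coded with -log2 of its current estimated probability (Laplace-
    smoothed byte counts), then the counts are updated -- the same
    predict-then-encode loop a neural predictor would drive."""
    counts, total, bits = {}, 0, 0.0
    alphabet = 256
    for byte in text.encode("utf-8"):
        p = (counts.get(byte, 0) + 1) / (total + alphabet)
        bits += -math.log2(p)
        counts[byte] = counts.get(byte, 0) + 1
        total += 1
    return bits

article = ("the committee met on tuesday to discuss the budget. "
           "the committee agreed to meet again next tuesday. ") * 20
predictive_bytes = adaptive_model_bits(article) / 8
lz_bytes = len(zlib.compress(article.encode("utf-8"), 9))
print(predictive_bytes < len(article), lz_bytes < len(article))  # True True
```

A stronger predictor (e.g., the paper's neural net conditioning on preceding characters) lowers the per-symbol code length directly, which is the source of the reported compression gains.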
Image Contrast Immersion Method for Measuring Refractive Index Applied to Spider Silks
2011-09-26
8. A. J. Werner, “Methods in high precision refractometry of optical glasses,” Appl. Opt. 7(5), 837-843 (1968). 9. Y. S. Liu, “Direct… …transparent, low-visibility orb web. Refractometry is the most widely used technique for accurately measuring n for transparent media. It has been in use for more than a century. There are several standard refractometry methods [8]. Most require a bulk sample with surfaces polished to optical
Analysis of time-of-flight spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, E.M.; Foxon, C.T.; Zhang, J.
1990-07-01
A simplified method of data analysis for time-of-flight measurements of the velocity of molecular beam sources is described. This method does not require the complex data fitting previously used in such studies. The method is applied to the study of Pb molecular beams from a true Knudsen source and has been used to show that a VG Quadrupoles SXP300H mass spectrometer, when fitted with an open cross-beam ionizer, acts as an ideal density detector over a wide range of operating conditions.
Does Choice of Multicriteria Method Matter? An Experiment in Water Resources Planning
NASA Astrophysics Data System (ADS)
Hobbs, Benjamin F.; Chankong, Vira; Hamadeh, Wael; Stakhiv, Eugene Z.
1992-07-01
Many multiple criteria decision making methods have been proposed and applied to water planning. Their purpose is to provide information on tradeoffs among objectives and to help users articulate value judgments in a systematic, coherent, and documentable manner. The wide variety of available techniques confuses potential users, causing inappropriate matching of methods with problems. Experiments in which water planners apply more than one multicriteria procedure to realistic problems can help dispel this confusion by testing method appropriateness, ease of use, and validity. We summarize one such experiment where U.S. Army Corps of Engineers personnel used several methods to screen urban water supply plans. The methods evaluated include goal programming, ELECTRE I, additive value functions, multiplicative utility functions, and three techniques for choosing weights (direct rating, indifference tradeoff, and the analytical hierarchy process). Among the conclusions we reach are the following. First, experienced planners generally prefer simpler, more transparent methods. Additive value functions are favored. Yet none of the methods are endorsed by a majority of the participants; many preferred to use no formal method at all. Second, there is strong evidence that rating, the most commonly applied weight selection method, is likely to lead to weights that fail to represent the trade-offs that users are willing to make among criteria. Finally, we show that decisions can be as sensitive to the method used as to which person applies it, or even more so. Therefore, if who chooses is important, then so too is how a choice is made.
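The additive value function favored by the participants can be sketched as follows (the criteria, weights, and plan scores are invented for illustration): each alternative's overall value is the weighted sum of its single-criterion value scores.

```python
def additive_value(scores, weights):
    """Additive value function: overall value is the weighted sum of
    single-criterion value scores (each scaled to 0-1); weights sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(weights[c] * scores[c] for c in weights)

# Hypothetical urban water-supply alternatives scored on three criteria (0-1)
weights = {"cost": 0.5, "reliability": 0.3, "environment": 0.2}
plans = {
    "new reservoir": {"cost": 0.4, "reliability": 0.9, "environment": 0.3},
    "conservation":  {"cost": 0.9, "reliability": 0.5, "environment": 0.8},
    "groundwater":   {"cost": 0.6, "reliability": 0.6, "environment": 0.6},
}
ranked = sorted(plans, key=lambda p: additive_value(plans[p], weights),
                reverse=True)
print(ranked[0])  # conservation
```

The experiment's second conclusion bites here: the ranking can flip if the weights come from rating rather than from indifference tradeoffs, even with identical scores.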
Mo, Yun; Zhang, Zhongzhao; Meng, Weixiao; Ma, Lin; Wang, Yao
2014-01-01
Indoor positioning systems based on the fingerprint method are widely used due to the large number of existing devices with a wide range of coverage. However, extensive positioning regions with a massive fingerprint database may cause high computational complexity and error margins, therefore clustering methods are widely applied as a solution. However, traditional clustering methods in positioning systems can only measure the similarity of the Received Signal Strength without being concerned with the continuity of physical coordinates. Besides, outage of access points could result in asymmetric matching problems which severely affect the fine positioning procedure. To solve these issues, in this paper we propose a positioning system based on the Spatial Division Clustering (SDC) method for clustering the fingerprint dataset subject to physical distance constraints. With the Genetic Algorithm and Support Vector Machine techniques, SDC can achieve higher coarse positioning accuracy than traditional clustering algorithms. In terms of fine localization, based on the Kernel Principal Component Analysis method, the proposed positioning system outperforms its counterparts based on other feature extraction methods in low dimensionality. Apart from balancing online matching computational burden, the new positioning system exhibits advantageous performance on radio map clustering, and also shows better robustness and adaptability in the asymmetric matching problem aspect. PMID:24451470
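The fingerprint-matching core of such systems can be sketched with a basic nearest-neighbor search over Received Signal Strength vectors (the radio map below is invented; this is the generic baseline, not the SDC/GA/SVM pipeline, whose clustering first narrows the search to one region):

```python
import math

def nearest_fingerprint(observed, radio_map):
    """Match an observed RSS vector to the reference location whose
    stored fingerprint is closest in Euclidean distance."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(radio_map, key=lambda loc: dist(observed, radio_map[loc]))

# Hypothetical radio map: (x, y) position -> RSS from three access points (dBm)
radio_map = {
    (0, 0): [-40, -70, -80],
    (5, 0): [-60, -50, -75],
    (0, 5): [-65, -72, -45],
}
print(nearest_fingerprint([-58, -52, -74], radio_map))  # (5, 0)
```

Searching the full map this way is what makes large fingerprint databases computationally expensive, which is the motivation for coarse clustering stages such as SDC.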
Segmentation of touching handwritten Japanese characters using the graph theory method
NASA Astrophysics Data System (ADS)
Suwa, Misako
2000-12-01
Projection analysis methods have been widely used to segment Japanese character strings. However, if adjacent characters have overhanging strokes or a touching point doesn't correspond to the histogram minimum, the methods are prone to result in errors. In contrast, non-projection analysis methods proposed for use on numerals or alphabet characters cannot simply be applied to Japanese characters because of the differences in the structure of the characters. Based on the oversegmenting strategy, a new pre-segmentation method is presented in this paper: touching patterns are represented as graphs and touching strokes are regarded as the elements of proper edge cutsets. By using the graph theoretical technique, the cutset matrix is calculated. Then, by applying pruning rules, potential touching strokes are determined and the patterns are oversegmented. Moreover, this algorithm was confirmed in simulations to be valid for touching patterns with overhanging strokes and for doubly connected patterns.
Standard methods for sampling North American freshwater fishes
Bonar, Scott A.; Hubert, Wayne A.; Willis, David W.
2009-01-01
This important reference book provides standard sampling methods recommended by the American Fisheries Society for assessing and monitoring freshwater fish populations in North America. Methods apply to ponds, reservoirs, natural lakes, and streams and rivers containing cold and warmwater fishes. Range-wide and eco-regional averages for indices of abundance, population structure, and condition for individual species are supplied to facilitate comparisons of standard data among populations. Provides information on converting nonstandard to standard data, statistical and database procedures for analyzing and storing standard data, and methods to prevent transfer of invasive species while sampling.
Applying Quantitative Genetic Methods to Primate Social Behavior
Brent, Lauren J. N.
2013-01-01
Increasingly, behavioral ecologists have applied quantitative genetic methods to investigate the evolution of behaviors in wild animal populations. The promise of quantitative genetics in unmanaged populations opens the door for simultaneous analysis of inheritance, phenotypic plasticity, and patterns of selection on behavioral phenotypes all within the same study. In this article, we describe how quantitative genetic techniques provide studies of the evolution of behavior with information that is unique and valuable. We outline technical obstacles for applying quantitative genetic techniques that are of particular relevance to studies of behavior in primates, especially those living in noncaptive populations (e.g., the need for pedigree information and non-Gaussian phenotypes), and demonstrate how many of these barriers are now surmountable. We illustrate this by applying recent quantitative genetic methods to spatial proximity data, a simple and widely collected primate social behavior, from adult rhesus macaques on Cayo Santiago. Our analysis shows that proximity measures are consistent across repeated measurements on individuals (repeatable) and that kin have similar mean measurements (heritable). Quantitative genetics may hold lessons of considerable importance for studies of primate behavior, even those without a specific genetic focus. PMID:24659839
Harris, R. Alan; Wang, Ting; Coarfa, Cristian; Nagarajan, Raman P.; Hong, Chibo; Downey, Sara L.; Johnson, Brett E.; Fouse, Shaun D.; Delaney, Allen; Zhao, Yongjun; Olshen, Adam; Ballinger, Tracy; Zhou, Xin; Forsberg, Kevin J.; Gu, Junchen; Echipare, Lorigail; O’Geen, Henriette; Lister, Ryan; Pelizzola, Mattia; Xi, Yuanxin; Epstein, Charles B.; Bernstein, Bradley E.; Hawkins, R. David; Ren, Bing; Chung, Wen-Yu; Gu, Hongcang; Bock, Christoph; Gnirke, Andreas; Zhang, Michael Q.; Haussler, David; Ecker, Joseph; Li, Wei; Farnham, Peggy J.; Waterland, Robert A.; Meissner, Alexander; Marra, Marco A.; Hirst, Martin; Milosavljevic, Aleksandar; Costello, Joseph F.
2010-01-01
Sequencing-based DNA methylation profiling methods are comprehensive and, as accuracy and affordability improve, will increasingly supplant microarrays for genome-scale analyses. Here, four sequencing-based methodologies were applied to biological replicates of human embryonic stem cells to compare their CpG coverage genome-wide and in transposons, resolution, cost, concordance and its relationship with CpG density and genomic context. The two bisulfite methods reached concordance of 82% for CpG methylation levels and 99% for non-CpG cytosine methylation levels. Using binary methylation calls, two enrichment methods were 99% concordant, while regions assessed by all four methods were 97% concordant. To achieve comprehensive methylome coverage while reducing cost, an approach integrating two complementary methods was examined. The integrative methylome profile along with histone methylation, RNA, and SNP profiles derived from the sequence reads allowed genome-wide assessment of allele-specific epigenetic states, identifying most known imprinted regions and new loci with monoallelic epigenetic marks and monoallelic expression. PMID:20852635
Fast and Accurate Approximation to Significance Tests in Genome-Wide Association Studies
Zhang, Yu; Liu, Jun S.
2011-01-01
Genome-wide association studies commonly involve simultaneous tests of millions of single nucleotide polymorphisms (SNP) for disease association. The SNPs in nearby genomic regions, however, are often highly correlated due to linkage disequilibrium (LD, a genetic term for correlation). Simple Bonferroni correction for multiple comparisons is therefore too conservative. Permutation tests, which are often employed in practice, are both computationally expensive for genome-wide studies and limited in scope. We present an accurate and computationally efficient method, based on Poisson de-clumping heuristics, for approximating genome-wide significance of SNP associations. Compared with permutation tests and other multiple comparison adjustment approaches, our method computes the most accurate and robust p-value adjustments for millions of correlated comparisons within seconds. We demonstrate analytically that the accuracy and the efficiency of our method are nearly independent of the sample size, the number of SNPs, and the scale of p-values to be adjusted. In addition, our method can be easily adopted to estimate false discovery rate. When applied to genome-wide SNP datasets, we observed highly variable p-value adjustment results evaluated from different genomic regions. The variation in adjustments along the genome, however, is well conserved between the European and the African populations. The p-value adjustments are significantly correlated with LD among SNPs, recombination rates, and SNP densities. Given the large variability of sequence features in the genome, we further discuss a novel approach of using SNP-specific (local) thresholds to detect genome-wide significant associations. This article has supplementary material online. PMID:22140288
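The simple family-wise corrections the method improves on can be sketched as follows (generic Bonferroni and Šidák adjustments with invented numbers, not the authors' Poisson de-clumping approximation):

```python
def bonferroni(p, m):
    """Family-wise adjusted p-value for one test among m comparisons."""
    return min(1.0, p * m)

def sidak(p, m):
    """Sidak adjustment: exact under independence, conservative under
    the positive correlation created by linkage disequilibrium."""
    return 1.0 - (1.0 - p) ** m

m = 1_000_000  # SNPs tested genome-wide
p = 1e-8       # raw p-value at one SNP
print(bonferroni(p, m))              # 0.01
print(sidak(p, m) < bonferroni(p, m))  # True (slightly less conservative)
```

Because LD makes nearby tests correlated, the effective number of independent comparisons is smaller than m, so both adjustments over-correct; that gap is what de-clumping-style approximations aim to close.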
Compression-RSA: New approach of encryption and decryption method
NASA Astrophysics Data System (ADS)
Hung, Chang Ee; Mandangan, Arif
2013-04-01
The Rivest-Shamir-Adleman (RSA) cryptosystem is a well known asymmetric cryptosystem and it has been applied in a very wide area. Many studies with different approaches have been carried out to improve the security and performance of the RSA cryptosystem. The enhancement of the performance of the RSA cryptosystem is our main interest. In this paper, we propose a new method to increase the efficiency of RSA by reducing the number of plaintext blocks before encryption without affecting the original content of the plaintext. The concept of simple continued fractions and their close relationship with the Euclidean algorithm are applied in this newly proposed method. By reducing the number of plaintext-ciphertext pairs, the encryption and decryption of a secret message can be accelerated.
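The continued-fraction/Euclidean-algorithm link the scheme builds on can be sketched as a lossless round trip (a generic expansion and reconstruction, not the authors' full compression step): the partial quotients of a/b are exactly the quotients produced by running the Euclidean algorithm on (a, b).

```python
def continued_fraction(a, b):
    """Expand a/b as a simple continued fraction [q0; q1, q2, ...].
    The quotients are exactly those of the Euclidean algorithm gcd(a, b)."""
    quotients = []
    while b:
        q, r = divmod(a, b)
        quotients.append(q)
        a, b = b, r
    return quotients

def from_quotients(qs):
    """Rebuild the fraction from its quotients (lossless round trip)."""
    num, den = qs[-1], 1
    for q in reversed(qs[:-1]):
        num, den = q * num + den, num
    return num, den

print(continued_fraction(649, 200))   # [3, 4, 12, 4]
print(from_quotients([3, 4, 12, 4]))  # (649, 200)
```

Because the expansion is invertible, two numbers can stand in for the whole quotient list (or vice versa), which is the kind of reversible shortening a pre-encryption compression step can exploit.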
[Interventional radiology treatment of extensive pulmonary embolism and thromboembolic diseases].
Battyáni, István; Dósa, Edit; Harmat, Zoltán
2015-04-26
The authors discuss interventional radiological methods in the field of vascular interventions applied in venous system diseases. Venous diseases can be life threatening without appropriate treatment and can lead to chronic venous disease and venous insufficiency, with a long-term reduction in quality of life. In addition, recurrent clinical symptoms require additional treatments. Interventional radiology has several methods that can provide fast and complete recovery if applied in time. The authors summarize these methods in the hope that they will become available to a wide range of patients through the establishment of Interventional Radiological Centres and will become part of the daily routine of patient care. Given the frequency of venous diseases and their influence on quality of life, the authors would like to draw attention to interventional radiological techniques and modern therapeutic possibilities.
Assessment and Validation of Machine Learning Methods for Predicting Molecular Atomization Energies.
Hansen, Katja; Montavon, Grégoire; Biegler, Franziska; Fazli, Siamac; Rupp, Matthias; Scheffler, Matthias; von Lilienfeld, O Anatole; Tkatchenko, Alexandre; Müller, Klaus-Robert
2013-08-13
The accurate and reliable prediction of properties of molecules typically requires computationally intensive quantum-chemical calculations. Recently, machine learning techniques applied to ab initio calculations have been proposed as an efficient approach for describing the energies of molecules in their given ground-state structure throughout chemical compound space (Rupp et al. Phys. Rev. Lett. 2012, 108, 058301). In this paper we outline a number of established machine learning techniques and investigate the influence of the molecular representation on the methods' performance. The best methods achieve prediction errors of 3 kcal/mol for the atomization energies of a wide variety of molecules. Rationales for this performance improvement are given together with pitfalls and challenges when applying machine learning approaches to the prediction of quantum-mechanical observables.
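As background, the molecular representation referenced above (Rupp et al. 2012) is the Coulomb matrix; a minimal sketch with an illustrative water-like geometry (nuclear positions are made up, in arbitrary units):

```python
import math

# Coulomb matrix: M_ii = 0.5 * Z_i**2.4, M_ij = Z_i * Z_j / |R_i - R_j|.
Z = [8, 1, 1]  # nuclear charges (O, H, H)
R = [(0.0, 0.0, 0.0), (1.8, 0.0, 0.0), (-0.45, 1.75, 0.0)]  # illustrative positions

def coulomb_matrix(Z, R):
    n = len(Z)
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i == j:
                M[i][j] = 0.5 * Z[i] ** 2.4                    # free-atom term
            else:
                M[i][j] = Z[i] * Z[j] / math.dist(R[i], R[j])  # pair repulsion
    return M

M = coulomb_matrix(Z, R)
```

A kernel model (e.g. kernel ridge regression) trained on such matrices is then used to predict atomization energies.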
Frantz, Terrill L
2012-01-01
This paper introduces the contemporary perspectives and techniques of social network analysis (SNA) and agent-based modeling (ABM) and advocates applying them to advance various aspects of complementary and alternative medicine (CAM). SNA and ABM are invaluable methods for representing, analyzing and projecting complex, relational, social phenomena; they provide both an insightful vantage point and a set of analytic tools that can be useful in a wide range of contexts. Applying these methods in the CAM context can aid the ongoing advances in the CAM field, in both its scientific aspects and in developing broader acceptance in associated stakeholder communities. Copyright © 2012 S. Karger AG, Basel.
William N., Jr. Cannon; Jack H. Barger; Charles J. Kostichka; Charles J. Kostichka
1986-01-01
Dutch elm disease control practice in 15 communities showed a wide range of time and material required to apply control methods. The median time used for each method was: sanitation survey, 9.8 hours per square mile; symptom survey, 96 hours per thousand elms; systemic fungicide injection, 1.4 hours per elm; and root-graft barrier installation, 2.2 hours per barrier (5...
Smoothing of climate time series revisited
NASA Astrophysics Data System (ADS)
Mann, Michael E.
2008-08-01
We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850-2007 and to various surrogate global mean temperature series from 1850-2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically out-performs certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.
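The role of the boundary constraint can be illustrated with a deliberately simple smoother (this is not Mann's adaptive method; the window length and data are made up):

```python
# A centered moving average under two boundary treatments; only the
# endpoints differ, which is exactly where the choice of constraint matters.
def smooth(x, w, pad="reflect"):
    half = w // 2
    if pad == "reflect":       # mirror the series at each end
        ext = x[half:0:-1] + x + x[-2:-half - 2:-1]
    else:                      # "constant": extend with the endpoint values
        ext = [x[0]] * half + x + [x[-1]] * half
    return [sum(ext[i:i + w]) / w for i in range(len(x))]

series = [0.1, 0.3, 0.2, 0.5, 0.7, 0.6, 0.9]   # toy "temperature" series
a = smooth(series, 3, "reflect")
b = smooth(series, 3, "constant")
# a and b agree in the interior but differ at the boundaries.
```

Mann's method generalizes this idea by adaptively weighting the three lowest-order boundary constraints to minimize misfit with the raw series.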
Application of Taguchi methods to infrared window design
NASA Astrophysics Data System (ADS)
Osmer, Kurt A.; Pruszynski, Charles J.
1990-10-01
Dr. Genichi Taguchi, a prominent quality consultant, reduced a branch of statistics known as "Design of Experiments" to a cookbook methodology that can be employed by any competent engineer. This technique has been extensively employed by Japanese manufacturers, and is widely credited with helping them attain their current level of success in low cost, high quality product design and fabrication. Although this technique was originally put forth as a tool to streamline the determination of improved production processes, it can also be applied to a wide range of engineering problems. As part of an internal research project, this method of experimental design has been adapted to window trade studies and materials research. Two of these analyses are presented herein, and have been chosen to illustrate the breadth of applications in which the Taguchi method can be utilized.
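A minimal sketch of the kind of main-effects bookkeeping a Taguchi analysis performs, using an L4(2^3) orthogonal array with invented response values:

```python
# Each row of the L4 array assigns levels (0/1) to three factors; the columns
# are balanced, so main effects can be estimated from only 4 runs.
L4 = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
response = [12.0, 14.0, 9.0, 7.0]   # hypothetical measurements, one per run

def main_effect(factor):
    """Mean response at level 1 minus mean response at level 0."""
    levels = [[], []]
    for run, y in zip(L4, response):
        levels[run[factor]].append(y)
    return sum(levels[1]) / 2 - sum(levels[0]) / 2

effects = [main_effect(f) for f in range(3)]
```

In a real trade study the response would typically be a signal-to-noise ratio computed from replicated measurements, and larger arrays (L8, L9, ...) accommodate more factors.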
Multifractal analysis of mobile social networks
NASA Astrophysics Data System (ADS)
Zheng, Wei; Zhang, Zifeng; Deng, Yufan
2017-09-01
As Wireless Fidelity (Wi-Fi)-enabled handheld devices have become widely used, mobile social networks (MSNs) have been attracting extensive attention. Fractal approaches have also been widely applied to characterize natural networks, as useful tools to depict their spatial distribution and scaling properties. Moreover, when the complexity of the spatial distribution of MSNs cannot be properly characterized by a single fractal dimension, multifractal analysis is required. For further research, we introduce a multifractal analysis method based on a box-covering algorithm to describe the structure of MSNs. Using this method, we find that the networks are multifractal at different time intervals. The simulation results demonstrate that the proposed method is efficient for analyzing the multifractal characteristics of MSNs, providing a distribution of singularities that adequately describes both the heterogeneity of fractal patterns and the statistics of measurements across spatial scales in MSNs.
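In the monofractal case, a box-covering computation reduces to counting occupied boxes at several scales; a sketch on a synthetic Sierpinski-carpet point set (the MSN data themselves are not reproduced here):

```python
import math

# Points of a depth-5 Sierpinski carpet sample on the unit square: keep (x, y)
# unless some base-3 digit position has a 1 in both coordinates.
points = [(x / 243, y / 243) for x in range(243) for y in range(243)
          if all((x // 3**k) % 3 != 1 or (y // 3**k) % 3 != 1 for k in range(5))]

def n_boxes(pts, s):
    """Number of side-s boxes needed to cover the point set."""
    return len({(int(px / s), int(py / s)) for px, py in pts})

# Slope of log N(s) between two scales estimates the box-counting dimension;
# for the carpet this recovers log 8 / log 3 (about 1.89).
dim = math.log(n_boxes(points, 1 / 27) / n_boxes(points, 1 / 9)) / math.log(3)
```

Multifractal analysis extends this by weighting each box by its measure raised to a power q, yielding a spectrum of generalized dimensions instead of a single number.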
Max, Jean-Joseph; Meddeb-Mouelhi, Fatma; Beauregard, Marc; Chapados, Camille
2012-12-01
Enzymatic assays need robust, rapid colorimetric methods that can follow ongoing reactions. For this, we developed a highly accurate, multi-wavelength detection method that could be used for several systems. Here, it was applied to the detection of para-nitrophenol (pNP) in basic and acidic solutions. First, we confirmed by factor analysis that pNP has two forms, with unique spectral characteristics in the 240 to 600 nm range: Phenol in acidic conditions absorbs in the lower range, whereas phenolate in basic conditions absorbs in the higher range. Thereafter, the method was used for the determination of species concentration. For this, the intensity measurements were made at only two wavelengths with a microtiter plate reader. This yielded total dye concentration, species relative abundance, and solution pH value. The method was applied to an enzymatic assay. For this, a chromogenic substrate that generates pNP after hydrolysis catalyzed by a lipase from the fungus Yarrowia lipolytica was used. Over the pH range of 3-11, accurate amounts of acidic and basic pNP were determined at 340 and 405 nm, respectively. This method surpasses the commonly used single-wavelength assay at 405 nm, which does not detect pNP acidic species, leading to activity underestimations. Moreover, alleviation of this pH-related problem by neutralization is not necessary. On the whole, the method developed is readily applicable to rapid, high-throughput enzymatic activity measurements over a wide pH range.
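The two-wavelength determination reduces to a 2x2 linear system from the Beer-Lambert law; a sketch with invented absorptivity and absorbance values (not the paper's calibration data):

```python
# Effective absorptivities (epsilon * path length) of the acidic and basic
# pNP species at the two wavelengths -- illustrative numbers only.
e340_acid, e340_base = 5.5, 1.2
e405_acid, e405_base = 0.2, 18.3
A340, A405 = 0.62, 1.05          # measured absorbances (made up)

# Solve: A340 = e340_acid*Ca + e340_base*Cb
#        A405 = e405_acid*Ca + e405_base*Cb
det = e340_acid * e405_base - e340_base * e405_acid
Ca = (A340 * e405_base - A405 * e340_base) / det   # acidic species conc.
Cb = (e340_acid * A405 - e405_acid * A340) / det   # basic species conc.
total = Ca + Cb   # total pNP, now independent of solution pH
```

This is why the two-wavelength readout recovers total pNP where a single 405 nm measurement misses the acidic fraction.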
GO-PCA: An Unsupervised Method to Explore Gene Expression Data Using Prior Knowledge
Wagner, Florian
2015-01-01
Method: Genome-wide expression profiling is a widely used approach for characterizing heterogeneous populations of cells, tissues, biopsies, or other biological specimen. The exploratory analysis of such data typically relies on generic unsupervised methods, e.g. principal component analysis (PCA) or hierarchical clustering. However, generic methods fail to exploit prior knowledge about the molecular functions of genes. Here, I introduce GO-PCA, an unsupervised method that combines PCA with nonparametric GO enrichment analysis, in order to systematically search for sets of genes that are both strongly correlated and closely functionally related. These gene sets are then used to automatically generate expression signatures with functional labels, which collectively aim to provide a readily interpretable representation of biologically relevant similarities and differences. The robustness of the results obtained can be assessed by bootstrapping. Results: I first applied GO-PCA to datasets containing diverse hematopoietic cell types from human and mouse, respectively. In both cases, GO-PCA generated a small number of signatures that represented the majority of lineages present, and whose labels reflected their respective biological characteristics. I then applied GO-PCA to human glioblastoma (GBM) data, and recovered signatures associated with four out of five previously defined GBM subtypes. My results demonstrate that GO-PCA is a powerful and versatile exploratory method that reduces an expression matrix containing thousands of genes to a much smaller set of interpretable signatures. In this way, GO-PCA aims to facilitate hypothesis generation, design of further analyses, and functional comparisons across datasets. PMID:26575370
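The GO-enrichment ingredient of such a method can be sketched as a hypergeometric tail probability (the published method uses a nonparametric variant); the gene counts below are invented:

```python
from math import comb

def enrichment_p(N, K, n, k):
    """P(X >= k) for X ~ Hypergeometric(N, K, n): seeing k or more
    GO-annotated genes among the top n of N, with K annotated overall."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# Hypothetical: 10 of the top 40 PCA-loading genes carry a GO term that
# annotates 50 of 1000 genes overall -- strong enrichment, tiny p-value.
p = enrichment_p(N=1000, K=50, n=40, k=10)
```

GO-PCA pairs such tests with the gene rankings induced by each principal component to find correlated, functionally coherent gene sets.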
Validation of Multilevel Constructs: Validation Methods and Empirical Findings for the EDI
ERIC Educational Resources Information Center
Forer, Barry; Zumbo, Bruno D.
2011-01-01
The purposes of this paper are to highlight the foundations of multilevel construct validation, describe two methodological approaches and associated analytic techniques, and then apply these approaches and techniques to the multilevel construct validation of a widely-used school readiness measure called the Early Development Instrument (EDI;…
Teaching Focus Group Interviewing: Benefits and Challenges
ERIC Educational Resources Information Center
George, Molly
2013-01-01
Focus group interviewing is widely used by academic and applied researchers. Given the popularity and strengths of this method, it is surprising how rarely focus group interviewing is taught in the undergraduate classroom and how few resources exist to support instructors who wish to train students to use this technique. This article fills the gap…
Diffusing Innovations: Adoption of Serious Educational Games by K-12 Science Teachers
ERIC Educational Resources Information Center
Vallett, David; Annetta, Leonard; Lamb, Richard; Bowling, Brandy
2014-01-01
Innovation is a term that has become widely used in education; especially as it pertains to technology infusion. Applying the corporate theory of diffusing innovation to educational practice is an innovation in itself. This mixed-methods study examined 38 teachers in a science educational gaming professional development program that provided…
ERIC Educational Resources Information Center
van Urk, Felix; Grant, Sean; Bonell, Chris
2016-01-01
The use of explicit programme theory to guide evaluation is widely recommended. However, practitioners and other partnering stakeholders often initiate programmes based on implicit theories, leaving researchers to explicate them before commencing evaluation. The current study aimed to apply a systematic method to undertake this process. We…
USDA-ARS?s Scientific Manuscript database
The butanol-HCl spectrophotometric assay is widely used to quantify extractable and insoluble forms of condensed tannin (CT, syn. proanthocyanidin) in foods, feeds, and foliage of herbaceous and woody plants. However, this method underestimates total CT content when applied directly to plant materia...
Tissue-Point Motion Tracking in the Tongue from Cine MRI and Tagged MRI
ERIC Educational Resources Information Center
Woo, Jonghye; Stone, Maureen; Suo, Yuanming; Murano, Emi Z.; Prince, Jerry L.
2014-01-01
Purpose: Accurate tissue motion tracking within the tongue can help professionals diagnose and treat vocal tract--related disorders, evaluate speech quality before and after surgery, and conduct various scientific studies. The authors compared tissue tracking results from 4 widely used deformable registration (DR) methods applied to cine magnetic…
ERIC Educational Resources Information Center
Shachak, Aviv; Ophir, Ron; Rubin, Eitan
2005-01-01
The need to support bioinformatics training has been widely recognized by scientists, industry, and government institutions. However, the discussion of instructional methods for teaching bioinformatics is only beginning. Here we report on a systematic attempt to design two bioinformatics workshops for graduate biology students on the basis of…
Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik
2017-01-01
This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions. PMID:28208684
Liu, Kui; Wei, Sixiao; Chen, Zhijiang; Jia, Bin; Chen, Genshe; Ling, Haibin; Sheaff, Carolyn; Blasch, Erik
2017-02-12
This paper presents the first attempt at combining Cloud with Graphic Processing Units (GPUs) in a complementary manner within the framework of a real-time high performance computation architecture for the application of detecting and tracking multiple moving targets based on Wide Area Motion Imagery (WAMI). More specifically, the GPU and Cloud Moving Target Tracking (GC-MTT) system applied a front-end web based server to perform the interaction with Hadoop and highly parallelized computation functions based on the Compute Unified Device Architecture (CUDA©). The introduced multiple moving target detection and tracking method can be extended to other applications such as pedestrian tracking, group tracking, and Patterns of Life (PoL) analysis. The cloud and GPUs based computing provides an efficient real-time target recognition and tracking approach as compared to methods when the work flow is applied using only central processing units (CPUs). The simultaneous tracking and recognition results demonstrate that a GC-MTT based approach provides drastically improved tracking with low frame rates over realistic conditions.
Bonar, Maegwin; Ellington, E Hance; Lewis, Keith P; Vander Wal, Eric
2018-01-01
In ungulates, parturition is correlated with a reduction in movement rate. With advances in movement-based technologies comes an opportunity to develop new techniques to assess reproduction in wild ungulates that are less invasive and reduce biases. DeMars et al. (2013, Ecology and Evolution 3:4149-4160) proposed two promising new methods (individual- and population-based; the DeMars model) that use GPS inter-fix step length of adult female caribou (Rangifer tarandus caribou) to infer parturition and neonate survival. Our objective was to apply the DeMars model to caribou populations that may violate model assumptions for retrospective analysis of parturition and calf survival. We extended the use of the DeMars model after assigning parturition and calf mortality status by examining herd-wide distributions of parturition date, calf mortality date, and survival. We used the DeMars model to estimate parturition and calf mortality events and compared them with the known parturition and calf mortality events from collared adult females (n = 19). We also used the DeMars model to estimate parturition and calf mortality events for collared female caribou with unknown parturition and calf mortality events (n = 43) and instead derived herd-wide estimates of calf survival as well as distributions of parturition and calf mortality dates and compared them to herd-wide estimates generated from calves fitted with VHF collars (n = 134). For our data, the individual-based method was effective at predicting calf mortality, but was not effective at predicting parturition. The population-based method was more effective at predicting parturition but was not effective at predicting calf mortality. At the herd-level, the predicted distributions of parturition date from both methods differed from each other and from the distribution derived from the parturition dates of VHF-collared calves (log-rank test: χ2 = 40.5, df = 2, p < 0.01).
The predicted distributions of calf mortality dates from both methods were similar to the observed distribution derived from VHF-collared calves. Both methods underestimated herd-wide calf survival based on VHF-collared calves; however, a combination of the individual- and population-based methods produced herd-wide survival estimates similar to estimates generated from collared calves. The limitations we experienced when applying the DeMars model could result from shortcomings in our data violating model assumptions. However, despite the differences in our caribou systems, with proper validation techniques the framework of the DeMars model is sufficient to make inferences on parturition and calf mortality.
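The individual-based logic can be caricatured as a sustained-drop detector on GPS step lengths; the threshold, window, and data below are illustrative, not the fitted DeMars model:

```python
# Flag the first index where movement collapses for `window` consecutive
# steps -- a parturition-like signal in a step-length series.
def detect_drop(steps, threshold=150.0, window=3):
    for i in range(len(steps) - window + 1):
        if all(s < threshold for s in steps[i:i + window]):
            return i
    return None

steps = [420.0, 380.0, 400.0, 90.0, 60.0, 75.0, 110.0]  # toy step lengths (m)
event = detect_drop(steps)   # sustained drop begins at index 3
```

The actual model fits movement parameters per individual and per population, which is where the assumption violations discussed above enter.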
Ellington, E. Hance; Lewis, Keith P.; Vander Wal, Eric
2018-01-01
In ungulates, parturition is correlated with a reduction in movement rate. With advances in movement-based technologies comes an opportunity to develop new techniques to assess reproduction in wild ungulates that are less invasive and reduce biases. DeMars et al. (2013, Ecology and Evolution 3:4149–4160) proposed two promising new methods (individual- and population-based; the DeMars model) that use GPS inter-fix step length of adult female caribou (Rangifer tarandus caribou) to infer parturition and neonate survival. Our objective was to apply the DeMars model to caribou populations that may violate model assumptions for retrospective analysis of parturition and calf survival. We extended the use of the DeMars model after assigning parturition and calf mortality status by examining herd-wide distributions of parturition date, calf mortality date, and survival. We used the DeMars model to estimate parturition and calf mortality events and compared them with the known parturition and calf mortality events from collared adult females (n = 19). We also used the DeMars model to estimate parturition and calf mortality events for collared female caribou with unknown parturition and calf mortality events (n = 43) and instead derived herd-wide estimates of calf survival as well as distributions of parturition and calf mortality dates and compared them to herd-wide estimates generated from calves fitted with VHF collars (n = 134). For our data, the individual-based method was effective at predicting calf mortality, but was not effective at predicting parturition. The population-based method was more effective at predicting parturition but was not effective at predicting calf mortality. At the herd-level, the predicted distributions of parturition date from both methods differed from each other and from the distribution derived from the parturition dates of VHF-collared calves (log-rank test: χ2 = 40.5, df = 2, p < 0.01).
The predicted distributions of calf mortality dates from both methods were similar to the observed distribution derived from VHF-collared calves. Both methods underestimated herd-wide calf survival based on VHF-collared calves; however, a combination of the individual- and population-based methods produced herd-wide survival estimates similar to estimates generated from collared calves. The limitations we experienced when applying the DeMars model could result from shortcomings in our data violating model assumptions. However, despite the differences in our caribou systems, with proper validation techniques the framework of the DeMars model is sufficient to make inferences on parturition and calf mortality. PMID:29466451
Fleming, R. M.; Seager, C. H.; Lang, D. V.; ...
2015-07-02
In this study, an improved method for measuring the cross sections for carrier trapping at defects in semiconductors is described. This method, a variation of deep level transient spectroscopy (DLTS) used with bipolar transistors, is applied to hot carrier trapping at vacancy-oxygen, carbon-oxygen, and three charge states of divacancy centers (V 2) in n- and p-type silicon. Unlike standard DLTS, we fill traps by injecting carriers into the depletion region of a bipolar transistor diode using a pulse of forward bias current applied to the adjacent diode. We show that this technique is capable of accurately measuring a wide range of capture cross sections at varying electric fields due to the control of the carrier density it provides. Because this technique can be applied to a variety of carrier energy distributions, it should be valuable in modeling the effect of radiation-induced generation-recombination currents in bipolar devices.
Wavelet neural networks: a practical guide.
Alexandridis, Antonios K; Zapranis, Achilleas D
2013-06-01
Wavelet networks (WNs) are a new class of networks which have been used with great success in a wide range of applications. However, a generally accepted framework for applying WNs is missing from the literature. In this study, we present a complete statistical model identification framework in order to apply WNs in various applications. The following subjects were thoroughly examined: the structure of a WN, training methods, initialization algorithms, variable significance and variable selection algorithms, model selection methods and finally methods to construct confidence and prediction intervals. In addition, the complexity of each algorithm is discussed. Our proposed framework was tested in two simulated cases, in one chaotic time series described by the Mackey-Glass equation and in three real datasets described by daily temperatures in Berlin, daily wind speeds in New York and breast cancer classification. Our results have shown that the proposed algorithms produce stable and robust results indicating that our proposed framework can be applied in various applications. Copyright © 2013 Elsevier Ltd. All rights reserved.
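A single "wavelon" of such a network is just a translated and dilated mother wavelet; a sketch with the Mexican-hat wavelet and made-up weights:

```python
import math

def mexican_hat(u):
    """Mother wavelet: second derivative of a Gaussian (up to sign/scale)."""
    return (1.0 - u * u) * math.exp(-0.5 * u * u)

def wavelon(x, t, d):
    """Wavelet neuron with translation t and dilation d."""
    return mexican_hat((x - t) / d)

def wn_output(x, hidden, w_lin=0.1, bias=0.05):
    """One-input WN: direct linear term plus a weighted sum of wavelons."""
    return bias + w_lin * x + sum(w * wavelon(x, t, d) for w, t, d in hidden)

y = wn_output(0.5, [(0.8, 0.0, 1.0), (-0.3, 1.0, 0.5)])  # illustrative parameters
```

Training adjusts the weights, translations, and dilations, which is exactly where the initialization and model-selection issues surveyed in the paper arise.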
Zhao, Xing; Zhou, Xiao-Hua; Feng, Zijian; Guo, Pengfei; He, Hongyan; Zhang, Tao; Duan, Lei; Li, Xiaosong
2013-01-01
As a useful tool for geographical cluster detection of events, the spatial scan statistic is widely applied in many fields and plays an increasingly important role. The classic version of the spatial scan statistic for the binary outcome is developed by Kulldorff, based on the Bernoulli or the Poisson probability model. In this paper, we apply the Hypergeometric probability model to construct the likelihood function under the null hypothesis. Compared with existing methods, the likelihood function under the null hypothesis is an alternative and indirect method to identify the potential cluster, and the test statistic is the extreme value of the likelihood function. As with Kulldorff's methods, we adopt a Monte Carlo test of significance. Both methods are applied for detecting spatial clusters of Japanese encephalitis in Sichuan province, China, in 2009, and the detected clusters are identical. A simulation using independent benchmark data indicates that the test statistic based on the Hypergeometric model outperforms Kulldorff's statistics for clusters of high population density or large size; otherwise Kulldorff's statistics are superior.
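The Monte Carlo test of significance used here has a simple generic shape; the toy statistic below stands in for the scan statistic:

```python
import random

def monte_carlo_p(observed, simulate_null, n_sim=999, seed=1):
    """Rank-based p-value: observed statistic vs. statistics recomputed
    on data simulated under the null hypothesis."""
    random.seed(seed)
    sims = [simulate_null() for _ in range(n_sim)]
    return (1 + sum(s >= observed for s in sims)) / (n_sim + 1)

# Toy stand-in for a scan statistic: the maximum of 20 uniforms.
null_stat = lambda: max(random.random() for _ in range(20))
p = monte_carlo_p(0.999, null_stat)   # 0.999 is extreme under this null
```

In the spatial scan setting, simulate_null would redistribute the cases over the study region and recompute the maximum likelihood ratio over all scanning windows.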
Regression modeling of ground-water flow
Cooley, R.L.; Naff, R.L.
1985-01-01
Nonlinear multiple regression methods are developed to model and analyze groundwater flow systems. Complete descriptions of regression methodology as applied to groundwater flow models allow scientists and engineers engaged in flow modeling to apply the methods to a wide range of problems. Organization of the text proceeds from an introduction that discusses the general topic of groundwater flow modeling, to a review of basic statistics necessary to properly apply regression techniques, and then to the main topic: exposition and use of linear and nonlinear regression to model groundwater flow. Statistical procedures are given to analyze and use the regression models. A number of exercises and answers are included to exercise the student on nearly all the methods that are presented for modeling and statistical analysis. Three computer programs implement the more complex methods. These three are a general two-dimensional, steady-state regression model for flow in an anisotropic, heterogeneous porous medium, a program to calculate a measure of model nonlinearity with respect to the regression parameters, and a program to analyze model errors in computed dependent variables such as hydraulic head. (USGS)
Methods for evaluating the biological impact of potentially toxic waste applied to soils
DOE Office of Scientific and Technical Information (OSTI.GOV)
Neuhauser, E.F.; Loehr, R.C.; Malecki, M.R.
1985-12-01
The study was designed to evaluate two methods that can be used to estimate the biological impact of organics and inorganics that may be in wastes applied to land for treatment and disposal. The two methods were the contact test and the artificial soil test. The contact test is a 48 hr test using an adult worm, a small glass vial, and filter paper to which the test chemical or waste is applied. The test is designed to provide close contact between the worm and a chemical similar to the situation in soils. The method provides a rapid estimate of the relative toxicity of chemicals and industrial wastes. The artificial soil test uses a mixture of sand, kaolin, peat, and calcium carbonate as a representative soil. Different concentrations of the test material are added to the artificial soil, adult worms are added and worm survival is evaluated after two weeks. These studies have shown that earthworms can distinguish between a wide variety of chemicals with a high degree of accuracy.
Efficient forced vibration reanalysis method for rotating electric machines
NASA Astrophysics Data System (ADS)
Saito, Akira; Suzuki, Hiromitsu; Kuroishi, Masakatsu; Nakai, Hideo
2015-01-01
Rotating electric machines are subject to forced vibration by magnetic force excitation with wide-band frequency spectra that are dependent on the operating conditions. Therefore, when designing the electric machines, it is essential to compute the vibration response of the machines at various operating conditions efficiently and accurately. This paper presents an efficient frequency-domain vibration analysis method for the electric machines. The method enables the efficient re-analysis of the vibration response of electric machines at various operating conditions without the necessity to re-compute the harmonic response by finite element analyses. Theoretical background of the proposed method is provided, which is based on the modal reduction of the magnetic force excitation by a set of amplitude-modulated standing waves. The method is applied to the forced response vibration of the interior permanent magnet motor at a fixed operating condition. The results computed by the proposed method agree very well with those computed by the conventional harmonic response analysis by the FEA. The proposed method is then applied to the spin-up test condition to demonstrate its applicability to various operating conditions. It is observed that the proposed method can successfully be applied to the spin-up test conditions, and the measured dominant frequency peaks in the frequency response can be well captured by the proposed approach.
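The efficiency argument rests on the fact that, once the excitation is reduced to modal forces, the frequency response of each mode is a closed-form expression; a single-mode sketch with illustrative parameters:

```python
import math

def modal_amplitude(F, w, wn=100.0, zeta=0.02, m=1.0):
    """Steady-state amplitude of one mode (natural frequency wn, damping
    ratio zeta, modal mass m) under harmonic modal force F at frequency w."""
    k = m * wn ** 2              # modal stiffness
    c = 2.0 * zeta * m * wn      # modal damping
    return F / math.sqrt((k - m * w ** 2) ** 2 + (c * w) ** 2)

# Re-analysis at a new operating condition only changes the force spectrum;
# the modal model itself is untouched.
amps = [modal_amplitude(10.0, w) for w in (50.0, 100.0, 150.0)]
```

This is a generic modal-superposition sketch, not the paper's amplitude-modulated standing-wave reduction, but it shows why no new finite element solve is needed per operating point.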
Zhou, Yongxin; Bai, Jing
2007-01-01
A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the following FC segmentation. Original techniques are proposed to estimate necessary initial parameters of FC segmentation. Further, the result of the FC segmentation is utilized to initialize a following PABIC algorithm. Finally, we re-apply the FC technique on the PABIC corrected MRI to get the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as the limitation of the method. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.
Correcting for batch effects in case-control microbiome studies
Gibbons, Sean M.; Duvallet, Claire
2018-01-01
High-throughput data generation platforms, like mass-spectrometry, microarrays, and second-generation sequencing are susceptible to batch effects due to run-to-run variation in reagents, equipment, protocols, or personnel. Currently, batch correction methods are not commonly applied to microbiome sequencing datasets. In this paper, we compare different batch-correction methods applied to microbiome case-control studies. We introduce a model-free normalization procedure where features (i.e. bacterial taxa) in case samples are converted to percentiles of the equivalent features in control samples within a study prior to pooling data across studies. We look at how this percentile-normalization method compares to traditional meta-analysis methods for combining independent p-values and to limma and ComBat, widely used batch-correction models developed for RNA microarray data. Overall, we show that percentile-normalization is a simple, non-parametric approach for correcting batch effects and improving sensitivity in case-control meta-analyses. PMID:29684016
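The percentile-normalization step itself is short; a sketch with toy abundances:

```python
# Convert each case sample's value for a taxon into its percentile within
# the control samples of the same study.
def percentile_normalize(case_values, control_values):
    n = len(control_values)
    return [100.0 * sum(c <= v for c in control_values) / n
            for v in case_values]

controls = [0.1, 0.2, 0.2, 0.4, 0.9]   # toy taxon abundances in controls
cases = [0.05, 0.3, 1.2]
print(percentile_normalize(cases, controls))   # → [0.0, 60.0, 100.0]
```

Because percentiles are computed within each study, pooling the normalized values across studies sidesteps study-specific batch effects; the published method includes refinements (e.g. treatment of zeros) not shown here.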
Musci, Marilena; Yao, Shicong
2017-12-01
Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea, aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time and allowed us to analyze a large number of samples, discriminate among the different teas, and assess the effect of the post-fermentation process on polyphenol content.
A hybrid demodulation method of fiber-optic Fabry-Perot pressure sensor
NASA Astrophysics Data System (ADS)
Yu, Le; Lang, Jianjun; Pan, Yong; Wu, Di; Zhang, Min
2013-12-01
Fiber-optic Fabry-Perot pressure sensors have been widely applied to measure pressure in oilfields. For multi-well installations, demodulating the downhole pressure values of all wells with a single demodulation system takes a long time (dozens of seconds), while equipping every well with its own system is costly, which severely limits the sensor's application in oilfields. In the present paper, a new hybrid demodulation method is developed, combining the windowed nonequispaced discrete Fourier transform (nDFT) method with a segment-search minimum mean square error estimation (MMSE) method, by which the demodulation time can be reduced to 200 ms, i.e., measuring 10 channels/wells takes less than 2 s. In addition, experimental results showed that the demodulated cavity length of the fiber-optic Fabry-Perot sensor has a maximum error of 0.5 nm, and consequently the pressure measurement accuracy can reach 0.4% F.S.
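The MMSE half of such a scheme amounts to a grid (segment) search over candidate cavity lengths, scoring each candidate against the measured fringe spectrum. A toy sketch with a simplified two-beam fringe model and hypothetical numbers (not the authors' implementation):

```python
import math

def model_spectrum(L, wavelengths):
    """Simplified two-beam interference fringe for cavity length L
    (same length units as the wavelengths)."""
    return [0.5 * (1 + math.cos(4 * math.pi * L / w)) for w in wavelengths]

def mmse_cavity_length(measured, wavelengths, L_min, L_max, step):
    """Grid search minimizing the mean square error against the model."""
    best_L, best_err = L_min, float("inf")
    L = L_min
    while L <= L_max:
        m = model_spectrum(L, wavelengths)
        err = sum((a - b) ** 2 for a, b in zip(measured, m))
        if err < best_err:
            best_L, best_err = L, err
        L += step
    return best_L

wavelengths = [1500 + i * 0.1 for i in range(400)]   # nm scan band
true_L = 25000.0                                     # nm (25 um cavity)
measured = model_spectrum(true_L, wavelengths)
est = mmse_cavity_length(measured, wavelengths, 24990.0, 25010.0, 0.5)
print(est)
```

In practice a coarse estimate (e.g., from the nDFT stage) seeds a narrow search segment like the one above, which is what keeps the demodulation fast.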
Using cluster ensemble and validation to identify subtypes of pervasive developmental disorders.
Shen, Jess J; Lee, Phil-Hyoun; Holden, Jeanette J A; Shatkay, Hagit
2007-10-11
Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV.
Using Cluster Ensemble and Validation to Identify Subtypes of Pervasive Developmental Disorders
Shen, Jess J.; Lee, Phil Hyoun; Holden, Jeanette J.A.; Shatkay, Hagit
2007-01-01
Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV. PMID:18693920
Stec, Katarzyna
2017-11-02
Materials made with chromite ore are widely applied in the metallurgical industry as well as in the foundry industry. Chromium in these materials occurs in both the (III) and (VI) oxidation states. Currently there are no procedures allowing proper determination of chromium in chromite ores and ore-containing materials. The analytical methods currently applied are dedicated to a very narrow range of materials, e.g., cement, and cannot be applied to materials which, apart from trace amounts of Cr(VI), contain mainly compounds of Cr(III) and Fe(III) as well as trace compounds of Cu(II), Ni(II) and V(V). In this work, particular attention has been paid to the preparation of test samples and to creating measurement conditions in which interferences from Cr(III) and Fe(III) spectral lines could be minimized. Two separate instrumental measurement techniques have been applied: Inductively Coupled Plasma-Atomic Emission Spectrometry (ICP-AES) and the spectrophotometric method using diphenylcarbazide.
Third NASA Langley Formal Methods Workshop
NASA Technical Reports Server (NTRS)
Holloway, C. Michael (Compiler)
1995-01-01
This publication constitutes the proceedings of NASA Langley Research Center's third workshop on the application of formal methods to the design and verification of life-critical systems. This workshop brought together formal methods researchers, industry engineers, and academicians to discuss the potential of NASA-sponsored formal methods and to investigate new opportunities for applying these methods to industry problems. Contained herein are copies of the material presented at the workshop, summaries of many of the presentations, a complete list of attendees, and a detailed summary of the Langley formal methods program. Much of this material is available electronically through the World-Wide Web via the following URL.
A Novel Residual Frequency Estimation Method for GNSS Receivers.
Nguyen, Tu Thi-Thanh; La, Vinh The; Ta, Tung Hai
2018-01-04
In Global Navigation Satellite System (GNSS) receivers, residual frequency estimation methods are traditionally applied in the synchronization block to reduce the transient time from acquisition to tracking, or they are used within the frequency estimator to improve its accuracy in open-loop architectures. There are several disadvantages in the current estimation methods, including sensitivity to noise and wide search space size. This paper proposes a new residual frequency estimation method based on differential processing. Although the complexity of the proposed method is higher than that of traditional methods, it can lead to more accurate estimates without increasing the size of the search space.
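In its simplest form, a differential estimator recovers the residual frequency from the phase rotation between successive coherent-integration outputs, with no frequency-bin search at all. A generic sketch of this idea (not the paper's exact estimator; names and numbers are illustrative):

```python
import cmath
import math

def differential_freq_estimate(samples, dt):
    """Residual frequency (Hz) from the average phase rotation between
    successive complex correlator outputs spaced dt seconds apart."""
    acc = sum(b * a.conjugate() for a, b in zip(samples, samples[1:]))
    return cmath.phase(acc) / (2 * math.pi * dt)

dt = 0.001                      # 1 ms coherent integration
f_res = 35.0                    # Hz residual frequency to recover
samples = [cmath.exp(2j * math.pi * f_res * n * dt) for n in range(20)]
print(differential_freq_estimate(samples, dt))  # ≈ 35.0
```

Averaging the phasor products before taking the angle suppresses noise; the unambiguous range is limited to |f| < 1/(2 dt).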
Ultrananocrystalline Diamond Cantilever Wide Dynamic Range Acceleration/Vibration/Pressure Sensor
Krauss, Alan R.; Gruen, Dieter M.; Pellin, Michael J.; Auciello, Orlando
2003-09-02
An ultrananocrystalline diamond (UNCD) element formed in a cantilever configuration is used in a highly sensitive, ultra-small sensor for measuring acceleration, shock, vibration and static pressure over a wide dynamic range. The cantilever UNCD element may be used in combination with a single anode, with measurements made either optically or by capacitance. In another embodiment, the cantilever UNCD element is disposed between two anodes, with DC voltages applied to the two anodes. With a small AC modulated voltage applied to the UNCD cantilever element and because of the symmetry of the applied voltage and the anode-cathode gap distance in the Fowler-Nordheim equation, any change in the anode voltage ratio V1/V2 required to maintain a specified current ratio precisely matches any displacement of the UNCD cantilever element from equilibrium. By measuring changes in the anode voltage ratio required to maintain a specified current ratio, the deflection of the UNCD cantilever can be precisely determined. By appropriately modulating the voltages applied between the UNCD cantilever and the two anodes, or limit electrodes, precise independent measurements of pressure, uniaxial acceleration, vibration and shock can be made. This invention also contemplates a method for fabricating the cantilever UNCD structure for the sensor.
Ultrananocrystalline diamond cantilever wide dynamic range acceleration/vibration/pressure sensor
Krauss, Alan R [Naperville, IL; Gruen, Dieter M [Downers Grove, IL; Pellin, Michael J [Naperville, IL; Auciello, Orlando [Bolingbrook, IL
2002-07-23
An ultrananocrystalline diamond (UNCD) element formed in a cantilever configuration is used in a highly sensitive, ultra-small sensor for measuring acceleration, shock, vibration and static pressure over a wide dynamic range. The cantilever UNCD element may be used in combination with a single anode, with measurements made either optically or by capacitance. In another embodiment, the cantilever UNCD element is disposed between two anodes, with DC voltages applied to the two anodes. With a small AC modulated voltage applied to the UNCD cantilever element and because of the symmetry of the applied voltage and the anode-cathode gap distance in the Fowler-Nordheim equation, any change in the anode voltage ratio V1/V2 required to maintain a specified current ratio precisely matches any displacement of the UNCD cantilever element from equilibrium. By measuring changes in the anode voltage ratio required to maintain a specified current ratio, the deflection of the UNCD cantilever can be precisely determined. By appropriately modulating the voltages applied between the UNCD cantilever and the two anodes, or limit electrodes, precise independent measurements of pressure, uniaxial acceleration, vibration and shock can be made. This invention also contemplates a method for fabricating the cantilever UNCD structure for the sensor.
Code of Federal Regulations, 2010 CFR
2010-10-01
45 Public Welfare 4 2010-10-01 What government-wide requirements apply to staff fundraising under my AmeriCorps grant? Section 2520.60 Public Welfare Regulations Relating to Public... C PROGRAMS § 2520.60 What government-wide requirements apply to staff fundraising under my AmeriCorps grant?
Genome-scale engineering of Saccharomyces cerevisiae with single-nucleotide precision.
Bao, Zehua; HamediRad, Mohammad; Xue, Pu; Xiao, Han; Tasan, Ipek; Chao, Ran; Liang, Jing; Zhao, Huimin
2018-07-01
We developed a CRISPR-Cas9- and homology-directed-repair-assisted genome-scale engineering method named CHAnGE that can rapidly output tens of thousands of specific genetic variants in yeast. More than 98% of target sequences were efficiently edited with an average frequency of 82%. We validate the single-nucleotide resolution genome-editing capability of this technology by creating a genome-wide gene disruption collection and apply our method to improve tolerance to growth inhibitors.
NASA Astrophysics Data System (ADS)
Yakunin, A. G.; Hussein, H. M.
2018-01-01
The article shows how known statistical methods, which are widely used in solving financial problems and in a number of other fields of science and technology, can be effectively applied, after minor modification, to such problems in climate and environmental monitoring systems as the detection of anomalies in the form of abrupt changes in signal levels, the occurrence of positive and negative outliers, and the violation of the cycle form in periodic processes.
Docking-based classification models for exploratory toxicology ...
Background: Exploratory toxicology is a new emerging research area whose ultimate mission is to protect human health and the environment from risks posed by chemicals. In this regard, the ethical and practical limitations of animal testing have encouraged the promotion of computational methods for the fast screening of huge collections of chemicals available on the market. Results: We derived 24 reliable docking-based classification models able to predict the estrogenic potential of a large collection of chemicals having high quality experimental data, kindly provided by the U.S. Environmental Protection Agency (EPA). The predictive power of our docking-based models was supported by values of AUC, EF1% (EFmax = 7.1), -LR (at SE = 0.75) and +LR (at SE = 0.25) ranging from 0.63 to 0.72, from 2.5 to 6.2, from 0.35 to 0.67 and from 2.05 to 9.84, respectively. In addition, external predictions were successfully made on some representative known estrogenic chemicals. Conclusion: We show how structure-based methods, widely applied to drug discovery programs, can be adapted to meet the conditions of the regulatory context. Importantly, these methods enable one to employ the physicochemical information contained in the X-ray solved biological target and to screen structurally-unrelated chemicals.
Anti-reflective and anti-soiling coatings for self-cleaning properties
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brophy, Brenor L.; Nair, Vinod; Dave, Bakul Champaklal
The disclosure describes abrasion-resistant, persistently hydrophobic and oleophobic, anti-reflective and anti-soiling coatings for glass. The coatings described herein have wide application, including for example the front cover glass of solar modules. Methods of applying the coatings using various apparatus are disclosed. Methods for using the coatings in solar energy generation plants to achieve greater energy yield and reduced operations costs are disclosed. Coating materials are formed by combinations of hydrolyzed silane-based precursors through sol-gel processes. Several methods of synthesis and formulation of coating materials are disclosed.
An Application of Gröbner Basis in Differential Equations of Physics
NASA Astrophysics Data System (ADS)
Chaharbashloo, Mohammad Saleh; Basiri, Abdolali; Rahmany, Sajjad; Zarrinkamar, Saber
2013-11-01
We apply the Gröbner basis to the ansatz method in quantum mechanics to obtain the energy eigenvalues and the wave functions in a very simple manner. There are important physical potentials such as the Cornell interaction which play significant roles in particle physics and can be treated via this technique. As a typical example, the algorithm is applied to the semi-relativistic spinless Salpeter equation under the Cornell interaction. Many other applications of the idea in a wide range of physical fields are listed as well.
Contact angle determination procedure and detection of an invisible surface film
NASA Technical Reports Server (NTRS)
Meyer, G.; Grat, R.
1990-01-01
The contact angle value, i.e., the tangent angle of a liquid resting on a planar solid surface, is a basic parameter with a wide range of applications. The goal is to provide a basic understanding of the contact angle measurement technique and to present a simple illustration that can be applied as a quality control method; namely, detection of a surface contaminant which exists on a surface that appears clean to the unaided eye. The equipment and experimental procedures are detailed.
HF Surface Wave Radar Tests at the Eastern China Sea
NASA Astrophysics Data System (ADS)
Wu, Xiong Bin; Cheng, Feng; Wu, Shi Cai; Yang, Zi Jie; Wen, Biyang; Shi, Zhen Hua; Tian, Jiansheng; Ke, Hengyu; Gao, Huotao
2005-01-01
The HF surface wave radar system OSMAR2000 adopts a Frequency Modulated Interrupted Continuous Waveform (FMICW), and its 120 m antenna array is co-used for transmitting and receiving. MUSIC and MVM are applied to obtain the sea echo's direction of arrival (DOA) when extracting current information. Verification tests of OSMAR2000 ocean surface dynamics detection against in-situ measurements were accomplished on Oct. 23-29, 2000. A ship detection test was carried out on Dec. 24, 2001. It shows that OSMAR2000 is capable of detecting 1000-ton ships with a wide beam out to 70 km. This paper first introduces the radar system and the applied DOA estimation methods, and then presents ship detection results and some sea state measurement results for surface currents and waves. The results indicate the validity of the developed radar system and the effectiveness of the applied signal processing methods.
Recent Developments and Applications of the MMPBSA Method
Wang, Changhao; Greene, D'Artagnan; Xiao, Li; Qi, Ruxi; Luo, Ray
2018-01-01
The Molecular Mechanics Poisson-Boltzmann Surface Area (MMPBSA) approach has been widely applied as an efficient and reliable free energy simulation method to model molecular recognition, such as for protein-ligand binding interactions. In this review, we focus on recent developments and applications of the MMPBSA method. The methodology review covers solvation terms, the entropy term, extensions to membrane proteins and high-speed screening, and new automation toolkits. Recent applications in various important biomedical and chemical fields are also reviewed. We conclude with a few future directions aimed at making MMPBSA a more robust and efficient method. PMID:29367919
A direct method for nonlinear ill-posed problems
NASA Astrophysics Data System (ADS)
Lakhal, A.
2018-02-01
We propose a direct method for solving nonlinear ill-posed problems in Banach spaces. The method is based on a stable inversion formula that we compute explicitly by applying techniques for analytic functions. Furthermore, we investigate the convergence and stability of the method and prove that the derived noniterative algorithm is a regularization. The inversion formula provides a systematic sensitivity analysis. The approach is applicable to a wide range of nonlinear ill-posed problems. We test the algorithm on a nonlinear problem of travel-time inversion in seismic tomography. Numerical results illustrate the robustness and efficiency of the algorithm.
Excited-State Effective Masses in Lattice QCD
DOE Office of Scientific and Technical Information (OSTI.GOV)
George Fleming, Saul Cohen, Huey-Wen Lin
2009-10-01
We apply black-box methods, i.e., methods whose performance does not depend upon initial guesses, to extract excited-state energies from Euclidean-time hadron correlation functions. In particular, we extend the widely used effective-mass method to incorporate multiple correlation functions and produce effective mass estimates for multiple excited states. In general, these excited-state effective masses will be determined by finding the roots of some polynomial. We demonstrate the method using sample lattice data to determine excited-state energies of the nucleon and compare the results to other energy-level finding techniques.
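The baseline construction being extended here is the single-correlator effective mass, m_eff(t) = ln[C(t)/C(t+1)], which plateaus at the ground-state energy once excited-state contamination dies off. A minimal sketch with a synthetic single-state correlator:

```python
import math

def effective_mass(corr):
    """m_eff(t) = ln(C(t)/C(t+1)) for a Euclidean-time correlator C(t)."""
    return [math.log(corr[t] / corr[t + 1]) for t in range(len(corr) - 1)]

# single-state correlator C(t) = A * exp(-m t): the effective mass is flat at m
m, A = 0.75, 2.0
corr = [A * math.exp(-m * t) for t in range(6)]
print(effective_mass(corr))  # each entry ≈ 0.75
```

With real (multi-state, noisy) data the plateau must be identified by eye or by fitting, which is exactly the kind of input-dependence the black-box multi-correlator extension is designed to remove.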
Ab initio method for calculating total cross sections
NASA Technical Reports Server (NTRS)
Bhatia, A. K.; Schneider, B. I.; Temkin, A.
1993-01-01
A method for calculating total cross sections without formally including nonelastic channels is presented. The idea is to use a one-channel T-matrix variational principle with a complex correlation function. The derived T matrix is therefore not unitary. Elastic scattering is calculated from |T|^2, but total scattering is derived from the imaginary part of T using the optical theorem. The method is applied to the spherically symmetric model of electron-hydrogen scattering. No spurious structure arises; results for sigma(el) and sigma(total) are in excellent agreement with calculations of Callaway and Oza (1984). The method has wide potential applicability.
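The distinction the abstract draws can be made concrete in a partial-wave picture, where sigma(el) is built from |T_l|^2 while the optical theorem gives sigma(total) from Im T_l. For a unitary (purely elastic) T the two coincide; a non-unitary T yields sigma(total) > sigma(el). A toy sketch with illustrative numbers, not the paper's variational calculation:

```python
import cmath
import math

def T_l(delta, eta=1.0):
    """Partial-wave T-matrix element; inelasticity eta < 1 breaks unitarity."""
    return (eta * cmath.exp(2j * delta) - 1) / 2j

def sigmas(k, T):
    """(sigma_elastic, sigma_total) from a list of partial-wave T elements."""
    pref = 4 * math.pi / k ** 2
    sig_el = pref * sum((2 * l + 1) * abs(t) ** 2 for l, t in enumerate(T))
    sig_tot = pref * sum((2 * l + 1) * t.imag for l, t in enumerate(T))  # optical theorem
    return sig_el, sig_tot

el_only = sigmas(1.0, [T_l(0.5), T_l(0.2)])
print(el_only)        # unitary T: the two cross sections coincide

absorptive = sigmas(1.0, [T_l(0.5, eta=0.8), T_l(0.2, eta=0.9)])
print(absorptive)     # non-unitary T: sigma_total exceeds sigma_elastic
```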
THz-wave parametric source and its imaging applications
NASA Astrophysics Data System (ADS)
Kawase, Kodo
2004-08-01
Widely tunable coherent terahertz (THz) wave generation has been demonstrated based on parametric oscillation using a MgO-doped LiNbO3 crystal pumped by a Q-switched Nd:YAG laser. This method exhibits multiple advantages such as wide tunability, coherency and compactness of the system. We have developed a novel basic technology for terahertz (THz) imaging, which allows detection and identification of chemicals by introducing component spatial pattern analysis. The spatial distributions of the chemicals were obtained from terahertz multispectral transillumination images, using absorption spectra previously measured with a widely tunable THz-wave parametric oscillator. Further, we have applied this technique to the detection and identification of illicit drugs concealed in envelopes. The samples we used were methamphetamine and MDMA, two of the most widely consumed illegal drugs in Japan, and aspirin as a reference.
UK audit of glomerular filtration rate measurement from plasma sampling in 2013.
Murray, Anthony W; Lawson, Richard S; Cade, Sarah C; Hall, David O; Kenny, Bob; O'Shaughnessy, Emma; Taylor, Jon; Towey, David; White, Duncan; Carson, Kathryn
2014-11-01
An audit was carried out into UK glomerular filtration rate (GFR) calculation. The results were compared with an identical 2001 audit. Participants used their routine method to calculate GFR for 20 data sets (four plasma samples) in millilitres per minute and also the GFR normalized for body surface area. Some unsound data sets were included to analyse the applied quality control (QC) methods. Variability between centres was assessed for each data set, compared with the national median and a reference value calculated using the method recommended in the British Nuclear Medicine Society guidelines. The influence of the number of samples on variability was studied. Supplementary data were requested on workload and methodology. The 59 returns showed widespread standardization. The applied early exponential clearance correction was the main contributor to the observed variability. These corrections were applied by 97% of centres (50% in 2001), with 80% using the recommended averaged Brøchner-Mortensen correction. Approximately 75% applied the recommended Haycock body surface area formula for adults (78% for children). The effect of the number of samples used was not significant. There was wide variability in the applied QC techniques, especially in terms of the use of the volume of distribution. The widespread adoption of the guidelines has harmonized national GFR calculation compared with the previous audit. Further standardization could reduce variability still more. This audit has highlighted the need to address the national standardization of QC methods. Radionuclide techniques are confirmed as the preferred method for GFR measurement when an unequivocal result is required.
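For orientation, the slope-intercept calculation being audited fits a single exponential to the late plasma samples and then corrects for the missed early exponential. A hedged sketch with synthetic data; the Brøchner-Mortensen coefficients shown are the commonly quoted averaged adult values and are meant as illustrative, not as a statement of the audit's reference method:

```python
import math

def slope_intercept_gfr(times, concs, dose):
    """Clearance (mL/min) from a single-exponential fit to late samples:
    Cl = D / AUC with AUC = C0 / lam for C(t) = C0 * exp(-lam * t)."""
    n = len(times)
    ln_c = [math.log(c) for c in concs]
    t_mean = sum(times) / n
    y_mean = sum(ln_c) / n
    lam = -sum((t - t_mean) * (y - y_mean) for t, y in zip(times, ln_c)) \
          / sum((t - t_mean) ** 2 for t in times)
    c0 = math.exp(y_mean + lam * t_mean)
    return dose * lam / c0

def brochner_mortensen(gfr):
    # averaged adult correction for the missed early exponential
    # (coefficients as commonly quoted; illustrative only)
    return 0.990778 * gfr - 0.001218 * gfr ** 2

# synthetic single-exponential data with a true clearance of 100 mL/min
times = [120, 180, 240, 300]                      # min
concs = [0.01 * math.exp(-0.01 * t) for t in times]
gfr = slope_intercept_gfr(times, concs, dose=100.0)
print(gfr, brochner_mortensen(gfr))               # ≈ 100.0 and ≈ 86.9
```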
A comparison of radiosity with current methods of sound level prediction in commercial spaces
NASA Astrophysics Data System (ADS)
Beamer, C. Walter, IV; Muehleisen, Ralph T.
2002-11-01
The ray tracing and image methods (and variations thereof) are widely used for the computation of sound fields in architectural spaces. The ray tracing and image methods are best suited for spaces with mostly specular reflecting surfaces. The radiosity method, a method based on solving a system of energy balance equations, is best applied to spaces with mainly diffusely reflective surfaces. Because very few spaces are either purely specular or purely diffuse, all methods must deal with both types of reflecting surfaces. A comparison of the radiosity method to other methods for the prediction of sound levels in commercial environments is presented. [Work supported by NSF.]
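Acoustic radiosity solves an energy balance of the form B_i = E_i + (1 − α_i) Σ_j F_ij B_j over diffusely reflecting patches, with F_ij the form factors and α_i the absorption coefficients. A minimal two-surface sketch with hypothetical numbers (a generic illustration of the energy-balance system, not the authors' solver):

```python
def radiosity(E, F, alpha, iters=200):
    """Jacobi iteration on B_i = E_i + (1 - alpha_i) * sum_j F[i][j] * B_j."""
    n = len(E)
    B = list(E)
    for _ in range(iters):
        B = [E[i] + (1 - alpha[i]) * sum(F[i][j] * B[j] for j in range(n))
             for i in range(n)]
    return B

# two diffusely reflecting surfaces facing each other:
E = [1.0, 0.0]           # surface 1 carries the source energy
F = [[0.0, 1.0],         # all energy leaving one surface reaches the other
     [1.0, 0.0]]
alpha = [0.3, 0.3]       # absorption coefficients
print(radiosity(E, F, alpha))
```

The iteration converges because each pass scales the reflected energy by (1 − α) < 1; real rooms simply have many more patches and geometry-derived form factors.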
Nonuniformity correction of imaging systems with a spatially nonhomogeneous radiation source.
Gutschwager, Berndt; Hollandt, Jörg
2015-12-20
We present a novel method of nonuniformity correction of imaging systems in a wide optical spectral range by applying a radiation source with an unknown and spatially nonhomogeneous radiance or radiance temperature distribution. The benefit of this method is that it can be applied with radiation sources of arbitrary spatial radiance or radiance temperature distribution and only requires sufficient temporal stability of this distribution during the measurement process. The method is based on the recording of several (at least three) images of a radiation source and a purposeful row- and line-shift of these subsequent images relative to the first primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of a predefined nonhomogeneous radiance distribution and a thermal imager of a predefined nonuniform focal-plane-array responsivity is presented.
Virtual fringe projection system with nonparallel illumination based on iteration
NASA Astrophysics Data System (ADS)
Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian
2017-06-01
Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis and algorithm optimization, and can help operators find ideal system parameter settings for actual measurements.
Tolerance allocation for an electronic system using neural network/Monte Carlo approach
NASA Astrophysics Data System (ADS)
Al-Mohammed, Mohammed; Esteve, Daniel; Boucher, Jaque
2001-12-01
The intense global competition to produce quality products at low cost has led many industrial nations to consider tolerances as a key factor in bringing down cost as well as in remaining competitive. At present, tolerance allocation is still mostly applied to mechanical systems. To study tolerances in the electronic domain, the Monte Carlo method is typically used, but this method is time-consuming. This paper reviews several methods (worst-case, statistical, and least-cost allocation by optimization) that can be used to treat the tolerancing problem for an electronic system and explains their advantages and limitations. It then proposes an efficient method based on neural networks, with the Monte Carlo method providing the training data. The network is trained using the error back-propagation algorithm to predict the individual part tolerances, minimizing the total cost of the system by an optimization method. The proposed approach has been applied to a small-signal amplifier circuit as an example and can be easily extended to a complex system of n components.
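The Monte Carlo baseline the paper builds on is simple to illustrate: sample component values within their tolerances, evaluate the circuit, and count the fraction meeting the specification. A toy voltage-divider example with hypothetical spec numbers (the paper's circuit is a small-signal amplifier):

```python
import random

def monte_carlo_yield(r1_nom, r2_nom, tol, target, limit, n=20000):
    """Fraction of dividers whose gain R2/(R1+R2) stays within `limit`
    of `target` when both resistors vary uniformly by +/- tol."""
    random.seed(1)                       # reproducible draws
    ok = 0
    for _ in range(n):
        r1 = r1_nom * (1 + random.uniform(-tol, tol))
        r2 = r2_nom * (1 + random.uniform(-tol, tol))
        if abs(r2 / (r1 + r2) - target) <= limit:
            ok += 1
    return ok / n

y_5pct = monte_carlo_yield(10e3, 10e3, 0.05, 0.5, 0.02)
y_1pct = monte_carlo_yield(10e3, 10e3, 0.01, 0.5, 0.02)
print(y_5pct, y_1pct)   # tighter tolerance buys higher yield at higher part cost
```

The cost of repeating such runs inside an optimization loop is exactly what motivates training a neural network on Monte Carlo results instead.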
NASA Technical Reports Server (NTRS)
Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.
2015-01-01
High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).
Ohashi, J; Clark, A G
2005-05-01
The recent cataloguing of a large number of SNPs enables us to perform genome-wide association studies for detecting common genetic variants associated with disease. Such studies, however, generally have limited research budgets for genotyping and phenotyping. It is therefore necessary to optimize the study design by determining the most cost-effective numbers of SNPs and individuals to analyze. In this report we applied the stepwise focusing method, with a two-stage design, developed by Satagopan et al. (2002) and Saito & Kamatani (2002), to optimize the cost-effectiveness of a genome-wide direct association study using a transmission/disequilibrium test (TDT). The stepwise focusing method consists of two steps: a large number of SNPs are examined in the first focusing step, and then all the SNPs showing a significant P-value are tested again using a larger set of individuals in the second focusing step. In the framework of optimization, the numbers of SNPs and families and the significance levels in the first and second steps were regarded as variables to be considered. Our results showed that the stepwise focusing method achieves a distinct gain of power compared to a conventional method with the same research budget.
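The budget arithmetic behind such a two-stage design is straightforward: every SNP is typed in the stage-1 families, but only the fraction passing the stage-1 significance threshold is typed in the stage-2 families (under the null, roughly an alpha1 fraction proceeds). A sketch with hypothetical numbers, not the paper's optimized design point:

```python
def two_stage_cost(m_snps, n1, n2, alpha1, cost_per_genotype=1.0):
    """Approximate genotyping cost of the stepwise focusing design:
    all SNPs typed in n1 families, then the ~alpha1 fraction passing
    stage 1 typed in n2 further families (null SNPs dominate the count)."""
    stage1 = m_snps * n1 * cost_per_genotype
    stage2 = m_snps * alpha1 * n2 * cost_per_genotype
    return stage1 + stage2

one_stage = two_stage_cost(100000, 1000, 0, 1.0)    # type everything in 1000 families
focused = two_stage_cost(100000, 300, 700, 0.01)    # screen in 300, follow up in 700
print(one_stage, focused)   # the focused design costs a fraction of the naive one
```

Optimizing n1, n2, alpha1 (and the stage-2 threshold) against power at fixed total cost is the problem the paper addresses.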
Numerical simulation using vorticity-vector potential formulation
NASA Technical Reports Server (NTRS)
Tokunaga, Hiroshi
1993-01-01
An accurate and efficient computational method is needed for three-dimensional incompressible viscous flows in engineering applications. When solving turbulent shear flows directly, or with a subgrid-scale model, it is indispensable to resolve the small-scale fluid motions as well as the large-scale motions. From this point of view, the pseudo-spectral method has so far been used as the computational method. However, the finite difference and finite element methods are widely applied for computing flows of practical importance, since these methods are easily applied to flows with complex geometric configurations. Several problems arise, though, in applying the finite difference method to direct and large eddy simulations. Accuracy is one of the most important; this point was already addressed by the present author in direct simulations of the instability of plane Poiseuille flow and of the transition to turbulence. In order to obtain high efficiency, a multigrid Poisson solver is combined with a higher-order accurate finite difference method. The formulation is also one of the most important problems in applying the finite difference method to incompressible turbulent flows. The three-dimensional Navier-Stokes equations have so far been solved in the primitive-variables formulation. One of the major difficulties of this method is the rigorous satisfaction of the equation of continuity. In general, a staggered grid is used to satisfy the solenoidal condition for the velocity field at the wall boundary. In the vorticity-vector potential formulation, however, the velocity field satisfies the equation of continuity automatically. From this point of view, the vorticity-vector potential method was extended to the generalized coordinate system.
In the present article, we adopt the vorticity-vector potential formulation, the generalized coordinate system, and a fourth-order accurate difference method as the computational approach. We present the method and apply it to computations of flows in a square cavity at large Reynolds number in order to investigate its effectiveness.
Landcover Classification Using Deep Fully Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Wang, J.; Li, X.; Zhou, S.; Tang, J.
2017-12-01
Land cover classification has always been an essential application in remote sensing. Certain image features are needed for land cover classification, whether it is based on pixel or object-based methods. Unlike other machine learning methods, a deep learning model not only extracts useful information from multiple bands/attributes, but also learns spatial characteristics. In recent years, deep learning methods have developed rapidly and been widely applied in image recognition, semantic understanding, and other application domains. However, there are limited studies applying deep learning methods to land cover classification. In this research, we used fully convolutional networks (FCN) as the deep learning model to classify land cover. The National Land Cover Database (NLCD) within the state of Kansas was used as the training dataset, and Landsat images were classified using the trained FCN model. We also applied an image segmentation method to improve the original results from the FCN model. In addition, the pros and cons of deep learning versus several machine learning methods were compared and explored. Our research indicates: (1) FCN is an effective classification model with an overall accuracy of 75%; (2) image segmentation improves the classification results with a better match of spatial patterns; (3) FCN has an excellent learning ability that attains higher accuracy and better spatial patterns compared with several machine learning methods.
GO-PCA: An Unsupervised Method to Explore Gene Expression Data Using Prior Knowledge.
Wagner, Florian
2015-01-01
Genome-wide expression profiling is a widely used approach for characterizing heterogeneous populations of cells, tissues, biopsies, or other biological specimens. The exploratory analysis of such data typically relies on generic unsupervised methods, e.g. principal component analysis (PCA) or hierarchical clustering. However, generic methods fail to exploit prior knowledge about the molecular functions of genes. Here, I introduce GO-PCA, an unsupervised method that combines PCA with nonparametric GO enrichment analysis, in order to systematically search for sets of genes that are both strongly correlated and closely functionally related. These gene sets are then used to automatically generate expression signatures with functional labels, which collectively aim to provide a readily interpretable representation of biologically relevant similarities and differences. The robustness of the results obtained can be assessed by bootstrapping. I first applied GO-PCA to datasets containing diverse hematopoietic cell types from human and mouse, respectively. In both cases, GO-PCA generated a small number of signatures that represented the majority of lineages present, and whose labels reflected their respective biological characteristics. I then applied GO-PCA to human glioblastoma (GBM) data, and recovered signatures associated with four out of five previously defined GBM subtypes. My results demonstrate that GO-PCA is a powerful and versatile exploratory method that reduces an expression matrix containing thousands of genes to a much smaller set of interpretable signatures. In this way, GO-PCA aims to facilitate hypothesis generation, design of further analyses, and functional comparisons across datasets.
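The core of the approach, ranking genes by their loadings on a principal component and testing the top-ranked set for functional enrichment, can be sketched as follows. This is a simplified toy illustration with simulated data and a made-up gene set, not the published GO-PCA implementation:

```python
import numpy as np
from scipy.stats import hypergeom

rng = np.random.default_rng(1)

# Toy expression matrix: 100 genes x 30 samples, with genes 0-19
# sharing a co-regulation pattern so they dominate the first PC.
X = rng.normal(size=(100, 30))
X[:20] += rng.normal(size=30) * 3           # shared signal across samples

# PCA via SVD of the gene-centered matrix
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
loadings = U[:, 0]                          # gene loadings on PC1

# Rank genes by |loading| and test the top set for enrichment in a
# hypothetical GO gene set (here: genes 0-19 by construction)
top = set(np.argsort(-np.abs(loadings))[:25])
go_term = set(range(20))
k = len(top & go_term)                      # overlap between the two sets
# P(overlap >= k) under the hypergeometric null
p = hypergeom.sf(k - 1, 100, len(go_term), len(top))
print(k, p)
```

The real method replaces the single made-up gene set with a nonparametric scan over GO annotations and assesses robustness by bootstrapping.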
ERIC Educational Resources Information Center
Strobl, Carolin; Malley, James; Tutz, Gerhard
2009-01-01
Recursive partitioning methods have become popular and widely used tools for nonparametric regression and classification in many scientific fields. Especially random forests, which can deal with large numbers of predictor variables even in the presence of complex interactions, have been applied successfully in genetics, clinical medicine, and…
Q Methodology for Post-Social-Turn Research in SLA
ERIC Educational Resources Information Center
Irie, Kay
2014-01-01
Q methodology, an approach to inquiry into subjective views about a complex phenomenon or issue that has been increasingly employed in a wide range of social science fields, has not yet been applied in language learning and teaching research. It is a unique approach that has characteristics of both qualitative and quantitative research methods. The…
Status of Single-Case Research Designs for Evidence-Based Practice
ERIC Educational Resources Information Center
Matson, Johnny L.; Turygin, Nicole C.; Beighley, Jennifer; Matson, Michael L.
2012-01-01
The single-case research design has become a paradoxical methodology in the applied sciences. While various experimental designs have been in place for over 50 years, there has not been wide acceptance of single-case methodology outside clinical and school psychology, or the field of special education. These methods were developed in the U.S.A.,…
A generic strategy for pharmacological caging of growth factors for tissue engineering.
Karlsson, Maria; Lienemann, Philipp S; Sprossmann, Natallia; Heilmann, Katharina; Brummer, Tilman; Lutolf, Matthias P; Ehrbar, Martin; Weber, Wilfried
2013-07-07
The caging of small molecules has revolutionized biological research by providing a means to regulate a wide range of processes. Here we report on a generic pharmacological method to cage proteins in a similar fashion. The present approach is of value in both fundamental and applied research, e.g. in tissue engineering.
Fast, Computer Supported Experimental Determination of Absolute Zero Temperature at School
ERIC Educational Resources Information Center
Bogacz, Bogdan F.; Pedziwiatr, Antoni T.
2014-01-01
A simple and fast experimental method of determining absolute zero temperature is presented. An air gas thermometer coupled with a pressure sensor and the COACH data acquisition system is applied over a wide range of temperatures. By constructing a pressure vs temperature plot for air under constant volume it is possible to obtain--by extrapolation to zero…
USDA-ARS?s Scientific Manuscript database
The butanol-HCl spectrophotometric assay is widely used for quantifying extractable and insoluble condensed tannins (CT, syn. proanthocyanidins) in foods, feeds, and foliage of herbaceous and woody plants, but the method underestimates total CT content when applied directly to plant material. To imp...
USDA-ARS?s Scientific Manuscript database
As a primary flux in the global water cycle, evapotranspiration (ET) connects hydrologic and biological processes and is directly affected by water management, land use change and climate change. The two source energy balance (TSEB) model has been widely applied to quantify field scale ET using sate...
Users' Interaction with World Wide Web Resources: An Exploratory Study Using a Holistic Approach.
ERIC Educational Resources Information Center
Wang, Peiling; Hawk, William B.; Tenopir, Carol
2000-01-01
Presents results of a study that explores factors of user-Web interaction in finding factual information, develops a conceptual framework for studying user-Web interaction, and applies a process-tracing method for conducting holistic user-Web studies. Describes measurement techniques and proposes a model consisting of the user, interface, and the…
NASA Technical Reports Server (NTRS)
Lagow, R. J.; Dumitru, E. T.
1983-01-01
The direct fluorination method of converting carefully selected hydrocarbon substrates to fluorinated membranes was successfully applied to produce promising, novel membranes for electrochemical devices. A family of polymer blends was identified which permits wide latitude in the concentration of both crosslinks and carboxyl groups in hydrocarbon membranes. These membranes were successfully fluorinated.
SSD for R: A Comprehensive Statistical Package to Analyze Single-System Data
ERIC Educational Resources Information Center
Auerbach, Charles; Schudrich, Wendy Zeitlin
2013-01-01
The need for statistical analysis in single-subject designs presents a challenge, as analytical methods that are applied to group comparison studies are often not appropriate in single-subject research. "SSD for R" is a robust set of statistical functions with wide applicability to single-subject research. It is a comprehensive package…
Cooperative Learning Instructional Methods for CS1: Design, Implementation, and Evaluation
ERIC Educational Resources Information Center
Beck, Leland; Chizhik, Alexander
2013-01-01
Cooperative learning is a well-known instructional technique that has been applied with a wide variety of subject matter and a broad spectrum of populations. This article briefly reviews the principles of cooperative learning, and describes how these principles were incorporated into a comprehensive set of cooperative learning activities for a CS1…
USDA-ARS?s Scientific Manuscript database
Centrifugation of milk is widely used as a separation/concentration step in assays for pathogenic microorganisms. Separation of the cream and liquid supernate from the pellet containing sedimented solids, somatic cells and microorganisms eliminates many interfering substances, and resuspension of th...
Modeling the Hydration Layer around Proteins: Applications to Small- and Wide-Angle X-Ray Scattering
Virtanen, Jouko Juhani; Makowski, Lee; Sosnick, Tobin R.; Freed, Karl F.
2011-01-01
Small-/wide-angle x-ray scattering (SWAXS) experiments can aid in determining the structures of proteins and protein complexes, but success requires accurate computational treatment of solvation. We compare two methods by which to calculate SWAXS patterns. The first approach uses all-atom explicit-solvent molecular dynamics (MD) simulations. The second, far less computationally expensive method involves prediction of the hydration density around a protein using our new HyPred solvation model, which is applied without the need for additional MD simulations. The SWAXS patterns obtained from the HyPred model compare well to both experimental data and the patterns predicted by the MD simulations. Both approaches exhibit advantages over existing methods for analyzing SWAXS data. The close correspondence between calculated and observed SWAXS patterns provides strong experimental support for the description of hydration implicit in the HyPred model. PMID:22004761
ELM: an Algorithm to Estimate the Alpha Abundance from Low-resolution Spectra
NASA Astrophysics Data System (ADS)
Bu, Yude; Zhao, Gang; Pan, Jingchang; Bharat Kumar, Yerra
2016-01-01
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.
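The basic ELM idea, a randomly weighted hidden layer whose output weights are solved in closed form by least squares, can be sketched on a toy regression problem. This is an illustrative simplification; the paper's pipelines operate on stellar spectra and Lick indices, not on the synthetic data below:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Fit an extreme learning machine: the hidden-layer weights are
    drawn at random and fixed; only the output weights are solved for
    by linear least squares."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                  # random nonlinear feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy regression: recover y = sin(x) from noisy samples
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y_true = np.sin(X).ravel()
y = y_true + 0.05 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
rmse = np.sqrt(np.mean((elm_predict(X, W, b, beta) - y_true) ** 2))
print(rmse)
```

Because training reduces to one linear solve, ELM is cheap compared with iteratively trained networks, which is the efficiency advantage the abstract refers to.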
BOREHOLE NEUTRON ACTIVATION: THE RARE EARTHS.
Mikesell, J.L.; Senftle, F.E.
1987-01-01
Neutron-induced borehole gamma-ray spectroscopy has been widely used as a geophysical exploration technique by the petroleum industry, but its use for mineral exploration is not as common. Nuclear methods can be applied to mineral exploration, for determining stratigraphy and bed correlations, for mapping ore deposits, and for studying mineral concentration gradients. High-resolution detectors are essential for mineral exploration, and by using them an analysis of the major element concentrations in a borehole can usually be made. A number of economically important elements can be detected at typical ore-grade concentrations using this method. Because of the application of the rare-earth elements to high-temperature superconductors, these elements are examined in detail as an example of how nuclear techniques can be applied to mineral exploration.
Multi-trait analysis of genome-wide association summary statistics using MTAG.
Turley, Patrick; Walters, Raymond K; Maghzian, Omeed; Okbay, Aysu; Lee, James J; Fontana, Mark Alan; Nguyen-Viet, Tuan Anh; Wedow, Robbee; Zacher, Meghan; Furlotte, Nicholas A; Magnusson, Patrik; Oskarsson, Sven; Johannesson, Magnus; Visscher, Peter M; Laibson, David; Cesarini, David; Neale, Benjamin M; Benjamin, Daniel J
2018-02-01
We introduce multi-trait analysis of GWAS (MTAG), a method for joint analysis of summary statistics from genome-wide association studies (GWAS) of different traits, possibly from overlapping samples. We apply MTAG to summary statistics for depressive symptoms (N eff = 354,862), neuroticism (N = 168,105), and subjective well-being (N = 388,538). As compared to the 32, 9, and 13 genome-wide significant loci identified in the single-trait GWAS (most of which are themselves novel), MTAG increases the number of associated loci to 64, 37, and 49, respectively. Moreover, association statistics from MTAG yield more informative bioinformatics analyses and increase the variance explained by polygenic scores by approximately 25%, matching theoretical expectations.
van den Broek, Evert; van Lieshout, Stef; Rausch, Christian; Ylstra, Bauke; van de Wiel, Mark A; Meijer, Gerrit A; Fijneman, Remond J A; Abeln, Sanne
2016-01-01
Development of cancer is driven by somatic alterations, including numerical and structural chromosomal aberrations. Currently, several computational methods are available and are widely applied to detect numerical copy number aberrations (CNAs) of chromosomal segments in tumor genomes. However, there is a lack of computational methods that systematically detect structural chromosomal aberrations by virtue of the genomic location of CNA-associated chromosomal breaks and identify genes that appear non-randomly affected by chromosomal breakpoints across (large) series of tumor samples. 'GeneBreak' is developed to systematically identify genes recurrently affected by the genomic location of chromosomal CNA-associated breaks by a genome-wide approach, which can be applied to DNA copy number data obtained by array-Comparative Genomic Hybridization (CGH) or by (low-pass) whole genome sequencing (WGS). First, 'GeneBreak' collects the genomic locations of chromosomal CNA-associated breaks that were previously pinpointed by the segmentation algorithm that was applied to obtain CNA profiles. Next, a tailored annotation approach for breakpoint-to-gene mapping is implemented. Finally, dedicated cohort-based statistics are incorporated with correction for covariates that influence the probability to be a breakpoint gene. In addition, multiple testing correction is integrated to reveal recurrent breakpoint events. This easy-to-use algorithm, 'GeneBreak', is implemented in R ( www.cran.r-project.org ) and is available from Bioconductor ( www.bioconductor.org/packages/release/bioc/html/GeneBreak.html ).
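The breakpoint-to-gene mapping step can be illustrated with a toy interval-overlap sketch. The gene names and coordinates below are invented; the actual R package works from segmentation output and a genome annotation, and layers cohort-based statistics and multiple testing correction on top:

```python
# Map CNA-associated breakpoint positions to genes by interval overlap.
# Hypothetical gene intervals (start, end) on one chromosome:
genes = {"GENE_A": (100, 500), "GENE_B": (800, 1200), "GENE_C": (1500, 2000)}

# Hypothetical breakpoint positions pooled across tumor samples:
breakpoints = [120, 450, 900, 1600, 1700, 480]

# Count how often each gene is hit by a breakpoint
hits = {g: 0 for g in genes}
for bp in breakpoints:
    for g, (start, end) in genes.items():
        if start <= bp <= end:
            hits[g] += 1
print(hits)
```

Recurrently hit genes (here GENE_A) would then be the candidates that the cohort-level statistics evaluate for non-random breakage.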
Using mark-recapture distance sampling methods on line transect surveys
Burt, Louise M.; Borchers, David L.; Jenkins, Kurt J.; Marques, Tigao A
2014-01-01
Synthesis and applications. Mark–recapture DS is a widely used method for estimating animal density and abundance when detection of animals at distance zero is not certain. Two observer configurations and three statistical models are described, and it is important to choose the most appropriate model for the observer configuration and target species in question. By way of making the methods more accessible to practicing ecologists, we describe the key ideas underlying MRDS methods, the sometimes subtle differences between them, and we illustrate these by applying different kinds of MRDS method to surveys of two different target species using different survey configurations.
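The simplest double-observer configuration can be sketched with a Lincoln-Petersen style calculation, treating each observer's detections as "captures" and joint detections as "recaptures". The counts below are invented for illustration, and real MRDS models additionally fit distance-dependent detection functions:

```python
# Double-observer (mark-recapture) estimate of detection probability,
# assuming full independence between observers -- the simplest setting.
n1 = 80     # animals detected by observer 1
n2 = 70     # animals detected by observer 2
m = 50      # animals detected by both (the "recaptures")

p1 = m / n2                      # observer 1's detection probability
p2 = m / n1                      # observer 2's detection probability
p_either = p1 + p2 - p1 * p2     # P(detected by at least one observer)

# Abundance estimate within the covered region: correct the number of
# distinct animals seen for those missed by both observers.
n_seen = n1 + n2 - m
N_hat = n_seen / p_either
print(p1, p2, N_hat)
```

This removes the usual line-transect assumption that detection at distance zero is certain, which is the motivation for MRDS given in the abstract.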
Phosphatase activity tunes two-component system sensor detection threshold.
Landry, Brian P; Palanki, Rohan; Dyulgyarov, Nikola; Hartsough, Lucas A; Tabor, Jeffrey J
2018-04-12
Two-component systems (TCSs) are the largest family of multi-step signal transduction pathways in biology, and a major source of sensors for biotechnology. However, the input concentrations to which biosensors respond are often mismatched with application requirements. Here, we utilize a mathematical model to show that TCS detection thresholds increase with the phosphatase activity of the sensor histidine kinase. We experimentally validate this result in engineered Bacillus subtilis nitrate and E. coli aspartate TCS sensors by tuning their detection threshold up to two orders of magnitude. We go on to apply our TCS tuning method to recently described tetrathionate and thiosulfate sensors by mutating a widely conserved residue previously shown to impact phosphatase activity. Finally, we apply TCS tuning to engineer B. subtilis to sense and report a wide range of fertilizer concentrations in soil. This work will enable the engineering of tailor-made biosensors for diverse synthetic biology applications.
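The central modeling claim, that the detection threshold rises with sensor phosphatase activity, can be reproduced with a minimal steady-state toy model. The saturating kinase form and all rate constants below are assumptions for illustration, not the authors' published model:

```python
import numpy as np

def output(L, k_phos, k_max=1.0, K=10.0):
    """Steady-state fraction of phosphorylated response regulator for a
    toy TCS: kinase flux saturates in ligand concentration L and is
    opposed by a constant phosphatase activity k_phos."""
    k_kin = k_max * L / (L + K)
    return k_kin / (k_kin + k_phos)

def threshold(k_phos):
    """Ligand concentration giving half-maximal output (numeric scan)."""
    L = np.logspace(-2, 4, 20000)
    y = output(L, k_phos)
    return L[np.argmin(np.abs(y - y[-1] / 2))]

# Detection threshold rises as phosphatase activity increases
ts = [threshold(kp) for kp in (0.1, 1.0, 10.0)]
print(ts)
```

In this toy model the half-maximal ligand works out to k_phos*K/(k_max + k_phos), which increases monotonically with phosphatase activity, mirroring the tuning behavior the paper exploits.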
The effect of finite geometry on the three-dimensional transfer of solar irradiance in clouds
NASA Technical Reports Server (NTRS)
Davies, R.
1978-01-01
Results are presented for a Monte Carlo model applied to a wide range of cloud widths and heights, and for an analytical model restricted in its application to cuboidally shaped clouds whose length, breadth, and depth may be varied independently; the clouds must be internally homogeneous with respect to their intrinsic radiative properties. Comparative results from the Monte Carlo method and the derived analytical model are presented for a wide range of cloud sizes, with special emphasis on the effects of varying the single scatter albedo, the solar zenith angle, and the scattering phase angle.
Carlsen, Lars; Bruggemann, Rainer
2018-06-03
In chemistry there is a long tradition of classification. Usually methods are adopted from the wide field of cluster analysis. Here, based on the example of 21 alkyl anilines, we show that concepts taken from the mathematical discipline of partially ordered sets may also be applied. The chemical compounds are described by a multi-indicator system. For the present study four indicators, mainly taken from the field of environmental chemistry, were applied and a Hasse diagram was constructed. A Hasse diagram is an acyclic, transitively reduced, triangle-free graph that may have several components. The crucial question is whether or not the Hasse diagram can be interpreted from a structural chemical point of view. This is indeed the case, but it must be clearly stated that a guarantee of meaningful results in general cannot be given; further theoretical work is needed. Two cluster analysis methods were applied (K-means and a hierarchical cluster method). In both cases, the partitioning of the set of 21 compounds by the component structure of the Hasse diagram appears to be better interpretable. Copyright© Bentham Science Publishers.
Variable Selection in the Presence of Missing Data: Imputation-based Methods.
Zhao, Yize; Long, Qi
2017-01-01
Variable selection plays an essential role in regression analysis as it identifies important variables that are associated with outcomes and is known to improve the predictive accuracy of resulting models. Variable selection methods have been widely investigated for fully observed data. However, in the presence of missing data, methods for variable selection need to be carefully designed to account for missing data mechanisms and the statistical techniques used for handling missing data. Since imputation is arguably the most popular method for handling missing data due to its ease of use, statistical methods for variable selection that are combined with imputation are of particular interest. These methods, valid under the assumptions of missing at random (MAR) and missing completely at random (MCAR), largely fall into three general strategies. The first strategy applies existing variable selection methods to each imputed dataset and then combines the variable selection results across all imputed datasets. The second strategy applies existing variable selection methods to stacked imputed datasets. The third strategy combines resampling techniques such as the bootstrap with imputation. Despite recent advances, this area remains under-developed and offers fertile ground for further research.
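The first strategy, select on each imputed dataset and then combine, can be sketched as follows. This is a deliberately crude illustration: random-draw imputation and a univariate correlation screen stand in for proper multiple imputation and formal selection methods such as the lasso:

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated data: only columns 0 and 3 truly predict the outcome
n, p = 200, 10
X_full = rng.normal(size=(n, p))
y = 2 * X_full[:, 0] - 1.5 * X_full[:, 3] + rng.normal(size=n)

# Make ~10% of entries missing completely at random (MCAR)
X = X_full.copy()
X[rng.random(size=X.shape) < 0.1] = np.nan

def impute_once(X):
    """One stochastic imputation: fill each missing value with a random
    draw from the observed values of its column (a crude stand-in for
    proper multiple imputation)."""
    Xi = X.copy()
    for j in range(Xi.shape[1]):
        miss = np.isnan(Xi[:, j])
        Xi[miss, j] = rng.choice(Xi[~miss, j], size=miss.sum())
    return Xi

# Strategy 1: select on each imputed dataset, combine by majority vote
M = 5
votes = np.zeros(p)
for _ in range(M):
    Xi = impute_once(X)
    r = np.abs([np.corrcoef(Xi[:, j], y)[0, 1] for j in range(p)])
    votes += r > 0.3          # univariate screen on this imputed dataset
selected = np.where(votes >= M / 2)[0]
print(selected)
```

The stacking strategy would instead concatenate the M imputed datasets row-wise and run a single selection on the combined data.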
Garst, Andrew D; Bassalo, Marcelo C; Pines, Gur; Lynch, Sean A; Halweg-Edwards, Andrea L; Liu, Rongming; Liang, Liya; Wang, Zhiwen; Zeitoun, Ramsey; Alexander, William G; Gill, Ryan T
2017-01-01
Improvements in DNA synthesis and sequencing have underpinned comprehensive assessment of gene function in bacteria and eukaryotes. Genome-wide analyses require high-throughput methods to generate mutations and analyze their phenotypes, but approaches to date have been unable to efficiently link the effects of mutations in coding regions or promoter elements in a highly parallel fashion. We report that CRISPR-Cas9 gene editing in combination with massively parallel oligomer synthesis can enable trackable editing on a genome-wide scale. Our method, CRISPR-enabled trackable genome engineering (CREATE), links each guide RNA to homologous repair cassettes that both edit loci and function as barcodes to track genotype-phenotype relationships. We apply CREATE to site saturation mutagenesis for protein engineering, reconstruction of adaptive laboratory evolution experiments, and identification of stress tolerance and antibiotic resistance genes in bacteria. We provide preliminary evidence that CREATE will work in yeast. We also provide a webtool to design multiplex CREATE libraries.
System-wide identification of wild-type SUMO-2 conjugation sites
Hendriks, Ivo A.; D'Souza, Rochelle C.; Chang, Jer-Gung; Mann, Matthias; Vertegaal, Alfred C. O.
2015-01-01
SUMOylation is a reversible post-translational modification (PTM) regulating all nuclear processes. Identification of SUMOylation sites by mass spectrometry (MS) has been hampered by bulky tryptic fragments, which thus far necessitated the use of mutated SUMO. Here we present a SUMO-specific protease-based methodology which circumvents this problem, dubbed Protease-Reliant Identification of SUMO Modification (PRISM). PRISM allows for detection of SUMOylated proteins as well as identification of specific sites of SUMOylation while using wild-type SUMO. The method is generic and could be widely applied to study lysine PTMs. We employ PRISM in combination with high-resolution MS to identify SUMOylation sites from HeLa cells under standard growth conditions and in response to heat shock. We identified 751 wild-type SUMOylation sites on endogenous proteins, including 200 dynamic SUMO sites in response to heat shock. Thus, we have developed a method capable of quantitatively studying wild-type mammalian SUMO at the site-specific and system-wide level. PMID:26073453
Forman, Henry Jay; Augusto, Ohara; Brigelius-Flohe, Regina; Dennery, Phyllis A; Kalyanaraman, Balaraman; Ischiropoulos, Harry; Mann, Giovanni E; Radi, Rafael; Roberts, L Jackson; Vina, Jose; Davies, Kelvin J A
2015-01-01
Free radicals and oxidants are now implicated in physiological responses and in several diseases. Given the wide range of expertise of free radical researchers, application of the greater understanding of chemistry has not been uniformly applied to biological studies. We suggest that some widely used methodologies and terminologies hamper progress and need to be addressed. We make the case for abandonment and judicious use of several methods and terms and suggest practical and viable alternatives. These changes are suggested in four areas: use of fluorescent dyes to identify and quantify reactive species, methods for measurement of lipid peroxidation in complex biological systems, claims of antioxidants as radical scavengers, and use of the terms for reactive species. Copyright © 2014 Elsevier Inc. All rights reserved.
Use of necrophagous insects as evidence of cadaver relocation: myth or reality?
Gosselin, Matthias; Hedouin, Valéry
2017-01-01
The use of insects as indicators of post-mortem displacement is discussed in many texts, courses and TV shows, and several studies addressing this issue have been published. Although the concept is widely cited, it is poorly understood, and only a few forensic cases have successfully applied such a method. The use of necrophagous insects as evidence of cadaver relocation actually involves a wide range of biological aspects. Distribution, microhabitat, phenology, behavioral ecology, and molecular analysis are among the research areas associated with this topic. This article provides the first review of the current knowledge and addresses the potential and limitations of different methods to evaluate their applicability. This work reveals numerous weaknesses and erroneous beliefs as well as many possibilities and research opportunities. PMID:28785513
Ouyang, Tingping; Fu, Shuqing; Zhu, Zhaoyu; Kuang, Yaoqiu; Huang, Ningsheng; Wu, Zhifeng
2008-11-01
The thermodynamic law is one of the most widely used scientific principles. The comparability between the environmental impact of urbanization and thermodynamic entropy was systematically analyzed. Consequently, the concept of "Urban Environment Entropy" was put forward and an "Urban Environment Entropy" model was established for urbanization environmental impact assessment in this study. The model was then utilized in a case study assessing river water quality in the Pearl River Delta Economic Zone. The results indicated that the model's assessments are consistent with those of the equalized synthetic pollution index method. Therefore, it can be concluded that the Urban Environment Entropy model has high reliability and can be applied widely in urbanization environmental assessment research using many different environmental parameters.
NASA Astrophysics Data System (ADS)
Nikolić, G. S.; Žerajić, S.; Cakić, M.
2011-10-01
Multivariate calibration is a powerful mathematical tool that can be applied in analytical chemistry when the analytical signals are highly overlapped. A method with regression by partial least squares is proposed for the simultaneous spectrophotometric determination of adrenergic vasoconstrictors in a decongestive solution containing two active components: phenylephrine hydrochloride and trimazoline hydrochloride. These sympathomimetic agents are frequently associated in pharmaceutical formulations against the common cold. The proposed method, which is simple and rapid, offers the advantages of sensitivity and a wide range of determinations without the need for extraction of the vasoconstrictors. In order to minimize the optimal factors necessary to obtain the calibration matrix by multivariate calibration, different parameters were evaluated. The adequate selection of the spectral regions proved to be important for the number of factors. In order to simultaneously quantify both hydrochlorides among excipients, the spectral region between 250 and 290 nm was selected. Recovery of the vasoconstrictors was 98-101%. The developed method was applied to the assay of two decongestive pharmaceutical preparations.
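The regression step can be illustrated with a small PLS1 (NIPALS) sketch applied to simulated, heavily overlapped two-component spectra on a 250-290 nm grid. The band shapes, noise level, and concentrations below are invented, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(3)

def pls1(X, y, n_comp):
    """Fit PLS1 regression via NIPALS; return the coefficient vector."""
    Xk, yk = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # weight vector
        t = Xk @ w                        # score vector
        p_load = Xk.T @ t / (t @ t)       # X loadings
        q = (yk @ t) / (t @ t)            # y loading
        Xk = Xk - np.outer(t, p_load)     # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p_load); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

# Two heavily overlapped Gaussian "absorption bands" on a 250-290 nm grid
wl = np.linspace(250, 290, 81)
s1 = np.exp(-((wl - 268) / 6) ** 2)       # analyte 1 pure spectrum
s2 = np.exp(-((wl - 273) / 6) ** 2)       # analyte 2 pure spectrum

# Calibration mixtures with known concentrations plus instrument noise
C = rng.uniform(0.2, 1.0, size=(25, 2))
A = C @ np.vstack([s1, s2]) + 0.002 * rng.normal(size=(25, 81))

# Center the data, fit one model per analyte, check recovery on a test mix
A_mean, c_mean = A.mean(axis=0), C.mean(axis=0)
b1 = pls1(A - A_mean, C[:, 0] - c_mean[0], n_comp=2)
b2 = pls1(A - A_mean, C[:, 1] - c_mean[1], n_comp=2)

c_true = np.array([0.5, 0.7])
a_test = c_true @ np.vstack([s1, s2])
pred = np.array([(a_test - A_mean) @ b1 + c_mean[0],
                 (a_test - A_mean) @ b2 + c_mean[1]])
recovery = 100 * pred / c_true            # percent recovery per analyte
print(recovery)
```

Despite the strong spectral overlap, the latent-variable regression recovers both concentrations, which is the property the abstract relies on.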
Korany, Mohamed A; Abdine, Heba H; Ragab, Marwa A A; Aboras, Sara I
2015-05-15
This paper discusses a general method for the use of orthogonal polynomials for unequal intervals (OPUI) to eliminate interferences in two-component spectrophotometric analysis. In this paper, a new approach was developed by using the first derivative (D1) curve instead of the absorbance curve to be convoluted using the OPUI method for the determination of metronidazole (MTR) and nystatin (NYS) in their mixture. After applying the derivative treatment to the absorption data, many maxima and minima points appeared, giving a characteristic shape for each drug and allowing the selection of a different number of points for the OPUI method for each drug. This allows the specific and selective determination of each drug in the presence of the other and in the presence of any matrix interference. The method is particularly useful when the two absorption spectra overlap considerably. The results obtained are encouraging and suggest that the method can be widely applied to similar problems. Copyright © 2015 Elsevier B.V. All rights reserved.
Stereo Sound Field Controller Design Using Partial Model Matching on the Frequency Domain
NASA Astrophysics Data System (ADS)
Kumon, Makoto; Miike, Katsuhiro; Eguchi, Kazuki; Mizumoto, Ikuro; Iwai, Zenta
The objective of sound field control is to make the acoustic characteristics of a listening room close to those of the desired system. Conventional methods apply feedforward controllers, such as digital filters, to achieve this objective. However, feedback controllers are also necessary in order to attenuate noise or to compensate for the uncertainty of the acoustic characteristics of the listening room. Since acoustic characteristics are well modeled in the frequency domain, it is efficient to design controllers with respect to frequency responses, but it is difficult to design a multi-input multi-output (MIMO) control system over a wide frequency domain. In the present study, a partial model matching method in the frequency domain was adopted because this method requires only sampled data, rather than complex mathematical models of the plant, in order to design controllers for MIMO systems. The partial model matching method was applied to design two-degree-of-freedom controllers for acoustic equalization and noise reduction. Experiments demonstrated the effectiveness of the proposed method.
NASA Technical Reports Server (NTRS)
Hucek, Richard R.; Ardanuy, Philip E.; Kyle, H. Lee
1987-01-01
A deconvolution method for extracting the top of the atmosphere (TOA) mean, daily albedo field from a set of wide-FOV (WFOV) shortwave radiometer measurements is proposed. The method is based on constructing a synthetic measurement for each satellite observation. The albedo field is represented as a truncated series of spherical harmonic functions, and these linear equations are presented. Simulation studies were conducted to determine the sensitivity of the method. It is observed that a maximum of about 289 pieces of data can be extracted from a set of Nimbus 7 WFOV satellite measurements. The albedos derived using the deconvolution method are compared with albedos derived using the WFOV archival method; the developed albedo field achieved a 20 percent reduction in the global rms regional reflected flux density errors. The deconvolution method is applied to estimate the mean, daily average TOA albedo field for January 1983. A strong and extensive albedo maximum (0.42), which corresponds to the El Nino/Southern Oscillation event of 1982-1983, is detected over the south central Pacific Ocean.
Alex F, Bokov; Olin, Gail P; Bos, Angela; Tirado-Ramos, Alfredo; Kittrell, Pamela; Jackson, Carlayne
2017-01-01
We present a method for rapidly ranking all distinct facts in an electronic medical record (EMR) system by how over-represented or under-represented they are in a patient cohort of interest relative to some larger reference population of patients in the same EMR. We have implemented this method as a plugin for i2b2, the open source data warehouse platform widely used in research health informatics. Our method is highly flexible in terms of what medical terminologies it supports and is vendor-independent thanks to leveraging the i2b2 star schema rather than any one specific EMR. It can be applied to a wide range of informatics problems including finding health disparities, searching for variables to include in a risk calculator or computable phenotype, detection of comorbidities, and discovery of adverse drug reactions. The case study we present here uses this software to find unlabeled flowsheets for patients suffering from amyotrophic lateral sclerosis.
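A minimal version of such a ranking can be sketched with log odds ratios and a pseudocount. The fact names and counts below are entirely hypothetical; the actual plugin computes its counts against the i2b2 star schema:

```python
import math

# Hypothetical fact counts: (patients with fact in cohort, cohort size,
#                            patients with fact in reference, reference size)
facts = {
    "flowsheet:swallow_eval": (40, 120, 300, 90000),
    "dx:hypertension":        (50, 120, 36000, 90000),
    "rx:riluzole":            (35, 120, 60, 90000),
}

def log_odds_ratio(a, n1, b, n2, pseudo=0.5):
    """Over-/under-representation score; the pseudocount keeps the
    score finite for facts that are absent from one population."""
    odds_cohort = (a + pseudo) / (n1 - a + pseudo)
    odds_ref = (b + pseudo) / (n2 - b + pseudo)
    return math.log(odds_cohort / odds_ref)

# Rank every fact from most over-represented to most under-represented
ranked = sorted(facts, key=lambda f: log_odds_ratio(*facts[f]), reverse=True)
print(ranked)
```

Facts common in the cohort but rare in the reference population float to the top; facts with near-equal prevalence (like a common chronic diagnosis) score near zero.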
Auxiliary-field-based trial wave functions in quantum Monte Carlo calculations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chang, Chia -Chen; Rubenstein, Brenda M.; Morales, Miguel A.
2016-12-19
Quantum Monte Carlo (QMC) algorithms have long relied on Jastrow factors to incorporate dynamic correlation into trial wave functions. While Jastrow-type wave functions have been widely employed in real-space algorithms, they have seen limited use in second-quantized QMC methods, particularly in projection methods that involve a stochastic evolution of the wave function in imaginary time. Here we propose a scheme for generating Jastrow-type correlated trial wave functions for auxiliary-field QMC methods. The method is based on decoupling the two-body Jastrow into one-body projectors coupled to auxiliary fields, which then operate on a single determinant to produce a multideterminant trial wave function. We demonstrate that intelligent sampling of the most significant determinants in this expansion can produce compact trial wave functions that reduce errors in the calculated energies. Lastly, our technique may be readily generalized to accommodate a wide range of two-body Jastrow factors and applied to a variety of model and chemical systems.
A genome-wide 3C-method for characterizing the three-dimensional architectures of genomes.
Duan, Zhijun; Andronescu, Mirela; Schutz, Kevin; Lee, Choli; Shendure, Jay; Fields, Stanley; Noble, William S; Anthony Blau, C
2012-11-01
Accumulating evidence demonstrates that the three-dimensional (3D) organization of chromosomes within the eukaryotic nucleus reflects and influences genomic activities, including transcription, DNA replication, recombination and DNA repair. In order to uncover structure-function relationships, it is necessary first to understand the principles underlying the folding and the 3D arrangement of chromosomes. Chromosome conformation capture (3C) provides a powerful tool for detecting interactions within and between chromosomes. A high throughput derivative of 3C, chromosome conformation capture on chip (4C), executes a genome-wide interrogation of interaction partners for a given locus. We recently developed a new method, a derivative of 3C and 4C, which, similar to Hi-C, is capable of comprehensively identifying long-range chromosome interactions throughout a genome in an unbiased fashion. Hence, our method can be applied to decipher the 3D architectures of genomes. Here, we provide a detailed protocol for this method. Published by Elsevier Inc.
Chen, Chih-Hao; Hsu, Chueh-Lin; Huang, Shih-Hao; Chen, Shih-Yuan; Hung, Yi-Lin; Chen, Hsiao-Rong; Wu, Yu-Chung
2015-01-01
Although genome-wide expression analysis has become a routine tool for gaining insight into molecular mechanisms, extraction of information remains a major challenge. It has been unclear why standard statistical methods, such as the t-test and ANOVA, often lead to low levels of reproducibility, how likely applying fold-change cutoffs to enhance reproducibility is to miss key signals, and how adversely using such methods has affected data interpretations. We broadly examined expression data to investigate the reproducibility problem and discovered that molecular heterogeneity, a biological property of genetically different samples, has been improperly handled by the statistical methods. Here we give a mathematical description of the discovery and report the development of a statistical method, named HTA, for better handling molecular heterogeneity. We broadly demonstrate the improved sensitivity and specificity of HTA over the conventional methods and show that using fold-change cutoffs has lost much information. We illustrate the especial usefulness of HTA for heterogeneous diseases, by applying it to existing data sets of schizophrenia, bipolar disorder and Parkinson’s disease, and show it can abundantly and reproducibly uncover disease signatures not previously detectable. Based on 156 biological data sets, we estimate that the methodological issue has affected over 96% of expression studies and that HTA can profoundly correct 86% of the affected data interpretations. The methodological advancement can better facilitate systems understandings of biological processes, render biological inferences that are more reliable than they have hitherto been and engender translational medical applications, such as identifying diagnostic biomarkers and drug prediction, which are more robust. PMID:25793610
Physical Chemistry of Nucleic Acids
NASA Astrophysics Data System (ADS)
Tinoco, Ignacio
2002-10-01
The Watson-Crick double helix of DNA was first revealed in 1953. Since then a wide range of physical chemical methods have been applied to DNA and to its more versatile relative RNA to determine their structures and functions. My major goal is to predict the folded structure of any RNA from its sequence. We have used bulk and single-molecule measurements of thermodynamics and kinetics, plus various spectroscopic methods (UV absorption, optical rotation, circular dichroism, circular intensity differential scattering, fluorescence, NMR) to approach this goal.
NASA Technical Reports Server (NTRS)
Schramm, Jr., Harry F. (Inventor); Farris, III, Alex F. (Inventor); Defalco, Francis G. (Inventor); Richmond, Robert Chaffee (Inventor)
2012-01-01
Systems and methods for the use of compounds from the Hofmeister series coupled with specific pH and temperature to provide rapid physico-chemical-managed killing of penicillin-resistant static and growing Gram-positive and Gram-negative vegetative bacteria. The systems and methods represent the more general physico-chemical enhancement of susceptibility for a wide range of pathological macromolecular targets to clinical management by establishing the reactivity of those targets to topically applied drugs or anti-toxins.
1987-03-01
the VLSI Implementation of the Electromagnetic Field of an Arbitrary Current Source" B.A. Hoyt, A.J. Terzuoli, A.V. Lair, Air Force Institute of...method is that cavities of arbitrary three dimensional shapes and nonuniform lossy materials can be analyzed. THEORY OF VECTOR POTENTIAL FINITE...elements used to model the cavity. The method includes the effects of nonuniform lossy materials and can analyze cavities of a wide variety of two- and
NASA Astrophysics Data System (ADS)
Orozco Cortés, Luis Fernando; Fernández García, Nicolás
2014-05-01
A method to obtain the general solution of any piecewise constant potential is presented; this is achieved by means of the analysis of the transfer matrices at each cutoff point. The resonance phenomenon together with the supersymmetric quantum mechanics technique allows us to construct a wide family of complex potentials which can be used as theoretical models for optical systems. The method is applied to the particular case for which the potential function has six cutoff points.
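As an illustrative sketch of the transfer-matrix bookkeeping (our own toy implementation in units hbar = 2m = 1, not the authors' formulation, which also covers the SUSY-generated complex partners), plane-wave amplitudes of psi = A e^{ikx} + B e^{-ikx} are propagated across each cutoff point by matching psi and psi':

```python
import cmath

def interface(k1, k2, x0):
    """2x2 matrix mapping amplitudes (A, B) of psi = A e^{ikx} + B e^{-ikx}
    across a potential step at x0, from matching psi and psi' (requires
    k2 != 0, i.e. the energy must not equal any region's potential)."""
    p, m = (1 + k1 / k2) / 2, (1 - k1 / k2) / 2
    return [[p * cmath.exp(1j * (k1 - k2) * x0), m * cmath.exp(-1j * (k1 + k2) * x0)],
            [m * cmath.exp(1j * (k1 + k2) * x0), p * cmath.exp(-1j * (k1 - k2) * x0)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transmission(E, potentials, cuts):
    """Transmission coefficient through a piecewise constant potential
    (units hbar = 2m = 1). potentials[i] is the value in region i;
    cuts[i] is the cutoff point between regions i and i+1."""
    ks = [cmath.sqrt(E - V) for V in potentials]  # evanescent if E < V
    M = [[1, 0], [0, 1]]
    for i, x0 in enumerate(cuts):
        M = matmul(interface(ks[i], ks[i + 1], x0), M)
    # Scattering from the left: B_last = 0 fixes t = A_last / A_first
    t = M[0][0] - M[0][1] * M[1][0] / M[1][1]
    return (ks[-1].real / ks[0].real) * abs(t) ** 2
```

For a square barrier of height 2 and width 1 probed at energy 1, this reproduces the textbook result T = 1/(1 + sinh^2(1)) ≈ 0.42.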
Design of spur gears for improved efficiency
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.
1981-01-01
A method to calculate spur gear system power loss for a wide range of gear geometries and operating conditions is used to determine design requirements for an efficient gearset. The effects of spur gear size, pitch, ratio, pitch-line-velocity and load on efficiency are shown. A design example is given to illustrate how the method is to be applied. In general, peak efficiencies were found to be greater for larger diameter and fine pitched gears and tare (no-load) losses were found to be significant.
Development of a Compound Optimization Approach Based on Imperialist Competitive Algorithm
NASA Astrophysics Data System (ADS)
Wang, Qimei; Yang, Zhihong; Wang, Yong
In this paper, a novel, improved approach is developed for the imperialist competitive algorithm to achieve greater performance. The Nelder-Mead simplex method is applied to execute alternately with the original procedures of the algorithm. The approach is tested on twelve widely used benchmark functions and is also compared with other related studies. It is shown that the proposed approach has a faster convergence rate, better search ability, and higher stability than the original algorithm and other related methods.
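The abstract gives no implementation details; below is a minimal Nelder-Mead simplex minimizer of the kind that would be alternated with the imperialist-competition phase, exercised on the sphere benchmark. All code and parameter choices are our own illustrative sketch, not the authors'.

```python
def nelder_mead(f, x0, step=0.5, iters=200):
    """Minimal Nelder-Mead simplex minimizer: reflection, expansion,
    inside contraction, and shrink, with fixed standard coefficients."""
    n = len(x0)
    # Initial simplex: x0 plus one vertex perturbed along each axis
    simplex = [list(x0)] + [[x0[j] + (step if j == i else 0.0) for j in range(n)]
                            for i in range(n)]
    for _ in range(iters):
        simplex.sort(key=f)
        best, worst = simplex[0], simplex[-1]
        centroid = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        refl = [2 * centroid[j] - worst[j] for j in range(n)]
        if f(refl) < f(best):
            exp = [3 * centroid[j] - 2 * worst[j] for j in range(n)]
            simplex[-1] = exp if f(exp) < f(refl) else refl
        elif f(refl) < f(simplex[-2]):
            simplex[-1] = refl
        else:
            cont = [0.5 * (centroid[j] + worst[j]) for j in range(n)]
            if f(cont) < f(worst):
                simplex[-1] = cont
            else:  # shrink every vertex toward the best one
                simplex = [best] + [[0.5 * (v[j] + best[j]) for j in range(n)]
                                    for v in simplex[1:]]
    return min(simplex, key=f)

sphere = lambda x: sum(v * v for v in x)  # a standard benchmark function
xmin = nelder_mead(sphere, [3.0, -2.0])
```

In the hybrid, each imperialist's position would be polished by a few such simplex iterations between competition steps.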
Three-dimensional reconstruction of single-cell chromosome structure using recurrence plots.
Hirata, Yoshito; Oda, Arisa; Ohta, Kunihiro; Aihara, Kazuyuki
2016-10-11
Single-cell analysis of the three-dimensional (3D) chromosome structure can reveal cell-to-cell variability in genome activities. Here, we propose to apply recurrence plots, a mathematical method of nonlinear time series analysis, to reconstruct the 3D chromosome structure of a single cell based on information of chromosomal contacts from genome-wide chromosome conformation capture (Hi-C) data. This recurrence plot-based reconstruction (RPR) method enables rapid reconstruction of a unique structure in single cells, even from incomplete Hi-C information.
Three-dimensional reconstruction of single-cell chromosome structure using recurrence plots
NASA Astrophysics Data System (ADS)
Hirata, Yoshito; Oda, Arisa; Ohta, Kunihiro; Aihara, Kazuyuki
2016-10-01
Single-cell analysis of the three-dimensional (3D) chromosome structure can reveal cell-to-cell variability in genome activities. Here, we propose to apply recurrence plots, a mathematical method of nonlinear time series analysis, to reconstruct the 3D chromosome structure of a single cell based on information of chromosomal contacts from genome-wide chromosome conformation capture (Hi-C) data. This recurrence plot-based reconstruction (RPR) method enables rapid reconstruction of a unique structure in single cells, even from incomplete Hi-C information.
Genomic Methods for Clinical and Translational Pain Research
Wang, Dan; Kim, Hyungsuk; Wang, Xiao-Min; Dionne, Raymond
2012-01-01
Pain is a complex sensory experience for which the molecular mechanisms are yet to be fully elucidated. Individual differences in pain sensitivity are mediated by a complex network of multiple gene polymorphisms, physiological and psychological processes, and environmental factors. Here, we present the methods for applying unbiased molecular-genetic approaches, genome-wide association study (GWAS), and global gene expression analysis, to help better understand the molecular basis of pain sensitivity in humans and variable responses to analgesic drugs. PMID:22351080
Micropunching lithography for generating micro- and submicron-patterns on polymer substrates.
Chakraborty, Anirban; Liu, Xinchuan; Luo, Cheng
2012-07-02
Conducting polymers have attracted great attention since the discovery of high conductivity in doped polyacetylene in 1977(1). They offer the advantages of low weight, easy tailoring of properties and a wide spectrum of applications(2,3). Due to sensitivity of conducting polymers to environmental conditions (e.g., air, oxygen, moisture, high temperature and chemical solutions), lithographic techniques present significant technical challenges when working with these materials(4). For example, current photolithographic methods, such as ultra-violet (UV), are unsuitable for patterning the conducting polymers due to the involvement of wet and/or dry etching processes in these methods. In addition, current micro/nanosystems mainly have a planar form(5,6). One layer of structures is built on the top surfaces of another layer of fabricated features. Multiple layers of these structures are stacked together to form numerous devices on a common substrate. The sidewall surfaces of the microstructures have not been used in constructing devices. On the other hand, sidewall patterns could be used, for example, to build 3-D circuits, modify fluidic channels and direct horizontal growth of nanowires and nanotubes. A macropunching method has been applied in the manufacturing industry to create macropatterns in a sheet metal for over a hundred years. Motivated by this approach, we have developed a micropunching lithography method (MPL) to overcome the obstacles of patterning conducting polymers and generating sidewall patterns. Like the macropunching method, the MPL also includes two operations (Fig. 1): (i) cutting; and (ii) drawing. The "cutting" operation was applied to pattern three conducting polymers(4), polypyrrole (PPy), Poly(3,4-ethylenedioxythiophen)-poly(4-styrenesulphonate) (PEDOT) and polyaniline (PANI). It was also employed to create Al microstructures(7). 
The fabricated microstructures of conducting polymers have been used as humidity(8), chemical(8), and glucose sensors(9). Combined microstructures of Al and conducting polymers have been employed to fabricate capacitors and various heterojunctions(9,10,11). The "cutting" operation was also applied to generate submicron-patterns, such as 100- and 500-nm-wide PPy lines as well as 100-nm-wide Au wires. The "drawing" operation was employed for two applications: (i) produce Au sidewall patterns on high density polyethylene (HDPE) channels which could be used for building 3D microsystems(12,13,14), and (ii) fabricate polydimethylsiloxane (PDMS) micropillars on HDPE substrates to increase the contact angle of the channel(15).
Military applications and examples of near-surface seismic surface wave methods (Invited)
NASA Astrophysics Data System (ADS)
Sloan, S.; Stevens, R.
2013-12-01
Although not always widely known or publicized, the military uses a variety of geophysical methods for a wide range of applications--some already common practice in the industry, others truly novel. Some of those applications include unexploded ordnance detection, general site characterization, anomaly detection, countering improvised explosive devices (IEDs), and security monitoring, to name a few. Techniques used may include, but are not limited to, ground penetrating radar, seismic, electrical, gravity, and electromagnetic methods. Seismic methods employed include surface wave analysis, refraction tomography, and high-resolution reflection methods. Although the military employs geophysical methods, that does not necessarily mean that those methods enable or support combat operations--often they are used for humanitarian applications within the military's area of operations to support local populations. The work presented here will focus on the applied use of seismic surface wave methods, including multichannel analysis of surface waves (MASW) and backscattered surface waves, often in conjunction with other methods such as refraction tomography or body-wave diffraction analysis. Multiple field examples will be shown, including explosives testing, tunnel detection, pre-construction site characterization, and cavity detection.
Comparison of methods for measuring cholinesterase inhibition by carbamates
Wilhelm, K.; Vandekar, M.; Reiner, E.
1973-01-01
The Acholest and tintometric methods are used widely for measuring blood cholinesterase activity after exposure to organophosphorus compounds. However, if applied for measuring blood cholinesterase activity in persons exposed to carbamates, the accuracy of the methods requires verification since carbamylated cholinesterases are unstable. The spectrophotometric method was used as a reference method and the two field methods were employed under controlled conditions. Human blood cholinesterases were inhibited in vitro by four methylcarbamates that are used as insecticides. When plasma cholinesterase activity was measured by the Acholest and spectrophotometric methods, no difference was found. The enzyme activity in whole blood determined by the tintometric method was ≤ 11% higher than when the same sample was measured by the spectrophotometric method. PMID:4541147
Guidelines for reporting and using prediction tools for genetic variation analysis.
Vihinen, Mauno
2013-02-01
Computational prediction methods are widely used for the analysis of human genome sequence variants and their effects on gene/protein function, splice site aberration, pathogenicity, and disease risk. New methods are frequently developed. We believe that guidelines are essential for those writing articles about new prediction methods, as well as for those applying these tools in their research, so that the necessary details are reported. This will enable readers to gain the full picture of technical information, performance, and interpretation of results, and to facilitate comparisons of related methods. Here, we provide instructions on how to describe new methods, report datasets, and assess the performance of predictive tools. We also discuss what details of predictor implementation are essential for authors to understand. Similarly, these guidelines for the use of predictors provide instructions on what needs to be delineated in the text, as well as how researchers can avoid unwarranted conclusions. They are applicable to most prediction methods currently utilized. By applying these guidelines, authors will help reviewers, editors, and readers to more fully comprehend prediction methods and their use. © 2012 Wiley Periodicals, Inc.
Participatory Design in Gerontechnology: A Systematic Literature Review.
Merkel, Sebastian; Kucharski, Alexander
2018-05-19
Participatory design (PD) is widely used within gerontechnology, but there is no common understanding about which methods are used for what purposes. This review aims to examine what different forms of PD exist in the field of gerontechnology and how these can be categorized. We conducted a systematic literature review covering several databases. The search strategy was based on 3 elements: (1) participatory methods and approaches with (2) older persons aiming at developing (3) technology for older people. Our final review included 26 studies representing a variety of technologies designed/developed and methods/instruments applied. According to the technologies, the publications reviewed can be categorized into 3 groups: studies that (1) use already existing technology with the aim of finding new ways of use; (2) aim at creating new devices; (3) test and/or modify prototypes. The implementation of PD depends on why a participatory approach is applied, who is involved as future user(s), when those future users are involved, and how they are incorporated into the innovation process. There are multiple ways, methods, and instruments to integrate users into the innovation process. Which methods should be applied depends on the context. However, most studies do not evaluate whether participatory approaches lead to better acceptance and/or use of the co-developed products. Therefore, participatory design should follow a comprehensive strategy, starting with the users' needs and ending with an evaluation of whether the applied methods have led to better results.
ERIC Educational Resources Information Center
Thomson, Kendra M.; Martin, Garry L.; Fazzio, Daniela; Salem, Sandra; Young, Kristen; Yu, C. T.
2012-01-01
A widely used method for teaching children with autism is applied behavior analysis (ABA), and a main component of ABA programming is discrete-trials teaching (DTT). Using a modified multiple-baseline design across participants, we assessed the effectiveness of a DTT self-instructional package (Fazzio & Martin, 2007) for teaching four pairs of…
ERIC Educational Resources Information Center
Fallon, Lindsay M.; Collier-Meek, Melissa A.; Maggin, Daniel M.; Sanetti, Lisa M. H.; Johnson, Austin H.
2015-01-01
Optimal levels of treatment fidelity, a critical moderator of intervention effectiveness, are often difficult to sustain in applied settings. It is unknown whether performance feedback, a widely researched method for increasing educators' treatment fidelity, is an evidence-based practice. The purpose of this review was to evaluate the current…
A carbon tetrachloride-free synthesis of N-phenyltrifluoroacetimidoyl chloride.
Smith, Dylan G M; Williams, Spencer J
2017-10-10
N-Phenyltrifluoroacetimidoyl chloride (PTFAI-Cl) is a reagent widely used for the preparation of glycosyl N-phenyltrifluoroacetimidates. However, the most commonly applied method requires carbon tetrachloride, a hepatotoxic reagent that has been phased out under the Montreal Protocol. We report a new synthesis of N-phenyltrifluoroacetimidoyl chloride (PTFAI-Cl) using dichlorotriphenylphosphane and triethylamine. Copyright © 2017. Published by Elsevier Ltd.
D. Lee Taylor; Michael G. Booth; Jack W. McFarland; Ian C. Herriott; Niall J. Lennon; Chad Nusbaum; Thomas G. Marr
2008-01-01
High throughput sequencing methods are widely used in analyses of microbial diversity but are generally applied to small numbers of samples, which precludes characterization of patterns of microbial diversity across space and time. We have designed a primer-tagging approach that allows pooling and subsequent sorting of numerous samples, which is directed to...
Tandem Affinity Purification of Protein Complexes from Eukaryotic Cells.
Ma, Zheng; Fung, Victor; D'Orso, Iván
2017-01-26
The purification of active protein-protein and protein-nucleic acid complexes is crucial for the characterization of enzymatic activities and de novo identification of novel subunits and post-translational modifications. Bacterial systems allow for the expression and purification of a wide variety of single polypeptides and protein complexes. However, this system does not enable the purification of protein subunits that contain post-translational modifications (e.g., phosphorylation and acetylation), and the identification of novel regulatory subunits that are only present/expressed in the eukaryotic system. Here, we provide a detailed description of a novel, robust, and efficient tandem affinity purification (TAP) method using STREP- and FLAG-tagged proteins that facilitates the purification of protein complexes with transiently or stably expressed epitope-tagged proteins from eukaryotic cells. This protocol can be applied to characterize protein complex functionality, to discover post-translational modifications on complex subunits, and to identify novel regulatory complex components by mass spectrometry. Notably, this TAP method can be applied to study protein complexes formed by eukaryotic or pathogenic (viral and bacterial) components, thus yielding a wide array of downstream experimental opportunities. We propose that researchers working with protein complexes could utilize this approach in many different ways.
NASA Astrophysics Data System (ADS)
Renkoski, Timothy E.; Hatch, Kenneth D.; Utzinger, Urs
2012-03-01
With no sufficient screening test for ovarian cancer, a method to evaluate the ovarian disease state quickly and nondestructively is needed. The authors have applied a wide-field spectral imager to freshly resected ovaries of 30 human patients in a study believed to be the first of its magnitude. Endogenous fluorescence was excited with 365-nm light and imaged in eight emission bands collectively covering the 400- to 640-nm range. Linear discriminant analysis was used to classify all image pixels and generate diagnostic maps of the ovaries. Training the classifier with previously collected single-point autofluorescence measurements of a spectroscopic probe enabled this novel classification. The process by which probe-collected spectra were transformed for comparison with imager spectra is described. Sensitivity of 100% and specificity of 51% were obtained in classifying normal and cancerous ovaries using autofluorescence data alone. Specificity increased to 69% when autofluorescence data were divided by green reflectance data to correct for spatial variation in tissue absorption properties. Benign neoplasm ovaries were also found to classify as nonmalignant using the same algorithm. Although applied ex vivo, the method described here appears useful for quick assessment of cancer presence in the human ovary.
Jeon, Hyungkook; Kim, Youngkyu; Lim, Geunbae
2016-01-28
In this paper, we introduce pressure-driven flow-induced miniaturizing free-flow electrophoresis (PDF-induced μ-FFE), a novel continuous separation method. In our separation system, the external flow and electric field are applied to particles, such that particle movement is affected by pressure-driven flow, electroosmosis, and electrophoresis. We then analyzed the hydrodynamic drag force and electrophoretic force applied to the particles in opposite directions. Based on this analysis, micro- and nano-sized particles were separated according to their electrophoretic mobilities with high separation efficiency. Because the separation can be achieved in a simple T-shaped microchannel, without the use of internal electrodes, it offers the advantages of low-cost, simple device fabrication and bubble-free operation, compared with conventional μ-FFE methods. Therefore, we expect the proposed separation method to have a wide range of filtering/separation applications in biochemical analysis.
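The competition between drag and electrophoresis described above can be sketched with a simple Stokes-drag force balance. All numbers below are hypothetical illustration values, not the paper's experimental parameters.

```python
import math

# All parameter values below are hypothetical, for illustration only.
eta = 1.0e-3        # viscosity of water, Pa*s
radius = 0.5e-6     # particle radius, m
v_flow = 100e-6     # pressure-driven flow speed, m/s
mobility = -2.0e-8  # electrophoretic mobility, m^2/(V*s) (negative particle)
E_field = 1.0e4     # applied electric field, V/m

f_drag = 6 * math.pi * eta * radius * v_flow   # Stokes drag from the flow
v_ep = mobility * E_field                      # electrophoretic drift speed
f_ep = 6 * math.pi * eta * radius * abs(v_ep)  # opposing electrophoretic force

# The net migration speed decides where the particle exits the channel;
# particles with larger |mobility| are deflected more strongly.
v_net = v_flow + v_ep
```

Here the negatively charged particle migrates against the flow (v_net < 0), while a particle with smaller |mobility| would be carried forward: that mobility-dependent difference in net motion is the basis of the separation.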
Karakaya, Jale; Karabulut, Erdem; Yucel, Recai M.
2015-01-01
Modern statistical methods using incomplete data have been increasingly applied in a wide variety of substantive problems. Similarly, receiver operating characteristic (ROC) analysis, a method used in evaluating diagnostic tests or biomarkers in medical research, has also been increasingly popular problem in both its development and application. While missing-data methods have been applied in ROC analysis, the impact of model mis-specification and/or assumptions (e.g. missing at random) underlying the missing data has not been thoroughly studied. In this work, we study the performance of multiple imputation (MI) inference in ROC analysis. Particularly, we investigate parametric and non-parametric techniques for MI inference under common missingness mechanisms. Depending on the coherency of the imputation model with the underlying data generation mechanism, our results show that MI generally leads to well-calibrated inferences under ignorable missingness mechanisms. PMID:26379316
Method for lateral force calibration in atomic force microscope using MEMS microforce sensor.
Dziekoński, Cezary; Dera, Wojciech; Jarząbek, Dariusz M
2017-11-01
In this paper we present a simple and direct method for determining the lateral force calibration constant. Our procedure does not require any knowledge of the material or geometrical parameters of the investigated cantilever. We apply a commercially available microforce sensor with advanced electronics for direct measurement of the friction force applied by the cantilever's tip to a flat surface of the microforce sensor's measuring beam. By Newton's third law, a friction force of equal magnitude tilts the AFM cantilever. Therefore, the torsional (lateral force) signal is compared with the signal from the microforce sensor and the lateral force calibration constant is determined. The method is easy to perform and could be widely used for determining the lateral force calibration constant in many types of atomic force microscopes. Copyright © 2017 Elsevier B.V. All rights reserved.
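The calibration reduces to a slope fit between the two simultaneously recorded signals; a minimal sketch (the pairing of readings and all numbers are hypothetical):

```python
def lateral_calibration_constant(forces_nN, lateral_signals_V):
    """Least-squares slope through the origin of the reference friction
    force (from the MEMS microforce sensor) versus the AFM torsional
    signal; the slope is the lateral force calibration constant, nN/V."""
    num = sum(f * v for f, v in zip(forces_nN, lateral_signals_V))
    den = sum(v * v for v in lateral_signals_V)
    return num / den

# Hypothetical paired readings: sensor force (nN) and lateral signal (V)
alpha = lateral_calibration_constant([10.0, 20.0, 30.0], [0.1, 0.2, 0.3])  # ~100 nN/V
```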
Milotti, Valeria; Pietsch, Manuel; Strunk, Karl-Philipp; Melzer, Christian
2018-01-01
We report a Kelvin-probe method to investigate the lateral charge-transport properties of semiconductors, most notably the charge-carrier mobility. The method is based on successive charging and discharging of a pre-biased metal-insulator-semiconductor stack by an alternating voltage applied to one edge of a laterally confined semiconductor layer. The charge carriers spreading along the insulator-semiconductor interface are directly measured by a Kelvin-probe, following the time evolution of the surface potential. A model is presented, describing the device response for arbitrary applied biases allowing the extraction of the lateral charge-carrier mobility from experimentally measured surface potentials. The method is tested using the organic semiconductor poly(3-hexylthiophene), and the extracted mobilities are validated through current voltage measurements on respective field-effect transistors. Our widely applicable approach enables robust measurements of the lateral charge-carrier mobility in semiconductors with weak impact from the utilized contact materials.
NASA Astrophysics Data System (ADS)
Milotti, Valeria; Pietsch, Manuel; Strunk, Karl-Philipp; Melzer, Christian
2018-01-01
We report a Kelvin-probe method to investigate the lateral charge-transport properties of semiconductors, most notably the charge-carrier mobility. The method is based on successive charging and discharging of a pre-biased metal-insulator-semiconductor stack by an alternating voltage applied to one edge of a laterally confined semiconductor layer. The charge carriers spreading along the insulator-semiconductor interface are directly measured by a Kelvin-probe, following the time evolution of the surface potential. A model is presented, describing the device response for arbitrary applied biases allowing the extraction of the lateral charge-carrier mobility from experimentally measured surface potentials. The method is tested using the organic semiconductor poly(3-hexylthiophene), and the extracted mobilities are validated through current voltage measurements on respective field-effect transistors. Our widely applicable approach enables robust measurements of the lateral charge-carrier mobility in semiconductors with weak impact from the utilized contact materials.
Derrac, Joaquín; Triguero, Isaac; Garcia, Salvador; Herrera, Francisco
2012-10-01
Cooperative coevolution is a successful trend of evolutionary computation which allows us to define partitions of the domain of a given problem, or to integrate several related techniques into one, by the use of evolutionary algorithms. It is possible to apply it to the development of advanced classification methods, which integrate several machine learning techniques into a single proposal. A novel approach integrating instance selection, instance weighting, and feature weighting into the framework of a coevolutionary model is presented in this paper. We compare it with a wide range of evolutionary and nonevolutionary related methods, in order to show the benefits of the employment of coevolution to apply the techniques considered simultaneously. The results obtained, contrasted through nonparametric statistical tests, show that our proposal outperforms other methods in the comparison, thus becoming a suitable tool in the task of enhancing the nearest neighbor classifier.
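To make concrete what the coevolved populations encode, here is a nearest-neighbor rule that consumes fixed feature and instance weights. This is a hand-rolled illustration; the actual proposal evolves these weights and additionally performs instance selection.

```python
def weighted_nn_predict(query, train, feature_w):
    """1-nearest-neighbor with per-feature weights (scaling each squared
    coordinate difference) and per-instance weights (scaling each training
    point's effective distance). These weights, plus instance selection,
    are what the coevolutionary populations would optimize; here they are
    fixed by hand purely for illustration."""
    best_label, best_score = None, float("inf")
    for point, label, inst_w in train:
        d = sum(fw * (a - b) ** 2 for fw, a, b in zip(feature_w, query, point))
        score = d / inst_w  # larger instance weight -> more influential point
        if score < best_score:
            best_label, best_score = label, score
    return best_label

# Two labeled training points with equal instance weights
train = [((0.0, 0.0), "a", 1.0), ((1.0, 1.0), "b", 1.0)]
label = weighted_nn_predict((0.2, 0.1), train, feature_w=[1.0, 1.0])
```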
Force sensing using 3D displacement measurements in linear elastic bodies
NASA Astrophysics Data System (ADS)
Feng, Xinzeng; Hui, Chung-Yuen
2016-07-01
In cell traction microscopy, the mechanical forces exerted by a cell on its environment are usually determined from experimentally measured displacements by solving an inverse problem in elasticity. In this paper, an innovative numerical method is proposed which finds the "optimal" traction to the inverse problem. When sufficient regularization is applied, we demonstrate that the proposed method significantly improves on the widely used approach using Green's functions. Motivated by real cell experiments, the equilibrium condition of a slowly migrating cell is imposed as a set of equality constraints on the unknown traction. Our validation benchmarks demonstrate that the numerical solution to the constrained inverse problem recovers the actual traction well when the optimal regularization parameter is used. The proposed method can thus be applied to study general force sensing problems, which utilize displacement measurements to sense inaccessible forces in linear elastic bodies with a priori constraints.
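A minimal sketch of the regularized inverse step: standard Tikhonov least squares on a tiny dense system. The paper's method additionally imposes the equilibrium equality constraints, which are omitted here.

```python
def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                factor = M[r][c] / M[c][c]
                M[r] = [a - factor * p for a, p in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tikhonov_traction(G, u, lam):
    """Recover traction f from displacements u by minimizing
    ||G f - u||^2 + lam * ||f||^2, i.e. f = (G^T G + lam I)^{-1} G^T u,
    where G is the (discretized) map from traction to displacement."""
    m, n = len(G), len(G[0])
    GtG = [[sum(G[k][i] * G[k][j] for k in range(m)) + (lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Gtu = [sum(G[k][i] * u[k] for k in range(m)) for i in range(n)]
    return solve(GtG, Gtu)

# Toy example: with G = identity and lam = 1, the solution is u / 2,
# showing how the regularization damps the recovered traction.
f_hat = tikhonov_traction([[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0], lam=1.0)
```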
Applying a weed risk assessment approach to GM crops.
Keese, Paul K; Robold, Andrea V; Myers, Ruth C; Weisman, Sarah; Smith, Joe
2014-12-01
Current approaches to environmental risk assessment of genetically modified (GM) plants are modelled on chemical risk assessment methods, which have a strong focus on toxicity. There are additional types of harms posed by plants that have been extensively studied by weed scientists and incorporated into weed risk assessment methods. Weed risk assessment uses robust, validated methods that are widely applied to regulatory decision-making about potentially problematic plants. They are designed to encompass a broad variety of plant forms and traits in different environments, and can provide reliable conclusions even with limited data. The knowledge and experience that underpin weed risk assessment can be harnessed for environmental risk assessment of GM plants. A case study illustrates the application of the Australian post-border weed risk assessment approach to a representative GM plant. This approach is a valuable tool to identify potential risks from GM plants.
Herbei, Radu; Kubatko, Laura
2013-03-26
Markov chains are widely used for modeling in many areas of molecular biology and genetics. As the complexity of such models advances, it becomes increasingly important to assess the rate at which a Markov chain converges to its stationary distribution in order to carry out accurate inference. A common measure of convergence to the stationary distribution is the total variation distance, but this measure can be difficult to compute when the state space of the chain is large. We propose a Monte Carlo method to estimate the total variation distance that can be applied in this situation, and we demonstrate how the method can be efficiently implemented by taking advantage of GPU computing techniques. We apply the method to two Markov chains on the space of phylogenetic trees, and discuss the implications of our findings for the development of algorithms for phylogenetic inference.
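The estimator can be sketched in a few lines for a chain with a small state space. This is our own illustration; the paper targets large state spaces such as phylogenetic tree space, where approximating the t-step distribution is the expensive part that motivates the GPU implementation.

```python
import random

def tv_distance_mc(P, pi, x0, t, n_samples, seed=0):
    """Monte Carlo estimate of the total variation distance between the
    t-step distribution of a Markov chain started at x0 and a target
    distribution pi: simulate many independent chains, form the empirical
    t-step distribution, and return (1/2) * sum |empirical - pi|."""
    rng = random.Random(seed)
    counts = [0] * len(P)
    for _ in range(n_samples):
        x = x0
        for _ in range(t):
            u, acc = rng.random(), 0.0
            for y in range(len(P)):  # sample the next state from row P[x]
                acc += P[x][y]
                if u < acc:
                    x = y
                    break
        counts[x] += 1
    emp = [c / n_samples for c in counts]
    return 0.5 * sum(abs(e - p) for e, p in zip(emp, pi))

# Two-state chain whose stationary distribution is (0.5, 0.5)
P = [[0.9, 0.1], [0.1, 0.9]]
d = tv_distance_mc(P, [0.5, 0.5], x0=0, t=50, n_samples=5000)
```

For this chain the t-step distribution is essentially stationary by t = 50, so the estimate is dominated by Monte Carlo noise of order 1/sqrt(n_samples).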
Jeon, Hyungkook; Kim, Youngkyu; Lim, Geunbae
2016-01-01
In this paper, we introduce pressure-driven flow-induced miniaturizing free-flow electrophoresis (PDF-induced μ-FFE), a novel continuous separation method. In our separation system, the external flow and electric field are applied to particles, such that particle movement is affected by pressure-driven flow, electroosmosis, and electrophoresis. We then analyzed the hydrodynamic drag force and electrophoretic force applied to the particles in opposite directions. Based on this analysis, micro- and nano-sized particles were separated according to their electrophoretic mobilities with high separation efficiency. Because the separation can be achieved in a simple T-shaped microchannel, without the use of internal electrodes, it offers the advantages of low-cost, simple device fabrication and bubble-free operation, compared with conventional μ-FFE methods. Therefore, we expect the proposed separation method to have a wide range of filtering/separation applications in biochemical analysis. PMID:26819221
Rawls’s Wide Reflective Equilibrium as a Method for Engaged Interdisciplinary Collaboration
Taebi, Behnam
2017-01-01
The introduction of new technologies in society is sometimes met with public resistance. Supported by public policy calls for “upstream engagement” and “responsible innovation,” recent years have seen a notable rise in attempts to attune research and innovation processes to societal needs, so that stakeholders’ concerns are taken into account in the design phase of technology. Both within the social sciences and in the ethics of technology, we see many interdisciplinary collaborations being initiated that aim to address tensions between various normative expectations about science and engineering and the actual outcomes. However, despite pleas to integrate social science research into the ethics of technology, effective normative models for assessing technologies are still scarce. Rawls’s wide reflective equilibrium (WRE) is often mentioned as a promising approach to integrate insights from the social sciences in the normative analysis of concrete cases, but an in-depth discussion of how this would work in practice is still lacking. In this article, we explore to what extent the WRE method can be used in the context of technology development. Using cases in engineering and technology development, we discuss three issues that are currently neglected in the applied ethics literature on WRE. The first issue concerns the operationalization of abstract background theories to moral principles. The second issue concerns the inclusiveness of the method and the demand for openness. The third issue is how to establish whether or not an equilibrium has been reached. These issues should be taken into account when applying the methods to real-world cases involving technological risks. Applying the WRE method in the context of engaged interdisciplinary collaboration requires sensitivity for issues of power and representativeness to properly deal with the dynamics between the technical and normative researchers involved as well as society at large. PMID:29657348
Doorn, Neelke; Taebi, Behnam
2018-05-01
The introduction of new technologies in society is sometimes met with public resistance. Supported by public policy calls for "upstream engagement" and "responsible innovation," recent years have seen a notable rise in attempts to attune research and innovation processes to societal needs, so that stakeholders' concerns are taken into account in the design phase of technology. Both within the social sciences and in the ethics of technology, we see many interdisciplinary collaborations being initiated that aim to address tensions between various normative expectations about science and engineering and the actual outcomes. However, despite pleas to integrate social science research into the ethics of technology, effective normative models for assessing technologies are still scarce. Rawls's wide reflective equilibrium (WRE) is often mentioned as a promising approach to integrate insights from the social sciences in the normative analysis of concrete cases, but an in-depth discussion of how this would work in practice is still lacking. In this article, we explore to what extent the WRE method can be used in the context of technology development. Using cases in engineering and technology development, we discuss three issues that are currently neglected in the applied ethics literature on WRE. The first issue concerns the operationalization of abstract background theories to moral principles. The second issue concerns the inclusiveness of the method and the demand for openness. The third issue is how to establish whether or not an equilibrium has been reached. These issues should be taken into account when applying the methods to real-world cases involving technological risks. Applying the WRE method in the context of engaged interdisciplinary collaboration requires sensitivity for issues of power and representativeness to properly deal with the dynamics between the technical and normative researchers involved as well as society at large.
Dai, Wensheng; Wu, Jui-Yu; Lu, Chi-Jie
2014-01-01
Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool such as support vector regression (SVR) is a useful way to construct an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems, but until now only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, namely spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results on real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting.
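The general scheme (ICA feature extraction feeding an SVR forecaster) can be sketched with scikit-learn on synthetic data; the branch-sales series, the sICA/tICA/stICA distinctions, and all parameters below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.svm import SVR

rng = np.random.default_rng(0)

# Synthetic "branch sales": 5 branches, 200 weeks, driven by 2 hidden sources
t = np.arange(200)
sources = np.c_[np.sin(2 * np.pi * t / 52), t / 200.0]   # seasonality + trend
mixing = rng.uniform(0.5, 1.5, size=(2, 5))
sales = sources @ mixing + 0.05 * rng.standard_normal((200, 5))

# Step 1: extract independent components from the branch series
ica = FastICA(n_components=2, random_state=0)
features = ica.fit_transform(sales)                      # shape (200, 2)

# Step 2: forecast next-week total sales from lagged ICA features with SVR
X, y = features[:-1], sales.sum(axis=1)[1:]
svr = SVR(kernel="rbf", C=10.0).fit(X[:150], y[:150])
pred = svr.predict(X[150:])
```

The sign and order of ICA components are arbitrary, which does not matter here since the SVR learns the mapping from whichever components are extracted.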
Dai, Wensheng
2014-01-01
Sales forecasting is one of the most important issues in managing information technology (IT) chain store sales, since an IT chain store has many branches. Integrating a feature extraction method with a prediction tool such as support vector regression (SVR) is a useful way to construct an effective sales forecasting scheme. Independent component analysis (ICA) is a novel feature extraction technique that has been widely applied to various forecasting problems, but until now only the basic ICA method (i.e., the temporal ICA model) has been applied to the sales forecasting problem. In this paper, we utilize three different ICA methods, namely spatial ICA (sICA), temporal ICA (tICA), and spatiotemporal ICA (stICA), to extract features from the sales data and compare their performance in sales forecasting for an IT chain store. Experimental results on real sales data show that the sales forecasting scheme integrating stICA and SVR outperforms the comparison models in terms of forecasting error. The stICA is a promising tool for extracting effective features from branch sales data, and the extracted features can improve the prediction performance of SVR for sales forecasting. PMID:25165740
A scan statistic to extract causal gene clusters from case-control genome-wide rare CNV data.
Nishiyama, Takeshi; Takahashi, Kunihiko; Tango, Toshiro; Pinto, Dalila; Scherer, Stephen W; Takami, Satoshi; Kishino, Hirohisa
2011-05-26
Several statistical tests have been developed for analyzing genome-wide association data by incorporating gene pathway information in terms of gene sets. Using these methods, hundreds of gene sets are typically tested, and the tested gene sets often overlap. This overlapping greatly increases the probability of generating false positives, and the results obtained are difficult to interpret, particularly when many gene sets show statistical significance. We propose a flexible statistical framework to circumvent these problems. Inspired by spatial scan statistics for detecting clustering of disease occurrence in the field of epidemiology, we developed a scan statistic to extract disease-associated gene clusters from a whole gene pathway. Extracting one or a few significant gene clusters from a global pathway limits the overall false positive probability, which results in increased statistical power, and facilitates the interpretation of test results. In the present study, we applied our method to genome-wide association data for rare copy-number variations, which have been strongly implicated in common diseases. Application of our method to a simulated dataset demonstrated the high accuracy of this method in detecting disease-associated gene clusters in a whole gene pathway. The scan statistic approach proposed here shows a high level of accuracy in detecting gene clusters in a whole gene pathway. This study has provided a sound statistical framework for analyzing genome-wide rare CNV data by incorporating topological information on the gene pathway.
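A minimal sketch of the scan-statistic idea, assuming a simplified linear arrangement of genes and a Kulldorff-style Poisson likelihood ratio (the paper scans clusters on gene-pathway topology and assesses significance by permutation, both omitted here):

```python
import math

def poisson_llr(o, e, O, E):
    # Kulldorff-style log-likelihood ratio for a candidate window:
    # nonzero only when the inside rate exceeds the outside rate.
    if o / e <= (O - o) / (E - e):
        return 0.0
    inside = o * math.log(o / e)
    outside = (O - o) * math.log((O - o) / (E - e)) if O - o > 0 else 0.0
    return inside + outside - O * math.log(O / E)

def scan(case_counts, max_width):
    """Best-scoring window of at most `max_width` genes, assuming hits are
    expected uniformly across genes (one expected unit per gene)."""
    n = len(case_counts)
    O, E = float(sum(case_counts)), float(n)
    best, best_win = 0.0, None
    for w in range(1, max_width + 1):
        for i in range(n - w + 1):
            o = float(sum(case_counts[i:i + w]))
            if o == 0.0:
                continue
            llr = poisson_llr(o, float(w), O, E)
            if llr > best:
                best, best_win = llr, (i, i + w)
    return best, best_win

# 30 genes with a cluster of excess case CNV hits in genes 10..14
counts = [1] * 30
for g in range(10, 15):
    counts[g] = 6
best_llr, window = scan(counts, max_width=8)   # recovers window (10, 15)
```

In practice the maximum LLR would be compared against its permutation distribution (shuffling case/control labels) to obtain a cluster p-value.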
A strategy for evaluating pathway analysis methods.
Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques
2017-10-13
Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency between the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by the same method applied to a sub-dataset of the original dataset. In contrast, discrimination measures specificity: the degree to which the perturbed pathways identified by a particular method applied to a dataset from one experiment differ from those identified by the same method applied to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy.
Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth, either established or assumed, of the pathways perturbed by a specific clinical or experimental condition. As such, our strategy allows researchers to systematically and objectively evaluate pathway analysis methods by employing any number of datasets for a variety of conditions.
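Read as set-overlap measures, the two metrics might be sketched as follows (a simplified interpretation on hypothetical pathway names; the paper's exact definitions and normalizations may differ):

```python
def recall(full_hits, sub_hits):
    """Consistency: fraction of pathways called significant on the full
    dataset that are recovered on a sub-dataset by the same PA method."""
    if not full_hits:
        return 0.0
    return len(full_hits & sub_hits) / len(full_hits)

def discrimination(hits_a, hits_b):
    """Specificity: one minus the Jaccard overlap between the significant
    pathways reported for two unrelated experiments."""
    union = hits_a | hits_b
    if not union:
        return 1.0
    return 1.0 - len(hits_a & hits_b) / len(union)

full = {"apoptosis", "cell_cycle", "p53", "wnt"}
sub = {"apoptosis", "cell_cycle"}
other = {"cell_cycle", "ribosome"}
r = recall(full, sub)            # 0.5: half the full-data hits recovered
d = discrimination(full, other)  # 0.8: little overlap between experiments
```

A good method would score high on both: stable under subsampling, yet condition-specific across unrelated experiments.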
Introduction to multigrid methods
NASA Technical Reports Server (NTRS)
Wesseling, P.
1995-01-01
These notes were written for an introductory course on the application of multigrid methods to elliptic and hyperbolic partial differential equations for engineers, physicists and applied mathematicians. The use of more advanced mathematical tools, such as functional analysis, is avoided. The course is intended to be accessible to a wide audience of users of computational methods. We restrict ourselves to finite volume and finite difference discretization. The basic principles are given. Smoothing methods and Fourier smoothing analysis are reviewed. The fundamental multigrid algorithm is studied. The smoothing and coarse grid approximation properties are discussed. Multigrid schedules and structured programming of multigrid algorithms are treated. Robustness and efficiency are considered.
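A minimal 1-D example of the fundamental algorithm the notes study (a V-cycle for −u″ = f with weighted-Jacobi smoothing, full-weighting restriction, and linear interpolation; a sketch for illustration, not code from the course):

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=0.8):
    # Weighted Jacobi relaxation for -u'' = f with homogeneous Dirichlet BCs.
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full weighting: fine grid -> coarse grid with half the intervals.
    rc = np.zeros((len(r) - 1) // 2 + 1)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]
    return rc

def prolong(ec):
    # Linear interpolation: coarse grid -> fine grid.
    ef = np.zeros(2 * (len(ec) - 1) + 1)
    ef[::2] = ec
    ef[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return ef

def v_cycle(u, f, h):
    if len(u) <= 3:                    # coarsest grid: one interior point, exact solve
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = smooth(u, f, h)                # pre-smoothing
    rc = restrict(residual(u, f, h))
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)
    u += prolong(ec)                   # coarse-grid correction
    return smooth(u, f, h)             # post-smoothing

# Solve -u'' = pi^2 sin(pi x) on [0,1]; exact solution u(x) = sin(pi x).
n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi ** 2 * np.sin(np.pi * x)
u = np.zeros(n + 1)
for _ in range(10):
    u = v_cycle(u, f, 1.0 / n)
err = float(np.max(np.abs(u - np.sin(np.pi * x))))
```

After a handful of cycles the remaining error is dominated by the O(h²) discretization error of the finite difference scheme, the hallmark of multigrid's grid-independent convergence.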
Residual stress alleviation of aircraft metal structures reinforced with filamentary composites
NASA Technical Reports Server (NTRS)
Kelly, J. B.; June, R. R.
1973-01-01
Methods to eliminate or reduce residual stresses in aircraft metal structures reinforced by filamentary composites are discussed. Residual stress level reductions were achieved by modifying the manufacturing procedures used during adhesive bonding. The residual stress alleviation techniques involved various forms of mechanical constraint which were applied to the components during bonding. Nine methods were evaluated, covering a wide range in complexity. All methods investigated during the program affected the residual stress level. In general, residual stresses were reduced by 70 percent or more from the stress level produced by conventional adhesive bonding procedures.
Malacrida, Leonel; Gratton, Enrico; Jameson, David M
2016-01-01
In this note, we present a discussion of the advantages and scope of model-free analysis methods applied to the popular solvatochromic probe LAURDAN, which is widely used as an environmental probe to study dynamics and structure in membranes. In particular, we compare and contrast the generalized polarization approach with the spectral phasor approach. To illustrate our points we utilize several model membrane systems containing pure lipid phases and, in some cases, cholesterol or surfactants. We demonstrate that the spectral phasor method offers definitive advantages in the case of complex systems. PMID:27182438
Evaluation of Piloted Inputs for Onboard Frequency Response Estimation
NASA Technical Reports Server (NTRS)
Grauer, Jared A.; Martos, Borja
2013-01-01
Frequency response estimation results are presented using piloted inputs and a real-time estimation method recently developed for multisine inputs. A nonlinear simulation of the F-16 and a Piper Saratoga research aircraft were subjected to different piloted test inputs while the short period stabilator/elevator to pitch rate frequency response was estimated. Results show that the method can produce accurate results using wide-band piloted inputs instead of multisines. A new metric is introduced for evaluating which data points to include in the analysis and recommendations are provided for applying this method with piloted inputs.
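The underlying estimate can be illustrated offline with a cross-spectral ratio H(f) = S_yu(f)/S_uu(f) on a known first-order system (a generic sketch; the paper's real-time multisine algorithm and the F-16/Saratoga models are not reproduced):

```python
import numpy as np

rng = np.random.default_rng(1)

# Known discrete first-order low-pass: y[k] = a*y[k-1] + (1-a)*u[k]
a = 0.9
u = rng.standard_normal(4096)            # wide-band (white-noise) input
y = np.zeros_like(u)
for k in range(1, len(u)):
    y[k] = a * y[k - 1] + (1 - a) * u[k]

# Empirical frequency response: cross-spectrum over input auto-spectrum
U, Y = np.fft.rfft(u), np.fft.rfft(y)
H_est = (Y * np.conj(U)) / (U * np.conj(U))

# Analytical frequency response of the same filter for comparison
w = np.linspace(0.0, np.pi, len(H_est))  # rfft bin frequencies (rad/sample)
H_true = (1 - a) / (1 - a * np.exp(-1j * w))
```

A piloted input plays the role of `u` here; its wide-band content is what makes the ratio well conditioned across the frequency range of interest.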
Performance comparison of ISAR imaging method based on time frequency transforms
NASA Astrophysics Data System (ADS)
Xie, Chunjian; Guo, Chenjiang; Xu, Jiadong
2013-03-01
Inverse synthetic aperture radar (ISAR) can image moving targets, especially airborne ones, making it important in air defence and missile defence systems. Time-frequency transforms are widely applied in the ISAR imaging process, and several such transforms are introduced here. Noise jamming methods are analysed: when noise jamming is added to the echo at the ISAR receiver, the image can become blurred or even unidentifiable, but the degradation differs among the different time-frequency analyses. Simulation results show the comparative performance of the methods.
Identification of FGF7 as a novel susceptibility locus for chronic obstructive pulmonary disease.
Brehm, John M; Hagiwara, Koichi; Tesfaigzi, Yohannes; Bruse, Shannon; Mariani, Thomas J; Bhattacharya, Soumyaroop; Boutaoui, Nadia; Ziniti, John P; Soto-Quiros, Manuel E; Avila, Lydiana; Cho, Michael H; Himes, Blanca; Litonjua, Augusto A; Jacobson, Francine; Bakke, Per; Gulsvik, Amund; Anderson, Wayne H; Lomas, David A; Forno, Erick; Datta, Soma; Silverman, Edwin K; Celedón, Juan C
2011-12-01
Traditional genome-wide association studies (GWASs) of large cohorts of subjects with chronic obstructive pulmonary disease (COPD) have successfully identified novel candidate genes, but several other plausible loci do not meet strict criteria for genome-wide significance after correction for multiple testing. The authors hypothesise that by applying unbiased weights derived from unique populations they can identify additional COPD susceptibility loci. The authors performed a homozygosity haplotype analysis on a group of subjects with and without COPD to identify regions of conserved homozygosity haplotype (RCHHs). Weights were constructed based on the frequency of these RCHHs in case versus controls, and used to adjust the p values from a large collaborative GWAS of COPD. The authors identified 2318 RCHHs, of which 576 were significantly (p<0.05) over-represented in cases. After applying the weights constructed from these regions to a collaborative GWAS of COPD, the authors identified two single nucleotide polymorphisms (SNPs) in a novel gene (fibroblast growth factor-7 (FGF7)) that gained genome-wide significance by the false discovery rate method. In a follow-up analysis, both SNPs (rs12591300 and rs4480740) were significantly associated with COPD in an independent population (combined p values of 7.9E-7 and 2.8E-6, respectively). In another independent population, increased lung tissue FGF7 expression was associated with worse measures of lung function. Weights constructed from a homozygosity haplotype analysis of an isolated population successfully identify novel genetic associations from a GWAS on a separate population. This method can be used to identify promising candidate genes that fail to meet strict correction for multiple testing.
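The weighting idea can be sketched generically as p-value weighting feeding a Benjamini-Hochberg step (a standard weighted-FDR construction on made-up numbers; the authors' RCHH-derived weights and exact adjustment procedure are not reproduced here):

```python
def weighted_bh(pvals, weights, alpha=0.05):
    """Benjamini-Hochberg step-up applied to weight-adjusted p-values
    p_i / w_i, with weights normalized to mean 1. Returns the set of
    rejected (significant) indices."""
    m = len(pvals)
    mean_w = sum(weights) / m
    adj = [p / (w / mean_w) for p, w in zip(pvals, weights)]
    order = sorted(range(m), key=lambda i: adj[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if adj[i] <= alpha * rank / m:
            k = rank
    return {order[j] for j in range(k)}

# SNPs with prior support (higher weight) need less stringent raw p-values
hits = weighted_bh([0.001, 0.04, 0.3, 0.8], [2.0, 1.0, 1.0, 0.5])
```

Up-weighting loci supported by independent evidence shifts power toward them while the mean-1 normalization keeps the overall FDR budget fixed.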
Use of Ultrasound Elastography in the Assessment of the Musculoskeletal System.
Paluch, Łukasz; Nawrocka-Laskus, Ewa; Wieczorek, Janusz; Mruk, Bartosz; Frel, Małgorzata; Walecki, Jerzy
2016-01-01
This article presents possible applications of ultrasound elastography in musculoskeletal imaging based on the available literature, as well as the possibility of extending indications for the use of elastography in the future. Ultrasound elastography (EUS) is a new method that shows structural changes in tissues following application of physical stress. Elastography techniques have been widely used to assess muscles and tendons in vitro since the early 1990s. Only recently, with the advent of new technology and the creation of highly specialized ultrasound devices, has elastography gained widespread use in numerous applications. The authors performed a search of the Medline/PubMed databases for original research and reviewed publications on the application of ultrasound elastography for musculoskeletal imaging. All publications demonstrate possible uses of ultrasound elastography in examinations of the musculoskeletal system. The most widely studied areas include the muscles, tendons and rheumatic diseases. There are also reports on its use in vessel imaging. The main limitation of elastography as a technique is above all the variability of applied pressure during imaging, which is operator-dependent. It would therefore be reasonable to provide clear guidelines on the technique applied, as well as clear indications for performing the test. It is important to develop methods for creating artifact-free, closed-loop, compression-decompression cycles. The main advantages include cost-effectiveness, short duration of the study, non-invasive nature of the procedure, as well as a potentially broader clinical availability. There are no clear guidelines with regard to indications as well as examination techniques. Ultrasound elastography is a new and still poorly researched method. We conclude, however, that it can be widely used in examinations of the musculoskeletal system.
Therefore, it is necessary to conduct large, multi-center studies to determine the methodology, indications and technique of examination.
Whitmire, Jeannette M; Merrell, D Scott
2017-01-01
Mutagenesis is a valuable tool to examine the structure-function relationships of bacterial proteins. As such, a wide variety of mutagenesis techniques and strategies have been developed. This chapter details a selection of random mutagenesis methods and site-directed mutagenesis procedures that can be applied to an array of bacterial species. Additionally, the direct application of the techniques to study the Helicobacter pylori Ferric Uptake Regulator (Fur) protein is described. The varied approaches illustrated herein allow the robust investigation of the structural-functional relationships within a protein of interest.
Dittmar, John C.; Pierce, Steven; Rothstein, Rodney; Reid, Robert J. D.
2013-01-01
Genome-wide experiments often measure quantitative differences between treated and untreated cells to identify affected strains. For these studies, statistical models are typically used to determine significance cutoffs. We developed a method termed “CLIK” (Cutoff Linked to Interaction Knowledge) that overlays biological knowledge from the interactome on screen results to derive a cutoff. The method takes advantage of the fact that groups of functionally related interacting genes often respond similarly to experimental conditions and, thus, cluster in a ranked list of screen results. We applied CLIK analysis to five screens of the yeast gene disruption library and found that it defined a significance cutoff that differed from traditional statistics. Importantly, verification experiments revealed that the CLIK cutoff correlated with the position in the rank order where the rate of true positives drops off significantly. In addition, the gene sets defined by CLIK analysis often provide further biological perspectives. For example, applying CLIK analysis retrospectively to a screen for cisplatin sensitivity allowed us to identify the importance of the Hrq1 helicase in DNA crosslink repair. Furthermore, we demonstrate the utility of CLIK to determine optimal treatment conditions by analyzing genome-wide screens at multiple rapamycin concentrations. We show that CLIK is an extremely useful tool for evaluating screen quality, determining screen cutoffs, and comparing results between screens. Furthermore, because CLIK uses previously annotated interaction data to determine biologically informed cutoffs, it provides additional insights into screen results, which supplement traditional statistical approaches. PMID:23589890
de la Calle, Maria B; Devesa, Vicenta; Fiamegos, Yiannis; Vélez, Dinoraz
2017-09-01
The European Food Safety Authority (EFSA) underlined in its Scientific Opinion on Arsenic in Food that, in order to support a sound assessment of dietary exposure to inorganic arsenic, information about the distribution of arsenic species in various food types must be generated. A method, previously validated in a collaborative trial, has been applied to determine inorganic arsenic in a wide variety of food matrices, covering grains, mushrooms and food of marine origin (31 samples in total). The method is based on detection by flow injection-hydride generation-atomic absorption spectrometry of the iAs selectively extracted into chloroform after digestion of the proteins with concentrated HCl. The method is characterized by a limit of quantification of 10 µg/kg dry weight, which allowed quantification of inorganic arsenic in a large number of food matrices. Information is provided on the performance scores assigned to results obtained with this method, as reported by different laboratories in several proficiency tests. The percentage of satisfactory results obtained with the discussed method is higher than that obtained with other analytical approaches.
An Effective Measured Data Preprocessing Method in Electrical Impedance Tomography
Yu, Chenglong; Yue, Shihong; Wang, Jianpei; Wang, Huaxiang
2014-01-01
As an advanced process detection technology, electrical impedance tomography (EIT) has received wide attention and study in industrial fields, but EIT techniques are greatly limited by low spatial resolution. This problem may result from incorrect preprocessing of the measured data and from the lack of a general criterion to evaluate different preprocessing processes. In this paper, an EIT data preprocessing method based on taking roots of all measured data is proposed and evaluated by two indexes constructed from the rooted EIT measurement data. By finding the optima of the two indexes, the proposed method can be applied to improve the spatial resolution of EIT imaging. For a theoretical model, the optimal rooting exponents for the two indexes range over [0.23, 0.33] and [0.22, 0.35], respectively. Moreover, the factors that affect the correctness of the proposed method are analyzed in general terms. Preprocessing of the measured data is necessary and helpful for any imaging process; thus, the proposed method can be used generally and widely in any imaging process. Experimental results validate the two proposed indexes. PMID:25165735
Genetic markers, genotyping methods & next generation sequencing in Mycobacterium tuberculosis
Desikan, Srinidhi; Narayanan, Sujatha
2015-01-01
Molecular epidemiology (ME) is one of the main areas in tuberculosis research which is widely used to study the transmission epidemics and outbreaks of tubercle bacilli. It exploits the presence of various polymorphisms in the genome of the bacteria that can be widely used as genetic markers. Many DNA typing methods apply these genetic markers to differentiate various strains and to study the evolutionary relationships between them. The three widely used genotyping tools to differentiate Mycobacterium tuberculosis strains are IS6110 restriction fragment length polymorphism (RFLP), spacer oligotyping (Spoligotyping), and mycobacterial interspersed repeat units - variable number of tandem repeats (MIRU-VNTR). A new prospect for ME was introduced with the development of whole genome sequencing (WGS) and next generation sequencing (NGS) methods, in which the entire genome is sequenced; this not only helps in pointing out minute differences between sequences but also saves time and cost. NGS has also been found useful in identifying single nucleotide polymorphisms (SNPs), in comparative genomics, and in clarifying various aspects of transmission dynamics. These techniques enable the identification of mycobacterial strains and also facilitate the study of their phylogenetic and evolutionary traits. PMID:26205019
Disintegration impact on sludge digestion process.
Dauknys, Regimantas; Rimeika, Mindaugas; Jankeliūnaitė, Eglė; Mažeikienė, Aušra
2016-11-01
Anaerobic sludge digestion is a widely used method for sludge stabilization in wastewater treatment plants. The process can be improved by applying sludge disintegration methods. As sludge disintegration has not been investigated sufficiently, an analysis based on full-scale data was conducted of how the application of thermal hydrolysis affects the sludge digestion process. The results showed that the maximum volatile suspended solids (VSS) destruction reached 65% independently of the application of thermal hydrolysis. The average VSS destruction increased by 14% when thermal hydrolysis was applied. In order to achieve the maximum VSS reduction and biogas production, it is recommended to keep the maximum defined VSS loading of 5.7 kg VSS/m(3)/d when thermal hydrolysis is applied, and to keep the VSS loading between 2.1-2.4 kg VSS/m(3)/d when sludge disintegration is not applied. The application of thermal hydrolysis allows an approximately 2.5 times higher VSS loading to be maintained compared with operation without disintegration; therefore, digesters with a 1.8 times smaller volume are required.
NASA Astrophysics Data System (ADS)
Song, Qi; Song, Y. D.; Cai, Wenchuan
2011-09-01
Although backstepping control design approach has been widely utilised in many practical systems, little effort has been made in applying this useful method to train systems. The main purpose of this paper is to apply this popular control design technique to speed and position tracking control of high-speed trains. By integrating adaptive control with backstepping control, we develop a control scheme that is able to address not only the traction and braking dynamics ignored in most existing methods, but also the uncertain friction and aerodynamic drag forces arisen from uncertain resistance coefficients. As such, the resultant control algorithms are able to achieve high precision train position and speed tracking under varying operation railway conditions, as validated by theoretical analysis and numerical simulations.
[Application of photodynamic therapy in dentistry – literature review].
Oruba, Zuzanna; Chomyszyn-Gajewska, Maria
Photodynamic therapy (PDT) is based on the principle that the target cells are destroyed by means of toxic reactive oxygen species generated upon the interaction of a photosensitizer, light and oxygen. This method is nowadays widely applied in various branches of medicine, mainly in oncology and dermatology. It is also applied in dentistry in the treatment of oral potentially malignant disorders (like lichen planus or leukoplakia) and infectious conditions (periodontitis, herpetic cheilitis, root canal disinfection). The application of the photodynamic therapy in the abovementioned indications is worth attention, as the method is noninvasive, painless, and the results of the published studies seem promising. The present article aims at presenting the principle of the photodynamic therapy and, based on the literature, the possibilities and results of its application in dentistry.
De Angelis, Danilo; Mele, Elia; Gibelli, Daniele; Merelli, Vera; Spagnoli, Laura; Cattaneo, Cristina
2015-01-01
The Lamendin method is widely reported as one of the most reliable means of age estimation of skeletal remains, but very little is known concerning the influence of burial in soil. This study aimed at verifying the reliability of the Lamendin method on corpses buried for 16 years in a cemetery. The Lamendin and the Prince and Ubelaker methods were applied. In all age groups except the 40- to 49-year-olds, the error was higher in the buried sample. The age-at-death error ranged between 10.7 and 36.8 years for the Lamendin method (vs. the reported 7.3-18.9 years) and 9.5 and 35.7 for the Prince and Ubelaker one (vs. the original 5.2-32.6 years); in all age groups, the error is closer to that found on archeological populations. These results suggest caution in applying the Lamendin method to forensic cases of human remains buried even for a brief period under soil. © 2014 American Academy of Forensic Sciences.
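For reference, the commonly cited form of the Lamendin et al. (1992) regression can be written as a small helper (coefficients as usually quoted in the literature; the measurement conventions and example values below are assumptions for illustration):

```python
def lamendin_age(periodontosis_mm, transparency_mm, root_height_mm):
    """Lamendin et al. (1992) adult age estimate from a single-rooted tooth,
    in the commonly cited form A = 0.18*P + 0.42*T + 25.53, where P and T are
    periodontosis and root-transparency heights as percentages of root height."""
    p = periodontosis_mm * 100.0 / root_height_mm
    t = transparency_mm * 100.0 / root_height_mm
    return 0.18 * p + 0.42 * t + 25.53

age = lamendin_age(2.0, 5.0, 10.0)   # about 50 years
```

The study's caution applies to the inputs, not the formula: burial appears to alter periodontosis and transparency measurements, inflating the error of any regression built on them.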
A method to align a bent crystal for channeling experiments by using quasichanneling oscillations
NASA Astrophysics Data System (ADS)
Sytov, A. I.; Guidi, V.; Tikhomirov, V. V.; Bandiera, L.; Bagli, E.; Germogli, G.; Mazzolari, A.; Romagnoni, M.
2018-04-01
A method to calculate both the bent crystal angle of alignment and the radius of curvature by using only one distribution of deflection angles has been developed. The method is based on measuring the angular position of recently predicted and observed quasichanneling oscillations in the deflection angle distribution and subsequently fitting both the radius and the angular alignment by analytic formulae. In this paper the method is applied to simulated angular distributions for electrons over a wide range of values of both radius and alignment. It is carried out through the example of (111) nonequidistant planes, though the technique is general and could be applied to any kind of planes. In addition, the method's application constraints are discussed. It is shown by simulations that this method, being in fact a sort of beam diagnostics, allows one in certain cases to increase the crystal alignment accuracy as well as to control precisely the radius of curvature inside an accelerator tube without breaking vacuum. In addition, it speeds up the procedure of crystal alignment in channeling experiments, reducing beamtime consumption.
From picture to porosity of river bed material using Structure-from-Motion with Multi-View-Stereo
NASA Astrophysics Data System (ADS)
Seitz, Lydia; Haas, Christian; Noack, Markus; Wieprecht, Silke
2018-04-01
Common methods for the in-situ determination of the porosity of river bed material are time- and effort-consuming. Although mathematical predictors can be used for estimation, they do not represent porosities adequately. The objective of this study was to assess a new approach for determining the porosity of frozen sediment samples. The method is based on volume determination by applying Structure-from-Motion with Multi-View Stereo (SfM-MVS) to estimate a 3D volumetric model from overlapping imagery. The method was applied to artificial sediment mixtures as well as field samples. In addition, the commonly used water replacement method was applied to determine porosities for comparison with the SfM-MVS method. We examined a range of porosities from 0.16 to 0.46, representative of the wide range of porosities found in rivers. SfM-MVS performed well in determining the volumes of the sediment samples. A very good correlation (r = 0.998, p < 0.0001) was observed between the SfM-MVS and the water replacement method. Results further show that the water replacement method underestimated total sample volumes. A comparison with several mathematical predictors showed that, for non-uniform samples, porosity calculated from the standard deviation of the grain size distribution performed better than porosity based on the median grain size. None of the predictors were effective at estimating the porosity of the field samples.
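The volume-to-porosity step behind such a method can be sketched as follows. This is a minimal illustration, not the authors' code; the grain density of 2.65 g/cm³ (typical for quartz sand) and the numeric values are assumptions for the example:

```python
def porosity(total_volume_cm3, dry_mass_g, grain_density_g_cm3=2.65):
    """Porosity from a photogrammetric (SfM-MVS) total sample volume.

    The solid volume is recovered from the dry sediment mass and an
    assumed grain density; whatever remains of the reconstructed
    sample volume is pore space.
    """
    solid_volume = dry_mass_g / grain_density_g_cm3
    return 1.0 - solid_volume / total_volume_cm3

# Hypothetical sample whose solids occupy 70% of the reconstructed volume
n = porosity(total_volume_cm3=1000.0, dry_mass_g=1855.0)  # ≈ 0.30
```
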
Spurgin, Kurt A; Kaprelian, Anthony; Gutierrez, Roberto; Jha, Vidyasagar; Wilson, Christopher G; Dobyns, Abigail; Xu, Karen H; Curras-Collazo, Margarita C
2017-02-01
The purpose of this study was to develop a method for applying calibrated manual massage pressures using commonly available, inexpensive sphygmomanometer parts, and to validate this approach as a quantitative method of applying massage therapy to rodents. Massage pressures were monitored using a modified neonatal blood pressure (BP) cuff attached to an aneroid gauge. Lightly anesthetized rats were stroked on the ventral abdomen for 5 minutes at pressures of 20 mm Hg and 40 mm Hg. Blood pressure was monitored noninvasively at 5-minute intervals for 20 minutes following massage therapy. Interexaminer reliability was assessed by applying 20 mm Hg and 40 mm Hg pressures to a digital scale in the presence or absence of the pressure gauge. With this method, we observed good interexaminer reliability, with intraclass coefficients of 0.989 versus 0.624 in blinded controls. In Long-Evans rats, systolic BP dropped by an average of 9.86% ± 0.27% following application of 40 mm Hg massage pressure. Similar effects were seen following 20 mm Hg pressure (6.52% ± 1.7%), although the latency to effect was greater than at 40 mm Hg. Sprague-Dawley rats behaved similarly to Long-Evans rats. The low-frequency/high-frequency ratio, a widely used index of autonomic tone in cardiovascular regulation, showed a significant increase within 5 minutes after 40 mm Hg massage pressure was applied. The calibrated massage method was shown to be a reproducible means of applying massage pressures to rodents and lowering BP. Copyright © 2016. Published by Elsevier Inc.
Pfliegler, W P; Sipiczki, M
2016-12-01
Simple and efficient genotyping methods are widely used to assess the diversity of large numbers of microbial strains, e.g. wine yeasts isolated from a specific geographical area or vintage. Such methods are also often applied first, to narrow down the number of strains deemed interesting enough for more time-consuming physiological characterization. Here, we used a physiologically characterized collection of 69 Saccharomyces cerevisiae strains from Hungarian wine regions to determine whether geographical origin or physiological similarity can be recovered by clustering the strains with one, or two simultaneously used, variations of interdelta genotyping. Our results indicate that although detailed, high-resolution clustering can be achieved with this method, the clustering of strains differs markedly when different primer sets are used, and it recovers neither geographical nor physiological groups. Genotyping is routinely used for assessing the diversity of a large number of isolates/strains of a single species, e.g. a collection of wine yeasts. We tested the efficiency of interdelta genotyping on a collection of Saccharomyces wine yeasts from four wine regions of Hungary that had previously been characterized physiologically. Interdelta fingerprinting recovered neither physiological nor geographical similarities; in addition, the two primer pairs widely used for this method gave conflicting and barely comparable results. Thus, this method does not necessarily represent the true diversity of a strain collection, although detailed clustering may be achieved by the combined use of primer sets. © 2016 The Society for Applied Microbiology.
NASA Astrophysics Data System (ADS)
Schauer, F.; Nádaždy, V.; Gmucová, K.
2018-04-01
Conjugated polymers hold promise for novel organic optoelectronic devices, but a comprehensive understanding of the fundamental processes and energetics involved in transport and recombination is still lacking, limiting further device optimization. Modeling and optimizing electronic transport require the energy distribution of transport and defect states, expressed by the Density of States (DOS) function, as input and comparative parameters. We present the Energy Resolved-Electrochemical Impedance Spectroscopy (ER-EIS) method for the study of transport and defect electronic states in organic materials. The method allows mapping over unprecedentedly wide energy and DOS ranges. The ER-EIS spectroscopic method is based on the small-signal interaction between the surface of the organic film and a liquid electrolyte containing reduction-oxidation (redox) species, which is analogous to the extraction of an electron by an acceptor and the capture of an electron by a donor at a semiconductor surface. The desired DOS of electronic transport and defect states can be derived directly from the measured redox response to the small voltage perturbation at the instantaneous position of the Fermi energy, which is set by the externally applied voltage. The theory of the ER-EIS method and the conditions for its validity in solid polymers are presented in detail. We present four case studies on poly(3-hexylthiophene-2,5-diyl) and poly[methyl(phenyl)silane] to show the method's ability to investigate the electronic structure, expressed by the DOS, of polymers over a dynamic range of about 6 orders of magnitude and a wide energy range of 6 eV.
Zhang, Xiaoshuai; Xue, Fuzhong; Liu, Hong; Zhu, Dianwen; Peng, Bin; Wiemels, Joseph L; Yang, Xiaowei
2014-12-10
Genome-wide Association Studies (GWAS) are typically designed to identify phenotype-associated single nucleotide polymorphisms (SNPs) individually using univariate analysis methods. Though providing valuable insights into the genetic risks of common diseases, the genetic variants identified by GWAS generally account for only a small proportion of the total heritability of complex diseases. To address this "missing heritability" problem, we implemented a strategy called integrative Bayesian Variable Selection (iBVS), which is based on a hierarchical model that incorporates an informative prior by treating the gene interrelationships as a network. It was applied here to both simulated and real data sets. Simulation studies indicated that the iBVS method performed best, with the highest AUC in both variable selection and outcome prediction, when compared to Stepwise- and LASSO-based strategies. In an analysis of a leprosy case-control study, iBVS selected 94 SNPs as predictors, while LASSO selected 100 SNPs. Stepwise regression yielded a more parsimonious model with only 3 SNPs. The prediction results demonstrated that the iBVS method had performance comparable to LASSO, but better than Stepwise strategies. The proposed iBVS strategy is a novel and valid method for genome-wide association studies, with the additional advantage that it produces more interpretable posterior probabilities for each variable, unlike LASSO and other penalized regression methods.
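The LASSO baseline mentioned above can be sketched with iterative soft-thresholding (ISTA), where coefficients driven exactly to zero correspond to SNPs that are not selected. This is a generic illustration on synthetic data, not the study's pipeline; the matrix sizes, seed, and regularization strength are assumptions:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam, n_iter=1000):
    """LASSO via iterative soft-thresholding (proximal gradient).

    Minimizes 0.5*||y - X b||^2 + lam*||b||_1 with step 1/L,
    where L is the largest eigenvalue of X^T X.
    """
    lr = 1.0 / np.linalg.norm(X, 2) ** 2
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        beta = soft_threshold(beta - lr * (X.T @ (X @ beta - y)), lr * lam)
    return beta

# Synthetic genotype-like design with a sparse true signal
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
beta_true = np.zeros(10)
beta_true[0], beta_true[5] = 2.0, -1.5
y = X @ beta_true
beta_hat = lasso_ista(X, y, lam=1.0)
selected = np.flatnonzero(np.abs(beta_hat) > 0.5)  # the two true SNPs
```
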
Exploiting Language Models to Classify Events from Twitter
Vo, Duc-Thuan; Hai, Vo Thuan; Ock, Cheol-Young
2015-01-01
Classifying events is challenging in Twitter because tweet texts contain a large amount of temporal data with a lot of noise and a wide variety of topics. In this paper, we propose a method to classify events from Twitter. We first find the distinguishing terms between tweets in events and measure their similarities with learned language models such as ConceptNet and a latent Dirichlet allocation method for selectional preferences (LDA-SP), which have been widely studied on large text corpora for computational linguistic relations. The relationships among term words in tweets are discovered by checking them under each model. We then propose a method to compute the similarity between tweets based on tweet features, including common term words and the relationships among their distinguishing term words. This representation is explicit and convenient to use with k-nearest-neighbor techniques for classification. Experiments on the Edinburgh Twitter Corpus show that our method achieves competitive results for classifying events. PMID:26451139
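The similarity-plus-kNN step can be illustrated with a plain term-overlap (Jaccard) similarity standing in for the paper's richer language-model-based measure; the tweets, labels, and k value below are made up for the example:

```python
from collections import Counter

def jaccard(a, b):
    """Term-overlap similarity between two token collections."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def knn_classify(tokens, labeled, k=3):
    """Vote among the k labeled tweets most similar to the query."""
    ranked = sorted(labeled, key=lambda item: jaccard(tokens, item[0]),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Hypothetical labeled tweets for two event classes
labeled = [
    ("earthquake magnitude hits city".split(), "disaster"),
    ("quake tremor shakes city".split(), "disaster"),
    ("team wins championship final".split(), "sports"),
    ("striker scores winning goal".split(), "sports"),
]
event = knn_classify("strong earthquake shakes city".split(), labeled)  # -> "disaster"
```
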
An improved method of measuring heart rate using a webcam
NASA Astrophysics Data System (ADS)
Liu, Yi; Ouyang, Jianfei; Yan, Yonggang
2014-09-01
Measuring heart rate traditionally requires special equipment and physical contact with the subject. Reliable non-contact, low-cost measurements are highly desirable for convenient and comfortable physiological self-assessment. Previous work has shown that consumer-grade cameras can provide useful signals for remote heart rate measurement. In this paper a simple and robust method of measuring heart rate using a low-cost webcam is proposed. The blood volume pulse is extracted by proper Region of Interest (ROI) and color channel selection from image sequences of human faces, without complex computation. Heart rate is subsequently quantified by spectrum analysis. The method is successfully applied under natural lighting conditions. Experimental results show that it takes less time, is much simpler, and has accuracy similar to the previously published and widely used method of Independent Component Analysis (ICA). Being non-contact, convenient, and low-cost, it holds great promise for home healthcare and can further be applied in biomedical research.
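The pipeline described — a mean ROI color trace followed by spectrum analysis — can be sketched as below. The synthetic trace, band limits, and scan step are assumptions for illustration, not the paper's implementation:

```python
import math

def heart_rate_bpm(trace, fps, f_min=0.75, f_max=4.0, step=0.01):
    """Pick the dominant frequency of a mean-ROI color trace within
    the physiological band (45-240 bpm) by a brute-force DFT scan."""
    mean = sum(trace) / len(trace)
    x = [v - mean for v in trace]  # remove the DC component
    best_f, best_p = f_min, -1.0
    f = f_min
    while f <= f_max:
        re = sum(v * math.cos(2 * math.pi * f * k / fps) for k, v in enumerate(x))
        im = sum(v * math.sin(2 * math.pi * f * k / fps) for k, v in enumerate(x))
        if re * re + im * im > best_p:
            best_p, best_f = re * re + im * im, f
        f += step
    return best_f * 60.0  # Hz -> beats per minute

# Synthetic 10 s green-channel trace: a 1.2 Hz pulse plus slow drift
fps = 30.0
trace = [0.5 * math.sin(2 * math.pi * 1.2 * k / fps) + 0.01 * k for k in range(300)]
bpm = heart_rate_bpm(trace, fps)  # ≈ 72 bpm
```
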
The Use of Empirical Methods for Testing Granular Materials in Analogue Modelling
Montanari, Domenico; Agostini, Andrea; Bonini, Marco; Corti, Giacomo; Del Ventisette, Chiara
2017-01-01
The behaviour of a granular material depends mainly on its frictional properties, angle of internal friction, and cohesion, which, together with material density, are the key factors to be considered during the scaling procedure of analogue models. The frictional properties of a granular material are usually investigated by means of technical instruments such as a Hubbert-type apparatus and ring shear testers, which allow investigating the response of the tested material to a wide range of applied stresses. Here we explore the possibility of determining material properties by means of different empirical methods applied to mixtures of quartz and K-feldspar sand. Empirical methods have the great advantage of measuring the properties of an analogue material under the actual experimental conditions, to which these properties are strongly sensitive through the handling techniques. Finally, the results obtained from the empirical methods have been compared with ring shear tests carried out on the same materials, which show satisfactory agreement. PMID:28772993
Saad, Ahmed S; Abo-Talib, Nisreen F; El-Ghobashy, Mohamed R
2016-01-05
Different methods have been introduced to enhance the selectivity of UV spectrophotometry, enabling accurate determination of co-formulated components; however, mixtures whose components exhibit wide variation in absorptivities have been an obstacle to the application of UV spectrophotometry. The developed ratio difference at coabsorptive point method (RDC) represents a simple, effective solution to this problem: the additive property of light absorbance allows the two components to be treated as multiples of the lower-absorptivity component at a certain wavelength (the coabsorptive point), at which their total concentration in such multiples can be determined, while the other component is selectively determined by applying the ratio difference method in a single step. The mixture of perindopril arginine (PA) and amlodipine besylate (AM) exemplifies this problem, as the low absorptivity of PA relative to AM hinders selective spectrophotometric determination of PA. The developed method successfully determined both components in the overlapped region of their spectra with accuracies of 99.39±1.60 and 100.51±1.21 for PA and AM, respectively. The method was validated per the USP guidelines and showed no significant difference upon statistical comparison with a reported chromatographic method. Copyright © 2015 Elsevier B.V. All rights reserved.
Numerical simulation of separated flows. Ph.D. Thesis - Stanford Univ., Calif.
NASA Technical Reports Server (NTRS)
Spalart, P. R.; Leonard, A.; Baganoff, D.
1983-01-01
A new numerical method, based on the Vortex Method, for the simulation of two-dimensional separated flows was developed and tested on a wide range of cases. The fluid is incompressible and the Reynolds number is high. A rigorous analytical basis for the representation of the Navier-Stokes equations in terms of the vorticity is used. An equation for the control of circulation around each body is included. An inviscid outer flow (computed by the Vortex Method) was coupled with a viscous boundary layer flow (computed by an Eulerian method). This version of the Vortex Method treats bodies of arbitrary shape and accurately computes the pressure and shear stress at the solid boundary, two quantities that reflect the structure of the boundary layer. Several versions of the method are presented and applied to various problems, most of which involve massive separation. Comparison of its results with other, generally experimental, results demonstrates the reliability and general accuracy of the new method, with little dependence on empirical parameters. Many of the complex features of the flow past a circular cylinder, over a wide range of Reynolds numbers, are correctly reproduced.
Oral flora of Python regius kept as pets.
Dipineto, L; Russo, T P; Calabria, M; De Rosa, L; Capasso, M; Menna, L F; Borrelli, L; Fioretti, A
2014-05-01
This study evaluated the oral bacterial flora of 60 Python regius kept as pets, by culture and biochemical methods. All isolates were also submitted to antimicrobial susceptibility testing using the disc diffusion method. The oral cavity of the snakes sampled harboured a wide range of Gram-negative bacteria, mainly Pseudomonas spp., Morganella morganii, Acinetobacter calcoaceticus and Aeromonas hydrophila, but also Salmonella spp. Staphylococcus spp. were the commonest Gram-positive isolates, and various anaerobic Clostridium species were also found. The most effective antimicrobial agents were enrofloxacin and ciprofloxacin, followed by doxycycline and gentamicin. Our results suggest that people who come into contact with snakes could be at risk of infection and should follow proper hygiene practices when handling these reptiles. © 2014 The Society for Applied Microbiology.
Yang, Zhenyu; Gonzalez, Christina M; Purkait, Tapas K; Iqbal, Muhammad; Meldrum, Al; Veinot, Jonathan G C
2015-09-29
Hydrosilylation is among the most common methods for modifying silicon surface chemistry. It provides a wide range of surface functionalities and effective passivation of surface sites. Herein, we report a systematic study of radical-initiated hydrosilylation of silicon nanocrystal (SiNC) surfaces using two common radical initiators (i.e., 2,2'-azobis(2-methylpropionitrile) and benzoyl peroxide). Compared to other widely applied hydrosilylation methods (e.g., thermal, photochemical, and catalytic), the radical-initiator-based approach is independent of particle size, requires comparatively low reaction temperatures, and yields monolayer surface passivation after short reaction times. The effects of different functional groups (i.e., alkene, alkyne, carboxylic acid, and ester) on the radical-initiated hydrosilylation are also explored. The results indicate that functionalization occurs and yields monolayer-passivated surfaces.
Current Trends in Modeling Research for Turbulent Aerodynamic Flows
NASA Technical Reports Server (NTRS)
Gatski, Thomas B.; Rumsey, Christopher L.; Manceau, Remi
2007-01-01
The engineering tools of choice for the computation of practical engineering flows have begun to migrate from those based on the traditional Reynolds-averaged Navier-Stokes approach to methodologies capable, in theory if not in practice, of accurately predicting some instantaneous scales of motion in the flow. The migration has largely been driven both by the success of Reynolds-averaged methods over a wide variety of flows and by the inherent limitations of the method itself. Practitioners, emboldened by their ability to predict a wide variety of statistically steady, equilibrium turbulent flows, have now turned their attention to flow control and non-equilibrium flows, such as separation control. This review presents some current priorities in traditional Reynolds-averaged modeling research, as well as some methodologies being applied to a new class of turbulent flow control problems.
Parallel factor ChIP provides essential internal control for quantitative differential ChIP-seq.
Guertin, Michael J; Cullen, Amy E; Markowetz, Florian; Holding, Andrew N
2018-04-17
A key challenge in quantitative ChIP combined with high-throughput sequencing (ChIP-seq) is the normalization of data in the presence of genome-wide changes in occupancy. Analysis-based normalization methods were developed for transcriptomic data and depend on the underlying assumption that total transcription does not change between conditions. For genome-wide changes in transcription factor (TF) binding, these assumptions do not hold. The challenges in normalization are confounded by experimental variability during sample preparation, processing and recovery. We present a novel normalization strategy utilizing an internal standard of unchanged peaks for reference. Our method can be readily applied to monitor genome-wide changes by ChIP-seq that are otherwise lost or misrepresented through analytical normalization. We compare our approach to normalization by total read depth and to two alternative methods that utilize external experimental controls to study TF binding. We successfully resolve the key challenges in quantitative ChIP-seq analysis and demonstrate its application by monitoring the loss of Estrogen Receptor-alpha (ER) binding upon fulvestrant treatment, ER binding in response to estradiol, ER-mediated change in H4K12 acetylation, and ER binding in patient-derived xenografts. This is supported by an adaptable pipeline to normalize and quantify differential TF binding genome-wide and to generate metrics for differential binding at individual sites.
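The internal-standard idea reduces to computing a scale factor on peaks assumed not to change between conditions and then applying it genome-wide. A toy sketch with hypothetical per-peak counts (not the paper's pipeline):

```python
def normalize_to_internal_standard(sample, reference, unchanged):
    """Scale `sample` peak counts so the internal standard — peaks
    assumed NOT to change between conditions — carries equal total
    signal in both samples; the factor is applied to every peak."""
    scale = (sum(reference[p] for p in unchanged)
             / sum(sample[p] for p in unchanged))
    return {peak: count * scale for peak, count in sample.items()}

# Toy data: a true 2-fold loss of binding plus a shallower library
control = {"peak1": 100, "peak2": 80, "ref1": 50, "ref2": 50}
treated = {"peak1": 25, "peak2": 20, "ref1": 25, "ref2": 25}
normalized = normalize_to_internal_standard(treated, control, ["ref1", "ref2"])
# normalized["ref1"] -> 50.0 (restored); normalized["peak1"] -> 50.0,
# correctly showing the 2-fold loss against the control's 100
```

Normalizing by total read depth would instead dilute or hide the genome-wide loss, which is exactly the failure mode the abstract describes.
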
Tabani, Hadi; Asadi, Sakine; Nojavan, Saeed; Parsa, Mitra
2017-05-12
Developing green methods for analyte extraction is one of the most important topics in the field of sample preparation. In this study, for the first time, agarose gel was used as the membrane in electromembrane extraction (EME), without any organic solvent, for the extraction of four model basic drugs (rivastigmine (RIV), verapamil (VER), amlodipine (AML), and morphine (MOR)) spanning a wide polarity window (log P from 0.43 to 3.7). Different variables playing vital roles in the proposed method were evaluated and optimized. As the driving force, a 25 V electrical field was applied to make the analytes migrate from a sample solution at pH 7.0, through a 5 mm thick 3% (w/v) agarose gel, into an acceptor phase (AP) at pH 2.0. The best extraction efficiency was obtained with an extraction duration of 25 min. With this new methodology, MOR, despite its high polarity (log P = 0.43), was efficiently extracted without using any carrier or ion-pair reagents. Limits of detection (LODs) and quantification (LOQs) were in the ranges of 1.5-1.8 ng mL⁻¹ and 5.0-6.0 ng mL⁻¹, respectively. Finally, the proposed method was successfully applied to determine concentrations of the model drugs in a wastewater sample. Copyright © 2017 Elsevier B.V. All rights reserved.
ELM: AN ALGORITHM TO ESTIMATE THE ALPHA ABUNDANCE FROM LOW-RESOLUTION SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bu, Yude; Zhao, Gang; Kumar, Yerra Bharat
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision of around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and the results show good agreement with previous studies (within 1σ). A comparison of ELM with other widely used methods, including support vector machines, Gaussian process regression, artificial neural networks, and linear least-squares regression, shows that ELM is efficient with computational resources and more accurate than the other methods.
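An ELM regressor is small enough to sketch in full: hidden-layer weights are drawn at random and frozen, and only the linear output weights are solved, in closed form, by least squares, which is what makes ELM training cheap. The toy target below merely stands in for the spectrum-to-abundance mapping; the sizes, seed, and hidden-unit count are assumptions, and nothing here reproduces the paper's models:

```python
import numpy as np

rng = np.random.default_rng(42)

def elm_fit(X, y, n_hidden=50):
    """Train an Extreme Learning Machine regressor: random frozen
    hidden weights, closed-form least squares for the output layer."""
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)  # random nonlinear features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for "spectrum -> alpha abundance": a smooth nonlinear map
X = rng.uniform(-1.0, 1.0, size=(200, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
W, b, beta = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(X, W, b, beta) - y) ** 2)))
```
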
Wide-field two-photon microscopy with temporal focusing and HiLo background rejection
NASA Astrophysics Data System (ADS)
Yew, Elijah Y. S.; Choi, Heejin; Kim, Daekeun; So, Peter T. C.
2011-03-01
Scanningless depth-resolved microscopy is achieved through spatio-temporal focusing and has been demonstrated previously. The advantage of this method is that a large area may be imaged without scanning, resulting in higher throughput of the imaging system. Because it is a widefield technique, the optical sectioning effect is considerably poorer than with conventional spatial-focusing two-photon microscopy. Here we propose wide-field two-photon microscopy based on spatio-temporal focusing and employing background rejection based on the HiLo microscope principle. We demonstrate the effects of applying HiLo microscopy to widefield temporally focused two-photon microscopy.
NASA Astrophysics Data System (ADS)
Chuan, Lee Te; Rathi, Muhammad Fareez Mohamad; Abidin, Muhamad Yusuf Zainal; Abdullah, Hasan Zuhudi; Idris, Maizlinda Izwana
2015-07-01
Anodic oxidation is a surface modification method that combines electric-field-driven metal and oxygen ion diffusion to form an oxide layer on the anode surface. This method has been widely used to modify the surface morphology of biomaterials, especially titanium. This study aimed to investigate the effect of applied voltage on titanium. Specifically, titanium foil was anodised in a mixture of β-glycerophosphate disodium salt pentahydrate (β-GP) and calcium acetate monohydrate (CA) at different applied voltages (50-350 V), with an electrolyte concentration of 0.04 M β-GP + 0.4 M CA, an anodising time of 10 minutes, and current densities of 50 and 70 mA cm⁻², at room temperature. The surface oxide properties of the anodised titanium were characterised by a digital single-lens reflex (DSLR) camera, field emission scanning electron microscopy (FESEM) and atomic force microscopy (AFM). At lower applied voltages (≤150 V), the surfaces of the titanium foils were relatively smooth. With increasing applied voltage (≥250 V), the oxide layer became more porous and donut-shaped pores formed on the surface of the titanium foils. The AFM results indicated that the surface roughness of anodised titanium increases with increasing applied voltage. The porous, rough surface can promote osseointegration and shorten patients' healing time.
Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G
2016-05-25
Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. Others and we previously applied large-scale EM to human skin, pancreatic islets, tissue culture and whole zebrafish larvae(1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and the neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin sections on one-hole grids, followed by post-staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large-area scan generator using scanning transmission EM (STEM). Large-scale EM images are typically ~5-50 gigapixels in size, and are best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals, as well as tissue culture(1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types, including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)(8) on the same tissue, as large surface areas previously imaged using fluorescence microscopy can subsequently be subjected to large-area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at the EM level in a tissue-wide, quantifiable manner.
Health systems research training enhances workplace research skills: a qualitative evaluation.
Adams, Jolene; Schaffer, Angela; Lewin, Simon; Zwarenstein, Merrick; van der Walt, Hester
2003-01-01
In-service education is a widely used means of enhancing the skills of health service providers, for example, in undertaking research. However, the transfer of skills acquired during an education course to the workplace is seldom evaluated. The objectives of this study were to assess learner, teacher, and health service manager perceptions of the usefulness, in the work setting, of skills taught on a health systems research education course in South Africa and to assess the extent to which the course stimulated awareness and development of health systems research in the work setting. The education course was evaluated using a qualitative approach. Respondents were selected for interview using purposive sampling. Interviews were conducted with 39 respondents, including all of the major stakeholders. The interviews lasted between 20 and 60 minutes and were conducted either face to face or over the telephone. Thematic analysis was applied to the data, and key themes were identified. The course demystified health systems research and stimulated interest in reading and applying research findings. The course also changed participants' attitudes to routine data collection and was reported to have facilitated the application of informal research or problem-solving methods to everyday work situations. However, inadequate support within the workplace was a significant obstacle to applying the skills learned. A 2-week intensive, experiential course in health systems research methods can provide a mechanism for introducing basic research skills to a wide range of learners. Qualitative evaluation is a useful approach for assessing the impacts of education courses.
ERIC Educational Resources Information Center
Davis, Sandra L.
2012-01-01
The progression of the taxonomic organization of life from Linnaeus's original two kingdoms to the traditional five-kingdom system to today's widely accepted three-domain system is explored in a group-learning activity. Working with a set of organisms, students organize them into each system. Discussion after each step focuses on viewing…
NASA Astrophysics Data System (ADS)
Suzuki, Yasuo
Uniform plasma-based ion implantation and DLC film formation technologies for the surfaces of complicated 3-dimensional substrates have been developed by applying pulsed voltage coupled with RF voltage to substrates such as plastics and rubber, as well as metals, with similar deposition rates. These technologies are widely applicable to both ion implantation and DLC film formation on automobile parts, mechanical parts and metal molds. A problem that remains to be solved is reducing cost. The deposition rate of DLC films is expected to increase to around 10 μm/hr, ten times that of the conventional method, by hybridizing Inductively Coupled Plasma (ICP) with a plus-minus voltage source. This epoch-making technology may substitute for the electroplating method in the near future. In this paper, the DLC film formation technology based on applying both RF and pulsed voltage, its applications and its prospects are presented.
Loving, Kathryn A.; Lin, Andy; Cheng, Alan C.
2014-01-01
Advances reported over the last few years and the increasing availability of protein crystal structure data have greatly improved structure-based druggability approaches. However, in practice, nearly all druggability estimation methods are applied to protein crystal structures as rigid proteins, with protein flexibility often not directly addressed. The inclusion of protein flexibility is important in correctly identifying the druggability of pockets that would be missed by methods based solely on the rigid crystal structure. These include cryptic pockets and flexible pockets often found at protein-protein interaction interfaces. Here, we apply an approach that uses protein modeling in concert with druggability estimation to account for light protein backbone movement and protein side-chain flexibility in protein binding sites. We assess the advantages and limitations of this approach on widely used protein druggability sets. Applying the approach to all mammalian protein crystal structures in the PDB results in identification of 69 proteins with potential druggable cryptic pockets. PMID:25079060
Mini-batch optimized full waveform inversion with geological constrained gradient filtering
NASA Astrophysics Data System (ADS)
Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai
2018-05-01
High computation cost and the generation of solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique can dramatically reduce the cost of FWI but is subject to a fixed-spread acquisition requirement and converges slowly while cross-talk is suppressed. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to the gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent optimization is adopted to reduce the computation time by choosing a subset of all shots for each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. The stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.
Radio Galaxy Zoo: Machine learning for radio source host galaxy cross-identification
NASA Astrophysics Data System (ADS)
Alger, M. J.; Banfield, J. K.; Ong, C. S.; Rudnick, L.; Wong, O. I.; Wolf, C.; Andernach, H.; Norris, R. P.; Shabala, S. S.
2018-05-01
We consider the problem of determining the host galaxies of radio sources by cross-identification. This has traditionally been done manually, which will be intractable for wide-area radio surveys like the Evolutionary Map of the Universe (EMU). Automated cross-identification will be critical for these future surveys, and machine learning may provide the tools to develop such methods. We apply a standard approach from computer vision to cross-identification, introducing one possible way of automating this problem, and explore the pros and cons of this approach. We apply our method to the 1.4 GHz Australian Telescope Large Area Survey (ATLAS) observations of the Chandra Deep Field South (CDFS) and the ESO Large Area ISO Survey South 1 (ELAIS-S1) fields by cross-identifying them with the Spitzer Wide-area Infrared Extragalactic (SWIRE) survey. We train our method with two sets of data: expert cross-identifications of CDFS from the initial ATLAS data release and crowdsourced cross-identifications of CDFS from Radio Galaxy Zoo. We found that a simple strategy of cross-identifying a radio component with the nearest galaxy performs comparably to our more complex methods, though our estimated best-case performance is near 100 per cent. ATLAS contains 87 complex radio sources that have been cross-identified by experts, so there are not enough complex examples to learn how to cross-identify them accurately. Much larger datasets are therefore required for training methods like ours. We also show that training our method on Radio Galaxy Zoo cross-identifications gives comparable results to training on expert cross-identifications, demonstrating the value of crowdsourced training data.
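The nearest-galaxy baseline that the abstract reports as surprisingly competitive can be sketched minimally as follows. The coordinates below are made up for illustration (not real ATLAS or SWIRE positions), and the angular separation uses a flat-sky small-angle approximation in which the RA offset is scaled by cos(dec):

```python
import numpy as np

# Hypothetical positions in degrees; not real survey data.
radio_ra = np.array([52.10, 52.45, 53.00])
radio_dec = np.array([-28.10, -27.95, -28.40])
gal_ra = np.array([52.11, 52.20, 52.44, 53.02, 53.30])
gal_dec = np.array([-28.09, -28.50, -27.96, -28.41, -28.00])

def nearest_host(r_ra, r_dec, g_ra, g_dec):
    """Flat-sky angular separation: scale delta-RA by cos(dec),
    then take the Euclidean nearest candidate."""
    cosd = np.cos(np.deg2rad(r_dec))
    d2 = ((g_ra - r_ra) * cosd) ** 2 + (g_dec - r_dec) ** 2
    return int(np.argmin(d2))

matches = [nearest_host(ra, dec, gal_ra, gal_dec)
           for ra, dec in zip(radio_ra, radio_dec)]
print(matches)  # index of the nearest candidate galaxy per radio component
```

For wide-area surveys, the pairwise loop would be replaced by a spatial index; the point here is only the decision rule, which fails exactly on the complex (multi-component) sources the abstract discusses.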
Putnam, Robert F; Kincaid, Donald
2015-05-01
Horner and Sugai (2015) recently wrote a manuscript providing an overview of school-wide positive behavioral interventions and supports (PBIS) and why it is an example of applied behavior analysis at a scale of social importance. This paper describes why school-wide PBIS is important to behavior analysts, how it helps promote applied behavior analysis in schools and other organizations, and how behavior analysts can use this framework to assist them in the promotion and implementation of applied behavior analysis at both the school and organizational levels and at the classroom and individual levels.
A method of improving sensitivity of carbon/oxygen well logging for low porosity formation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Juntao; Zhang, Feng; Zhang, Quanying
Carbon/oxygen (C/O) spectral logging is widely used to determine residual oil saturation and to evaluate water-flooded layers. To improve the sensitivity of the technique for low-porosity formations, Gaussian and linear models are applied to fit the peaks of the measured spectra and obtain the characteristic coefficients. Standard spectra of carbon and oxygen are combined to establish a new carbon/oxygen value calculation method, and the robustness of the new method is cross-validated with a known mixed gamma-ray spectrum. Formation models for different porosities and saturations are built using the Monte Carlo method. The carbon/oxygen responses are calculated by the conventional energy-window method and by the new method, which is then applied to oil saturation under low-porosity conditions. The results show that the new method can reduce the effect, on the carbon/oxygen ratio, of gamma rays contaminated by interactions between neutrons and other elements, and can therefore significantly improve the sensitivity of carbon/oxygen well logging to oil saturation. The new method greatly improves carbon/oxygen well logging in low-porosity conditions.
A method of improving sensitivity of carbon/oxygen well logging for low porosity formation
Liu, Juntao; Zhang, Feng; Zhang, Quanying; ...
2016-12-01
Carbon/oxygen (C/O) spectral logging is widely used to determine residual oil saturation and to evaluate water-flooded layers. To improve the sensitivity of the technique for low-porosity formations, Gaussian and linear models are applied to fit the peaks of the measured spectra and obtain the characteristic coefficients. Standard spectra of carbon and oxygen are combined to establish a new carbon/oxygen value calculation method, and the robustness of the new method is cross-validated with a known mixed gamma-ray spectrum. Formation models for different porosities and saturations are built using the Monte Carlo method. The carbon/oxygen responses are calculated by the conventional energy-window method and by the new method, which is then applied to oil saturation under low-porosity conditions. The results show that the new method can reduce the effect, on the carbon/oxygen ratio, of gamma rays contaminated by interactions between neutrons and other elements, and can therefore significantly improve the sensitivity of carbon/oxygen well logging to oil saturation. The new method greatly improves carbon/oxygen well logging in low-porosity conditions.
Scaling Symmetries in Elastic-Plastic Dynamic Cavity Expansion Equations Using the Isovector Method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albright, Eric Jason; Ramsey, Scott D.; Schmidt, Joseph H.
Cavity-expansion approximations are widely used in the study of penetration mechanics and indentation phenomena. We apply the isovector method to a well-established model in the literature for elastic-plastic cavity expansion to systematically demonstrate the existence of Lie symmetries corresponding to scale-invariant solutions. Here we use the symmetries obtained from the equations of motion to determine compatible auxiliary conditions describing the cavity wall trajectory and the elastic-plastic material interface. The admissible conditions are then compared with specific similarity solutions in the literature.
Sub-Plate Overlap Code Documentation
NASA Technical Reports Server (NTRS)
Taff, L. G.; Bucciarelli, B.; Zarate, N.
1997-01-01
An expansion of the plate overlap method of astrometric data reduction to a single plate has been proposed and successfully tested. Each plate is (artificially) divided into sub-plates which can then be overlapped. This reduces the area of a 'plate' over which a plate model needs to accurately represent the relationship between measured coordinates and standard coordinates. Application is made to non-astrographic plates such as Schmidt plates and to wide-field astrographic plates. Indeed, the method is completely general and can be applied to any type of recording media.
State criminal justice telecommunications (STACOM). Volume 1: Executive summary
NASA Technical Reports Server (NTRS)
Fielding, J. E.; Frewing, H. K.; Lee, J. J.; Leflang, W. G.; Reilly, N. B.
1977-01-01
Techniques for identifying user requirements and network designs for criminal justice networks on a statewide basis are discussed. Topics covered include: methods for determining the data required; data collection and surveys; data organization procedures; and methods for forecasting network traffic volumes. The network design techniques developed center on a computerized topology program that enables the user to generate least-cost network topologies satisfying network traffic requirements, response-time requirements, and other specified functional requirements. The techniques were applied in Texas and Ohio, and the results of these studies are presented.
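The core of least-cost topology generation can be illustrated, in greatly simplified form, as a minimum spanning tree over hypothetical inter-site link costs (the actual STACOM program also enforced traffic-volume and response-time constraints, which this sketch omits):

```python
# Hypothetical symmetric link costs between four sites; INF marks links
# that are unavailable. A least-cost tree connecting every site is a
# minimum spanning tree, found here with Prim's algorithm.
INF = float("inf")
cost = [
    [0, 4, 3, INF],
    [4, 0, 2, 5],
    [3, 2, 0, 6],
    [INF, 5, 6, 0],
]

def prim_mst_cost(cost):
    n = len(cost)
    in_tree = [False] * n
    best = [INF] * n          # cheapest known link into the tree
    best[0] = 0
    total = 0
    for _ in range(n):
        # pick the cheapest site not yet connected
        u = min((i for i in range(n) if not in_tree[i]), key=lambda i: best[i])
        in_tree[u] = True
        total += best[u]
        for v in range(n):
            if not in_tree[v] and cost[u][v] < best[v]:
                best[v] = cost[u][v]
    return total

print(prim_mst_cost(cost))  # 10
```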
Laser Interferometry Method as a Novel Tool in Endotoxins Research.
Arabski, Michał; Wąsik, Sławomir
2017-01-01
Optical properties of chemical substances are widely used at present for assays in a variety of scientific disciplines. One measurement technique applied in the physical sciences, with potential for novel applications in biology, is laser interferometry. This method enables recording of the diffusion properties of chemical substances. Here we describe a novel application of laser interferometry to chitosan interactions with lipopolysaccharide, via detection of colistin diffusion. The proposed model could be used in simple measurements of polymer interactions with endotoxins and/or biologically active compounds, such as antibiotics.
Current distribution on a cylindrical antenna with parallel orientation in a lossy magnetoplasma
NASA Technical Reports Server (NTRS)
Klein, C. A.; Klock, P. W.; Deschamps, G. A.
1972-01-01
The current distribution and impedance of a thin cylindrical antenna oriented parallel to the static magnetic field of a lossy magnetoplasma are calculated with the method of moments. The electric field produced by an infinitesimal current source is first derived. Results are presented for a wide range of plasma parameters. Reasonable answers are obtained for all cases except the overdense hyperbolic case. A discussion of numerical stability is included that applies not only to this problem but also to other applications of the method of moments.
Scaling Symmetries in Elastic-Plastic Dynamic Cavity Expansion Equations Using the Isovector Method
Albright, Eric Jason; Ramsey, Scott D.; Schmidt, Joseph H.; ...
2017-09-16
Cavity-expansion approximations are widely-used in the study of penetration mechanics and indentation phenomena. We apply the isovector method to a well-established model in the literature for elastic-plastic cavity-expansion to systematically demonstrate the existence of Lie symmetries corresponding to scale-invariant solutions. Here we use the symmetries obtained from the equations of motion to determine compatible auxiliary conditions describing the cavity wall trajectory and the elastic-plastic material interface. The admissible conditions are then compared with specific similarity solutions in the literature.
Pose estimation of industrial objects towards robot operation
NASA Astrophysics Data System (ADS)
Niu, Jie; Zhou, Fuqiang; Tan, Haishu; Cao, Yu
2017-10-01
With the advantages of wide range, non-contact operation, and high flexibility, visual estimation of target pose has been widely applied in modern industry, robot guidance, and other engineering practice. However, owing to complicated industrial environments, outside interference, a lack of object characteristics, camera restrictions, and other limitations, visual pose estimation still faces many challenges. Focusing on these problems, a pose estimation method for industrial objects is developed based on 3D models of the targets. By matching the extracted shape characteristics of objects against a priori 3D model database of targets, the method recognizes the target; the pose of the object is then determined from a monocular vision measurement model. The experimental results show that this method can estimate the position of rigid objects from poor image information, and it provides a guiding basis for the operation of industrial robots.
Gog, Julia R; Lever, Andrew M L; Skittrall, Jordan P
2018-01-01
We present a fast, robust and parsimonious approach to detecting signals in an ordered sequence of numbers. Our motivation is in seeking a suitable method to take a sequence of scores corresponding to properties of positions in virus genomes, and find outlying regions of low scores. Suitable statistical methods without using complex models or making many assumptions are surprisingly lacking. We resolve this by developing a method that detects regions of low score within sequences of real numbers. The method makes no assumptions a priori about the length of such a region; it gives the explicit location of the region and scores it statistically. It does not use detailed mechanistic models so the method is fast and will be useful in a wide range of applications. We present our approach in detail, and test it on simulated sequences. We show that it is robust to a wide range of signal morphologies, and that it is able to capture multiple signals in the same sequence. Finally we apply it to viral genomic data to identify regions of evolutionary conservation within influenza and rotavirus.
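A simplified stand-in for the region-finding step described above (not the authors' exact statistic): centre the scores on their mean and locate the contiguous stretch with the most negative sum, which assumes nothing a priori about the region's length:

```python
# Minimum-sum contiguous region of mean-centred scores, via the
# minimising variant of Kadane's algorithm. Illustrative sketch only;
# the paper also scores the detected region statistically.
def lowest_region(scores):
    mean = sum(scores) / len(scores)
    centred = [s - mean for s in scores]
    best_sum, best_span = 0.0, None
    cur_sum, cur_start = 0.0, 0
    for i, c in enumerate(centred):
        if cur_sum > 0:          # restart when the running sum turns positive
            cur_sum, cur_start = 0.0, i
        cur_sum += c
        if cur_sum < best_sum:
            best_sum, best_span = cur_sum, (cur_start, i)
    return best_span, best_sum

scores = [5, 6, 5, 1, 0, 1, 6, 5, 6, 5]
span, total = lowest_region(scores)
print(span)  # (3, 5): the dip of low scores
```

A single pass over the sequence suffices, which matches the abstract's emphasis on speed; multiple signals would be found by masking a detected region and repeating.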
The Impact of Normalization Methods on RNA-Seq Data Analysis
Zyprych-Walczak, J.; Szabelska, A.; Handschuh, L.; Górczak, K.; Klamecka, K.; Figlerowicz, M.; Siatkowski, I.
2015-01-01
High-throughput sequencing technologies, such as the Illumina HiSeq, are powerful new tools for investigating a wide range of biological and medical problems. The massive, complex data sets produced by the sequencers create a need for statistical and computational methods that can tackle the analysis and management of the data. Data normalization is one of the most crucial steps of data processing, and it must be carefully considered as it has a profound effect on the results of the analysis. In this work, we focus on a comprehensive comparison of five normalization methods related to sequencing depth, widely used for transcriptome sequencing (RNA-seq) data, and their impact on the results of gene expression analysis. Based on this study, we suggest a universal workflow that can be applied to select the optimal normalization procedure for any particular data set. The workflow includes calculation of bias and variance values for the control genes, sensitivity and specificity of the methods, and classification errors, as well as generation of diagnostic plots. Combining this information facilitates the selection of the most appropriate normalization method for the studied data sets and determines which methods can be used interchangeably. PMID:26176014
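As an illustration of one sequencing-depth normalization of the kind compared in such studies, here is a median-of-ratios (DESeq-style) size-factor sketch on a toy count matrix; the toy counts are invented, and the abstract's five specific methods are not reproduced:

```python
import numpy as np

# Toy count matrix (genes x samples). Each sample's size factor is the
# median ratio of its counts to the per-gene geometric mean across
# samples; dividing by the size factors equalizes sequencing depth.
counts = np.array([
    [100, 200],
    [ 50, 100],
    [ 30,  60],
    [ 20,  40],
], dtype=float)

log_geo_mean = np.mean(np.log(counts), axis=1)        # per-gene geometric mean (log)
log_ratios = np.log(counts) - log_geo_mean[:, None]   # gene x sample log-ratios
size_factors = np.exp(np.median(log_ratios, axis=0))  # per-sample size factors
normalized = counts / size_factors

print(size_factors)
```

In this toy case sample 2 is exactly twice as deep as sample 1, so the size factors differ by a factor of 2 and the normalized columns coincide.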
West, T D; Balas, E A; West, D A
1996-08-01
To obtain the cost data needed to improve managed care decisions and negotiate profitable capitation contracts, most healthcare provider organizations use one of three costing methods: the ratio-of-costs-to-charges method, the relative value unit method, or the activity-based costing method. Although the ratio-of-costs-to-charges method is used by a majority of provider organizations, a case study that applied the three methods in a renal dialysis clinic found that activity-based costing provided the most accurate cost data. By using this costing method, healthcare financial managers can obtain the data needed to make optimal decisions regarding resource allocation and cost containment, thus assuring the long-term financial viability of their organizations.
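The activity-based costing idea can be sketched with hypothetical numbers: overhead is assigned to activities, each with its own cost driver, instead of applying one blanket cost-to-charge ratio. All cost pools, drivers, and volumes below are invented for illustration:

```python
# Hypothetical annual figures for a dialysis clinic.
activities = {
    # activity: (cost pool in $, annual driver volume, driver unit)
    "machine setup":   (60000,  3000,  "setups"),
    "nursing care":    (240000, 12000, "treatment hours"),
    "water treatment": (30000,  6000,  "sessions"),
}

# Cost per driver unit for each activity.
rates = {a: pool / volume for a, (pool, volume, _) in activities.items()}

# Resources consumed by one treatment session (hypothetical).
usage = {"machine setup": 1, "nursing care": 3.5, "water treatment": 1}
session_cost = sum(rates[a] * usage[a] for a in usage)
print(round(session_cost, 2))  # per-session cost under activity-based costing
```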
NASA Astrophysics Data System (ADS)
Gutschwager, Berndt; Hollandt, Jörg
2017-01-01
We present a novel method for nonuniformity correction (NUC) of infrared cameras and focal plane arrays (FPAs) over a wide optical spectral range, based on reading radiance temperatures and applying a radiation source with an unknown and spatially nonhomogeneous radiance temperature distribution. The benefit of this novel method is that it works with the display and calculation of radiance temperatures, it can be applied to radiation sources of arbitrary spatial radiance temperature distribution, and it requires only sufficient temporal stability of that distribution during the measurement process. An earlier method, by contrast, calculated the correction from monitored radiance values. Both methods are based on recording several (at least three) images of a radiation source and purposefully shifting the subsequent images row- and line-wise relative to the first, primary image. The mathematical procedure is explained in detail. Its numerical verification with a source of predefined nonhomogeneous radiance temperature distribution and a thermal imager of predefined nonuniform FPA responsivity is presented.
Seo, Su Hyun; Kim, Ki Han; Kim, Min Chan; Choi, Hong Jo; Jung, Ghap Joong
2012-06-01
The mechanical stapler is regarded as a good alternative to the hand-sewing technique in gastric reconstruction. The circular stapling method has been widely applied in gastrectomy (open or laparoscopic) for gastric cancer. We illustrated and compared the hand-sutured method and the circular stapling method for Billroth-II reconstruction in patients who underwent laparoscopy-assisted distal gastrectomy for gastric cancer. Between April 2009 and May 2011, 60 patients who underwent laparoscopy-assisted distal gastrectomy with Billroth-II reconstruction were enrolled. Hand-sutured Billroth-II was performed in 40 patients (manual group) and circular stapler Billroth-II in 20 patients (stapler group). Clinicopathological features and post-operative outcomes were evaluated and compared between the two groups. No significant differences were observed in clinicopathologic parameters or post-operative outcomes, except for operation times: operation and anastomosis times were significantly shorter in the stapler group (P=0.004 and P<0.001). Compared to the hand-sutured method, the circular stapling method can be applied safely and more efficiently when performing Billroth-II anastomosis after laparoscopy-assisted distal gastrectomy in patients with gastric cancer.
The floral morphospace – a modern comparative approach to study angiosperm evolution
Chartier, Marion; Jabbour, Florian; Gerber, Sylvain; Mitteroecker, Philipp; Sauquet, Hervé; von Balthazar, Maria; Staedler, Yannick; Crane, Peter R.; Schönenberger, Jürg
2017-01-01
Morphospaces are mathematical representations used for studying the evolution of morphological diversity and for evaluating evolved shapes among those theoretically possible. Although widely used in zoology, they have, with few exceptions, been disregarded in plant science, in particular in the study of broad-scale patterns of floral structure and evolution. Here we provide basic information on the morphospace approach, review earlier morphospace applications in plant science, and, as a practical example, construct and analyze a floral morphospace. Morphospaces are usually visualized with the help of ordination methods such as principal component analysis (PCA) or nonmetric multidimensional scaling (NMDS). The results of these analyses are then coupled with disparity indices that describe the spread of taxa in the space. We discuss these methods and apply modern statistical tools to the first and only angiosperm-wide floral morphospace, published by Stebbins in 1951. Despite the incompleteness of Stebbins' original dataset, our analyses highlight major, angiosperm-wide trends in the diversity of flower morphology and thereby demonstrate the power of this previously neglected approach in plant science. PMID:25539005
Palatoplasty: suturing the mucoperiosteal flaps to the hard palate through hole.
Hwang, Kun; Lee, Ji Hun; Kim, Yu Jin; Le, Se Il
2009-05-01
We satisfactorily repaired a wide cleft palate using a method of V-Y pushback and anchoring of the oral mucoperiosteal flap onto the bony ridge of the cleft. An 8-year-old Vietnamese girl had a wide incomplete bilateral posterior cleft palate associated with congenital cardiac malformations. The gap of the posterior cleft was 2.5 cm, exceeding the total width of the palatal shelves. We applied V-Y pushback and used a vomer flap to close the wide cleft palate. The posterior two thirds of the nasal mucosae from the cleft margins were sutured to the vomer flap; the nasal side of the anterior one third of the bony cleft was left uncovered. The elevated bilateral mucoperiosteal flaps were brought together at the midline and sutured to the anterior triangular flap in V-Y pushback fashion. Four holes were drilled 5 mm lateral to each bony cleft margin, and the lateral sides of the mucoperiosteal flaps were fixed to the palatal bone with 3-0 Vicryl through the holes. This method reduces flap tension, which frequently causes oronasal fistula, and also improves viability.
Clerici, Nicola; Bodini, Antonio; Ferrarini, Alessandro
2004-10-01
In order to achieve improved sustainability, local authorities need to use tools that adequately describe and synthesize environmental information. This article illustrates a methodological approach that organizes a wide suite of environmental indicators into a few aggregated indices, making use of correlation, principal component analysis, and fuzzy sets. Furthermore, a weighting system, which includes stakeholders' priorities and ambitions, is applied. As a case study, the described methodology is applied to the Reggio Emilia Province in Italy, by considering environmental information from 45 municipalities. Principal component analysis is used to condense an initial set of 19 indicators into 6 fundamental dimensions that highlight patterns of environmental conditions at the provincial scale. These dimensions are further aggregated into two indices of environmental performance through fuzzy sets. The simple form of these indices makes them particularly suitable for public communication, as they condense a wide set of heterogeneous indicators. The main outcomes of the analysis and the potential applications of the method are discussed.
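The indicator-condensation step can be sketched with a standardized table and an SVD-based principal component analysis. The data below are synthetic (45 rows echo the number of municipalities, but the values and the 4-indicator layout are invented, not the Reggio Emilia set):

```python
import numpy as np

# Synthetic indicator table: 45 "municipalities" x 4 indicators, where
# the 4 indicators are noisy copies of 2 latent dimensions, so PCA
# should condense them onto 2 components.
rng = np.random.default_rng(0)
base = rng.normal(size=(45, 2))                               # latent dimensions
indicators = np.hstack([base, base + 0.01 * rng.normal(size=(45, 2))])

X = (indicators - indicators.mean(0)) / indicators.std(0)     # standardize
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)                               # variance shares
print(explained.round(3))  # first two components carry nearly all variance
```

The subsequent fuzzy-set aggregation and stakeholder weighting from the article are not reproduced here.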
A Pruning Neural Network Model in Credit Classification Analysis
Tang, Yajiao; Ji, Junkai; Dai, Hongwei; Yu, Yang; Todo, Yuki
2018-01-01
Nowadays, credit classification models are widely applied because they help financial decision-makers handle credit classification issues. Among them, artificial neural networks (ANNs) have been widely accepted as convincing methods in the credit industry. In this paper, we propose a pruning neural network (PNN) and apply it to the credit classification problem using the well-known Australian and Japanese credit datasets. The model is inspired by the synaptic nonlinearity of a dendritic tree in a biological neuron model and is trained by an error back-propagation algorithm. The model is capable of realizing a neuronal pruning function by removing superfluous synapses and useless dendrites, forming a tidy dendritic morphology at the end of learning. Furthermore, we use logic circuits (LCs) to simulate the dendritic structures successfully, which allows the PNN to be implemented effectively in hardware. The statistical results of our experiments verify that the PNN obtains superior performance in comparison with other classical algorithms in terms of accuracy and computational efficiency. PMID:29606961
NASA Astrophysics Data System (ADS)
Plestenjak, Bor; Gheorghiu, Călin I.; Hochstenbach, Michiel E.
2015-10-01
In numerous science and engineering applications a partial differential equation has to be solved on some fairly regular domain that allows the use of the method of separation of variables. In several orthogonal coordinate systems separation of variables applied to the Helmholtz, Laplace, or Schrödinger equation leads to a multiparameter eigenvalue problem (MEP); important cases include Mathieu's system, Lamé's system, and a system of spheroidal wave functions. Although multiparameter approaches are exploited occasionally to solve such equations numerically, MEPs remain less well known, and the variety of available numerical methods is not wide. The classical approach of discretizing the equations using standard finite differences leads to algebraic MEPs with large matrices, which are difficult to solve efficiently. The aim of this paper is to change this perspective. We show that by combining spectral collocation methods and new efficient numerical methods for algebraic MEPs it is possible to solve such problems both very efficiently and accurately. We improve on several previous results available in the literature, and also present a MATLAB toolbox for solving a wide range of problems.
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-04-10
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks.
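The image-conversion step described above can be sketched with toy speed records. The averaging rule (mean speed per time interval and segment) is an assumption for illustration; the paper's exact matrix construction may differ:

```python
import numpy as np

# Average point speed records into a time-space matrix: rows are time
# intervals, columns are road segments. This matrix is the "image"
# that would be fed to the CNN.
records = [
    # (time_s, segment_index, speed_km_h) -- toy data
    (10, 0, 60.0), (20, 0, 50.0), (70, 0, 30.0),
    (15, 1, 40.0), (80, 1, 45.0), (90, 1, 35.0),
]
n_segments, interval_s, horizon_s = 2, 60, 120

n_rows = horizon_s // interval_s
sums = np.zeros((n_rows, n_segments))
hits = np.zeros((n_rows, n_segments))
for t, seg, v in records:
    sums[t // interval_s, seg] += v
    hits[t // interval_s, seg] += 1
# Mean speed per cell; cells with no records stay zero.
matrix = np.divide(sums, hits, out=np.zeros_like(sums), where=hits > 0)
print(matrix)
```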
Ma, Xiaolei; Dai, Zhuang; He, Zhengbing; Ma, Jihui; Wang, Yong; Wang, Yunpeng
2017-01-01
This paper proposes a convolutional neural network (CNN)-based method that learns traffic as images and predicts large-scale, network-wide traffic speed with a high accuracy. Spatiotemporal traffic dynamics are converted to images describing the time and space relations of traffic flow via a two-dimensional time-space matrix. A CNN is applied to the image following two consecutive steps: abstract traffic feature extraction and network-wide traffic speed prediction. The effectiveness of the proposed method is evaluated by taking two real-world transportation networks, the second ring road and north-east transportation network in Beijing, as examples, and comparing the method with four prevailing algorithms, namely, ordinary least squares, k-nearest neighbors, artificial neural network, and random forest, and three deep learning architectures, namely, stacked autoencoder, recurrent neural network, and long-short-term memory network. The results show that the proposed method outperforms other algorithms by an average accuracy improvement of 42.91% within an acceptable execution time. The CNN can train the model in a reasonable time and, thus, is suitable for large-scale transportation networks. PMID:28394270
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement of the widely used envelope correlation method, and apply it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting data components by the inverse of their error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. The method is also capable of multiple source detection, because when several events occur almost simultaneously they appear as local maxima of the likelihood. The average of the weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is the position of a deep tectonic tremor. The optimization has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each candidate location as an initial value, we apply a gradient method to determine the horizontal and vertical components of the hypocenter. Sometimes several source locations are determined within a 5-minute time window. We estimate that the resolution, defined as the distance at which sources can be detected separately by the location method, is about 100 km; the validity of this estimate is confirmed by a numerical test using synthetic waveforms. Applied to continuous seismograms in western Japan spanning over 10 years, the new method detected 27% more tremors than the previous method, owing to multiple detection and the improved accuracy of the weighting scheme.
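A one-dimensional toy version of the grid-search step can be sketched as follows, using a simple envelope product at the predicted differential travel time as a stand-in for the ACC objective. The geometry, velocity, and Gaussian envelopes are synthetic, and the inverse-variance weighting and gradient refinement are omitted:

```python
import numpy as np

fs, v = 100, 4.0                      # samples per second, velocity in km/s
stations = [0.0, 80.0]                # station positions along a line (km)
true_x = 48.0                         # true source position (km)
t = np.arange(0, 60, 1 / fs)

def envelope(x_src, x_sta):
    """Synthetic tremor envelope: a Gaussian pulse delayed by travel time."""
    arrival = 10.0 + abs(x_src - x_sta) / v
    return np.exp(-0.5 * ((t - arrival) / 0.5) ** 2)

env = [envelope(true_x, s) for s in stations]

def acc(x_cand):
    """Envelope product after shifting by the predicted differential delay."""
    lag = int(round((abs(x_cand - stations[1]) - abs(x_cand - stations[0]))
                    / v * fs))
    shifted = np.roll(env[1], -lag)   # align station 1 onto station 0
    return float(np.dot(env[0], shifted))

grid = np.arange(0.0, 80.1, 4.0)      # candidate source positions
best = grid[int(np.argmax([acc(x) for x in grid]))]
print(best)  # recovers the source position on the grid
```

The real method averages over many station pairs, weights by error variance, and keeps all local maxima as candidates rather than a single argmax.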
Gaussian Multiscale Aggregation Applied to Segmentation in Hand Biometrics
de Santos Sierra, Alberto; Ávila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador
2011-01-01
This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out by using a publicly available synthetic database with 408,000 hand images in different backgrounds, comparing the performance in terms of accuracy and computational cost to two competitive segmentation methods existing in literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results highlight that the proposed method outperforms current competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage. PMID:22247658
Gaussian multiscale aggregation applied to segmentation in hand biometrics.
de Santos Sierra, Alberto; Avila, Carmen Sánchez; Casanova, Javier Guerra; del Pozo, Gonzalo Bailador
2011-01-01
This paper presents an image segmentation algorithm based on Gaussian multiscale aggregation oriented to hand biometric applications. The method is able to isolate the hand from a wide variety of background textures such as carpets, fabric, glass, grass, soil or stones. The evaluation was carried out using a publicly available synthetic database of 408,000 hand images on different backgrounds, comparing performance in terms of accuracy and computational cost to two competitive segmentation methods from the literature, namely Lossy Data Compression (LDC) and Normalized Cuts (NCuts). The results show that the proposed method outperforms these competitive segmentation methods with regard to computational cost, time performance, accuracy and memory usage.
Computation of Pressurized Gas Bearings Using CE/SE Method
NASA Technical Reports Server (NTRS)
Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.
2003-01-01
The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thus validating the CE/SE method for solving such problems.
Frequency domain phase-shifted confocal microscopy (FDPCM) with array detection
NASA Astrophysics Data System (ADS)
Ge, Baoliang; Huang, Yujia; Fang, Yue; Kuang, Cuifang; Xiu, Peng; Liu, Xu
2017-09-01
We propose a novel method to reconstruct images acquired by array-detection confocal microscopy without prior knowledge of the detector distribution. The proposed frequency domain phase-shifted confocal microscopy (FDPCM) shifts the image from each detection channel to its corresponding position by substituting the phase information in the Fourier domain. Theoretical analysis shows that our method can approach nearly twice the resolution of wide-field microscopy. Simulation and experimental results verify the applicability and effectiveness of the method. Compared with Airyscan, our method has the advantage of simplicity and can conveniently be applied to array detectors with different structures, giving FDPCM great potential for biomedical imaging applications.
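The Fourier-domain shifting at the heart of such schemes can be sketched with the shift theorem: translating an image is equivalent to multiplying its spectrum by a linear phase ramp. This is a minimal sketch; the reassignment vector and test image are invented, and the actual FDPCM phase-substitution step is not reproduced:

```python
import numpy as np

def fourier_shift(img, dy, dx):
    """Shift an image by (dy, dx) pixels using the Fourier shift theorem:
    a real-space shift is a linear phase ramp in the frequency domain."""
    fy = np.fft.fftfreq(img.shape[0])[:, None]
    fx = np.fft.fftfreq(img.shape[1])[None, :]
    ramp = np.exp(-2j * np.pi * (fy * dy + fx * dx))
    return np.fft.ifft2(np.fft.fft2(img) * ramp).real

# A point image recorded by an off-axis detector channel is shifted back by a
# (hypothetical) reassignment vector before the channels are summed.
img = np.zeros((16, 16))
img[4, 6] = 1.0
shifted = fourier_shift(img, 2, 3)
```

For integer shifts this reproduces the translated image exactly; sub-pixel shifts are also possible, which is one reason frequency-domain reassignment is attractive for array detectors.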
Numerical Simulation of Bulging Deformation for Wide-Thick Slab Under Uneven Cooling Conditions
NASA Astrophysics Data System (ADS)
Wu, Chenhui; Ji, Cheng; Zhu, Miaoyong
2018-06-01
In the present work, the bulging deformation of a wide-thick slab under uneven cooling conditions was studied using the finite element method. The non-uniform solidification was first calculated using a 2D heat transfer model. The thermal material properties were derived from a microsegregation model, and the water flux distribution was measured and used to calculate the cooling boundary conditions. Based on the solidification results, a 3D bulging model was established. The 2D heat transfer model was verified against the measured shell thickness and slab surface temperature, and the 3D bulging model was verified against maximum bulging deflections calculated from established formulas. The bulging deformation behavior of the wide-thick slab under uneven cooling conditions was then determined, and the effects of uneven solidification, casting speed, and roll misalignment were investigated.
Numerical Simulation of Bulging Deformation for Wide-Thick Slab Under Uneven Cooling Conditions
NASA Astrophysics Data System (ADS)
Wu, Chenhui; Ji, Cheng; Zhu, Miaoyong
2018-02-01
In the present work, the bulging deformation of a wide-thick slab under uneven cooling conditions was studied using the finite element method. The non-uniform solidification was first calculated using a 2D heat transfer model. The thermal material properties were derived from a microsegregation model, and the water flux distribution was measured and used to calculate the cooling boundary conditions. Based on the solidification results, a 3D bulging model was established. The 2D heat transfer model was verified against the measured shell thickness and slab surface temperature, and the 3D bulging model was verified against maximum bulging deflections calculated from established formulas. The bulging deformation behavior of the wide-thick slab under uneven cooling conditions was then determined, and the effects of uneven solidification, casting speed, and roll misalignment were investigated.
Discovering time-lagged rules from microarray data using gene profile classifiers
2011-01-01
Background Gene regulatory networks have an essential role in every process of life. In this regard, genome-wide time series data are becoming increasingly available, providing the opportunity to discover the time-delayed gene regulatory networks that govern the majority of these molecular processes. Results This paper aims at reconstructing gene regulatory networks from multiple genome-wide microarray time series datasets. To this end, a new model-free algorithm called GRNCOP2 (Gene Regulatory Network inference by Combinatorial OPtimization 2), a significant evolution of the GRNCOP algorithm, was developed using combinatorial optimization of gene profile classifiers. The method is capable of inferring potential time-delayed relationships, with any span of time, between genes from the time series datasets given as input. The proposed algorithm was applied to time series data composed of twenty yeast genes that are highly relevant for the cell-cycle study, and the results were compared against several related approaches. The outcomes show that GRNCOP2 outperforms the contrasted methods in terms of the proposed metrics, and that the results are consistent with previous biological knowledge. Additionally, a genome-wide study on multiple publicly available time series data was performed. In this case, the experiments demonstrated the soundness and scalability of the new method, which inferred highly related, statistically significant gene associations. Conclusions A novel method for inferring time-delayed gene regulatory networks from genome-wide time series datasets is proposed in this paper. The method was carefully validated with several publicly available data sets. The results demonstrate that the algorithm constitutes a usable model-free approach capable of predicting meaningful relationships between genes, revealing the time trends of gene regulation. PMID:21524308
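The core idea of scoring time-delayed relationships between gene profiles can be illustrated with a simple lagged-correlation sketch. This is not the GRNCOP2 classifier-optimization algorithm; the expression series, function names, and lag range are invented for illustration:

```python
import math

def pearson(a, b):
    """Plain Pearson correlation of two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

def best_lag(regulator, target, max_lag=5):
    """Score each lag d by correlating regulator[t] with target[t + d];
    the lag with the strongest absolute correlation is the candidate delay."""
    scores = {d: pearson(regulator[:-d], target[d:])
              for d in range(1, max_lag + 1)}
    return max(scores, key=lambda d: abs(scores[d])), scores

# Synthetic profiles: the target copies the regulator two time points later.
regulator = [0.12, 0.85, 0.40, 0.66, 0.05, 0.93, 0.31, 0.58, 0.22, 0.74]
target = [0.0, 0.0] + regulator[:-2]
lag, scores = best_lag(regulator, target)
```

GRNCOP2 instead optimizes rule-based gene profile classifiers combinatorially across datasets, but the notion of sliding one profile against another to expose a time delay is the same.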
NASA Astrophysics Data System (ADS)
Huang, Yishuo; Chiang, Chih-Hung; Hsu, Keng-Tsang
2018-03-01
Defects on the facades of a building have a profound impact on its service life, so identifying them is a crucial issue. Destructive and non-destructive methods are commonly employed; destructive methods cause permanent damage to the examined objects, whereas non-destructive testing (NDT) methods have been widely applied to detect defects in the exterior layers of a building. However, NDT methods alone cannot provide efficient and reliable information for identifying defects because of the large areas to be examined. Infrared thermography is often applied to quantitative energy performance measurements of building envelopes. Defects in the exterior layer of a building may be caused by several factors: ventilation losses, conduction losses, thermal bridging, defective services, moisture condensation, moisture ingress, and structural defects. Analyzing the collected thermal images can be quite difficult when the spatial variations of surface temperature are small. In this paper, the authors employ image segmentation to cluster pixels with similar surface temperatures so that the processed thermal images are composed of a limited number of groups, each with a homogeneous surface temperature distribution. In doing so, the boundaries of the segmented regions can be identified and extracted. A terrestrial laser scanner (TLS) is widely used to collect the point clouds of a building, and those point clouds are applied to reconstruct the 3D model of the building. A mapping model is constructed so that the segmented thermal images can be projected onto the 2D image of the specified 3D building model. In this paper, the administrative building on the Chaoyang University campus is used as an example. The experimental results not only provide defect information but also give the defects' corresponding spatial locations in the 3D model.
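The pixel-clustering step described above can be sketched with a minimal 1-D k-means on surface temperatures. The thermal image, the warm "defect" patch, and the cluster count are invented for illustration; the paper's actual segmentation algorithm is not specified at this level of detail:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Minimal 1-D k-means on pixel temperatures: pixels are grouped so each
    segment has a homogeneous surface-temperature distribution."""
    centers = np.quantile(values, np.linspace(0, 1, k))  # deterministic init
    for _ in range(iters):
        # Assign each pixel to its nearest temperature centre.
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        # Move each centre to the mean temperature of its members.
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic thermal image: a warm anomaly patch on a cool facade.
thermal = np.full((8, 8), 20.0)
thermal[2:4, 2:5] = 27.0            # hypothetical moisture/void anomaly
labels, centers = kmeans_1d(thermal.ravel(), k=2)
```

Reshaping `labels` back to the image grid gives the segmented regions whose boundaries can then be extracted and projected onto the 3D model.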
Shen, Yu-Chu; Eggleston, Karen; Lau, Joseph; Schmid, Christopher H
2007-01-01
This study applies meta-analytic methods to conduct a quantitative review of the empirical literature on hospital ownership since 1990. We examine four financial outcomes across 40 studies: cost, revenue, profit margin, and efficiency. We find that variation in the magnitudes of ownership effects can be explained by a study's research focus and methodology. Studies using empirical methods that control for few confounding factors tend to find larger differences between for-profit and not-for-profit hospitals than studies that control for a wider range of confounding factors. Functional form and sample size also matter. Failure to apply a log transformation to highly skewed expenditure data yields misleadingly large estimated differences between for-profits and not-for-profits. Studies with fewer than 200 observations also produce larger point estimates and wider confidence intervals.
Analysis of decay chains of superheavy nuclei produced in the 249Bk+48Ca and 243Am+48Ca reactions
NASA Astrophysics Data System (ADS)
Zlokazov, V. B.; Utyonkov, V. K.
2017-07-01
The analysis of decay chains originating from the superheavy nuclei 293Ts and 289Mc is presented. The spectroscopic properties of nuclei identified in experiments on the 249Bk+48Ca and 243Am+48Ca reactions, studied at the gas-filled separators DGFRS, TASCA and BGS, are considered. We analyze the decay data using widely adopted statistical methods, applying them to the short decay chains of parent odd-Z nuclei. We find that the method of analyzing decay chains recently suggested by Forsberg et al. may lead to questionable conclusions when applied to the analysis of radioactive decays. Our discussion demonstrates reasonable congruence of the α-particle energies and decay times of nuclei assigned to the isotopes 289Mc, 285Nh and 281Rg observed in both reactions.
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-02-18
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite their high accuracy, they may fail to yield a solution because of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide through a series of rotations and translations, and the transformation matrix can then be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method achieves the same high accuracy while being more convenient and flexible to operate. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration.
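For context, the point-cloud "equation solving" baseline that the geometric method is contrasted with can be sketched with the standard SVD (Kabsch) estimate of a rigid transform. This is not the paper's characteristic-line method; the points, rotation, and translation below are invented test data:

```python
import numpy as np

def rigid_transform(P, Q):
    """Classic least-squares estimate of R, t minimising ||R p_i + t - q_i||
    from corresponding point clouds P and Q (rows are 3-D points)."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centred sets
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # keep a proper rotation
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Known transform: 30-degree rotation about z plus a translation.
theta = np.pi / 6
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
P = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
Q = P @ R_true.T + t_true
R, t = rigid_transform(P, Q)
```

The ill-conditioning the paper refers to arises when the point configuration is (nearly) degenerate, e.g. collinear points, which is the failure mode the geometric characteristic-line construction is designed to avoid.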
Anguera, M Teresa; Camerino, Oleguer; Castañer, Marta; Sánchez-Algarra, Pedro; Onwuegbuzie, Anthony J
2017-01-01
Mixed methods studies are being applied to an increasing diversity of fields. In this paper, we discuss the growing use, and enormous potential, of mixed methods research in the field of sport and physical activity. A second aim is to contribute to strengthening the characteristics of mixed methods research by showing how systematic observation offers rigor within a flexible framework that can be applied to a wide range of situations. Observational methodology is characterized by high scientific rigor and flexibility throughout its different stages and allows the objective study of spontaneous behavior in natural settings, with no external influence. Mixed methods researchers need to make bold yet thoughtful decisions regarding both substantive and procedural issues. We present three fundamental and complementary ideas to guide researchers in this respect: we show why studies of sport and physical activity that use a mixed methods research approach should be included in the field of mixed methods research, we highlight the numerous possibilities offered by observational methodology in this field through the transformation of descriptive data into quantifiable code matrices, and we discuss possible solutions for achieving true integration of qualitative and quantitative findings.
Decomposing the Apoptosis Pathway Into Biologically Interpretable Principal Components
Wang, Min; Kornblau, Steven M; Coombes, Kevin R
2018-01-01
Principal component analysis (PCA) is one of the most common techniques in the analysis of biological data sets, but applying PCA raises 2 challenges. First, one must determine the number of significant principal components (PCs). Second, because each PC is a linear combination of genes, it rarely has a biological interpretation. Existing methods to determine the number of PCs are either subjective or computationally extensive. We review several methods and describe a new R package, PCDimension, that implements additional methods, the most important being an algorithm that extends and automates a graphical Bayesian method. Using simulations, we compared the methods. Our newly automated procedure is competitive with the best methods when considering both accuracy and speed and is the most accurate when the number of objects is small compared with the number of attributes. We applied the method to a proteomics data set from patients with acute myeloid leukemia. Proteins in the apoptosis pathway could be explained using 6 PCs. By clustering the proteins in PC space, we were able to replace the PCs by 6 “biological components,” 3 of which could be immediately interpreted from the current literature. We expect this approach combining PCA with clustering to be widely applicable. PMID:29881252
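As a simple illustration of the first challenge, determining the number of significant PCs, the sketch below uses Horn's parallel analysis, a common permutation-based criterion. It is not the PCDimension package or its automated Bayesian method, and the synthetic data (2 latent components observed through 10 noisy variables) are invented:

```python
import numpy as np

def parallel_analysis(X, n_perm=30, seed=0):
    """Horn's parallel analysis: keep the leading PCs whose correlation-matrix
    eigenvalues exceed the average eigenvalues of column-permuted (noise) data."""
    rng = np.random.default_rng(seed)
    ev = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
    null = np.zeros_like(ev)
    for _ in range(n_perm):
        # Permuting each column independently destroys inter-variable structure.
        Xp = np.column_stack([rng.permutation(col) for col in X.T])
        null += np.linalg.eigvalsh(np.corrcoef(Xp, rowvar=False))[::-1]
    null /= n_perm
    return int(np.sum(ev > null))

# Synthetic data: 2 strong latent components plus small independent noise.
rng = np.random.default_rng(1)
scores = rng.normal(size=(200, 2)) * np.array([5.0, 3.0])
loadings = rng.normal(size=(2, 10))
X = scores @ loadings + 0.5 * rng.normal(size=(200, 10))
n_pcs = parallel_analysis(X)
```

Here the two signal eigenvalues clearly exceed the permutation null while the noise eigenvalues fall below it, recovering the true dimension.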
Liu, Bailing; Zhang, Fumin; Qu, Xinghua; Shi, Xiaojia
2016-01-01
Coordinate transformation plays an indispensable role in industrial measurements, including photogrammetry, geodesy, laser 3-D measurement and robotics. The widely applied methods of coordinate transformation are generally based on solving the equations of point clouds. Despite their high accuracy, they may fail to yield a solution because of ill-conditioned matrices. In this paper, a novel coordinate transformation method is proposed, based not on equation solving but on geometric transformation. We construct characteristic lines to represent the coordinate systems. According to the spatial geometric relations, the characteristic lines can be made to coincide through a series of rotations and translations, and the transformation matrix can then be obtained using matrix transformation theory. Experiments are designed to compare the proposed method with other methods. The results show that the proposed method achieves the same high accuracy while being more convenient and flexible to operate. A multi-sensor combined measurement system is also presented to improve the position accuracy of a robot through calibration of the robot kinematic parameters. Experimental verification shows that the position accuracy of the robot manipulator is improved by 45.8% with the proposed method and robot calibration. PMID:26901203
Comparative study on the welded structure fatigue strength assessment method
NASA Astrophysics Data System (ADS)
Hu, Tao
2018-04-01
Because welded structures are widely used in many industries, including pressure vessels, motorcycles, automobiles, aviation, shipbuilding, and large steel crane structures, the fatigue strength assessment of welded structures is particularly important. Of the four main assessment methods, two are most commonly used: the nominal stress method and the hot-spot stress method. This paper compares these two methods in terms of their principles and calculation procedures, analyzes their similarities, advantages, and disadvantages with reference to practical engineering problems as a reference for various industries, and offers an outlook on future methods for evaluating the fatigue strength and life of welded structures.
Surface Passivation in Empirical Tight Binding
NASA Astrophysics Data System (ADS)
He, Yu; Tan, Yaohua; Jiang, Zhengping; Povolotskyi, Michael; Klimeck, Gerhard; Kubis, Tillmann
2016-03-01
Empirical Tight Binding (TB) methods are widely used in atomistic device simulations. Existing TB approaches to passivating dangling bonds fall into two categories: 1) methods that explicitly include passivation atoms, which are limited to passivation with atoms and small molecules; and 2) methods that incorporate passivation implicitly, which do not distinguish between passivation atom types. This work introduces an implicit passivation method that is applicable to any passivation scenario given appropriate parameters. The method is applied to a Si quantum well and a Si ultra-thin-body transistor oxidized with SiO2 in several oxidation configurations. Comparison with ab initio results and experiments verifies the presented method. Oxidation configurations that severely hamper transistor performance are identified. It is also shown that the commonly used implicit H-atom passivation overestimates transistor performance.
Comprehensive comparative analysis of 5'-end RNA-sequencing methods.
Adiconis, Xian; Haber, Adam L; Simmons, Sean K; Levy Moonshine, Ami; Ji, Zhe; Busby, Michele A; Shi, Xi; Jacques, Justin; Lancaster, Madeline A; Pan, Jen Q; Regev, Aviv; Levin, Joshua Z
2018-06-04
Specialized RNA-seq methods are required to identify the 5' ends of transcripts, which are critical for studies of gene regulation, but these methods have not been systematically benchmarked. We directly compared six such methods, including the performance of five methods on a single human cellular RNA sample and a new spike-in RNA assay that helps circumvent challenges resulting from uncertainties in annotation and RNA processing. We found that the 'cap analysis of gene expression' (CAGE) method performed best for mRNA and that most of its unannotated peaks were supported by evidence from other genomic methods. We applied CAGE to eight brain-related samples and determined sample-specific transcription start site (TSS) usage, as well as a transcriptome-wide shift in TSS usage between fetal and adult brain.
NASA Astrophysics Data System (ADS)
Stuart, M. R.; Pinsky, M. L.
2016-02-01
The ability to use DNA to identify individuals and their offspring has begun to revolutionize marine ecology. However, genetic mark-recapture and parentage studies typically require large numbers of individuals and associated high genotyping costs. Here, we describe a rapid and relatively low-cost protocol for genotyping non-model organisms at thousands of Single Nucleotide Polymorphisms (SNPs) using massively parallel sequencing. We apply the approach to a population of yellowtail clownfish, Amphiprion clarkii, to detect genetic mark-recaptures and parent-offspring relationships. We test multiple bioinformatic approaches and describe how this method could be applied to a wide variety of marine organisms.
NASA Astrophysics Data System (ADS)
Tran, Van-Quyet; Wu, Yu-Ren
2017-12-01
For some specific purposes, a helical gear with a wide face width is used to mesh with two other gears simultaneously, as with the idle pinions in a vehicle differential. However, because of gear deformation, tooth edge contact and stress concentration can occur, and single lead-crowning is no longer sufficient to obtain an appropriate contact pattern position and improve the load distribution on the tooth surfaces. Therefore, a novel method is proposed in this paper to produce wide-face-width helical gears with dual lead-crowned, anti-twisted tooth surfaces by controlling the swivel angle and the rotation angle of the honing wheel, respectively, on an internal gear honing machine. Numerical examples are presented to illustrate and verify the merits of the proposed method.
Method for detection and imaging over a broad spectral range
Yefremenko, Volodymyr; Gordiyenko, Eduard; Pishko, Olga (legal representative); Novosad, Valentyn; Pishko, Vitalii (deceased)
2007-09-25
A method of controlling the coordinate sensitivity in a superconducting microbolometer employs localized light, heating or magnetic field effects to form normal or mixed-state regions on a superconducting film and to control their spatial location. Electron beam lithography and wet chemical etching were applied as pattern transfer processes in epitaxial Y-Ba-Cu-O films. Two different sensor designs were tested: (i) a 3 millimeter long, 40 micrometer wide stripe and (ii) a 1.25 millimeter long, 50 micrometer wide meandering structure. Scanning the laser beam along the stripe physically displaces the sensitive area and may therefore serve as a basis for imaging over a broad spectral range. Forming the superconducting film as a meandering structure provides the equivalent of a two-dimensional detector array. Advantages of this approach are the simplicity of detector fabrication and of the read-out process, which requires only two electrical terminals.
Multi-criteria decision making approaches for quality control of genome-wide association studies.
Malovini, Alberto; Rognoni, Carla; Puca, Annibale; Bellazzi, Riccardo
2009-03-01
Experimental errors in the genotyping phases of a Genome-Wide Association Study (GWAS) can lead to false positive findings and to spurious associations. An appropriate quality control phase could minimize the effects of this kind of errors. Several filtering criteria can be used to perform quality control. Currently, no formal methods have been proposed for taking into account at the same time these criteria and the experimenter's preferences. In this paper we propose two strategies for setting appropriate genotyping rate thresholds for GWAS quality control. These two approaches are based on the Multi-Criteria Decision Making theory. We have applied our method on a real dataset composed by 734 individuals affected by Arterial Hypertension (AH) and 486 nonagenarians without history of AH. The proposed strategies appear to deal with GWAS quality control in a sound way, as they lead to rationalize and make explicit the experimenter's choices thus providing more reproducible results.
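The quantity being thresholded here, the genotyping (call) rate, can be sketched as follows. The genotype matrix is invented, and the 0.9 threshold is a placeholder for whatever value the multi-criteria decision making strategies would select:

```python
# Genotype matrix: rows = individuals, columns = SNPs; None marks a failed call.
genotypes = [
    ["AA", "AG", None, "GG"],
    ["AA", None, None, "GG"],
    ["AG", "AG", None, "GT"],
    ["AA", "AG", "TT", "GG"],
]

def snp_call_rates(g):
    """Fraction of successfully genotyped individuals for each SNP column."""
    n = len(g)
    return [sum(row[j] is not None for row in g) / n for j in range(len(g[0]))]

def qc_filter(g, threshold=0.9):
    """Keep only SNPs whose genotyping rate meets the threshold; choosing
    the threshold itself is what the paper's MCDM strategies formalise."""
    rates = snp_call_rates(g)
    keep = [j for j, r in enumerate(rates) if r >= threshold]
    return keep, rates

keep, rates = qc_filter(genotypes, threshold=0.9)
```

A stricter threshold removes more potentially erroneous SNPs at the cost of genomic coverage; the MCDM formulation makes that trade-off, and the experimenter's preferences, explicit.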
Spatial reconstruction of single-cell gene expression data.
Satija, Rahul; Farrell, Jeffrey A; Gennert, David; Schier, Alexander F; Regev, Aviv
2015-05-01
Spatial localization is a key determinant of cellular fate and behavior, but methods for spatially resolved, transcriptome-wide gene expression profiling across complex tissues are lacking. RNA staining methods assay only a small number of transcripts, whereas single-cell RNA-seq, which measures global gene expression, separates cells from their native spatial context. Here we present Seurat, a computational strategy to infer cellular localization by integrating single-cell RNA-seq data with in situ RNA patterns. We applied Seurat to spatially map 851 single cells from dissociated zebrafish (Danio rerio) embryos and generated a transcriptome-wide map of spatial patterning. We confirmed Seurat's accuracy using several experimental approaches, then used the strategy to identify a set of archetypal expression patterns and spatial markers. Seurat correctly localizes rare subpopulations, accurately mapping both spatially restricted and scattered groups. Seurat will be applicable to mapping cellular localization within complex patterned tissues in diverse systems.
Classifying Degraded Modern Polymeric Museum Artefacts by Their Smell.
Curran, Katherine; Underhill, Mark; Grau-Bové, Josep; Fearn, Tom; Gibson, Lorraine T; Strlič, Matija
2018-02-05
The use of VOC analysis to diagnose degradation in modern polymeric museum artefacts is reported. Volatile organic compound (VOC) analysis is a successful method for diagnosing medical conditions but to date has found little application in museums. Modern polymers are increasingly found in museum collections but pose serious conservation difficulties owing to unstable and widely varying formulations. Solid-phase microextraction gas chromatography/mass spectrometry and linear discriminant analysis were used to classify samples according to the length of time they had been artificially degraded. Accuracies in classification of 50-83 % were obtained after validation with separate test sets. The method was applied to three artefacts from collections at Tate to detect evidence of degradation. This approach could be used for any material in heritage collections and more widely in the field of polymer degradation. © 2018 The Authors. Published by Wiley-VCH Verlag GmbH & Co. KGaA.
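A minimal stand-in for the classification step: under equal, spherical class covariances, linear discriminant analysis reduces to a nearest-centroid rule, sketched below on invented VOC abundance profiles (the compounds, values, and two-class setup are not from the study):

```python
import numpy as np

# Toy VOC abundance profiles (rows = samples, columns = hypothetical compounds)
# for two degradation classes; all values are invented for illustration.
train_X = np.array([[5.0, 0.2, 1.0],
                    [4.8, 0.3, 1.1],
                    [0.9, 3.8, 0.4],
                    [1.1, 4.1, 0.5]])
train_y = np.array([0, 0, 1, 1])      # 0 = undegraded, 1 = degraded

def fit_centroids(X, y):
    """Per-class mean VOC profile; with equal spherical covariance this
    nearest-centroid rule coincides with linear discriminant analysis."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def classify(x, centroids):
    """Assign a sample to the class with the nearest mean profile."""
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

centroids = fit_centroids(train_X, train_y)
# Separate test set, as in the paper's validation scheme.
test_X = np.array([[5.1, 0.25, 0.9], [1.0, 3.9, 0.6]])
pred = [classify(x, centroids) for x in test_X]
accuracy = np.mean([p == t for p, t in zip(pred, [0, 1])])
```

The reported 50-83 % accuracies come from full LDA on SPME-GC/MS feature vectors with held-out test sets; this sketch only conveys the train/validate structure of that pipeline.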
Vichi, Stefania; Cortés-Francisco, Nuria; Romero, Agustí; Caixach, Josep
2015-03-01
In the present paper, an electrospray ionization (ESI)-Orbitrap method is proposed for the direct chemical profiling of epicuticular wax (EW) from Olea europaea fruit. It constitutes a rapid and efficient tool suitable for a wide-ranging screening of a large number of samples. In a few minutes, the method provides a comprehensive characterization of total EW extracts, based on the molecular formula of their components. Accurate mass measurements are obtained by ultrahigh resolution mass spectrometry, and compositional restrictions are set on the basis of the information available from previous studies of olive EW. By alternating positive and negative ESI modes within the same analysis, complementary results are obtained and a wide range of chemical species is covered. This provides a detailed compositional overview that otherwise would only be available by applying multiple analytical techniques. Copyright © 2015 John Wiley & Sons, Ltd.
Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method
Zhang, Tingting; Kou, S. C.
2010-01-01
Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure. PMID:21258615
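The kernel-based intensity estimate at the heart of such inference can be sketched as follows. The bandwidth is fixed here, whereas the paper's contribution includes a regression method for selecting it; the photon-like arrival times are synthetic:

```python
import math

def kernel_intensity(event_times, t, bandwidth):
    """Gaussian-kernel estimate of a (doubly stochastic) Poisson process
    intensity at time t: a smoothed count of nearby arrivals per unit time."""
    h = bandwidth
    return sum(
        math.exp(-0.5 * ((t - s) / h) ** 2) / (h * math.sqrt(2 * math.pi))
        for s in event_times
    )

# Synthetic arrivals: a burst around t = 2, sparse activity around t = 8,
# mimicking intensity fluctuations driven by conformational dynamics.
arrivals = [1.8, 1.9, 2.0, 2.05, 2.1, 2.3, 7.6, 8.4]
lam_busy = kernel_intensity(arrivals, 2.0, bandwidth=0.5)
lam_quiet = kernel_intensity(arrivals, 8.0, bandwidth=0.5)
```

The estimated intensity is high during the burst and low in the quiet interval; tracking such fluctuations across time scales is what reveals the conformational dynamics in the photon data.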
Nonparametric Inference of Doubly Stochastic Poisson Process Data via the Kernel Method.
Zhang, Tingting; Kou, S C
2010-01-01
Doubly stochastic Poisson processes, also known as the Cox processes, frequently occur in various scientific fields. In this article, motivated primarily by analyzing Cox process data in biophysics, we propose a nonparametric kernel-based inference method. We conduct a detailed study, including an asymptotic analysis, of the proposed method, and provide guidelines for its practical use, introducing a fast and stable regression method for bandwidth selection. We apply our method to real photon arrival data from recent single-molecule biophysical experiments, investigating proteins' conformational dynamics. Our result shows that conformational fluctuation is widely present in protein systems, and that the fluctuation covers a broad range of time scales, highlighting the dynamic and complex nature of proteins' structure.
Hybrid Semiclassical Theory of Quantum Quenches in One-Dimensional Systems
NASA Astrophysics Data System (ADS)
Moca, Cǎtǎlin Paşcu; Kormos, Márton; Zaránd, Gergely
2017-09-01
We develop a hybrid semiclassical method to study the time evolution of one-dimensional quantum systems in and out of equilibrium. Our method handles internal degrees of freedom completely quantum mechanically by a modified time-evolving block decimation method while treating orbital quasiparticle motion classically. We can follow dynamics up to time scales well beyond the reach of standard numerical methods to observe the crossover between preequilibrated and locally phase equilibrated states. As an application, we investigate the quench dynamics and phase fluctuations of a pair of tunnel-coupled one-dimensional Bose condensates. We demonstrate the emergence of soliton-collision-induced phase propagation, soliton-entropy production, and multistep thermalization. Our method can be applied to a wide range of gapped one-dimensional systems.
Autoclave Sterilization of PEDOT:PSS Electrophysiology Devices.
Uguz, Ilke; Ganji, Mehran; Hama, Adel; Tanaka, Atsunori; Inal, Sahika; Youssef, Ahmed; Owens, Roisin M; Quilichini, Pascale P; Ghestem, Antoine; Bernard, Christophe; Dayeh, Shadi A; Malliaras, George G
2016-12-01
Autoclaving, the most widely available sterilization method, is applied to poly(3,4-ethylenedioxythiophene) doped with polystyrene sulfonate (PEDOT:PSS) electrophysiology devices. The process does not harm morphology or electrical properties, while it effectively kills E. coli intentionally cultured on the devices. This finding paves the way to widespread introduction of PEDOT:PSS electrophysiology devices to the clinic. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
ERIC Educational Resources Information Center
General Accounting Office, Washington, DC. Program Evaluation and Methodology Div.
This general program evaluation framework provides a wide range of criteria that can be applied in the evaluation of diverse federal progams. The framework was developed from a literature search on program evaluation methods and their use, the experiences of the United States Government Accounting Office (GAO), and consideration of the types of…
USDA-ARS?s Scientific Manuscript database
The “quick, easy, cheap, effective, rugged, and safe” (QuEChERS) approach to sample preparation is widely applied in pesticide residue analysis, but the use of magnesium sulfate for salting out in the method is not ideal for mass spectrometry. In this study we developed and evaluated three new diffe...
NASA Technical Reports Server (NTRS)
Lagow, R. J.; Dumitru, E. T.
1982-01-01
The direct fluorination method of converting carefully selected hydrocarbon substrates to fluorinated membranes was successfully applied to produce promising, novel membranes for electrochemical devices. A family of polymer blends was identified which permits wide latitude in the concentration of both crosslinks and carboxyl groups in hydrocarbon membranes. These membranes were successfully fluorinated and are potentially competitive with commercial membranes in performance, and potentially much cheaper in price.
The retrospective chart review: important methodological considerations.
Vassar, Matt; Holzmann, Matthew
2013-01-01
In this paper, we review and discuss ten common methodological mistakes found in retrospective chart reviews. The retrospective chart review is a widely applicable research methodology that can be used by healthcare disciplines as a means to direct subsequent prospective investigations. In many cases in this review, we have also provided suggestions or accessible resources that researchers can apply as a "best practices" guide when planning, conducting, or reviewing this investigative method.
Carlos A. Gonzalez-Benecke; Eric J. Jokela; Wendell P. Cropper; Rosvel Bracho; Daniel J. Leduc
2014-01-01
The forest simulation model, 3-PG, has been widely applied as a useful tool for predicting growth of forest species in many countries. The model has the capability to estimate the effects of management, climate and site characteristics on many stand attributes using easily available data. Currently, there is an increasing interest in estimating biomass and assessing...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ry, Rexha Verdhora, E-mail: rexha.vry@gmail.com; Nugraha, Andri Dian, E-mail: nugraha@gf.itb.ac.id
Observation of earthquakes is widely used in monitoring tectonic activity, and also at local scales such as volcano-tectonic and geothermal monitoring. It requires precise hypocenter determination, a process of finding the hypocenter location that minimizes the error between the observed and the calculated travel times. For this nonlinear inverse problem, simulated annealing inversion can be applied as a global optimization method whose convergence is independent of the initial model. In this study, we developed our own program code implementing adaptive simulated annealing inversion in the Matlab environment. We applied this method to determine earthquake hypocenters for several data cases: regional tectonic, volcano tectonic, and geothermal field. The travel times were calculated using the ray-tracing shooting method. We then compared the results with those of Geiger's method to assess reliability. Our results show hypocenter locations with smaller RMS error than Geiger's method, which can be statistically associated with a better solution. The earthquake hypocenters also correlate well with the geological structure in the study area. We recommend adaptive simulated annealing inversion for relocating hypocenters in order to obtain precise and accurate earthquake locations.
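The inversion described above can be sketched as follows. This is a minimal illustration, not the authors' Matlab code: it assumes a homogeneous half-space with straight-ray travel times (the paper uses a ray-tracing shooting method), plain rather than adaptive simulated annealing, a known origin time, and made-up station coordinates, velocity, and cooling schedule.

```python
import math
import random

def travel_time(src, sta, v=5.0):
    # Straight-ray travel time in a homogeneous half-space (v in km/s) --
    # a stand-in for the paper's ray-tracing shooting method.
    return math.dist(src, sta) / v

def rms_residual(src, stations, t_obs, v=5.0):
    # RMS misfit between calculated and observed travel times.
    return math.sqrt(sum((travel_time(src, s, v) - t) ** 2
                         for s, t in zip(stations, t_obs)) / len(t_obs))

def anneal_hypocenter(stations, t_obs, bounds, n_iter=20000,
                      t_start=1.0, cooling=0.9995, seed=1):
    # Plain simulated annealing over (x, y, z); origin time assumed known.
    rng = random.Random(seed)
    cur = [rng.uniform(lo, hi) for lo, hi in bounds]
    cur_e = rms_residual(cur, stations, t_obs)
    best, best_e = cur[:], cur_e
    temp = t_start
    for _ in range(n_iter):
        # Perturbation scale shrinks with temperature (a crude stand-in
        # for the adaptive scheme of adaptive simulated annealing).
        cand = [min(max(c + rng.gauss(0.0, temp * (hi - lo) * 0.1), lo), hi)
                for c, (lo, hi) in zip(cur, bounds)]
        e = rms_residual(cand, stations, t_obs)
        # Metropolis acceptance: always take improvements, sometimes worse moves.
        if e < cur_e or rng.random() < math.exp(-(e - cur_e) / max(temp, 1e-12)):
            cur, cur_e = cand, e
            if e < best_e:
                best, best_e = cand[:], e
        temp *= cooling
    return best, best_e
```

Because acceptance of uphill moves depends only on the temperature schedule, the result does not depend on where the search starts, which is the property the abstract contrasts with Geiger's linearized method.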
Cloke, Jonathan; Arizanova, Julia; Crabtree, David; Simpson, Helen; Evans, Katharine; Vaahtoranta, Laura; Palomäki, Jukka-Pekka; Artimo, Paulus; Huang, Feng; Liikanen, Maria; Koskela, Suvi; Chen, Yi
2016-01-01
The Thermo Scientific™ SureTect™ Listeria species Real-Time PCR Assay was certified during 2013 by the AOAC Research Institute (RI) Performance Tested Methods(SM) program as a rapid method for the detection of Listeria species from a wide range of food matrixes and surface samples. A method modification study was conducted in 2015 to extend the matrix claims of the product to a wider range of food matrixes. This report details the method modification study undertaken to extend the use of this PCR kit to the Applied Biosystems™ 7500 Fast PCR Instrument and Applied Biosystems RapidFinder™ Express 2.0 software, allowing use of the assay on a 96-well format PCR cycler in addition to the current workflow, which uses the 24-well Thermo Scientific PikoReal™ PCR Instrument and Thermo Scientific SureTect software. The method modification study presented in this report was assessed by the AOAC-RI as being a level 2 method modification study, necessitating a method developer study on a representative range of food matrixes covering raw ground turkey, 2% fat pasteurized milk, and bagged lettuce, as well as stainless steel surface samples. All testing was conducted in comparison to the reference method detailed in International Organization for Standardization (ISO) 6579:2002. No significant difference by probability of detection statistical analysis was found between the SureTect Listeria species PCR Assay and the ISO reference method for any of the three food matrixes and the surface samples analyzed during the study.
Connecting clinical and actuarial prediction with rule-based methods.
Fokkema, Marjolein; Smits, Niels; Kelderman, Henk; Penninx, Brenda W J H
2015-06-01
Meta-analyses comparing the accuracy of clinical versus actuarial prediction have shown actuarial methods to outperform clinical methods, on average. However, actuarial methods are still not widely used in clinical practice, and there has been a call for the development of actuarial prediction methods for clinical practice. We argue that rule-based methods may be more useful than the linear main effect models usually employed in prediction studies, from data-analytic, decision-analytic, and practical perspectives. In addition, decision rules derived with rule-based methods can be represented as fast and frugal trees, which, unlike main effects models, can be used in a sequential fashion, reducing the number of cues that have to be evaluated before making a prediction. We illustrate the usability of rule-based methods by applying RuleFit, an algorithm for deriving decision rules for classification and regression problems, to a dataset on prediction of the course of depressive and anxiety disorders from Penninx et al. (2011). The RuleFit algorithm provided a model consisting of 2 simple decision rules, requiring evaluation of only 2 to 4 cues. Predictive accuracy of the 2-rule model was very similar to that of a logistic regression model incorporating 20 predictor variables, originally applied to the dataset. In addition, the 2-rule model required, on average, evaluation of only 3 cues. Therefore, the RuleFit algorithm appears to be a promising method for creating decision tools that are less time consuming and easier to apply in psychological practice, with accuracy comparable to traditional actuarial methods. (c) 2015 APA, all rights reserved.
Hatch, Kenneth D.
2012-01-01
Abstract. With no sufficient screening test for ovarian cancer, a method to evaluate the ovarian disease state quickly and nondestructively is needed. The authors have applied a wide-field spectral imager to freshly resected ovaries of 30 human patients in a study believed to be the first of its magnitude. Endogenous fluorescence was excited with 365-nm light and imaged in eight emission bands collectively covering the 400- to 640-nm range. Linear discriminant analysis was used to classify all image pixels and generate diagnostic maps of the ovaries. Training the classifier with previously collected single-point autofluorescence measurements of a spectroscopic probe enabled this novel classification. The process by which probe-collected spectra were transformed for comparison with imager spectra is described. Sensitivity of 100% and specificity of 51% were obtained in classifying normal and cancerous ovaries using autofluorescence data alone. Specificity increased to 69% when autofluorescence data were divided by green reflectance data to correct for spatial variation in tissue absorption properties. Benign neoplasm ovaries were also found to classify as nonmalignant using the same algorithm. Although applied ex vivo, the method described here appears useful for quick assessment of cancer presence in the human ovary. PMID:22502561
Falavigna, Claudia; Cirlini, Martina; Galaverna, Gianni; Sforza, Stefano; Dossena, Arnaldo; Dall'Asta, Chiara
2012-09-01
Fumonisins are a family of food-borne mycotoxins with a wide spectrum of toxicological activities, produced by Fusarium verticillioides. Twenty-eight fumonisin analogues have been characterised so far, which can be separated into four main groups, identified as fumonisins A, B, C and P, with fumonisin B the most widely occurring in maize and corn-based food. In this work, major and minor fumonisin analogues produced by F. verticillioides have been determined through the development of a suitable tandem mass spectrometry procedure for target compound identification and quantification. The method has been applied to the determination of the major fumonisins in culture media of F. verticillioides and in mouldy maize. In addition to the main fumonisins produced by F. verticillioides, secondary compounds such as FB4, FB5, FAs and FCs have also been detected in both fungal liquid cultures and contaminated maize samples. The use of this method for quantification of major and minor fumonisins may be useful for an exhaustive evaluation of their occurrence and toxicological relevance in food; moreover, it may be applied for a better definition of the fumonisin biosynthetic pathways in different growing media as well as in maize. Copyright © 2012 John Wiley & Sons, Ltd.
Advanced imaging techniques in brain tumors
2009-01-01
Abstract Perfusion, permeability and magnetic resonance spectroscopy (MRS) are now widely used in the research and clinical settings. In the clinical setting, qualitative, semi-quantitative and quantitative approaches, ranging from review of color-coded maps and region-of-interest analysis to analysis of signal intensity curves, are applied in practice. There are several pitfalls with all of these approaches. Some of these shortcomings are reviewed, such as the relatively low sensitivity of metabolite ratios from MRS and the effect of leakage on the appearance of color-coded maps from dynamic susceptibility contrast (DSC) magnetic resonance (MR) perfusion imaging, along with the correction and normalization methods that can be applied. Combining and applying these different imaging techniques in a multi-parametric algorithmic fashion in the clinical setting can be shown to increase diagnostic specificity and confidence. PMID:19965287
Estimating error rates for firearm evidence identifications in forensic science
Song, John; Vorburger, Theodore V.; Chu, Wei; Yen, James; Soons, Johannes A.; Ott, Daniel B.; Zhang, Nien Fan
2018-01-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. PMID:29331680
Estimating error rates for firearm evidence identifications in forensic science.
Song, John; Vorburger, Theodore V; Chu, Wei; Yen, James; Soons, Johannes A; Ott, Daniel B; Zhang, Nien Fan
2018-03-01
Estimating error rates for firearm evidence identification is a fundamental challenge in forensic science. This paper describes the recently developed congruent matching cells (CMC) method for image comparisons, its application to firearm evidence identification, and its usage and initial tests for error rate estimation. The CMC method divides compared topography images into correlation cells. Four identification parameters are defined for quantifying both the topography similarity of the correlated cell pairs and the pattern congruency of the registered cell locations. A declared match requires a significant number of CMCs, i.e., cell pairs that meet all similarity and congruency requirements. Initial testing on breech face impressions of a set of 40 cartridge cases fired with consecutively manufactured pistol slides showed wide separation between the distributions of CMC numbers observed for known matching and known non-matching image pairs. Another test on 95 cartridge cases from a different set of slides manufactured by the same process also yielded widely separated distributions. The test results were used to develop two statistical models for the probability mass function of CMC correlation scores. The models were applied to develop a framework for estimating cumulative false positive and false negative error rates and individual error rates of declared matches and non-matches for this population of breech face impressions. The prospect for applying the models to large populations and realistic case work is also discussed. The CMC method can provide a statistical foundation for estimating error rates in firearm evidence identifications, thus emulating methods used for forensic identification of DNA evidence. Published by Elsevier B.V.
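The cell-based comparison at the heart of the CMC method can be sketched in outline. This is an illustrative toy, not the NIST implementation: it assumes the two topography images are already registered and scores each cell pair only by its normalized cross-correlation, whereas the full method also requires congruency of each cell's registration angle and x-y position (hence four identification parameters); the cell size and thresholds here are arbitrary.

```python
import numpy as np

def cmc_count(img_a, img_b, cell=8, ccf_min=0.6):
    # Split two pre-registered topography images into cell x cell blocks and
    # count cell pairs whose normalized cross-correlation exceeds ccf_min
    # (a simplified stand-in for "congruent matching cells").
    h, w = img_a.shape
    count = 0
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = img_a[i:i + cell, j:j + cell].ravel()
            b = img_b[i:i + cell, j:j + cell].ravel()
            if a.std() == 0.0 or b.std() == 0.0:
                continue  # featureless cell: correlation undefined
            if np.corrcoef(a, b)[0, 1] >= ccf_min:
                count += 1
    return count

def declared_match(img_a, img_b, c_threshold=6, **kw):
    # A match is declared only when a significant number of cells agree.
    return cmc_count(img_a, img_b, **kw) >= c_threshold
```

Because known matching pairs yield high CMC counts and known non-matching pairs yield counts near zero, the observed count distributions separate widely, which is what makes the statistical error-rate models described in the abstract possible.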
New technique for ensemble dressing combining Multimodel SuperEnsemble and precipitation PDF
NASA Astrophysics Data System (ADS)
Cane, D.; Milelli, M.
2009-09-01
The Multimodel SuperEnsemble technique (Krishnamurti et al., Science 285, 1548-1550, 1999) is a postprocessing method for the estimation of weather forecast parameters that reduces direct model output errors. It differs from other ensemble analysis techniques by its use of an adequate weighting of the input forecast models to obtain a combined estimation of meteorological parameters. Weights are calculated by least-square minimization of the difference between the model and the observed field during a so-called training period. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed and mean sea level pressure (Cane and Milelli, Meteorologische Zeitschrift, 15, 2, 2006), the Multimodel SuperEnsemble also gives good results when applied to precipitation, a parameter quite difficult to handle with standard post-processing methods. Here we present our methodology for Multimodel precipitation forecasts, applied over the very dense non-GTS weather station network of Piemonte. We focus in particular on an accurate statistical method for bias correction and on ensemble dressing consistent with the observed forecast-conditioned precipitation PDF. Acknowledgement: this work is supported by the Italian Civil Defence Department.
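The weight calculation described above amounts to a least-squares fit over the training period. A minimal sketch, using the usual anomaly-based SuperEnsemble formulation (the array shapes and function names are illustrative, not taken from the authors' code):

```python
import numpy as np

def superensemble_weights(train_forecasts, train_obs):
    # train_forecasts: (n_times, n_models); train_obs: (n_times,).
    # Weights minimize the squared difference between weighted model
    # anomalies and observed anomalies over the training period.
    model_means = train_forecasts.mean(axis=0)
    obs_mean = train_obs.mean()
    w, *_ = np.linalg.lstsq(train_forecasts - model_means,
                            train_obs - obs_mean, rcond=None)
    return w, obs_mean, model_means

def superensemble_predict(forecasts, w, obs_mean, model_means):
    # Combined estimate: observed climatology plus weighted model anomalies.
    return obs_mean + (forecasts - model_means) @ w
```

A model that tracked the observations well during training receives a large weight; a biased or noisy model is down-weighted, which is how the technique reduces direct model output errors.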
Hu, Jiazhi; Meyers, Robin M; Dong, Junchao; Panchakshari, Rohit A; Alt, Frederick W; Frock, Richard L
2016-05-01
Unbiased, high-throughput assays for detecting and quantifying DNA double-stranded breaks (DSBs) across the genome in mammalian cells will facilitate basic studies of the mechanisms that generate and repair endogenous DSBs. They will also enable more applied studies, such as those to evaluate the on- and off-target activities of engineered nucleases. Here we describe a linear amplification-mediated high-throughput genome-wide sequencing (LAM-HTGTS) method for the detection of genome-wide 'prey' DSBs via their translocation in cultured mammalian cells to a fixed 'bait' DSB. Bait-prey junctions are cloned directly from isolated genomic DNA using LAM-PCR and unidirectionally ligated to bridge adapters; subsequent PCR steps amplify the single-stranded DNA junction library in preparation for Illumina Miseq paired-end sequencing. A custom bioinformatics pipeline identifies prey sequences that contribute to junctions and maps them across the genome. LAM-HTGTS differs from related approaches because it detects a wide range of broken end structures with nucleotide-level resolution. Familiarity with nucleic acid methods and next-generation sequencing analysis is necessary for library generation and data interpretation. LAM-HTGTS assays are sensitive, reproducible, relatively inexpensive, scalable and straightforward to implement with a turnaround time of <1 week.
Chen, Zhongxue; Ng, Hon Keung Tony; Li, Jing; Liu, Qingzhong; Huang, Hanwen
2017-04-01
In the past decade, hundreds of genome-wide association studies have been conducted to detect the significant single-nucleotide polymorphisms that are associated with certain diseases. However, most of the data from the X chromosome were not analyzed and only a few significant associated single-nucleotide polymorphisms on the X chromosome have been identified from genome-wide association studies. This is mainly due to the lack of powerful statistical tests. In this paper, we propose a novel statistical approach that combines the information of single-nucleotide polymorphisms on the X chromosome from both males and females in an efficient way. The proposed approach avoids the need to make strong assumptions about the underlying genetic models. Our proposed statistical test is a robust method that makes only the assumption that the risk allele is the same for both females and males if the single-nucleotide polymorphism is associated with the disease in both genders. Through simulation study and a real data application, we show that the proposed procedure is robust and has excellent performance compared to existing methods. We expect that many more associated single-nucleotide polymorphisms on the X chromosome will be identified if the proposed approach is applied to currently available genome-wide association studies data.
New methods of data calibration for high power-aperture lidar.
Guan, Sai; Yang, Guotao; Chang, Qihai; Cheng, Xuewu; Yang, Yong; Gong, Shaohua; Wang, Jihong
2013-03-25
For high power-aperture lidar sounding of wide atmospheric dynamic ranges, as in middle-upper atmospheric probing, photomultiplier tube (PMT) pulse pile-up effects and signal-induced noise (SIN) complicate the extraction of information from the lidar return signal, especially from the fluorescence signal of metal layers. The pursuit of a sophisticated description of metal-layer characteristics at far range (80-130 km) with one PMT of high quantum efficiency (QE) and good SNR conflicts with the requirement for signals of wide linear dynamic range (from approximately 10^2 to 10^8 counts/s). In this article, substantial improvements in the experimental simulation of lidar signals affected by the PMT are reported, in order to evaluate PMT distortions in our high power-aperture sodium lidar system. A new method for pile-up calibration is proposed that treats the PMT and the high-speed data acquisition card as an integrated black box, along with a new experimental method for identifying and removing SIN from the raw lidar signals. The contradiction between the limited linear dynamic range of the raw signal (55-80 km) and the requirement for wider acceptable linearity is thereby effectively resolved, without complicating the current lidar system. The validity of these methods was demonstrated by applying the calibrated data to retrieve atmospheric parameters (atmospheric density, temperature and absolute sodium number density), in comparison with TIMED satellite measurements and an atmosphere model. Good agreement is obtained between results derived from the calibrated signal and the reference measurements: differences in atmospheric density and temperature are less than 5% in the stratosphere and less than 10 K from 30 km to the mesosphere, respectively. Additionally, changes of approximately 30% are seen in the sodium concentration at its peak value.
By means of the proposed methods, which recover the true signal independent of the detector, the authors achieve a new balance between maintaining the linearity of an adequate signal (20-110 km) and guaranteeing good SNR (approximately 10^4:1 around 90 km) without debasing QE, in one single detecting channel. For the first time, a PMT in photon-counting mode is independently applied to extract reliable information on atmospheric parameters with wide acceptable linearity over an altitude range from the stratosphere up to the lower thermosphere (20-110 km).
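The paper's black-box calibration of the PMT plus acquisition card replaces the textbook correction, but for orientation, the standard non-paralyzable dead-time model that pile-up correction usually starts from can be written down directly (the dead-time value below is illustrative, not from the paper):

```python
def pileup_correct(observed_rate, dead_time):
    # Standard non-paralyzable dead-time model:
    #   n_true = n_obs / (1 - n_obs * tau)
    # with rates in counts/s and dead_time (tau) in seconds. At high rates the
    # observed count rate saturates, which is the pile-up distortion the
    # paper's black-box calibration is designed to remove.
    if observed_rate * dead_time >= 1.0:
        raise ValueError("observed rate saturates this detector model")
    return observed_rate / (1.0 - observed_rate * dead_time)
```

For example, with an assumed 25 ns dead time, an observed 10^6 counts/s corresponds to a true rate about 2.6% higher, and the correction grows rapidly toward the 10^8 counts/s end of the dynamic range quoted in the abstract.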
Global Representations of Goal-Directed Behavior in Distinct Cell Types of Mouse Neocortex
Allen, William E.; Kauvar, Isaac V.; Chen, Michael Z.; Richman, Ethan B.; Yang, Samuel J.; Chan, Ken; Gradinaru, Viviana; Deverman, Benjamin E.; Luo, Liqun; Deisseroth, Karl
2017-01-01
SUMMARY The successful planning and execution of adaptive behaviors in mammals may require long-range coordination of neural networks throughout cerebral cortex. The neuronal implementation of signals that could orchestrate cortex-wide activity remains unclear. Here, we develop and apply methods for cortex-wide Ca2+ imaging in mice performing decision-making behavior and identify a global cortical representation of task engagement encoded in the activity dynamics of both single cells and superficial neuropil distributed across the majority of dorsal cortex. The activity of multiple molecularly defined cell types was found to reflect this representation with type-specific dynamics. Focal optogenetic inhibition tiled across cortex revealed a crucial role for frontal cortex in triggering this cortex-wide phenomenon; local inhibition of this region blocked both the cortex-wide response to task-initiating cues and the voluntary behavior. These findings reveal cell-type-specific processes in cortex for globally representing goal-directed behavior and identify a major cortical node that gates the global broadcast of task-related information. PMID:28521139
Freytag, Saskia; Manitz, Juliane; Schlather, Martin; Kneib, Thomas; Amos, Christopher I.; Risch, Angela; Chang-Claude, Jenny; Heinrich, Joachim; Bickeböller, Heike
2014-01-01
Biological pathways provide rich information and biological context on the genetic causes of complex diseases. The logistic kernel machine test integrates prior knowledge on pathways in order to analyze data from genome-wide association studies (GWAS). Here, the kernel converts genomic information of two individuals to a quantitative value reflecting their genetic similarity. With the selection of the kernel one implicitly chooses a genetic effect model. Like many other pathway methods, none of the available kernels accounts for topological structure of the pathway or gene-gene interaction types. However, evidence indicates that connectivity and neighborhood of genes are crucial in the context of GWAS, because genes associated with a disease often interact. Thus, we propose a novel kernel that incorporates the topology of pathways and information on interactions. Using simulation studies, we demonstrate that the proposed method maintains the type I error correctly and can be more effective in the identification of pathways associated with a disease than non-network-based methods. We apply our approach to genome-wide association case control data on lung cancer and rheumatoid arthritis. We identify some promising new pathways associated with these diseases, which may improve our current understanding of the genetic mechanisms. PMID:24434848
FGWAS: Functional genome wide association analysis.
Huang, Chao; Thompson, Paul; Wang, Yalin; Yu, Yang; Zhang, Jingwen; Kong, Dehan; Colen, Rivka R; Knickmeyer, Rebecca C; Zhu, Hongtu
2017-10-01
Functional phenotypes (e.g., subcortical surface representation), which commonly arise in imaging genetic studies, have been used to detect putative genes for complexly inherited neuropsychiatric and neurodegenerative disorders. However, existing statistical methods largely ignore the functional features (e.g., functional smoothness and correlation). The aim of this paper is to develop a functional genome-wide association analysis (FGWAS) framework to efficiently carry out whole-genome analyses of functional phenotypes. FGWAS consists of three components: a multivariate varying coefficient model, a global sure independence screening procedure, and a test procedure. Compared with the standard multivariate regression model, the multivariate varying coefficient model explicitly models the functional features of functional phenotypes through the integration of smooth coefficient functions and functional principal component analysis. Statistically, compared with existing methods for genome-wide association studies (GWAS), FGWAS can substantially boost the detection power for discovering important genetic variants influencing brain structure and function. Simulation studies show that FGWAS outperforms existing GWAS methods for searching sparse signals in an extremely large search space, while controlling for the family-wise error rate. We have successfully applied FGWAS to large-scale analysis of data from the Alzheimer's Disease Neuroimaging Initiative for 708 subjects, 30,000 vertices on the left and right hippocampal surfaces, and 501,584 SNPs. Copyright © 2017 Elsevier Inc. All rights reserved.
Detection of artifacts from high energy bursts in neonatal EEG.
Bhattacharyya, Sourya; Biswas, Arunava; Mukherjee, Jayanta; Majumdar, Arun Kumar; Majumdar, Bandana; Mukherjee, Suchandra; Singh, Arun Kumar
2013-11-01
Detection of non-cerebral activities or artifacts, intermixed within the background EEG, is essential so that they can be discarded from subsequent pattern analysis. The problem is much harder in neonatal EEG, where the background EEG contains spikes, waves, and rapid fluctuations in amplitude and frequency. Existing artifact detection methods are mostly limited to detecting only a subset of artifacts, such as ocular, muscle or power line artifacts. Few methods integrate different modules, each for detection of one specific category of artifact. Furthermore, most of the reference approaches are implemented and tested on adult EEG recordings. Direct application of those methods to neonatal EEG causes performance deterioration, due to greater pattern variation and inherent complexity. A method for detecting a wide range of artifact categories in neonatal EEG is thus required. At the same time, the method should be specific enough to preserve the background EEG information. The current study describes a feature-based classification approach to detect both repetitive (generated from ECG, EMG, pulse, respiration, etc.) and transient (generated from eye blinking, eye movement, patient movement, etc.) artifacts. It focuses on artifact detection within high-energy burst patterns, instead of detecting artifacts within the complete background EEG with its wide pattern variation. The objective is to find true burst patterns, which can later be used to identify the Burst-Suppression (BS) pattern commonly observed during newborn seizure. Such selective artifact detection proves to be more sensitive to artifacts and more specific to bursts than the existing artifact detection approaches applied to the complete background EEG. Several time domain, frequency domain, statistical features, and features generated by wavelet decomposition are analyzed to model the proposed bi-classification between burst and artifact segments.
A feature selection method is also applied to select the feature subset producing the highest classification accuracy. The suggested feature-based classification method is evaluated on our recorded neonatal EEG dataset, consisting of burst and artifact segments. We obtain 78% sensitivity and 72% specificity as the accuracy measures. The accuracy obtained using the proposed method is about 20% higher than that of the reference approaches. Joint use of the proposed method with our previous work on burst detection outperforms reference methods on simultaneous burst and artifact detection. As the proposed method supports detection of a wide range of artifact patterns, it can be extended to detect artifacts within other seizure patterns and background EEG information as well. © 2013 Elsevier Ltd. All rights reserved.
Method for multi-axis, non-contact mixing of magnetic particle suspensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martin, James E.; Solis, Kyle J.
Continuous, three-dimensional control of the vorticity vector is possible by progressively transitioning the field symmetry by applying or removing a dc bias along one of the principal axes of mutually orthogonal alternating fields. By exploiting this transition, the vorticity vector can be oriented in a wide range of directions that comprise all three spatial dimensions. Detuning one or more field components to create phase modulation causes the vorticity vector to trace out complex orbits of a wide variety, creating very robust multiaxial stirring. This multiaxial, non-contact stirring is particularly attractive for applications where the fluid volume has complex boundaries, or is congested.
Detection of DNA Methylation by Whole-Genome Bisulfite Sequencing.
Li, Qing; Hermanson, Peter J; Springer, Nathan M
2018-01-01
DNA methylation plays an important role in the regulation of the expression of transposons and genes. Various methods have been developed to assay DNA methylation levels. Bisulfite sequencing is considered to be the "gold standard" for single-base resolution measurement of DNA methylation levels. Coupled with next-generation sequencing, whole-genome bisulfite sequencing (WGBS) allows DNA methylation to be evaluated at a genome-wide scale. Here, we describe a protocol for WGBS in plant species with large genomes. This protocol has been successfully applied to assay genome-wide DNA methylation levels in maize and barley, and has also been successfully coupled with sequence capture technology to assay DNA methylation levels in a targeted set of genomic regions.
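At the analysis end, the per-cytosine quantity that WGBS yields reduces to a simple read-count ratio. A minimal sketch (illustrative only; function names are ours, not from the protocol):

```python
def methylation_level(c_count, t_count):
    # After bisulfite treatment, unmethylated cytosines read out as T while
    # methylated cytosines remain C, so the per-site methylation level is
    # the fraction of aligned reads still reporting C at that position.
    total = c_count + t_count
    return c_count / total if total else float("nan")

def weighted_genome_level(sites):
    # Genome-wide weighted methylation level: total C reads over total
    # (C + T) reads, so each site is weighted by its coverage.
    # sites: iterable of (c_count, t_count) pairs.
    c = sum(s[0] for s in sites)
    t = sum(s[1] for s in sites)
    return c / (c + t) if c + t else float("nan")
```

The coverage-weighted form is a common choice for summarizing genome-wide levels, since low-coverage sites contribute proportionally less.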
Wavelet-based multicomponent denoising on GPU to improve the classification of hyperspectral images
NASA Astrophysics Data System (ADS)
Quesada-Barriuso, Pablo; Heras, Dora B.; Argüello, Francisco; Mouriño, J. C.
2017-10-01
Supervised classification can handle a wide range of remote sensing hyperspectral applications. Enhancing the spatial organization of the pixels over the image has proven beneficial for interpreting the image content, thus increasing the classification accuracy. Denoising in the spatial domain of the image has been shown to enhance the structures in the image. This paper proposes a multi-component denoising approach that increases the accuracy of a subsequent classification; it is computed on multicore CPUs and NVIDIA GPUs. The method combines feature extraction based on a 1D discrete wavelet transform (DWT) applied in the spectral dimension, followed by an Extended Morphological Profile (EMP) and a classifier (SVM or ELM). The multi-component noise reduction is applied to the EMP just before classification. The denoising recursively applies a separable 2D DWT, after which the number of wavelet coefficients is reduced by thresholding. Finally, inverse 2D DWT filters are applied to reconstruct the noise-free original component. The computational cost of the classifiers, and of the whole classification chain, is high, but computing them on NVIDIA multi-GPU platforms reduces it enough to achieve real-time behavior for some applications.
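The denoising chain described above (forward 2D DWT, coefficient thresholding, inverse 2D DWT) can be sketched with a single-level separable Haar transform. This is a toy CPU version under stated assumptions: the paper's pipeline applies the DWT recursively within a larger GPU classification chain, and the wavelet choice, level count, and threshold here are arbitrary.

```python
import numpy as np

def haar_dwt2(x):
    # One level of a separable 2D Haar transform: rows, then columns.
    def step(a):
        lo = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2.0)
        hi = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2.0)
        return lo, hi
    L, H = step(x)
    LL, LH = step(L.T)
    HL, HH = step(H.T)
    return LL.T, LH.T, HL.T, HH.T

def haar_idwt2(LL, LH, HL, HH):
    # Exact inverse of haar_dwt2 (columns, then rows).
    def unstep(lo, hi):
        a = np.empty((lo.shape[0], lo.shape[1] * 2))
        a[:, 0::2] = (lo + hi) / np.sqrt(2.0)
        a[:, 1::2] = (lo - hi) / np.sqrt(2.0)
        return a
    L = unstep(LL.T, LH.T).T
    H = unstep(HL.T, HH.T).T
    return unstep(L, H)

def denoise(x, thresh):
    # Forward transform, hard-threshold the detail sub-bands, invert.
    LL, LH, HL, HH = haar_dwt2(x)
    LH, HL, HH = (np.where(np.abs(c) > thresh, c, 0.0) for c in (LH, HL, HH))
    return haar_idwt2(LL, LH, HL, HH)
```

Small detail coefficients mostly carry noise, so zeroing them smooths the component while preserving its larger structures; with a zero threshold the transform pair reconstructs the input exactly.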
Neuro-symbolic representation learning on biological knowledge graphs.
Alshahrani, Mona; Khan, Mohammad Asif; Maddouri, Omar; Kinjo, Akira R; Queralt-Rosinach, Núria; Hoehndorf, Robert
2017-09-01
Biological data and knowledge bases increasingly rely on Semantic Web technologies and on knowledge graphs for data integration, retrieval and federated queries. In the past years, feature learning methods applicable to graph-structured data have become available, but have not yet been widely applied and evaluated on structured biological knowledge. Results: We develop a novel method for feature learning on biological knowledge graphs. Our method combines symbolic methods, in particular knowledge representation using symbolic logic and automated reasoning, with neural networks to generate embeddings of nodes that encode related information within knowledge graphs. Through the use of symbolic logic, these embeddings contain both explicit and implicit information. We apply these embeddings to the prediction of edges in the knowledge graph, representing problems of function prediction, finding candidate genes of diseases, protein-protein interactions, or drug target relations, and demonstrate performance that matches and sometimes outperforms traditional approaches based on manually crafted features. Our method can be applied to any biological knowledge graph, and thereby opens up the increasing number of Semantic Web based knowledge bases in biology to machine learning and data analytics. https://github.com/bio-ontology-research-group/walking-rdf-and-owl. robert.hoehndorf@kaust.edu.sa. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
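Feature-learning pipelines of this kind typically begin by turning the knowledge graph into a corpus of edge-labelled random walks, which a skip-gram model then embeds. A minimal, hypothetical sketch of the walk-generation stage (the graph layout and parameters are illustrative, not the authors' implementation):

```python
import random

def random_walks(graph, walks_per_node=2, length=4, seed=0):
    # Generate edge-labelled random walks over a knowledge graph; each walk
    # alternates node and edge-label tokens, forming "sentences" that a
    # skip-gram model could embed. graph: {node: [(edge_label, target), ...]}.
    rng = random.Random(seed)
    corpus = []
    for node in graph:
        for _ in range(walks_per_node):
            walk, cur = [node], node
            for _ in range(length):
                if not graph.get(cur):  # dead end: stop this walk early
                    break
                label, cur = rng.choice(graph[cur])
                walk += [label, cur]
            corpus.append(walk)
    return corpus
```

Including the edge labels in the walk is what lets the resulting embeddings distinguish, say, an interaction edge from an annotation edge.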
Correlation of live-cell imaging with volume scanning electron microscopy.
Lucas, Miriam S; Günthert, Maja; Bittermann, Anne Greet; de Marco, Alex; Wepf, Roger
2017-01-01
Live-cell imaging is one of the most widely applied methods in the life sciences. Here we describe two setups for live-cell imaging, which can easily be combined with volume SEM for correlative studies. The first procedure applies cell culture dishes with a gridded glass support, which can be used for any light microscopy modality. The second approach is a flow-chamber setup based on Ibidi μ-slides. Both live-cell imaging strategies can be followed up with serial blockface- or focused ion beam-scanning electron microscopy. Two types of resin embedding after heavy metal staining and dehydration are presented, making best use of the particular advantages of each imaging modality: classical en-bloc embedding and thin-layer plastification. The latter can be used only for focused ion beam-scanning electron microscopy, but is advantageous for studying cell interactions with specific substrates, or when the substrate cannot be removed. En-bloc embedding has diverse applications and can be applied for both described volume scanning electron microscopy techniques. Finally, strategies for relocating the cell of interest are discussed for both embedding approaches and with respect to the applied light and scanning electron microscopy methods. Copyright © 2017 Elsevier Inc. All rights reserved.
Hybrid multicore/vectorisation technique applied to the elastic wave equation on a staggered grid
NASA Astrophysics Data System (ADS)
Titarenko, Sofya; Hildyard, Mark
2017-07-01
In modern physics it has become common to find the solution of a problem by solving a set of PDEs numerically. Whether solving them on a finite difference grid or by a finite element approach, the main calculations are often applied to a stencil structure. In the last decade it has become usual to work with so-called big data problems, where calculations are very heavy and accelerators and modern architectures are widely used. Although CPU and GPU clusters are often used to solve such problems, parallelisation of any calculation ideally starts from single-processor optimisation. Unfortunately, it is impossible to vectorise a stencil-structured loop with high-level instructions. In this paper we suggest a new approach to rearranging the data structure which makes it possible to apply high-level vectorisation instructions to a stencil loop and which results in significant acceleration. The suggested method allows further acceleration if shared memory APIs are used. We show the effectiveness of the method by applying it to an elastic wave propagation problem on a finite difference grid. We have chosen Intel architecture for the test problem and OpenMP (Open Multi-Processing), since they are extensively used in many applications.
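As a small illustration of the kind of stencil update the paper targets, here is one leapfrog step of a 1-D elastic wave equation on a staggered grid, written with NumPy slicing in place of the vectorised C loop; the layout (velocity on integer nodes, stress on half nodes) and parameter names are assumptions for this sketch:

```python
import numpy as np

def step(v, s, dt, dx, rho, mu):
    # One leapfrog step on a staggered grid: velocity v (n+1 integer nodes),
    # stress s (n half nodes). Arrays are updated in place and returned.
    # dt must satisfy the CFL condition for the wave speed sqrt(mu/rho).
    v[1:-1] += dt / (rho * dx) * (s[1:] - s[:-1])  # velocity from stress gradient
    s += mu * dt / dx * (v[1:] - v[:-1])           # stress from velocity gradient
    return v, s
```

Each slice expression is a whole-array stencil that the runtime can vectorise, which is the same idea the paper pursues at the instruction level.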
NASA Astrophysics Data System (ADS)
Guo, Jun; Lu, Siliang; Zhai, Chao; He, Qingbo
2018-02-01
An automatic bearing fault diagnosis method is proposed for permanent magnet synchronous generators (PMSGs), which are widely installed in wind turbines subjected to low rotating speeds, speed fluctuations, and electrical device noise interferences. The mechanical rotating angle curve is first extracted from the phase current of a PMSG by sequentially applying a series of algorithms. The synchronous sampled vibration signal of the fault bearing is then resampled in the angular domain according to the obtained rotating phase information. Considering that the resampled vibration signal is still overwhelmed by heavy background noise, an adaptive stochastic resonance filter is applied to the resampled signal to enhance the fault indicator and facilitate bearing fault identification. Two types of fault bearings with different fault sizes in a PMSG test rig are subjected to experiments to test the effectiveness of the proposed method. The proposed method is fully automated and thus shows potential for convenient, highly efficient and in situ bearing fault diagnosis for wind turbines subjected to harsh environments.
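The angular-domain resampling step can be sketched as two interpolations: invert the extracted phase curve to find the times at which the shaft reaches uniformly spaced angles, then sample the vibration signal at those times. A minimal sketch assuming a monotonic phase curve (not the authors' exact algorithm):

```python
import numpy as np

def angular_resample(t, x, phase, samples_per_rev=64):
    # Resample the time-domain vibration signal x(t) at instants of uniform
    # shaft angle, given a monotonically increasing rotating-phase curve
    # phase(t) in radians, sampled on the same time axis t.
    n_revs = int(phase[-1] / (2 * np.pi))
    angles = np.arange(n_revs * samples_per_rev) * 2 * np.pi / samples_per_rev
    t_at_angle = np.interp(angles, phase, t)  # invert phase(t)
    return np.interp(t_at_angle, t, x)
```

Under speed fluctuations this maps bearing fault signatures to fixed orders per revolution, which is what makes the subsequent fault-indicator enhancement possible.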
Promotion of a healthy work life at small enterprises in Thailand by participatory methods.
Krungkraiwong, Sudthida; Itani, Toru; Amornratanapaichit, Ratanaporn
2006-01-01
The major problems of small enterprises include unfavourable working conditions and environment that affect safety and health of workers. The WISE (Work Improvement in Small Enterprises) methodology developed by the ILO has been widely applied to improve occupational safety and health in small enterprises in Thailand. The participatory methods building on local good practices and focusing on practicable improvements have proven effective in controlling the occupational hazards in these enterprises at their sources. As a result of applying the methods in small-scale industries, the frequency of occupational accidents was reduced and the working environment actually improved in the cases studied. The results prove that the participatory approach taken by the WISE activities is a useful and effective tool to make owner/managers and workers in small enterprises voluntarily improve their own working conditions and environment. In promoting a healthy work life at small enterprises in Thailand, it is important to further develop and spread the approach.
Corazza, Marcela Zanetti; Pires, Igor Matheus Ruiz; Diniz, Kristiany Moreira; Segatelli, Mariana Gava; Tarley, César Ricardo Teixeira
2015-08-01
A facile and reliable UV-Vis spectrophotometric method associated with vortex-assisted dispersive liquid-liquid microextraction has been developed and applied to the determination of U(VI) at low levels in water samples. It was based on preconcentration of a 24.0 mL sample at pH 8.0 in the presence of 7.4 µmol L(-1) 1-(2-pyridylazo)-2-naphthol, with 1.0 mL of methanol as disperser solvent and 1.0 mL of chloroform as extraction solvent. A high preconcentration factor was achieved (396 times), thus providing a wide analytical curve from 6.9 up to 75.9 µg L(-1) (r=0.9982) and limits of detection and quantification of 0.40 and 1.30 µg L(-1), respectively. When necessary, EDTA or KCN can be used to remove interferences from foreign ions. The method was applied to the analysis of real water samples, such as tap, mineral and lake waters, with good recovery values.
Assessment of Pansharpening Methods Applied to WorldView-2 Imagery Fusion.
Li, Hui; Jing, Linhai; Tang, Yunwei
2017-01-05
Since WorldView-2 (WV-2) images are widely used in various fields, there is a high demand for high-quality pansharpened WV-2 images for different application purposes. With respect to the novelty of the WV-2 multispectral (MS) and panchromatic (PAN) bands, the performances of eight state-of-the-art pan-sharpening methods for WV-2 imagery, on six datasets from three WV-2 scenes, were assessed in this study using both quality indices and information indices, along with visual inspection. The normalized difference vegetation index, normalized difference water index, and morphological building index, which are widely used in applications related to land cover classification and the extraction of vegetation areas, buildings, and water bodies, were employed in this work to evaluate the performance of different pansharpening methods in terms of information presentation ability. The experimental results show that the Haze- and Ratio-based method, the adaptive Gram-Schmidt method, and the Generalized Laplacian pyramid (GLP) methods using the enhanced spectral distortion minimal model and the enhanced context-based decision model are good choices for producing fused WV-2 images for image interpretation and the extraction of urban buildings. The two GLP-based methods are better choices than the other methods if the fused images will be used for applications related to vegetation and water bodies.
DREAMTools: a Python package for scoring collaborative challenges
Cokelaer, Thomas; Bansal, Mukesh; Bare, Christopher; Bilal, Erhan; Bot, Brian M.; Chaibub Neto, Elias; Eduati, Federica; de la Fuente, Alberto; Gönen, Mehmet; Hill, Steven M.; Hoff, Bruce; Karr, Jonathan R.; Küffner, Robert; Menden, Michael P.; Meyer, Pablo; Norel, Raquel; Pratap, Abhishek; Prill, Robert J.; Weirauch, Matthew T.; Costello, James C.; Stolovitzky, Gustavo; Saez-Rodriguez, Julio
2016-01-01
DREAM challenges are community competitions designed to advance computational methods and address fundamental questions in systems biology and translational medicine. Each challenge asks participants to develop and apply computational methods to either predict unobserved outcomes or to identify unknown model parameters given a set of training data. Computational methods are evaluated using an automated scoring metric, scores are posted to a public leaderboard, and methods are published to facilitate community discussions on how to build improved methods. By engaging participants from a wide range of science and engineering backgrounds, DREAM challenges can comparatively evaluate a wide range of statistical, machine learning, and biophysical methods. Here, we describe DREAMTools, a Python package for evaluating DREAM challenge scoring metrics. DREAMTools provides a command line interface that enables researchers to test new methods on past challenges, as well as a framework for scoring new challenges. As of March 2016, DREAMTools includes more than 80% of completed DREAM challenges. DREAMTools complements the data, metadata, and software tools available at the DREAM website http://dreamchallenges.org and on the Synapse platform at https://www.synapse.org. Availability: DREAMTools is a Python package. Releases and documentation are available at http://pypi.python.org/pypi/dreamtools. The source code is available at http://github.com/dreamtools/dreamtools. PMID:27134723
Detecting discordance enrichment among a series of two-sample genome-wide expression data sets.
Lai, Yinglei; Zhang, Fanni; Nayak, Tapan K; Modarres, Reza; Lee, Norman H; McCaffrey, Timothy A
2017-01-25
With current microarray and RNA-seq technologies, two-sample genome-wide expression data have been widely collected in biological and medical studies, and the related differential expression analysis and gene set enrichment analysis are frequently conducted. Integrative analysis can be conducted when multiple data sets are available. In practice, discordant molecular behaviors among a series of data sets can be of biological and clinical interest. In this study, a statistical method is proposed for detecting discordance gene set enrichment. Our method is based on a two-level multivariate normal mixture model. It is statistically efficient, with a parameter space that grows only linearly with the number of data sets. The model-based probability of discordance enrichment can be calculated for gene set detection. We apply our method to a microarray expression data set collected from forty-five matched tumor/non-tumor pairs of tissues for studying pancreatic cancer. We divided the data set into a series of non-overlapping subsets according to the tumor/non-tumor paired expression ratio of gene PNLIP (pancreatic lipase, recently shown to be associated with pancreatic cancer). The log-ratio ranges from a negative value (e.g. more expressed in non-tumor tissue) to a positive value (e.g. more expressed in tumor tissue). Our purpose is to understand whether any gene sets are enriched in discordant behaviors among these subsets (when the log-ratio is increased from negative to positive). We focus on KEGG pathways. The detected pathways will be useful for our further understanding of the role of gene PNLIP in pancreatic cancer research. Among the top list of detected pathways, the neuroactive ligand receptor interaction and olfactory transduction pathways are the most significant two. Then, we consider gene TP53, which is well known for its role as a tumor suppressor in cancer research. The log-ratio also ranges from a negative value (e.g. 
more expressed in non-tumor tissue) to a positive value (e.g. more expressed in tumor tissue). We divided the microarray data set again according to the expression ratio of gene TP53. After the discordance enrichment analysis, we observed overall similar results and the above two pathways are still the most significant detections. More interestingly, only these two pathways have been identified for their association with pancreatic cancer in a pathway analysis of genome-wide association study (GWAS) data. This study illustrates that some disease-related pathways can be enriched in discordant molecular behaviors when an important disease-related gene changes its expression. Our proposed statistical method is useful in the detection of these pathways. Furthermore, our method can also be applied to genome-wide expression data collected by the recent RNA-seq technology.
An Ensemble Successive Project Algorithm for Liquor Detection Using Near Infrared Sensor.
Qu, Fangfang; Ren, Dong; Wang, Jihua; Zhang, Zhong; Lu, Na; Meng, Lei
2016-01-11
Spectral analysis technique based on near infrared (NIR) sensor is a powerful tool for complex information processing and high precision recognition, and it has been widely applied to quality analysis and online inspection of agricultural products. This paper proposes a new method to address the instability of small sample sizes in the successive projections algorithm (SPA) as well as the lack of association between selected variables and the analyte. The proposed method is an evaluated bootstrap ensemble SPA method (EBSPA) based on a variable evaluation index (EI) for variable selection, and is applied to the quantitative prediction of alcohol concentrations in liquor using NIR sensor. In the experiment, the proposed EBSPA with three kinds of modeling methods are established to test their performance. In addition, the proposed EBSPA combined with partial least square is compared with other state-of-the-art variable selection methods. The results show that the proposed method can solve the defects of SPA and it has the best generalization performance and stability. Furthermore, the physical meaning of the selected variables from the near infrared sensor data is clear, which can effectively reduce the variables and improve their prediction accuracy.
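For reference, the core of the classical SPA that the proposed EBSPA builds on is a greedy orthogonal-projection loop; a minimal sketch of plain SPA (without the bootstrap ensemble or the evaluation index EI introduced in the paper):

```python
import numpy as np

def spa(X, n_select, first=0):
    # Successive Projections Algorithm: greedily picks spectral variables
    # (columns of X) with the largest norm after projecting out the span of
    # the variables already chosen, to minimise collinearity among them.
    P = X.astype(float).copy()
    selected = [first]
    for _ in range(n_select - 1):
        v = P[:, selected[-1]]
        # project every column onto the orthogonal complement of v
        P = P - np.outer(v, v @ P) / (v @ v)
        norms = np.linalg.norm(P, axis=0)
        norms[selected] = -1.0  # never re-pick a chosen variable
        selected.append(int(np.argmax(norms)))
    return selected
```

The instability the paper addresses comes from this greedy chain: a different starting variable or a perturbed small sample can change every subsequent pick, which is what the bootstrap ensemble is meant to dampen.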
NASA Astrophysics Data System (ADS)
Zakynthinaki, Maria S.; Stirling, James R.; Cordente Martínez, Carlos A.; Díaz de Durana, Alfonso López; Quintana, Manuel Sillero; Romo, Gabriel Rodríguez; Molinuevo, Javier Sampedro
2010-03-01
We present a method of modeling the basin of attraction as a three-dimensional function describing a two-dimensional manifold on which the dynamics of the system evolves from experimental time series data. Our method is based on the density of the data set and uses numerical optimization and data modeling tools. We also show how to obtain analytic curves that describe both the contours and the boundary of the basin. Our method is applied to the problem of regaining balance after perturbation from quiet vertical stance using data of an elite athlete. Our method goes beyond the statistical description of the experimental data, providing a function that describes the shape of the basin of attraction. To test its robustness, our method has also been applied to two different data sets of a second subject and no significant differences were found between the contours of the calculated basin of attraction for the different data sets. The proposed method has many uses in a wide variety of areas, not just human balance for which there are many applications in medicine, rehabilitation, and sport.
Seo, Su Hyun; Kim, Min Chan; Choi, Hong Jo; Jung, Ghap Joong
2012-01-01
Purpose: The mechanical stapler is regarded as a good alternative to the hand-sewing technique when used in gastric reconstruction. The circular stapling method has been widely applied to gastrectomy (open or laparoscopic) for gastric cancer. We illustrated and compared the hand-sutured method to the circular stapling method for Billroth-II, in patients who underwent laparoscopy-assisted distal gastrectomy for gastric cancer. Materials and Methods: Between April 2009 and May 2011, 60 patients who underwent laparoscopy-assisted distal gastrectomy with Billroth-II were enrolled. Hand-sutured Billroth-II was performed in 40 patients (manual group) and circular stapler Billroth-II was performed in 20 patients (stapler group). Clinicopathological features and post-operative outcomes were evaluated and compared between the two groups. Results: No significant differences were observed in clinicopathologic parameters and post-operative outcomes, except in the operation times. Operation times and anastomosis times were significantly shorter in the stapler group (P=0.004 and P<0.001). Conclusions: Compared to the hand-sutured method, the circular stapling method can be applied safely and more efficiently when performing Billroth-II anastomosis after laparoscopy-assisted distal gastrectomy in patients with gastric cancer. PMID:22792525
Collaborative simulation method with spatiotemporal synchronization process control
NASA Astrophysics Data System (ADS)
Zou, Yisheng; Ding, Guofu; Zhang, Weihua; Zhang, Jian; Qin, Shengfeng; Tan, John Kian
2016-10-01
When designing a complex mechatronics system, such as a high speed train, it is relatively difficult to effectively simulate the entire system's dynamic behaviors, because it involves multi-disciplinary subsystems. Currently, the most practical approach for multi-disciplinary simulation is the interface-based coupling simulation method, but it faces a twofold challenge: spatial and temporal desynchronization among the multi-directional coupling simulations of subsystems. A new collaborative simulation method with spatiotemporal synchronization process control is proposed for coupling simulation of a given complex mechatronics system across multiple subsystems on different platforms. The method consists of 1) a coupler-based coupling mechanism to define the interfacing and interaction mechanisms among subsystems, and 2) a simulation process control algorithm to realize the coupling simulation in a spatiotemporally synchronized manner. The test results from a case study show that the proposed method 1) can certainly be used to simulate the sub-systems' interactions under different simulation conditions in an engineering system, and 2) effectively supports multi-directional coupling simulation among multi-disciplinary subsystems. This method has been successfully applied in China's high speed train design and development processes, demonstrating that it can be applied in a wide range of engineering systems design and simulation with improved efficiency and effectiveness.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chuan, Lee Te, E-mail: gd130079@siswa.uthm.edu.my; Rathi, Muhammad Fareez Mohamad, E-mail: cd110238@siswa.uthm.edu.my; Abidin, Muhamad Yusuf Zainal, E-mail: cd110221@siswa.uthm.edu.my
Anodic oxidation is a surface modification method which combines electric-field-driven metal and oxygen ion diffusion to form an oxide layer on the anode surface. This method has been widely used to modify the surface morphology of biomaterials, especially titanium. This study aimed to investigate the effect of applied voltage on titanium. Specifically, the titanium foil was anodised in a mixture of β-glycerophosphate disodium salt pentahydrate (β-GP) and calcium acetate monohydrate (CA) with different applied voltages (50-350 V), electrolyte concentration (0.04 M β-GP + 0.4 M CA), anodising time (10 minutes) and current densities (50 and 70 mA·cm⁻²) at room temperature. Surface oxide properties of the anodised titanium were characterised by a digital single-lens reflex camera (DSLR camera), field emission scanning electron microscopy (FESEM) and atomic force microscopy (AFM). At lower applied voltages (≤150 V), the surfaces of the titanium foils were relatively smooth. With increasing applied voltage (≥250 V), the oxide layer became more porous and donut-shaped pores were formed on the surface of the titanium foils. The AFM results indicated that the surface roughness of anodised titanium increases with increasing applied voltage. The porous and rough surface is able to promote osseointegration and reduce the suffering time of the patient.
Cloud computing for detecting high-order genome-wide epistatic interaction via dynamic clustering.
Guo, Xuan; Meng, Yu; Yu, Ning; Pan, Yi
2014-04-10
Taking advantage of high-throughput single nucleotide polymorphism (SNP) genotyping technology, large genome-wide association studies (GWASs) have been considered to hold promise for unravelling complex relationships between genotype and phenotype. At present, traditional single-locus-based methods are insufficient to detect the multiple-locus interactions that broadly exist in complex traits. In addition, statistical tests for high-order epistatic interactions with more than 2 SNPs pose computational and analytical challenges, because the computation increases exponentially as the cardinality of SNP combinations gets larger. In this paper, we provide a simple, fast and powerful method using dynamic clustering and cloud computing to detect genome-wide multi-locus epistatic interactions. We have constructed systematic experiments to compare power performance against some recently proposed algorithms, including TEAM, SNPRuler, EDCF and BOOST. Furthermore, we have applied our method to two real GWAS datasets, the Age-related macular degeneration (AMD) and Rheumatoid arthritis (RA) datasets, where we find some novel potential disease-related genetic factors that do not show up in detections of 2-locus epistatic interactions. Experimental results on simulated data demonstrate that our method is more powerful than some recently proposed methods on both two- and three-locus disease models. Our method has discovered many novel high-order associations that are significantly enriched in cases from the two real GWAS datasets. Moreover, the running times of the cloud implementation of our method on the AMD and RA datasets are roughly 2 hours and 50 hours, respectively, on a cluster with forty small virtual machines for detecting two-locus interactions. Therefore, we believe that our method is suitable and effective for the full-scale analysis of multiple-locus epistatic interactions in GWAS.
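An exhaustive two-locus scan of the kind such methods accelerate can be written directly as a chi-square statistic on the 9x2 joint-genotype/phenotype table; this brute-force sketch is what dynamic clustering and cloud deployment make tractable at genome scale (it is not the paper's algorithm):

```python
import numpy as np
from itertools import combinations

def two_locus_chi2(genotypes, phenotype):
    # For each SNP pair, a chi-square statistic on the 9 joint-genotype x 2
    # phenotype contingency table (larger = stronger candidate interaction).
    # genotypes: (n_samples, n_snps) array with values in {0, 1, 2};
    # phenotype: (n_samples,) array with values in {0, 1}.
    n, m = genotypes.shape
    results = {}
    for i, j in combinations(range(m), 2):
        cell = genotypes[:, i] * 3 + genotypes[:, j]  # 9 joint genotypes
        table = np.zeros((9, 2))
        for c, p in zip(cell, phenotype):
            table[c, p] += 1
        expected = table.sum(1, keepdims=True) * table.sum(0) / n
        with np.errstate(divide="ignore", invalid="ignore"):
            chi2 = np.nansum(
                np.where(expected > 0, (table - expected) ** 2 / expected, 0.0))
        results[(i, j)] = chi2
    return results
```

The cost of this scan grows quadratically in the number of SNPs (and exponentially for higher orders), which is exactly the blow-up the paper's clustering heuristic and cloud parallelism address.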
Capomaccio, Stefano; Milanesi, Marco; Bomba, Lorenzo; Cappelli, Katia; Nicolazzi, Ezequiel L; Williams, John L; Ajmone-Marsan, Paolo; Stefanon, Bruno
2015-08-01
Genome-wide association studies (GWAS) have been widely applied to disentangle the genetic basis of complex traits. In cattle breeds, classical GWAS approaches with medium-density marker panels are far from conclusive, especially for complex traits. This is due to the intrinsic limitations of GWAS and the assumptions that are made to step from the association signals to the functional variations. Here, we applied a gene-based strategy to prioritize genotype-phenotype associations found for milk production and quality traits with classical approaches in three Italian dairy cattle breeds with different sample sizes (Italian Brown n = 745; Italian Holstein n = 2058; Italian Simmental n = 477). Although classical regression on single markers revealed only a single genome-wide significant genotype-phenotype association, for Italian Holstein, the gene-based approach identified specific genes in each breed that are associated with milk physiology and mammary gland development. As no standard method has yet been established to step from variation to functional units (i.e., genes), the strategy proposed here may contribute to revealing new genes that play significant roles in complex traits, such as those investigated here, amplifying low association signals using a gene-centric approach. © 2015 Stichting International Foundation for Animal Genetics.
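The abstract does not spell out the aggregation rule, but one common gene-based strategy is to combine the per-SNP p-values within a gene into a single gene-level statistic, e.g. with Fisher's method; a minimal sketch of that generic idea (not necessarily the authors' exact procedure):

```python
import math

def fisher_combine(pvalues):
    # Fisher's method for combining independent per-SNP p-values into one
    # gene-level statistic: X = -2 * sum(ln p) follows a chi-square
    # distribution with 2k degrees of freedom under the null.
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    df = 2 * len(pvalues)
    return stat, df
```

Aggregating weak single-marker signals this way is what lets a gene-centric approach amplify associations that fall below genome-wide significance individually.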
Post-processing method for wind speed ensemble forecast using wind speed and direction
NASA Astrophysics Data System (ADS)
Sofie Eide, Siri; Bjørnar Bremnes, John; Steinsland, Ingelin
2017-04-01
Statistical methods are widely applied to enhance the quality of both deterministic and ensemble NWP forecasts. In many situations, like wind speed forecasting, most of the predictive information is contained in one variable in the NWP models. However, in statistical calibration of deterministic forecasts it is often seen that including more variables can further improve forecast skill. For ensembles this is rarely taken advantage of, mainly because it is generally not straightforward to include multiple variables. In this study, it is demonstrated how multiple variables can be included in Bayesian model averaging (BMA) by using a flexible regression method for estimating the conditional means. The method is applied to wind speed forecasting at 204 Norwegian stations, based on wind speed and direction forecasts from the ECMWF ensemble system. At about 85% of the sites, the ensemble forecasts were improved in terms of CRPS by adding wind direction as a predictor, compared to only using wind speed. On average the improvements were about 5%, mainly for moderate to strong wind situations. For weak wind speeds, adding wind direction had a more or less neutral impact.
NASA Astrophysics Data System (ADS)
Jaquillard, Lucie; Saab, Fabienne; Schoentgen, Françoise; Cadene, Martine
2012-05-01
There is continued interest in the determination by ESI-MS of equilibrium dissociation constants (KD) that accurately reflect the affinity of a protein-ligand complex in solution. Issues in the measurement of KD are compounded in the case of low affinity complexes. Here we present a KD measurement method and corresponding mathematical model dealing with both gas-phase dissociation (GPD) and aggregation. To this end, a rational mathematical correction of GPD (fsat) is combined with the development of an experimental protocol to deal with gas-phase aggregation. A guide to apply the method to noncovalent protein-ligand systems according to their kinetic behavior is provided. The approach is validated by comparing the KD values determined by this method with in-solution KD literature values. The influence of the type of molecular interactions and instrumental setup on fsat is examined as a first step towards a fine dissection of factors affecting GPD. The method can be reliably applied to a wide array of low affinity systems without the need for a reference ligand or protein.
Zu, Xianghuan; Yang, Chuanlei; Wang, Hechun; Wang, Yinyan
2018-01-01
Exhaust gas recirculation (EGR) is one of the main methods of reducing NOX emissions and has been widely used in marine diesel engines. This paper proposes an optimized comprehensive assessment method based on multi-objective grey situation decision theory, grey relation theory and grey entropy analysis to evaluate EGR performance and determine the optimal EGR rate, tasks that currently lack clear theoretical guidance. First, multi-objective grey situation decision theory is used to establish the initial decision-making model according to the main EGR parameters. The optimal compromise between diesel engine combustion and emission performance is transformed into a decision-making target weight problem. After establishing the initial model and considering the characteristics of EGR under different conditions, an optimized target weight algorithm based on grey relation theory and grey entropy analysis is applied to generate the comprehensive evaluation and decision-making model. Finally, the proposed method is successfully applied to a TBD234V12 turbocharged diesel engine, and the results clearly illustrate its feasibility for providing theoretical support and a reference for further EGR optimization.
Determination of plane stress state using terahertz time-domain spectroscopy
Wang, Zhiyong; Kang, Kai; Wang, Shibin; Li, Lin'an; Xu, Ningning; Han, Jiaguang; He, Mingxia; Wu, Liang; Zhang, Weili
2016-01-01
THz waves have been increasingly applied in engineering practice. One of their outstanding advantages is penetrability through certain optically opaque materials, which therefore allows their interior properties to be probed. In this report, we develop an experimental method to determine the plane stress state of optically opaque materials based on the stress-optical law using terahertz time-domain spectroscopy (THz-TDS). In this method, two polarizers are incorporated into the conventional THz-TDS system to sense and adjust the polarization state of the THz waves, and a theoretical model is established to describe the relationship between the phase delay of the received THz wave and the plane stress applied to the specimen. Three stress parameters that represent the plane stress state are finally determined through an error function of the THz wave phase delay. Experiments were conducted on a polytetrafluoroethylene (PTFE) specimen, and reasonably good agreement was found with measurements using traditional strain gauges. The presented results validate the effectiveness of the proposed method, which could further be used in nondestructive testing of a wide range of optically opaque materials. PMID:27824112
Determination of plane stress state using terahertz time-domain spectroscopy
NASA Astrophysics Data System (ADS)
Wang, Zhiyong; Kang, Kai; Wang, Shibin; Li, Lin'an; Xu, Ningning; Han, Jiaguang; He, Mingxia; Wu, Liang; Zhang, Weili
2016-11-01
THz waves have been increasingly applied in engineering practice. One of their outstanding advantages is penetrability through certain optically opaque materials, which therefore allows their interior properties to be probed. In this report, we develop an experimental method to determine the plane stress state of optically opaque materials based on the stress-optical law using terahertz time-domain spectroscopy (THz-TDS). In this method, two polarizers are incorporated into the conventional THz-TDS system to sense and adjust the polarization state of the THz waves, and a theoretical model is established to describe the relationship between the phase delay of the received THz wave and the plane stress applied to the specimen. Three stress parameters that represent the plane stress state are finally determined through an error function of the THz wave phase delay. Experiments were conducted on a polytetrafluoroethylene (PTFE) specimen, and reasonably good agreement was found with measurements using traditional strain gauges. The presented results validate the effectiveness of the proposed method, which could further be used in nondestructive testing of a wide range of optically opaque materials.
Procop, Mathias; Hodoroaba, Vasile-Dan; Terborg, Ralf; Berger, Dirk
2016-12-01
A method is proposed to determine the effective detector area of energy-dispersive X-ray spectrometers (EDS). Nowadays, detectors are available with nominal areas ranging from 10 up to 150 mm2. However, in most cases it remains unknown whether this nominal area coincides with the "net active sensor area" that should be given according to the related standard ISO 15632, or with some other area of the detector device. Moreover, the specific geometry of the EDS installation may further reduce a given detector area. The proposed method can be applied to most scanning electron microscope/EDS configurations. The basic idea is to compare the measured count rate with the count rate expected from the known X-ray yields of copper, titanium, or silicon. The method was successfully tested on three detectors with known effective area and applied further to seven spectrometers from different manufacturers. In most cases the method gave an effective area smaller than the area given in the detector description.
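The count-rate comparison underlying this idea reduces to a small calculation. The sketch below is a simplified small-angle version with hypothetical parameter names, not the paper's actual calibration procedure.

```python
def effective_detector_area(count_rate, xray_yield_sr, electrons_per_s, distance_mm):
    """Estimate the effective detector area (mm^2) from a count-rate comparison.

    xray_yield_sr: known X-ray yield of the reference element (e.g. Cu, Ti, Si)
    in photons per incident electron per steradian; electrons_per_s: the beam
    current expressed as electrons per second.
    """
    # Solid angle actually subtended by the detector: measured rate divided by
    # the rate expected per steradian from the known yield.
    omega = count_rate / (xray_yield_sr * electrons_per_s)
    # Small-angle approximation: area = solid angle * distance^2.
    return omega * distance_mm ** 2
```

A detector whose inferred area falls short of the nominal value would indicate collimation or a smaller net active sensor area, as the abstract reports.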
Selective methods for polyphenols and sulphur dioxide determination in wines.
García-Guzmán, Juan J; Hernández-Artiga, María P; Palacios-Ponce de León, Lourdes; Bellido-Milla, Dolores
2015-09-01
A critical review of the methods recommended by international bodies and widely used in the winery industry and in research studies was performed. A laccase biosensor was applied to the selective determination of polyphenols in wines. The biosensor response was characterised; it responds mainly to o-diphenols, which are the principal polyphenols responsible for the stability and sensory qualities of wines. The spectrophotometric method for determining free and total sulphur dioxide recommended for beers was applied directly to wines. A sample of 14 red and white wines was analysed for the biosensor polyphenol index (IBP) and sulphur dioxide concentration (SO2). The antioxidant capacity by the ABTS(+) spectrophotometric method was also determined. A correlation study was performed to elucidate the influence of polyphenols and SO2 on wine stability. High correlation was found between IBP and antioxidant capacity, and low correlation between SO2 and antioxidant capacity. To evaluate the benefits of wine drinking, a new parameter (IBP/SO2) is proposed. Copyright © 2015 Elsevier Ltd. All rights reserved.
Guo, How-Ran
2011-10-20
Despite its limitations, ecological study design is widely applied in epidemiology. In most cases, adjustment for age is necessary, but different methods may lead to different conclusions. To compare three methods of age adjustment, a study on the associations between arsenic in drinking water and incidence of bladder cancer in 243 townships in Taiwan was used as an example. A total of 3068 cases of bladder cancer, including 2276 men and 792 women, were identified during a ten-year study period in the study townships. Three methods were applied to analyze the same data set on the ten-year study period. The first (Direct Method) applied direct standardization to obtain the standardized incidence rate and then used it as the dependent variable in the regression analysis. The second (Indirect Method) applied indirect standardization to obtain the standardized incidence ratio and used it as the dependent variable instead. The third (Variable Method) used the proportions of residents in different age groups as part of the independent variables in the multiple regression models. All three methods showed a statistically significant positive association between arsenic exposure above 0.64 mg/L and incidence of bladder cancer in men and women, but different results were observed for the other exposure categories. In addition, the risk estimates obtained by the different methods for the same exposure category all differed. Using an empirical example, the current study confirmed the argument previously made by other researchers that, whereas the three methods of age adjustment may lead to different conclusions, only the third approach can obtain unbiased estimates of the risks. The third method can also generate estimates of the risk associated with each age group, which the other two are unable to evaluate directly.
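The Direct and Indirect Methods of age adjustment can be sketched numerically as follows; this is a minimal numpy illustration with toy inputs and illustrative names, not the study's Taiwanese data.

```python
import numpy as np

def direct_standardized_rate(cases, person_years, std_pop):
    # Direct Method: age-specific rates weighted by a standard
    # population's age structure.
    rates = np.asarray(cases, dtype=float) / np.asarray(person_years, dtype=float)
    w = np.asarray(std_pop, dtype=float)
    return float(np.sum(rates * w) / np.sum(w))

def standardized_incidence_ratio(cases, person_years, std_rates):
    # Indirect Method: observed cases divided by the cases expected
    # if standard age-specific rates applied to the local population.
    expected = float(np.sum(np.asarray(std_rates, dtype=float) *
                            np.asarray(person_years, dtype=float)))
    return float(np.sum(cases)) / expected
```

In the ecological regressions described above, the township-level output of either function would then serve as the dependent variable, whereas the Variable Method instead enters age-group proportions as covariates.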
NASA Astrophysics Data System (ADS)
Zhang, Yongliang; Day-Uei Li, David
2017-02-01
This comment clarifies that Poisson noise, rather than Gaussian noise, should be used to assess the performance of least-squares deconvolution with Laguerre expansion (LSD-LE) for analysing fluorescence lifetime imaging data obtained from time-resolved systems. We also corrected an equation in the paper. As the LSD-LE method is rapid and has the potential to be widely applied not only to diagnostics but also to wider bioimaging applications, it is desirable to have precise noise models and equations.
The Survey of Fires in Buildings. Third Report: The Use of Information Obtained From Fire Surveys
NASA Technical Reports Server (NTRS)
Silcock, A.
1973-01-01
The previous two reports in this series gave details of the general scope of the pilot exercise and the methods by which it was carried out. In addition, the nature of the information obtained was illustrated by preliminary analyses of the house and industrial fires surveyed, and some brief comments on the use of the information were made. This report indicates a method of assessing the nation-wide effects of applying conclusions drawn from the results of a limited number of surveys, and considers the use of the information for specific purposes.
Skvortsov, Valeriy; Ivannikov, Alexander; Tikunov, Dimitri; Stepanenko, Valeriy; Borysheva, Natalie; Orlenko, Sergey; Nalapko, Mikhail; Hoshi, Masaharu
2006-02-01
General aspects of applying the method of retrospective dose estimation by electron paramagnetic resonance spectroscopy of human tooth enamel (EPR dosimetry) to the population residing in the vicinity of the Semipalatinsk nuclear test site are analyzed and summarized. The analysis is based on the results obtained during 20 years of investigations conducted in the Medical Radiological Research Center regarding the development and practical application of this method for wide-scale dosimetrical investigation of populations exposed to radiation after the Chernobyl accident and other radiation accidents.
NASA Technical Reports Server (NTRS)
Lo, Ching F.
1999-01-01
Radial Basis Function Networks and Back Propagation Neural Networks have been integrated with Multiple Linear Regression to map nonlinear response surfaces over a wide range of independent variables in the process of the Modern Design of Experiments. The integrated method is capable of estimating precision intervals, including confidence and prediction intervals. The power of the innovative method has been demonstrated by applying it to a set of wind tunnel test data in the construction of a response surface and the estimation of precision intervals.
Stable isotope dimethyl labelling for quantitative proteomics and beyond
Hsu, Jue-Liang; Chen, Shu-Hui
2016-01-01
Stable-isotope reductive dimethylation, a cost-effective, simple, robust, reliable and easy-to- multiplex labelling method, is widely applied to quantitative proteomics using liquid chromatography-mass spectrometry. This review focuses on biological applications of stable-isotope dimethyl labelling for a large-scale comparative analysis of protein expression and post-translational modifications based on its unique properties of the labelling chemistry. Some other applications of the labelling method for sample preparation and mass spectrometry-based protein identification and characterization are also summarized. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644970
Verhulst, Brad
2016-01-01
P values have become the scapegoat for a wide variety of problems in science. P values are generally over-emphasized, often incorrectly applied, and in some cases even abused. However, alternative methods of hypothesis testing will likely fall victim to the same criticisms currently leveled at P values if more fundamental changes are not made in the research process. Increasing the general level of statistical literacy and enhancing training in statistical methods provide a potential avenue for identifying, correcting, and preventing erroneous conclusions from entering the academic literature and for improving the general quality of patient care. PMID:28366961
HITS-CLIP yields genome-wide insights into brain alternative RNA processing
NASA Astrophysics Data System (ADS)
Licatalosi, Donny D.; Mele, Aldo; Fak, John J.; Ule, Jernej; Kayikci, Melis; Chi, Sung Wook; Clark, Tyson A.; Schweitzer, Anthony C.; Blume, John E.; Wang, Xuning; Darnell, Jennifer C.; Darnell, Robert B.
2008-11-01
Protein-RNA interactions have critical roles in all aspects of gene expression. However, applying biochemical methods to understand such interactions in living tissues has been challenging. Here we develop a genome-wide means of mapping protein-RNA binding sites in vivo, by high-throughput sequencing of RNA isolated by crosslinking immunoprecipitation (HITS-CLIP). HITS-CLIP analysis of the neuron-specific splicing factor Nova revealed extremely reproducible RNA-binding maps in multiple mouse brains. These maps provide genome-wide in vivo biochemical footprints confirming the previous prediction that the position of Nova binding determines the outcome of alternative splicing; moreover, they are sufficiently powerful to predict Nova action de novo. HITS-CLIP revealed a large number of Nova-RNA interactions in 3' untranslated regions, leading to the discovery that Nova regulates alternative polyadenylation in the brain. HITS-CLIP, therefore, provides a robust, unbiased means to identify functional protein-RNA interactions in vivo.
BiQ Analyzer HT: locus-specific analysis of DNA methylation by high-throughput bisulfite sequencing
Lutsik, Pavlo; Feuerbach, Lars; Arand, Julia; Lengauer, Thomas; Walter, Jörn; Bock, Christoph
2011-01-01
Bisulfite sequencing is a widely used method for measuring DNA methylation in eukaryotic genomes. The assay provides single-base pair resolution and, given sufficient sequencing depth, its quantitative accuracy is excellent. High-throughput sequencing of bisulfite-converted DNA can be applied either genome wide or targeted to a defined set of genomic loci (e.g. using locus-specific PCR primers or DNA capture probes). Here, we describe BiQ Analyzer HT (http://biq-analyzer-ht.bioinf.mpi-inf.mpg.de/), a user-friendly software tool that supports locus-specific analysis and visualization of high-throughput bisulfite sequencing data. The software facilitates the shift from time-consuming clonal bisulfite sequencing to the more quantitative and cost-efficient use of high-throughput sequencing for studying locus-specific DNA methylation patterns. In addition, it is useful for locus-specific visualization of genome-wide bisulfite sequencing data. PMID:21565797
ERIC Educational Resources Information Center
Norton, Helen Rich
1917-01-01
Vocational training, as a part of the great movement for industrial betterment is now widely recognized as an advantageous measure for both the worker and the industry, but it is not many years since such applied education was looked upon with disfavor by employers and employees alike. This report will deal specifically with the development of…
Two Paradoxes in Linear Regression Analysis.
Feng, Ge; Peng, Jing; Tu, Dongke; Zheng, Julia Z; Feng, Changyong
2016-12-25
Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection.
Urban air quality estimation study, phase 1
NASA Technical Reports Server (NTRS)
Diamante, J. M.; Englar, T. S., Jr.; Jazwinski, A. H.
1976-01-01
Possibilities are explored for applying estimation theory to the analysis, interpretation, and use of air quality measurements in conjunction with simulation models to provide a cost effective method of obtaining reliable air quality estimates for wide urban areas. The physical phenomenology of real atmospheric plumes from elevated localized sources is discussed. A fluctuating plume dispersion model is derived. Individual plume parameter formulations are developed along with associated a priori information. Individual measurement models are developed.
1988-07-18
million dollars from a U.S. aircraft company, for the sale of F-20 fighters to the Korean air force, was handed over to former President Chon by a...the labor union at the Hyundai Engineering and Construction Company and the sit-in by coal miners in Sabuk was launched. Also, the floor leader...engage in a wide-ranging review, therefore leaving behind a fear that it had applied a rough-and-ready method. In addition, administrative reform at
Traveling and Standing Waves in Coupled Pendula and Newton's Cradle
NASA Astrophysics Data System (ADS)
García-Azpeitia, Carlos
2016-12-01
The existence of traveling and standing waves is investigated for chains of coupled pendula with periodic boundary conditions. The results are proven by applying topological methods to subspaces of symmetric solutions. The main advantage of this approach is that only properties of the linearized forces are required, which makes it possible to cover a wide range of models such as Newton's cradle, the Fermi-Pasta-Ulam lattice, and the Toda lattice.
HYPOTHESIS SETTING AND ORDER STATISTIC FOR ROBUST GENOMIC META-ANALYSIS.
Song, Chi; Tseng, George C
2014-01-01
Meta-analysis techniques have been widely developed and applied in genomic applications, especially for combining multiple transcriptomic studies. In this paper, we propose an order statistic of p-values (the r-th ordered p-value, rOP) across combined studies as the test statistic. We illustrate different hypothesis settings that detect gene markers differentially expressed (DE) "in all studies", "in the majority of studies", or "in one or more studies", and identify rOP as a suitable method for detecting DE genes "in the majority of studies". We develop methods to estimate the parameter r in rOP for real applications. Statistical properties such as its asymptotic behavior and a one-sided testing correction for detecting markers of concordant expression changes are explored. Power calculations and simulations show better performance of rOP compared to the classical Fisher's method, Stouffer's method, minimum p-value method and maximum p-value method under the focused hypothesis setting. Theoretically, rOP is connected to the naïve vote counting method and can be viewed as a generalized form of vote counting with better statistical properties. The method is applied to three microarray meta-analysis examples including major depressive disorder, brain cancer and diabetes. The results demonstrate that rOP is a more generalizable, robust and sensitive statistical framework for detecting disease-related markers.
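The rOP statistic itself is simple to compute. Below is a minimal sketch assuming independent uniform p-values under the null, so the r-th order statistic of n p-values follows Beta(r, n-r+1), whose CDF has a binomial closed form; function names are illustrative.

```python
import math
import numpy as np

def rop_test(pvals, r):
    # pvals: genes x studies matrix of per-study p-values.
    p = np.sort(np.asarray(pvals, dtype=float), axis=1)
    n = p.shape[1]
    stat = p[:, r - 1]  # r-th smallest p-value per gene
    # Under the null (independent uniforms), P(U_(r) <= x) equals the
    # probability that at least r of n uniforms fall below x.
    def null_cdf(x):
        return sum(math.comb(n, j) * x ** j * (1.0 - x) ** (n - j)
                   for j in range(r, n + 1))
    pval = np.array([null_cdf(x) for x in stat])
    return stat, pval
```

Choosing r near n/2 targets the "majority of studies" setting the abstract describes; r = 1 recovers the minimum p-value method and r = n the maximum p-value method.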
Laser projection positioning of spatial contour curves via a galvanometric scanner
NASA Astrophysics Data System (ADS)
Tu, Junchao; Zhang, Liyan
2018-04-01
The technology of laser projection positioning is widely applied in advanced manufacturing fields (e.g. composite plying, part location and installation). To exploit it fully, a laser projection positioning (LPP) system is designed and implemented. Firstly, the LPP system is built from a laser galvanometric scanning (LGS) system and a binocular vision system. The system model is then constructed by applying a single-hidden-layer feed-forward neural network (SLFN). Secondly, the LGS system and the binocular system, which are mutually independent, are integrated through a data-driven calibration method based on the extreme learning machine (ELM) algorithm. Finally, a projection positioning method is proposed within the framework of the calibrated SLFN system model. A well-designed experiment is conducted to verify the viability and effectiveness of the proposed system. In addition, the accuracy of projection positioning is evaluated, showing that the LPP system achieves a good localization effect.
Li, Dan; Hu, Xiaoguang
2017-03-01
Because of the high availability requirements from weapon equipment, an in-depth study has been conducted on the real-time fault-tolerance of the widely applied Compact PCI (CPCI) bus measurement and control system. A redundancy design method that uses heartbeat detection to connect the primary and alternate devices has been developed. To address the low successful execution rate and relatively large waste of time slices in the primary version of the task software, an improved algorithm for real-time fault-tolerant scheduling is proposed based on the Basic Checking available time Elimination idle time (BCE) algorithm, applying a single-neuron self-adaptive proportion sum differential (PSD) controller. The experimental validation results indicate that this system has excellent redundancy and fault-tolerance, and the newly developed method can effectively improve the system availability. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Baumeister, Joseph F.
1990-01-01
Analysis of energy emitted from simple or complex cavity designs can lead to intricate solutions due to nonuniform radiosity and irradiation within a cavity. A numerical ray tracing technique was applied to simulate radiation propagating within and from various cavity designs. To obtain the energy balance relationships between isothermal and nonisothermal cavity surfaces and space, the computer code NEVADA was utilized for its statistical technique applied to numerical ray tracing. The analysis method was validated by comparing results with known theoretical and limiting solutions, and the electrical resistance network method. In general, for nonisothermal cavities the performance (apparent emissivity) is a function of cylinder length-to-diameter ratio, surface emissivity, and cylinder surface temperatures. The extent of nonisothermal conditions in a cylindrical cavity significantly affects the overall cavity performance. Results are presented over a wide range of parametric variables for use as a possible design reference.
Molecular epidemiology for vector research on leishmaniasis.
Kato, Hirotomo; Gomez, Eduardo A; Cáceres, Abraham G; Uezato, Hiroshi; Mimori, Tatsuyuki; Hashiguchi, Yoshihisa
2010-03-01
Leishmaniasis is a protozoan disease caused by the genus Leishmania transmitted by female phlebotomine sand flies. Surveillance of the prevalence of Leishmania and responsive vector species in endemic and surrounding areas is important for predicting the risk and expansion of the disease. Molecular biological methods are now widely applied to epidemiological studies of infectious diseases including leishmaniasis. These techniques are used to detect natural infections of sand fly vectors with Leishmania protozoa and are becoming powerful tools due to their sensitivity and specificity. Recently, genetic analyses have been performed on sand fly species and genotyping using PCR-RFLP has been applied to the sand fly taxonomy. In addition, a molecular mass screening method has been established that enables both sand fly species and natural leishmanial infections to be identified simultaneously in hundreds of sand flies with limited effort. This paper reviews recent advances in the study of sand flies, vectors of leishmaniasis, using molecular biological approaches.
Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting
Ming-jun, Deng; Shi-ru, Qu
2015-01-01
Traffic flow is widely recognized as an important parameter for road traffic state forecasting. Fuzzy state transform and Kalman filter (KF) methods have been applied in this field separately. However, studies show that the former has good performance in forecasting the trend of traffic state variation but always involves several numerical errors, while the latter is good at numerical forecasting but is deficient in expressing time hysteresis. This paper proposes an approach that combines the fuzzy state transform and KF forecasting models. Considering the advantages of the two models, a weighted combination model is proposed, in which the combination weight is optimized dynamically by minimizing the sum of squared forecasting errors. Real detection data are used to test the efficiency. Results indicate that the method performs well in short-term traffic forecasting. PMID:26779258
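For two forecasts, minimizing the sum of squared errors over a single combination weight has a closed-form solution. The sketch below is an illustrative simplification of such a weighted combination, not the paper's exact dynamic scheme.

```python
import numpy as np

def optimal_weight(obs, f1, f2):
    # Least-squares weight for combining two forecast series:
    # minimize sum (obs - (w*f1 + (1-w)*f2))^2 over w, clipped to [0, 1].
    d = np.asarray(f1, dtype=float) - np.asarray(f2, dtype=float)
    num = float(np.sum((np.asarray(obs, dtype=float) - np.asarray(f2, dtype=float)) * d))
    den = float(np.sum(d * d))
    w = num / den if den > 0 else 0.5
    return min(max(w, 0.0), 1.0)

def combine(w, f1, f2):
    # Combined forecast from the two component models.
    return w * np.asarray(f1, dtype=float) + (1.0 - w) * np.asarray(f2, dtype=float)
```

In a dynamic setting the weight would be re-estimated over a sliding window of recent observations rather than once over the whole series.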
Tailoring particle translocation via dielectrophoresis in pore channels
Tanaka, Shoji; Tsutsui, Makusu; Theodore, Hu; Yuhui, He; Arima, Akihide; Tsuji, Tetsuro; Doi, Kentaro; Kawano, Satoyuki; Taniguchi, Masateru; Kawai, Tomoji
2016-01-01
Understanding and controlling electrophoretic motions of nanoscopic objects in fluidic channels are a central challenge in developing nanopore technology for molecular analyses. Although progress has been made in slowing the translocation velocity to meet the requirement for electrical detections of analytes via picoampere current measurements, there exists no method useful for regulating particle flows in the transverse directions. Here, we report the use of dielectrophoresis to manipulate the single-particle passage through a solid-state pore. We created a trap field by applying AC voltage between electrodes embedded in a low-aspect-ratio micropore. We demonstrated a traffic control of particles to go through center or near side surface via the voltage frequency. We also found enhanced capture efficiency along with faster escaping speed of particles by virtue of the AC-mediated electroosmosis. This method is compatible with nanopore sensing and would be widely applied for reducing off-axis effects to achieve single-molecule identification. PMID:27527126
Molecular Epidemiology for Vector Research on Leishmaniasis
Kato, Hirotomo; Gomez, Eduardo A; Cáceres, Abraham G; Uezato, Hiroshi; Mimori, Tatsuyuki; Hashiguchi, Yoshihisa
2010-01-01
Leishmaniasis is a protozoan disease caused by the genus Leishmania transmitted by female phlebotomine sand flies. Surveillance of the prevalence of Leishmania and responsive vector species in endemic and surrounding areas is important for predicting the risk and expansion of the disease. Molecular biological methods are now widely applied to epidemiological studies of infectious diseases including leishmaniasis. These techniques are used to detect natural infections of sand fly vectors with Leishmania protozoa and are becoming powerful tools due to their sensitivity and specificity. Recently, genetic analyses have been performed on sand fly species and genotyping using PCR-RFLP has been applied to the sand fly taxonomy. In addition, a molecular mass screening method has been established that enables both sand fly species and natural leishmanial infections to be identified simultaneously in hundreds of sand flies with limited effort. This paper reviews recent advances in the study of sand flies, vectors of leishmaniasis, using molecular biological approaches. PMID:20617005
Analysis of cold worked holes for structural life extension
NASA Technical Reports Server (NTRS)
Wieland, David H.; Cutshall, Jon T.; Burnside, O. Hal; Cardinal, Joseph W.
1994-01-01
Cold working is widely used on aircraft to improve the fatigue life of fastener holes. This paper presents methods used by the authors to determine the percentage of cold working to apply and to analyze fatigue crack growth at cold worked fastener holes. An elastic, perfectly-plastic analysis of a thick-walled tube is used to determine the stress field during the cold working process and the residual stress field after the process is completed. The results of the elastic/plastic analysis are used to determine the amount of cold working to apply to a hole. The residual stress field is then used to perform a damage tolerance analysis of a crack growing out of a cold worked fastener hole. This analysis method is easily implemented in existing crack growth computer codes, so that cold worked holes can be used to extend the structural life of aircraft. Analytical results are compared to test data where appropriate.
Fuzzy State Transition and Kalman Filter Applied in Short-Term Traffic Flow Forecasting.
Deng, Ming-jun; Qu, Shi-ru
2015-01-01
Traffic flow is widely recognized as an important parameter for road traffic state forecasting. Fuzzy state transform and Kalman filter (KF) methods have been applied in this field separately. However, studies show that the former has good performance in forecasting the trend of traffic state variation but always involves several numerical errors, while the latter is good at numerical forecasting but is deficient in expressing time hysteresis. This paper proposes an approach that combines the fuzzy state transform and KF forecasting models. Considering the advantages of the two models, a weighted combination model is proposed, in which the combination weight is optimized dynamically by minimizing the sum of squared forecasting errors. Real detection data are used to test the efficiency. Results indicate that the method performs well in short-term traffic forecasting.
Direct application of Padé approximant for solving nonlinear differential equations.
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario
2014-01-01
This work presents a direct procedure for applying the Padé method to find approximate solutions of nonlinear differential equations. Moreover, we present some case studies showing the strength of the method in generating highly accurate rational approximate solutions compared to other semi-analytical methods. The types of nonlinear equations tested are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The highly accurate, handy approximations obtained by the direct application of the Padé method show the high potential of the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant avoids the prior application of an approximative method such as the Taylor series method, homotopy perturbation method, Adomian decomposition method, homotopy analysis method, or variational iteration method, among others, as tools to obtain a power series solution to post-treat with the Padé approximant. MSC classification: 34L30.
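For context, the classical [m/n] Padé approximant is computed from given series coefficients by solving a small linear system for the denominator; the paper's contribution is a direct procedure that avoids generating that power series with an auxiliary method first. The generic sketch below shows only the standard construction.

```python
import numpy as np

def pade(c, m, n):
    # [m/n] Pade approximant from Taylor coefficients c[0..m+n];
    # returns numerator a[0..m] and denominator b[0..n] with b[0] = 1.
    c = np.asarray(c, dtype=float)
    get = lambda i: c[i] if i >= 0 else 0.0
    # Denominator: sum_j b[j] * c[m+k-j] = 0 for k = 1..n.
    A = np.array([[get(m + k - j) for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([get(m + k) for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator by matching series terms: a[i] = sum_j b[j] * c[i-j].
    a = np.array([sum(b[j] * get(i - j) for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b
```

For exp(x) with coefficients [1, 1, 1/2], the [1/1] approximant is (1 + x/2)/(1 - x/2), already noticeably more accurate than the truncated series near x = 0.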
Eliseyev, Andrey; Aksenova, Tetiana
2016-01-01
In the current paper, decoding algorithms for motor-related BCI systems for continuous upper limb trajectory prediction are considered. Two methods for smooth prediction, namely Sobolev and Polynomial Penalized Multi-Way Partial Least Squares (PLS) regressions, are proposed. The methods are compared to the Multi-Way Partial Least Squares and Kalman filter approaches. The comparison demonstrated that the proposed methods combine the prediction accuracy of the PLS-family algorithms with the trajectory smoothness of the Kalman filter. In addition, the prediction delay is significantly lower for the proposed algorithms than for the Kalman filter approach. The proposed methods could be applied in a wide range of applications beyond neuroscience. PMID:27196417
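For readers unfamiliar with why a Kalman filter yields smooth but delayed trajectory estimates, here is a minimal scalar Kalman filter under an assumed random-walk state model. It is a generic illustration, not the authors' BCI decoder, and the noise variances are arbitrary.

```python
import numpy as np

def kalman_filter_1d(z, q=1e-3, r=1e-1):
    """Scalar random-walk Kalman filter.

    State model:  x_k = x_{k-1} + w,  w ~ N(0, q)
    Observation:  z_k = x_k + v,      v ~ N(0, r)
    """
    x, p = z[0], 1.0
    out = []
    for zk in z:
        p = p + q                      # predict: variance grows by q
        k = p / (p + r)                # Kalman gain
        x = x + k * (zk - x)           # update toward the new observation
        p = (1 - k) * p
        out.append(x)
    return np.array(out)
```

A small gain (q much smaller than r) averages over many past samples, which is exactly what produces both the smoothness and the lag discussed in the comparison.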
Development of fluorescent methods for DNA methyltransferase assay
NASA Astrophysics Data System (ADS)
Li, Yueying; Zou, Xiaoran; Ma, Fei; Tang, Bo; Zhang, Chun-yang
2017-03-01
DNA methylation modified by DNA methyltransferase (MTase) plays an important role in regulating gene transcription, cell growth and proliferation. Aberrant DNA MTase activity may lead to a variety of human diseases including cancers. Therefore, accurate and sensitive detection of DNA MTase activity is crucial to biomedical research, clinical diagnostics and therapy. However, conventional DNA MTase assays often suffer from labor-intensive operations and time-consuming procedures. Alternatively, fluorescent methods have the significant advantages of simplicity and high sensitivity, and have been widely applied for DNA MTase assay. In this review, we summarize the recent advances in the development of fluorescent methods for DNA MTase assay. These emerging methods include amplification-free and amplification-assisted assays. Moreover, we discuss the challenges and future directions of this area.
HMMBinder: DNA-Binding Protein Prediction Using HMM Profile Based Features.
Zaman, Rianon; Chowdhury, Shahana Yasmin; Rashid, Mahmood A; Sharma, Alok; Dehzangi, Abdollah; Shatabda, Swakkhar
2017-01-01
DNA-binding proteins often play important roles in various processes within the cell. Over the last decade, a wide range of classification algorithms and feature extraction techniques have been used to solve the DNA-binding protein prediction problem. In this paper, we propose a novel DNA-binding protein prediction method called HMMBinder. HMMBinder uses monogram and bigram features extracted from the HMM profiles of the protein sequences. To the best of our knowledge, this is the first application of HMM profile-based features to the DNA-binding protein prediction problem. We apply Support Vector Machines (SVM) as the classification technique in HMMBinder. Our method was tested on standard benchmark datasets. We experimentally show that our method outperforms the state-of-the-art methods found in the literature.
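One plausible reading of "monogram and bigram features extracted from HMM profiles" is sketched below, treating the profile as an L×20 matrix of per-position emission probabilities. The exact feature definition in HMMBinder may differ; the resulting 420-dimensional vector would then be fed to an SVM classifier.

```python
import numpy as np

def monogram_bigram_features(profile):
    """profile: (L, 20) array of per-position HMM emission probabilities.

    Monogram: per-column average over positions            -> 20 features
    Bigram:   averaged outer products of adjacent rows     -> 400 features
    """
    P = np.asarray(profile, dtype=float)
    mono = P.mean(axis=0)
    # sum over positions i of outer(P[i], P[i+1]), normalized by (L - 1)
    bi = np.einsum('ij,ik->jk', P[:-1], P[1:]) / (len(P) - 1)
    return np.concatenate([mono, bi.ravel()])   # 420-dimensional vector
```

Because the output length is fixed regardless of sequence length L, such features can be used directly with a fixed-input classifier like an SVM.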
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. 
However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
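The sub-cluster idea can be illustrated with a short sketch: split each large cluster into sub-clusters (e.g. groups of 5) and apply the one-way ANOVA estimator of the intraclass correlation coefficient to the result. This shows only the point estimator, not Smith's standard-error formula or the interval construction, and the function names are invented.

```python
import numpy as np

def anova_icc(clusters):
    """One-way ANOVA estimator of the ICC for equal-sized clusters."""
    y = np.array(clusters, dtype=float)       # shape (k clusters, m per cluster)
    k, m = y.shape
    grand = y.mean()
    msb = m * ((y.mean(axis=1) - grand) ** 2).sum() / (k - 1)   # between
    msw = ((y - y.mean(axis=1, keepdims=True)) ** 2).sum() / (k * (m - 1))
    return (msb - msw) / (msb + (m - 1) * msw)

def split_into_subclusters(cluster, size=5):
    """Divide one large cluster into consecutive sub-clusters of `size`,
    dropping any remainder that does not fill a full sub-cluster."""
    n = (len(cluster) // size) * size
    return [cluster[i:i + size] for i in range(0, n, size)]
```

With perfectly homogeneous clusters the estimator returns 1, and with independent observations it fluctuates around 0, matching the small-ICC regime the study targets.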
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structures. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structures beyond the Gaussian distribution. Basically, a copula is a function that couples a multivariate distribution function to its one-dimensional marginal distribution functions. In general, there are three methods to estimate a copula: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas such as the mirror-reflection kernel, the beta kernel, the transformation method, and the local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, albeit with variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
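A minimal sketch of the mirror-reflection kernel estimator mentioned above: each pseudo-observation is reflected about the edges of the unit square (nine copies in total) before applying a product Gaussian kernel, which reduces boundary bias. The bandwidth choice and the other surveyed estimators are omitted, and the implementation details here are assumptions, not taken from the article.

```python
import numpy as np

def mirror_reflection_copula_density(u, v, U, V, h=0.1):
    """Mirror-reflection kernel estimate of a copula density at (u, v).

    U, V: arrays of pseudo-observations in [0, 1]; h: bandwidth.
    Reflections of each margin: x, -x, and 2 - x (3 x 3 = 9 copies).
    """
    def K(t):
        return np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)
    reflections = [(1, 0), (-1, 0), (-1, 2)]     # x, -x, 2 - x
    total = 0.0
    for su, tu in reflections:
        for sv, tv in reflections:
            total += np.sum(K((u - (su * U + tu)) / h) *
                            K((v - (sv * V + tv)) / h))
    return total / (len(U) * h ** 2)
```

For independent uniform data the true copula density is 1 everywhere, so an estimate near the center of the square should be close to 1 for a reasonable bandwidth.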
Chen, C; Li, H; Zhou, X; Wong, S T C
2008-05-01
Image-based, high throughput genome-wide RNA interference (RNAi) experiments are increasingly carried out to facilitate the understanding of gene functions in intricate biological processes. Automated screening of such experiments generates a large number of images with great variations in image quality, which makes manual analysis unreasonably time-consuming. Therefore, effective techniques for automatic image analysis are urgently needed, in which segmentation is one of the most important steps. This paper proposes a fully automatic method for cell segmentation in genome-wide RNAi screening images. The method consists of two steps: nuclei segmentation and cytoplasm segmentation. Nuclei are extracted and labelled to initialize cytoplasm segmentation. Since the quality of RNAi images is rather poor, a novel scale-adaptive steerable filter is designed to enhance the image in order to extract the long and thin protrusions of spiky cells. Then, the constraint-factor GCBAC method and morphological algorithms are combined into an integrated method to segment tightly clustered cells. Compared with the results obtained using seeded watershed and with the ground truth, that is, manual labelling by experts of RNAi screening data, our method achieves higher accuracy. Compared with active contour methods, our method consumes much less time. These positive results indicate that the proposed method can be applied in automatic image analysis of multi-channel image screening data.
NASA Astrophysics Data System (ADS)
Xie, Yunfei; Li, Pei; Zhang, Jin; Wang, Heya; Qian, He; Yao, Weirong
2013-10-01
Azodicarbonamide is widely applied in the food industry as a new flour gluten fortifier in China, Canada, the United States, and some other countries; its metabolites, biurea and semicarbazide hydrochloride, are reaction products formed during baking. In this study, the IR, Raman and surface-enhanced Raman scattering (SERS) spectra of azodicarbonamide, biurea, and semicarbazide hydrochloride were studied, and vibrational bands were assigned on the basis of density functional theory (DFT) calculations. The calculated Raman spectra were in good agreement with the experimental Raman spectra. The SERS method coupled with active gold substrates was also applied for the detection of the three chemicals with pure water as solvent, with the limit of detection being as low as 10 μg/mL (less than 45 μg/mL). These results showed that azodicarbonamide and its metabolites can be detected by vibrational spectroscopic techniques, which might be applied as a powerful tool for the rapid detection of these species derived from agents added to flour.
CP-CHARM: segmentation-free image classification made accessible.
Uhlmann, Virginie; Singh, Shantanu; Carpenter, Anne E
2016-01-27
Automated classification using machine learning often relies on features derived from segmenting individual objects, which can be difficult to automate. WND-CHARM is a previously developed classification algorithm in which features are computed on the whole image, thereby avoiding the need for segmentation. The algorithm obtained encouraging results but requires considerable computational expertise to execute. Furthermore, some benchmark sets have been shown to be subject to confounding artifacts that overestimate classification accuracy. We developed CP-CHARM, a user-friendly image-based classification algorithm inspired by WND-CHARM in (i) its ability to capture a wide variety of morphological aspects of the image, and (ii) the absence of requirement for segmentation. In order to make such an image-based classification method easily accessible to the biological research community, CP-CHARM relies on the widely-used open-source image analysis software CellProfiler for feature extraction. To validate our method, we reproduced WND-CHARM's results and ensured that CP-CHARM obtained comparable performance. We then successfully applied our approach on cell-based assay data and on tissue images. We designed these new training and test sets to reduce the effect of batch-related artifacts. The proposed method preserves the strengths of WND-CHARM - it extracts a wide variety of morphological features directly on whole images thereby avoiding the need for cell segmentation, but additionally, it makes the methods easily accessible for researchers without computational expertise by implementing them as a CellProfiler pipeline. It has been demonstrated to perform well on a wide range of bioimage classification problems, including on new datasets that have been carefully selected and annotated to minimize batch effects. This provides for the first time a realistic and reliable assessment of the whole image classification strategy.
[Research on hyperspectral remote sensing in monitoring snow contamination concentration].
Tang, Xu-guang; Liu, Dian-wei; Zhang, Bai; Du, Jia; Lei, Xiao-chun; Zeng, Li-hong; Wang, Yuan-dong; Song, Kai-shan
2011-05-01
Contaminants in snow can be used to reflect regional and global environmental pollution caused by human activities. However, so far, research on the space-time monitoring of snow contamination concentration over wide ranges or in areas difficult for humans to reach is very scarce. In the present paper, based on simulated atmospheric deposition experiments, a spectroscopic method was applied to analyze the effect of different contamination concentrations on the snow reflectance spectra. An evaluation of snow contamination concentration (SCC) retrieval methods was then conducted using the characteristic index method (SDI), principal component analysis (PCA), a BP neural network and an RBF neural network, and the estimation performance of the four methods was compared. The results showed that the neural network models combined with hyperspectral remote sensing data could estimate the SCC well.
Research on the generation of the background with sea and sky in infrared scene
NASA Astrophysics Data System (ADS)
Dong, Yan-zhi; Han, Yan-li; Lou, Shu-li
2008-03-01
It is important for scene generation to keep the texture of infrared images in the simulation of anti-ship infrared imaging guidance. We studied the fractal method and applied it to infrared scene generation. We adopted the horizontal-vertical (HV) partition method to encode the original image. Based on the properties of infrared images with a sea-sky background, we took advantage of the Local Iterated Function System (LIFS) to decrease the computational complexity and enhance the processing rate. Some results are presented. They show that the fractal method keeps the texture of the infrared image well and can be widely used in infrared scene generation in the future.
Head-target tracking control of well drilling
NASA Astrophysics Data System (ADS)
Agzamov, Z. V.
2018-05-01
The method of directional drilling trajectory control for oil and gas wells using predictive models is considered in the paper. The developed method does not rely on optimization and therefore does not require high-performance computing. Nevertheless, it allows following the well-plan with high precision while taking process input saturation into account. The controller output is calculated both from the current target reference point of the well-plan and from a prediction of the well trajectory using the analytical model. This method allows following a well-plan not only in angular but also in Cartesian coordinates. Simulation of the control system has confirmed high precision and good operating performance under a wide range of random disturbances.
On the convergence of a linesearch based proximal-gradient method for nonconvex optimization
NASA Astrophysics Data System (ADS)
Bonettini, S.; Loris, I.; Porta, F.; Prato, M.; Rebegoldi, S.
2017-05-01
We consider a variable metric linesearch based proximal gradient method for the minimization of the sum of a smooth, possibly nonconvex function plus a convex, possibly nonsmooth term. We prove convergence of this iterative algorithm to a critical point if the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain, under the assumption that a limit point exists. The proposed method is applied to a wide collection of image processing problems, and our numerical tests show that the algorithm proves to be flexible, robust and competitive when compared to recently proposed approaches able to address the optimization problems arising in the considered applications.
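A stripped-down (Euclidean-metric) version of a linesearch-based proximal gradient iteration is sketched below for the l1-regularized quadratic case. The paper's variable-metric scheme and Kurdyka-Łojasiewicz analysis are beyond this illustration; the backtracking test used here is the standard quadratic upper-bound condition.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||x||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def proximal_gradient(grad_f, f, prox, x0, lam, step0=1.0,
                      tol=1e-8, max_iter=500):
    """Proximal gradient with backtracking linesearch on the step size."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_f(x)
        step = step0
        while True:
            z = prox(x - step * g, step * lam)
            # sufficient-decrease check: quadratic upper bound of f at x
            if f(z) <= f(x) + g @ (z - x) + (z - x) @ (z - x) / (2 * step):
                break
            step *= 0.5
        if np.linalg.norm(z - x) < tol:
            return z
        x = z
    return x
```

On the toy problem min_x 0.5 (x - 3)^2 + |x|, the exact minimizer is the soft-thresholded value 2, which the iteration reaches in a couple of steps.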
Han, Lu; Benseler, Susanne M; Tyrrell, Pascal N
2018-05-01
Rheumatic diseases encompass a wide range of conditions caused by inflammation and dysregulation of the immune system resulting in organ damage. Research in these heterogeneous diseases benefits from multivariate methods. The aim of this review was to describe and evaluate current literature in rheumatology regarding cluster analysis and correspondence analysis. A systematic review showed an increase in studies making use of these 2 methods. However, standardization in how these methods are applied and reported is needed. Researcher expertise was determined to be the main barrier to considering these approaches, whereas education and collaborating with a biostatistician were suggested ways forward. Copyright © 2018 Elsevier Inc. All rights reserved.
Assurance of COTS Boards for Space Flight. Part 1
NASA Technical Reports Server (NTRS)
Plante, Jeannette; Helmold, Norm; Eveland, Clay
1998-01-01
Space flight hardware and software designers are increasingly turning to commercial-off-the-shelf (COTS) products in hopes of meeting the demands imposed on them by projects with short development cycle times. The Technology Validation Assurance (TVA) team at NASA GSFC has embarked on applying a method for inserting COTS hardware into the Spartan 251 spacecraft. This method includes Procurement, Characterization, Ruggedization/Remediation and Verification Testing process steps, which are intended to increase the user's confidence in the hardware's ability to function in the intended application for the required duration. As this method is refined with use, it has the potential to become a benchmark for industry-wide use of COTS in high-reliability systems.
Leakey, Tatiana I; Zielinski, Jerzy; Siegfried, Rachel N; Siegel, Eric R; Fan, Chun-Yang; Cooney, Craig A
2008-06-01
DNA methylation at cytosines is a widely studied epigenetic modification. Methylation is commonly detected using bisulfite modification of DNA followed by PCR and additional techniques such as restriction digestion or sequencing. These additional techniques are either laborious, require specialized equipment, or are not quantitative. Here we describe a simple algorithm that yields quantitative results from analysis of conventional four-dye-trace sequencing. We call this method Mquant and we compare it with the established laboratory method of combined bisulfite restriction assay (COBRA). This analysis of sequencing electropherograms provides a simple, easily applied method to quantify DNA methylation at specific CpG sites.
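The core quantity in an electropherogram-based quantification like this reduces to a peak-height ratio at each CpG site after bisulfite conversion. The snippet below shows that ratio only; it is a simplified sketch, not the published Mquant algorithm.

```python
def methylation_fraction(c_peak, t_peak):
    """Fraction methylated at a CpG from bisulfite-sequencing peak heights.

    After bisulfite conversion, unmethylated C reads as T, so the
    methylated fraction is estimated from the C and T peak heights
    observed at the same position of the four-dye trace.
    """
    if c_peak + t_peak == 0:
        raise ValueError("no signal at this position")
    return c_peak / (c_peak + t_peak)
```

For example, a C peak of 80 units against a T peak of 20 units would indicate roughly 80% methylation at that site, before any trace normalization.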
Crystallographic Characterization of Extraterrestrial Materials by Energy-Scanning X-ray Diffraction
NASA Technical Reports Server (NTRS)
Hagiya, Kenji; Mikouchi, Takashi; Ohsumi, Kazumasa; Terada, Yasuko; Yagi, Naoto; Komatsu, Mutsumi; Yamaguchi, Shoki; Hirata, Arashi; Kurokawa, Ayaka; Zolensky, Michael E. (Principal Investigator)
2016-01-01
We have continued our long-term project using X-ray diffraction to characterize a wide range of extraterrestrial samples. The stationary-sample method with polychromatic X-rays is advantageous because the irradiated area of the sample is always the same and fixed, meaning that all diffraction spots occur from the same area of the sample. However, unit cell parameters cannot be directly obtained by this method, though they are very important for mineral identification and for the determination of crystal structures. In order to obtain the cell parameters even with the stationary-sample method, we apply energy scanning of a monochromatic SR micro-beam at SPring-8.
Geochemical and mineralogical methods of prospecting for mineral deposits
Fersman, A. Ye; Borovik, S. A.; Gorshkov, G.V.; Popov, S.D.; Sosedko, A.F.; Hartsock, Lydia; Pierce, A.P.
1952-01-01
Fersman's book "Geochemical and mineralogical methods of prospecting for mineral deposits" (Geokhimicheskiye i mineralogicheskiye metody poiskov poleznykh iskopayemykh) covers all petrographic, mineralogical, and geochemical techniques that are used either directly or indirectly in mineral exploration. Chapter IV is of particular interest because it describes certain geochemical methods and principles that have not been widely applied outside of the Soviet Union. The original contained a number of photographs that have been omitted; the titles of the photographs are given in the body of the text. Wherever possible, bibliographic references have been checked, and the full titles given. References given in footnotes in the original have been collected and added at the end of each section as a bibliography.
Bioelectrical Impedance Methods for Noninvasive Health Monitoring: A Review
Bera, Tushar Kanti
2014-01-01
Under alternating electrical excitation, biological tissues produce a complex electrical impedance which depends on tissue composition, structure, health status, and the applied signal frequency; hence bioelectrical impedance methods can be utilized for noninvasive tissue characterization. As the impedance responses of these tissue parameters vary with the frequency of the applied signal, impedance analysis conducted over a wide frequency band provides more information about the tissue interior, which helps us to better understand the anatomy, physiology, and pathology of biological tissues. Over the past few decades, a number of impedance-based noninvasive tissue characterization techniques such as bioelectrical impedance analysis (BIA), electrical impedance spectroscopy (EIS), electrical impedance plethysmography (IPG), impedance cardiography (ICG), and electrical impedance tomography (EIT) have been proposed, and much research has been conducted on these methods for noninvasive tissue characterization and disease diagnosis. In this paper the BIA, EIS, IPG, ICG, and EIT techniques and their applications in different fields are reviewed and a technical perspective on these impedance methods is presented. The working principles, applications, merits, and demerits of these methods are discussed in detail along with other technical issues, followed by present status and future trends. PMID:27006932
Interactive design optimization of magnetorheological-brake actuators using the Taguchi method
NASA Astrophysics Data System (ADS)
Erol, Ozan; Gurocak, Hakan
2011-10-01
This research explored an optimization method that would automate the process of designing a magnetorheological (MR)-brake but still keep the designer in the loop. MR-brakes apply resistive torque by increasing the viscosity of an MR fluid inside the brake. This electronically controllable brake can provide a very large torque-to-volume ratio, which is very desirable for an actuator. However, the design process is quite complex and time consuming due to many parameters. In this paper, we adapted the popular Taguchi method, widely used in manufacturing, to the problem of designing a complex MR-brake. Unlike other existing methods, this approach can automatically identify the dominant parameters of the design, which reduces the search space and the time it takes to find the best possible design. While automating the search for a solution, it also lets the designer see the dominant parameters and make choices to investigate only their interactions with the design output. The new method was applied for re-designing MR-brakes. It reduced the design time from a week or two down to a few minutes. Also, usability experiments indicated significantly better brake designs by novice users.
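To illustrate how a Taguchi-style analysis identifies dominant design parameters, the sketch below computes a larger-the-better signal-to-noise ratio per experimental run and a main-effect range per factor from an orthogonal-array experiment. The array layout and numbers are invented for illustration, not the MR-brake design variables of the paper.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio for replicated responses y."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y ** 2))

def factor_ranges(levels, sn):
    """Main-effect range per factor: spread of mean S/N across its levels.

    levels: (runs, factors) array of level indices; sn: S/N per run.
    A larger range indicates a more dominant factor.
    """
    levels, sn = np.asarray(levels), np.asarray(sn, dtype=float)
    ranges = []
    for f in range(levels.shape[1]):
        means = [sn[levels[:, f] == lv].mean()
                 for lv in np.unique(levels[:, f])]
        ranges.append(max(means) - min(means))
    return np.array(ranges)
```

Ranking factors by range is what allows the search space to be pruned to the dominant parameters before investigating their interactions, as the design procedure above describes.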
Matsumoto, Hiroshi; Saito, Fumiyo; Takeyoshi, Masahiro
2015-12-01
Recently, the development of several gene expression-based prediction methods has been attempted in the field of toxicology. CARCINOscreen® is a gene expression-based screening method to predict, with high accuracy, the carcinogenicity of chemicals which target the liver. In this study, we investigated the applicability of the gene expression-based screening method to SD and Wistar rats by using CARCINOscreen®, originally developed with F344 rats, with two carcinogens, 2,4-diaminotoluene and thioacetamide, and two non-carcinogens, 2,6-diaminotoluene and sodium benzoate. After a 28-day repeated-dose test was conducted with each chemical in SD and Wistar rats, microarray analysis was performed using total RNA extracted from each liver. The obtained gene expression data were applied to CARCINOscreen®. Predictive scores obtained by CARCINOscreen® for the known carcinogens were > 2 in all strains of rats, while the non-carcinogens gave prediction scores below 0.5. These results suggest that the gene expression-based screening method CARCINOscreen® can be applied to SD and Wistar rats, strains widely used in toxicological studies, by setting an appropriate boundary line of the prediction score to classify chemicals into carcinogens and non-carcinogens.
De Geeter, Nele; Crevecoeur, Guillaume; Dupre, Luc
2011-02-01
In many important bioelectromagnetic problem settings, eddy-current simulations are required. Examples are the reduction of eddy-current artifacts in magnetic resonance imaging and techniques, whereby the eddy currents interact with the biological system, like the alteration of the neurophysiology due to transcranial magnetic stimulation (TMS). TMS has become an important tool for the diagnosis and treatment of neurological diseases and psychiatric disorders. A widely applied method for simulating the eddy currents is the impedance method (IM). However, this method has to contend with an ill conditioned problem and consequently a long convergence time. When dealing with optimal design problems and sensitivity control, the convergence rate becomes even more crucial since the eddy-current solver needs to be evaluated in an iterative loop. Therefore, we introduce an independent IM (IIM), which improves the conditionality and speeds up the numerical convergence. This paper shows how IIM is based on IM and what are the advantages. Moreover, the method is applied to the efficient simulation of TMS. The proposed IIM achieves superior convergence properties with high time efficiency, compared to the traditional IM and is therefore a useful tool for accurate and fast TMS simulations.
Hossain, Ahmed; Beyene, Joseph
2014-01-01
This article compares baseline, average, and longitudinal data analysis methods for identifying genetic variants in a genome-wide association study using the Genetic Analysis Workshop 18 data. We apply methods that include (a) linear mixed models with baseline measures, (b) random-intercept linear mixed models with mean measures as the outcome, and (c) random-intercept linear mixed models with longitudinal measurements. In the linear mixed models, covariates are included as fixed effects, whereas relatedness among individuals is incorporated as the variance-covariance structure of the random effect for the individuals. The overall strategy of applying linear mixed models to decorrelate the data is based on Aulchenko et al.'s GRAMMAR. By analyzing systolic and diastolic blood pressure, which are used separately as outcomes, we compare the 3 methods in identifying a known genetic variant associated with blood pressure from chromosome 3 and simulated phenotype data. We also analyze the real phenotype data to illustrate the methods. We conclude that, among the methods, the linear mixed model with longitudinal measurements of diastolic blood pressure is the most accurate at identifying the known single-nucleotide polymorphism, but linear mixed models with baseline measures perform best with systolic blood pressure as the outcome.
Electron-helium S-wave model benchmark calculations. I. Single ionization and single excitation
NASA Astrophysics Data System (ADS)
Bartlett, Philip L.; Stelbovics, Andris T.
2010-02-01
A full four-body implementation of the propagating exterior complex scaling (PECS) method [J. Phys. B 37, L69 (2004)] is developed and applied to electron impact on helium in an S-wave model. Time-independent solutions to the Schrödinger equation are found numerically in coordinate space over a wide range of energies and used to evaluate total and differential cross sections for a complete set of three- and four-body processes with benchmark precision. With this model we demonstrate the suitability of the PECS method for the complete solution of the full electron-helium system. Here we detail the theoretical and computational development of the four-body PECS method and present results for the three-body channels: single excitation and single ionization. Four-body cross sections are presented in the sequel to this article [Phys. Rev. A 81, 022716 (2010)]. The calculations reveal structure in the total and energy-differential single-ionization cross sections for excited-state targets that is due to interference from autoionization channels and is evident over a wide range of incident electron energies.
[Male hormonal contraception: past, present, future].
Pásztor, Norbert; Hegyi, Borbála Eszter; Badó, Attila; Németh, Gábor
2017-11-01
In certain regions of the world the enormous rate of population growth raises economic and public health concerns, and widely accessible contraceptive methods would be desirable. In contrast, in other countries the use of effective contraception is a question of individual preference. Today, most reliable contraceptive methods are applied by women, while the options for male methods are quite limited. It is well known that a significant portion of pregnancies are still unplanned, and several data sources reveal men's willingness to take part in family planning. Based on these needs, remarkable efforts have been made to develop a suitable hormonal contraceptive agent for men. With the exogenous suppression of follicle-stimulating hormone and luteinizing hormone secretion, the inhibition of testicular testosterone production and of spermatogenesis can be achieved. In the beginning, testosterone derivatives or testosterone-progestin combinations were administered; later, synthetic androgen agents were developed. Despite these efforts, there is unfortunately no safe, widely feasible male hormonal contraception to date, but this goal may be achieved in the future by overcoming the key hurdles. Orv Hetil. 2017; 158(46): 1819-1830.
Molecular dynamics simulations and novel drug discovery.
Liu, Xuewei; Shi, Danfeng; Zhou, Shuangyan; Liu, Hongli; Liu, Huanxiang; Yao, Xiaojun
2018-01-01
Molecular dynamics (MD) simulations can provide not only plentiful dynamical structural information on biomacromolecules but also a wealth of energetic information about protein and ligand interactions. Such information is very important to understanding the structure-function relationship of the target and the essence of protein-ligand interactions and to guiding the drug discovery and design process. Thus, MD simulations have been applied widely and successfully in each step of modern drug discovery. Areas covered: In this review, the authors review the applications of MD simulations in novel drug discovery, including the pathogenic mechanisms of amyloidosis diseases, virtual screening and the interaction mechanisms between drugs and targets. Expert opinion: MD simulations have been used widely in investigating the pathogenic mechanisms of diseases caused by protein misfolding, in virtual screening, and in investigating drug resistance mechanisms caused by mutations of the target. These issues are very difficult to solve by experimental methods alone. Thus, in the future, MD simulations will have wider application with the further improvement of computational capacity and the development of better sampling methods and more accurate force fields together with more efficient analysis methods.
Signal Detection and Monitoring Based on Longitudinal Healthcare Data
Suling, Marc; Pigeot, Iris
2012-01-01
Post-marketing detection and surveillance of potential safety hazards are crucial tasks in pharmacovigilance. To uncover such safety risks, a wide set of techniques has been developed for spontaneous reporting data and, more recently, for longitudinal data. This paper gives a broad overview of the signal detection process and introduces some types of data sources typically used. The most commonly applied signal detection algorithms are presented, covering simple frequentist methods like the proportional reporting ratio or the reporting odds ratio, more advanced Bayesian techniques for spontaneous and longitudinal data, e.g., the Bayesian Confidence Propagation Neural Network or the Multi-item Gamma-Poisson Shrinker, and methods developed for longitudinal data only, like the IC temporal pattern detection. Additionally, the problem of adjustment for underlying confounding is discussed and the most common strategies to automatically identify false-positive signals are addressed. A drug monitoring technique based on Wald's sequential probability ratio test is presented. For each method, a real-life application is given, and a wide set of literature for further reading is referenced. PMID:24300373
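As an example of the simplest frequentist screen mentioned above, the proportional reporting ratio can be computed from a 2×2 table of spontaneous reports. Signal thresholds (commonly PRR ≥ 2 together with case-count criteria) vary between implementations and are omitted here.

```python
def proportional_reporting_ratio(a, b, c, d):
    """PRR from a 2x2 table of spontaneous reports.

    a: reports with the drug of interest AND the event of interest
    b: reports with the drug, other events
    c: reports with other drugs AND the event
    d: reports with other drugs, other events
    """
    return (a / (a + b)) / (c / (c + d))
```

For instance, if 10 of 100 reports for a drug mention the event while only 10 of 900 reports for all other drugs do, the event is reported nine times more often with that drug.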
NASA Technical Reports Server (NTRS)
Bayliss, A.; Goldstein, C. I.; Turkel, E.
1984-01-01
The Helmholtz equation (-Δ - k²n²)u = 0, with a variable index of refraction n and a suitable radiation condition at infinity, serves as a model for a wide variety of wave propagation problems. A numerical algorithm was developed and a computer code implemented that can effectively solve this equation in the intermediate frequency range. The equation is discretized using the finite element method, thus allowing for the modeling of complicated geometries (including interfaces) and complicated boundary conditions. A global radiation boundary condition is imposed at the far-field boundary that is exact for an arbitrary number of propagating modes. The resulting large, non-self-adjoint system of linear equations with indefinite symmetric part is solved using the preconditioned conjugate gradient method applied to the normal equations. A new preconditioner is developed based on the multigrid method. This preconditioner is vectorizable and is extremely effective over a wide range of frequencies, provided the number of grid levels is reduced for large frequencies. A heuristic argument is given that indicates the superior convergence properties of this preconditioner.
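The solver described here applies conjugate gradients to the normal equations of a non-self-adjoint system. Omitting the multigrid preconditioner, the bare CGNR iteration can be sketched as follows (an illustrative sketch, not the authors' code):

```python
import numpy as np

def cgnr(A, b, tol=1e-12, maxiter=1000):
    """Conjugate gradients on the normal equations A^H A x = A^H b, applicable
    to non-self-adjoint systems such as discretized Helmholtz operators.
    Illustrative only: the paper adds a multigrid preconditioner on top."""
    x = np.zeros_like(b, dtype=complex)
    r = b - A @ x
    z = A.conj().T @ r                 # residual of the normal equations
    p = z.copy()
    zz = np.vdot(z, z).real
    for _ in range(maxiter):
        Ap = A @ p
        alpha = zz / np.vdot(Ap, Ap).real
        x = x + alpha * p
        r = r - alpha * Ap
        z = A.conj().T @ r
        zz_new = np.vdot(z, z).real
        if zz_new ** 0.5 < tol:
            break
        p = z + (zz_new / zz) * p
        zz = zz_new
    return x
```

Squaring the condition number is the price of CGNR, which is why an effective preconditioner matters so much for Helmholtz problems.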
Hardening parts by chrome plating in manufacture and repair
NASA Astrophysics Data System (ADS)
Astanin, V. K.; Pukhov, E. V.; Stekolnikov, Y. A.; Emtsev, V. V.; Golikova, O. A.
2018-03-01
In the engineering industry, galvanic coatings are widely used to prolong the service life of machines; they increase the strength of parts and their resistance to environmental influences, temperature and pressure drops, wear and fretting corrosion. Galvanic coatings have been widely applied in engineering, including agriculture, aircraft building, mining, construction, and electronics. The article focuses on the manufacturing methods of new agricultural machinery parts and the repair techniques of worn parts by chrome plating. The main attention is paid to non-stationary chromium deposition methods (pulsed and reverse-current modes) in low-concentration electrolytes, which make it possible to increase the reliability and durability of the hardened parts by controlling the conditions of electrocrystallization, that is, the directed formation of the structure and texture, thickness, roughness and microhardness of the chromium plating. Practical recommendations are given on the current and temperature regimes of chromium deposition and the composition of baths used for the restoration and hardening of machine parts. Moreover, the basic methods of removing machining allowances are analysed.
Fabrication of Polyvinylpyrrolidone Fibers by Means of Rotary Forcespinning Method
NASA Astrophysics Data System (ADS)
Andjani, D.; Sriyanti, I.; Fauzi, A.; Edikresnha, D.; Munir, M. M.; Khairurrijal
2018-05-01
Fibers made from polymer materials have been widely developed as carrier media of active ingredients in drug delivery systems. In this research, PVP polymer was chosen because of its wide and safe use in the medical field. The purpose of this study was to produce PVP fibers that can later be applied as carriers of active ingredients in drug delivery systems. The rotary forcespinning (RFS) method was chosen to shorten the production time and to overcome the limitations of the electrospinning method, such as the use of high voltage and dielectric solutions. The PVP solution was varied in several concentrations (8 wt%, 10 wt%, 12 wt%, 14 wt%, 16 wt%, and 18 wt%) to achieve the best fiber morphology. The morphology and the diameter of the fibers were analyzed using a digital microscope. The microscope images show that beaded fibers were formed when the concentration of polymer in the precursor solution was low. The number of beads decreased as the concentration of polymer increased. Bead-free fibers were fully formed above a certain polymer concentration.
Athermal design and analysis of glass-plastic hybrid lens
NASA Astrophysics Data System (ADS)
Yang, Jian; Cen, Zhaofeng; Li, Xiaotong
2018-01-01
With the rapid development of the security market, the glass-plastic hybrid lens has gradually become a choice for special requirements such as high imaging quality over a wide temperature range at low cost. The reduction of spherical aberration is achieved by using aspherical surfaces instead of increasing the number of lenses. Obviously, plastic aspherical lenses play a great role in cost reduction. However, the hybrid lens has a critical issue: the large thermal expansion coefficient of plastic causes focus shift and seriously degrades the imaging quality, making the hybrid lens highly sensitive to temperature changes. To ensure that the system operates normally over a wide temperature range, it is necessary to eliminate the influence of temperature on the hybrid lens system. A practical design method named the Athermal Material Map is summarized and verified by an athermal design example according to the design index. It includes the distribution of optical power and the selection of glass or plastic. The design result shows that the optical system has excellent imaging quality over a wide temperature range from -20 °C to 70 °C. The athermal design method in this paper is general and can be applied to optical systems with plastic aspherical surfaces.
Fundamental frequency estimation of singing voice
NASA Astrophysics Data System (ADS)
de Cheveigné, Alain; Henrich, Nathalie
2002-05-01
A method of fundamental frequency (F0) estimation recently developed for speech [de Cheveigné and Kawahara, J. Acoust. Soc. Am. (to be published)] was applied to singing voice. An electroglottograph signal recorded together with the microphone provided a reference by which estimates could be validated. Using standard parameter settings as for speech, error rates were low despite the wide range of F0s (about 100 to 1600 Hz). Most ``errors'' were due to irregular vibration of the vocal folds, a sharp formant resonance that reduced the waveform to a single harmonic, or fast F0 changes such as in high-amplitude vibrato. Our database (18 singers from baritone to soprano) included examples of diphonic singing, for which melody is carried by variations of the frequency of a narrow formant rather than F0. By varying a parameter (ratio of inharmonic to total power), the algorithm could be tuned to follow either frequency. Although the method has not been formally tested on a wide range of instruments, it seems appropriate for musical applications because it is accurate, accepts a wide range of F0s, and can be implemented with low latency for interactive applications. [Work supported by the Cognitique programme of the French Ministry of Research and Technology.]
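The de Cheveigné and Kawahara F0 estimator referenced here is built on a cumulative-mean-normalized difference function over candidate lags. A compact sketch of that core computation (function and parameter names are ours, and the published method adds refinements such as parabolic interpolation):

```python
import numpy as np

def cmnd(x, max_lag):
    """Cumulative-mean-normalized difference function d'(tau): the squared
    difference between the frame and its tau-shifted copy, normalized by the
    running mean so that d'(tau) dips toward 0 at the period."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    d = np.array([np.sum((x[:n - tau] - x[tau:]) ** 2) for tau in range(max_lag)])
    dprime = np.ones(max_lag)
    running = 0.0
    for tau in range(1, max_lag):
        running += d[tau]
        dprime[tau] = d[tau] * tau / running if running > 0 else 1.0
    return dprime

def f0_estimate(x, fs, threshold=0.1, max_lag=400):
    """Take the first dip of d'(tau) below the threshold, slide to its local
    minimum, and return fs / tau as the F0 estimate (None if no dip)."""
    dprime = cmnd(x, max_lag)
    for tau in range(1, max_lag):
        if dprime[tau] < threshold:
            while tau + 1 < max_lag and dprime[tau + 1] < dprime[tau]:
                tau += 1
            return fs / tau
    return None
```

The absolute threshold (rather than a global minimum) is what lets the estimator avoid octave errors on quasi-periodic signals.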
NASA Technical Reports Server (NTRS)
Hubeny, I.; Lanz, T.
1995-01-01
A new numerical method for computing non-Local Thermodynamic Equilibrium (non-LTE) model stellar atmospheres is presented. The method, called the hybrid complete linearization/accelerated lambda iteration (CL/ALI) method, combines advantages of both its constituents. Its rate of convergence is virtually as high as for the standard CL method, while the computer time per iteration is almost as low as for the standard ALI method. The method is formulated as the standard complete linearization, the only difference being that the radiation intensity at selected frequency points is not explicitly linearized; instead, it is treated by means of the ALI approach. The scheme offers a wide spectrum of options, ranging from the full CL to the full ALI method. We demonstrate that the method works optimally if the majority of frequency points are treated in the ALI mode, while the radiation intensity at a few (typically two to 30) frequency points is explicitly linearized. We show how this method can be applied to calculate metal line-blanketed non-LTE model atmospheres, by using the idea of 'superlevels' and 'superlines' introduced originally by Anderson (1989). We calculate several illustrative models taking into account several tens of thousands of lines of Fe III to Fe IV and show that the hybrid CL/ALI method provides a robust method for calculating non-LTE line-blanketed model atmospheres for a wide range of stellar parameters. The results for individual stellar types will be presented in subsequent papers in this series.
Anguera, M. Teresa; Camerino, Oleguer; Castañer, Marta; Sánchez-Algarra, Pedro; Onwuegbuzie, Anthony J.
2017-01-01
Mixed methods studies are being increasingly applied in a diversity of fields. In this paper, we discuss the growing use—and enormous potential—of mixed methods research in the field of sport and physical activity. A second aim is to contribute to strengthening the characteristics of mixed methods research by showing how systematic observation offers rigor within a flexible framework that can be applied to a wide range of situations. Observational methodology is characterized by high scientific rigor and flexibility throughout its different stages and allows the objective study of spontaneous behavior in natural settings, with no external influence. Mixed methods researchers need to take bold yet thoughtful decisions regarding both substantive and procedural issues. We present three fundamental and complementary ideas to guide researchers in this respect: we show why studies of sport and physical activity that use a mixed methods research approach should be included in the field of mixed methods research, we highlight the numerous possibilities offered by observational methodology in this field through the transformation of descriptive data into quantifiable code matrices, and we discuss possible solutions for achieving true integration of qualitative and quantitative findings. PMID:29312061
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture wide surroundings. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that it can capture the luminosity of the environment over 360 degrees of the surroundings in one image. We apply the light field method, one technique of Image-Based Rendering (IBR), for generating the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect many view-direction images in the light field. Thus our method allows the user to explore a wide scene, achieving a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera, and successfully generated arbitrary viewpoint images for a virtual tour of the environment.
An enzyme-mediated protein-fragment complementation assay for substrate screening of sortase A.
Li, Ning; Yu, Zheng; Ji, Qun; Sun, Jingying; Liu, Xiao; Du, Mingjuan; Zhang, Wei
2017-04-29
Enzyme-mediated protein conjugation has gained great attention recently due to the remarkable site-selectivity and mild reaction condition affected by the nature of enzyme. Among all sorts of enzymes reported, sortase A from Staphylococcus aureus (SaSrtA) is the most popular enzyme due to its selectivity and well-demonstrated applications. Position scanning has been widely applied to understand enzyme substrate specificity, but the low throughput of chemical synthesis of peptide substrates and analytical methods (HPLC, LC-ESI-MS) have been the major hurdle to fully decode enzyme substrate profile. We have developed a simple high-throughput substrate profiling method to reveal novel substrates of SaSrtA 7M, a widely used hyperactive peptide ligase, by modified protein-fragment complementation assay (PCA). A small library targeting the LPATG motif recognized by SaSrtA 7M was generated and screened against proteins carrying N-terminal glycine. Using this method, we have confirmed all currently known substrates of the enzyme, and moreover identified some previously unknown substrates with varying activities. The method provides an easy, fast and highly-sensitive way to determine substrate profile of a peptide ligase in a high-throughput manner. Copyright © 2017 Elsevier Inc. All rights reserved.
Demi, Libertario; Verweij, Martin D; Van Dongen, Koen W A
2012-11-01
Real-time 2-D or 3-D ultrasound imaging systems are currently used for medical diagnosis. To achieve the required data acquisition rate, these systems rely on parallel beamforming, i.e., a single wide-angled beam is used for transmission and several narrow parallel beams are used for reception. When applied to harmonic imaging, the demand for high-amplitude pressure wave fields, necessary to generate the harmonic components, conflicts with the use of a wide-angled beam in transmission because this results in a large spatial decay of the acoustic pressure. To enhance the amplitude of the harmonics, it is preferable to do the reverse: transmit several narrow parallel beams and use a wide-angled beam in reception. Here, this concept is investigated to determine whether it can be used for harmonic imaging. The method proposed in this paper relies on orthogonal frequency division multiplexing (OFDM), which is used to create distinctive parallel beams in transmission. To test the proposed method, a numerical study has been performed, in which the transmit, receive, and combined beam profiles generated by a linear array have been simulated for the second-harmonic component. Compared with standard parallel beamforming, application of the proposed technique results in a gain of 12 dB for the main beam and in a reduction of the side lobes. Experimental verification in water has also been performed. Measurements obtained with a single-element emitting transducer and a hydrophone receiver confirm the possibility of exciting a practical ultrasound transducer with multiple Gaussian modulated pulses, each having a different center frequency, and the capability to generate distinguishable second-harmonic components.
3D geometric phase analysis and its application in 3D microscopic morphology measurement
NASA Astrophysics Data System (ADS)
Zhu, Ronghua; Shi, Wenxiong; Cao, Quankun; Liu, Zhanwei; Guo, Baoqiao; Xie, Huimin
2018-04-01
Although three-dimensional (3D) morphology measurement has been widely applied on the macro-scale, there is still a lack of 3D measurement technology on the microscopic scale. In this paper, a microscopic 3D measurement technique based on the 3D-geometric phase analysis (GPA) method is proposed. In this method, with machine vision and phase matching, the traditional GPA method is extended to three dimensions. Using this method, 3D deformation measurement on the micro-scale can be realized using a light microscope. Simulation experiments were conducted in this study, and the results demonstrate that the proposed method has a good anti-noise ability. In addition, the 3D morphology of the necking zone in a tensile specimen was measured, and the results demonstrate that this method is feasible.
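Geometric phase analysis recovers displacement from the local phase of a lattice or grating carrier in an image. A 1-D sketch illustrates the principle (the paper's 3D-GPA extension with machine vision and phase matching is not reproduced here; names are ours):

```python
import numpy as np

def gpa_displacement_1d(signal, x, g):
    """1-D geometric phase analysis sketch: demodulate the carrier at spatial
    frequency g, low-pass the result, and convert the residual local phase
    into a displacement field u(x) = -phi(x) / (2*pi*g)."""
    demod = signal * np.exp(-2j * np.pi * g * x)
    spec = np.fft.fft(demod)
    k = np.fft.fftfreq(len(x), d=x[1] - x[0])
    spec[np.abs(k) > g / 2] = 0.0        # keep only the slowly varying part
    phi = np.angle(np.fft.ifft(spec))
    return -phi / (2 * np.pi * g)
```

For a rigidly shifted carrier the recovered u(x) is constant and equals the shift; spatially varying shifts yield the local displacement field, from which strain follows by differentiation.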
Prior knowledge driven Granger causality analysis on gene regulatory network discovery
Yao, Shun; Yoo, Shinjae; Yu, Dantong
2015-08-28
Our study focuses on discovering gene regulatory networks from time series gene expression data using the Granger causality (GC) model. However, the number of available time points (T) usually is much smaller than the number of target genes (n) in biological datasets. The widely applied pairwise GC model (PGC) and other regularization strategies can lead to a significant number of false identifications when n >> T. In this study, we proposed a new method, viz., CGC-2SPR (CGC using two-step prior Ridge regularization), to resolve the problem by incorporating prior biological knowledge about a target gene data set. In our simulation experiments, the proposed new methodology CGC-2SPR showed significant performance improvement in terms of accuracy over other widely used GC modeling (PGC, Ridge and Lasso) and MI-based (MRNET and ARACNE) methods. In addition, we applied CGC-2SPR to a real biological dataset, i.e., the yeast metabolic cycle, and discovered more true positive edges with CGC-2SPR than with the other existing methods. In our research, we noticed a "1+1>2" effect when we combined prior knowledge and gene expression data to discover regulatory networks. Based on causality networks, we made a functional prediction that the Abm1 gene (whose functions previously were unknown) might be related to the yeast's responses to different levels of glucose. In conclusion, our research improves causality modeling by combining heterogeneous knowledge, which is well aligned with the future direction in system biology. Furthermore, we proposed a method of Monte Carlo significance estimation (MCSE) to calculate the edge significances, which provide statistical meaning to the discovered causality networks. All of our data and source codes will be available under the link https://bitbucket.org/dtyu/granger-causality/wiki/Home.
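The ridge-regularized Granger causality comparison underlying such methods contrasts a restricted autoregressive model of the target with one augmented by lagged candidate regulators. A simplified bivariate sketch (not the CGC-2SPR implementation; names are ours):

```python
import numpy as np

def ridge_granger(x, y, lag=2, alpha=1.0):
    """Granger-causality score for 'x drives y': the log ratio of residual
    sums of squares of ridge-regularized AR models of y fitted without vs.
    with lagged x terms. Larger scores mean lagged x improves prediction."""
    T = len(y)
    rows = range(lag, T)
    Y = np.array([y[t] for t in rows])
    X_self = np.array([[y[t - k] for k in range(1, lag + 1)] for t in rows])
    X_full = np.array([[y[t - k] for k in range(1, lag + 1)]
                       + [x[t - k] for k in range(1, lag + 1)] for t in rows])

    def ridge_rss(X, Y):
        # closed-form ridge fit, then in-sample residual sum of squares
        beta = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)
        r = Y - X @ beta
        return float(r @ r)

    return float(np.log(ridge_rss(X_self, Y) / ridge_rss(X_full, Y)))
```

The ridge term is what keeps the fit stable when the number of time points is small relative to the number of regressors, the n >> T regime the abstract highlights.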
A wide range real-time synchronous demodulation system for the dispersion interferometer on HL-2M
NASA Astrophysics Data System (ADS)
Wu, Tongyu; Zhang, Wei; Yin, Zejie
2017-09-01
A real-time synchronous demodulation system has been developed for the dispersion interferometer on the HL-2M tokamak. The system is based on a phase extraction method that uses the ratio of modulation amplitudes. A high-performance field programmable gate array with pipeline processing capabilities is used to realize the real-time synchronous demodulation algorithm. A fringe jump correction algorithm is applied to follow the fast density changes of the plasma. By using the Peripheral Component Interconnect Express protocol, the electronics can perform real-time density feedback with a temporal resolution of 100 ns. The experimental results presented show that the electronics achieve a wide measurement range of 2.28 × 10²² m⁻² with high precision.
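Phase extraction from a ratio of modulation-harmonic amplitudes can be illustrated offline (the paper's FPGA pipeline is not reproduced; names and the signal model are our assumptions). For a sinusoidally modulated interference signal, the ω lock-in amplitude scales as J₁(δ)·sin φ and the 2ω amplitude as J₂(δ)·cos φ, so their calibrated ratio yields φ independent of the signal amplitude:

```python
import numpy as np

def bessel_j(n, x, m=10001):
    """J_n(x) via its integral representation (avoids a SciPy dependency)."""
    tau = np.linspace(0.0, np.pi, m)
    return float(np.mean(np.cos(n * tau - x * np.sin(tau))))

def phase_from_harmonic_ratio(signal, t, omega, delta):
    """Recover phi from I = I0 * [1 + cos(phi + delta*sin(omega*t))]: lock in
    at omega and 2*omega, rescale by J1(delta) and J2(delta), take arctan2."""
    a1 = -2.0 * np.mean(signal * np.sin(omega * t))       # lock-in at omega
    a2 = 2.0 * np.mean(signal * np.cos(2.0 * omega * t))  # lock-in at 2*omega
    return float(np.arctan2(a1 / bessel_j(1, delta), a2 / bessel_j(2, delta)))
```

Because only a ratio of harmonic amplitudes enters, slow drifts of the overall intensity I0 cancel, which is the attraction of this demodulation scheme.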
Distributed nuclear medicine applications using World Wide Web and Java technology.
Knoll, P; Höll, K; Mirzaei, S; Koriska, K; Köhn, H
2000-01-01
At present, medical applications applying World Wide Web (WWW) technology are mainly used to view static images and to retrieve some information. The Java platform is a relatively new way of computing, especially designed for network computing and distributed applications, which enables interactive connection between user and information via the WWW. The Java 2 Software Development Kit (SDK), including the Java2D API, Java Remote Method Invocation (RMI) technology, Object Serialization and the Java Advanced Imaging (JAI) extension, was used to achieve a robust, platform-independent and network-centric solution. Medical image processing software based on this technology is presented, and adequate performance capability of Java is demonstrated by an iterative reconstruction algorithm for single photon emission computerized tomography (SPECT).
A field evaluation of a SO₂ passive sampler in tropical industrial and urban air
NASA Astrophysics Data System (ADS)
Cruz, Lícia P. S.; Campos, Vânia P.; Silva, Adriana M. C.; Tavares, Tania M.
Passive samplers have been widely used for over 30 years in the measurement of personal exposure to vapours and gases in the workplace. These samplers have just recently been applied in the monitoring of ambient air, which presents concentrations that are normally much smaller than those found in occupational environments. The locally constructed passive sampler was based on gas molecular diffusion through a static air layer. The design used minimizes particle interference and turbulent diffusion. After exposure, the SO₂ trapped in filters impregnated with Na₂CO₃ was extracted by means of an ultrasonic bath, for 15 min, using 1.0×10⁻² mol L⁻¹ H₂O₂. It was determined as SO₄²⁻ by ion chromatography. The performance of the passive sampler was evaluated at different exposure periods, being applied in industrial and urban areas. Method precision, as relative standard deviation for three simultaneously applied passive samplers, was within 10%. Passive sampling, when compared to active monitoring methods under real conditions, used in urban and industrial areas, showed an overall accuracy of 15%. A statistical comparison with an active method was performed to demonstrate the validity of the passive method. Sampler capacity varied between 98 and 421 μg SO₂ m⁻³ for exposure periods of one month and one week, respectively, which allows its use in highly polluted areas.
A new algorithm for modeling friction in dynamic mechanical systems
NASA Technical Reports Server (NTRS)
Hill, R. E.
1988-01-01
A method of modeling friction forces that impede the motion of parts of dynamic mechanical systems is described. Conventional methods, in which the friction effect is assumed to be a constant force, or torque, in a direction opposite to the relative motion, are applicable only to those cases where applied forces are large in comparison to the friction, and where there is little interest in system behavior close to the times of transitions through zero velocity. An algorithm is described that provides accurate determination of friction forces over a wide range of applied force and velocity conditions. The method avoids the simulation errors resulting from a finite integration interval used in connection with a conventional friction model, as is the case in many digital computer-based simulations. The algorithm incorporates a predictive calculation based on initial conditions of motion, externally applied forces, inertia, and integration step size. The predictive calculation in connection with an external integration process provides an accurate determination of both static and Coulomb friction forces and resulting motions in dynamic simulations. Accuracy of the results is improved over that obtained with conventional methods, and a relatively large integration step size is permitted. A function block for incorporation in a specific simulation program is described. The general form of the algorithm facilitates implementation with various programming languages such as FORTRAN or C, as well as with other simulation programs.
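The key idea, distinguishing stiction (friction exactly cancels the applied force so the body stays at rest) from sliding (Coulomb friction opposes velocity), can be sketched as follows. The velocity threshold tied to the integration step is our simplification of the paper's full predictive calculation:

```python
def friction_force(v, f_applied, m, dt, f_static, f_coulomb):
    """Friction with explicit stiction handling. Below a step-size-dependent
    velocity threshold the body is treated as stuck: static friction cancels
    the applied force unless it exceeds the breakaway level. Otherwise kinetic
    (Coulomb) friction opposes the direction of motion."""
    eps = 0.5 * abs(f_coulomb) * dt / m   # velocities below this are "at rest"
    if abs(v) < eps:
        if abs(f_applied) <= f_static:
            return -f_applied             # stiction: net force is zero
        return -f_coulomb if f_applied > 0 else f_coulomb
    return -f_coulomb if v > 0 else f_coulomb
```

Used inside a fixed-step Euler or Runge-Kutta loop, this avoids the velocity chatter around zero that a plain sign(v) friction model produces at large step sizes.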
Methodological flaws introduce strong bias into molecular analysis of microbial populations.
Krakat, N; Anjum, R; Demirel, B; Schröder, P
2017-02-01
In this study, we report how different cell disruption methods, PCR primers and in silico analyses can seriously bias results from microbial population studies, with consequences for the credibility and reproducibility of the findings. Our results emphasize the pitfalls of commonly used experimental methods that can seriously weaken the interpretation of results. Four different cell lysis methods, three commonly used primer pairs and various computer-based analyses were applied to investigate the microbial diversity of a fermentation sample composed of chicken dung. The fault-prone, but still frequently used, amplified rRNA gene restriction analysis was chosen to identify common weaknesses. In contrast to other studies, we focused on the complete analytical process, from cell disruption to in silico analysis, and identified potential error rates. This identified a wide disagreement of results between applied experimental approaches leading to very different community structures depending on the chosen approach. The interpretation of microbial diversity data remains a challenge. In order to accurately investigate the taxonomic diversity and structure of prokaryotic communities, we suggest a multi-level approach combining DNA-based and DNA-independent techniques. The identified weaknesses of commonly used methods to study microbial diversity can be overcome by a multi-level approach, which produces more reliable data about the fate and behaviour of microbial communities of engineered habitats such as biogas plants, so that the best performance can be ensured. © 2016 The Society for Applied Microbiology.
Fast and accurate imputation of summary statistics enhances evidence of functional enrichment
Pasaniuc, Bogdan; Zaitlen, Noah; Shi, Huwenbo; Bhatia, Gaurav; Gusev, Alexander; Pickrell, Joseph; Hirschhorn, Joel; Strachan, David P.; Patterson, Nick; Price, Alkes L.
2014-01-01
Motivation: Imputation using external reference panels (e.g. 1000 Genomes) is a widely used approach for increasing power in genome-wide association studies and meta-analysis. Existing hidden Markov models (HMM)-based imputation approaches require individual-level genotypes. Here, we develop a new method for Gaussian imputation from summary association statistics, a type of data that is becoming widely available. Results: In simulations using 1000 Genomes (1000G) data, this method recovers 84% (54%) of the effective sample size for common (>5%) and low-frequency (1–5%) variants [increasing to 87% (60%) when summary linkage disequilibrium information is available from target samples] versus the gold standard of 89% (67%) for HMM-based imputation, which cannot be applied to summary statistics. Our approach accounts for the limited sample size of the reference panel, a crucial step to eliminate false-positive associations, and it is computationally very fast. As an empirical demonstration, we apply our method to seven case–control phenotypes from the Wellcome Trust Case Control Consortium (WTCCC) data and a study of height in the British 1958 birth cohort (1958BC). Gaussian imputation from summary statistics recovers 95% (105%) of the effective sample size (as quantified by the ratio of χ2 association statistics) compared with HMM-based imputation from individual-level genotypes at the 227 (176) published single nucleotide polymorphisms (SNPs) in the WTCCC (1958BC height) data. In addition, for publicly available summary statistics from large meta-analyses of four lipid traits, we publicly release imputed summary statistics at 1000G SNPs, which could not have been obtained using previously published methods, and demonstrate their accuracy by masking subsets of the data. 
We show that 1000G imputation using our approach increases the magnitude and statistical evidence of enrichment at genic versus non-genic loci for these traits, as compared with an analysis without 1000G imputation. Thus, imputation of summary statistics will be a valuable tool in future functional enrichment analyses. Availability and implementation: Publicly available software package available at http://bogdan.bioinformatics.ucla.edu/software/. Contact: bpasaniuc@mednet.ucla.edu or aprice@hsph.harvard.edu Supplementary information: Supplementary materials are available at Bioinformatics online. PMID:24990607
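Gaussian imputation of summary statistics predicts z-scores at untyped SNPs as a linear combination of typed z-scores weighted by reference-panel LD. A minimal sketch of that conditional-Gaussian step (names are ours, and the simple ridge term stands in for the paper's finite-panel adjustment):

```python
import numpy as np

def impute_z(z_typed, R, typed, untyped, lam=0.1):
    """Impute z-scores at untyped SNPs as W @ z_typed, where the weights come
    from the LD (correlation) matrix R estimated in a reference panel:
    W = R_ut (R_tt + lam*I)^-1, the conditional mean of a joint Gaussian."""
    Rtt = R[np.ix_(typed, typed)]        # LD among typed SNPs
    Rut = R[np.ix_(untyped, typed)]      # LD of untyped vs. typed SNPs
    W = Rut @ np.linalg.inv(Rtt + lam * np.eye(len(typed)))
    return W @ np.asarray(z_typed)
```

An untyped SNP in strong LD with a strongly associated typed SNP inherits most of its signal, while weakly linked SNPs contribute little; the regularizer keeps the inversion stable when typed SNPs are highly correlated.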
NASA Technical Reports Server (NTRS)
Pagnutti, Mary; Holekamp, Kara; Ryan, Robert E.; Vaughan, Ronand; Russell, Jeff; Prados, Don; Stanley, Thomas
2005-01-01
Remotely sensed ground reflectance is the foundation of any interoperability or change detection technique. Satellite intercomparisons and accurate vegetation indices, such as the Normalized Difference Vegetation Index (NDVI), require the generation of accurate reflectance maps (NDVI is used to describe or infer a wide variety of biophysical parameters and is defined in terms of near-infrared (NIR) and red band reflectances). Accurate reflectance-map generation from satellite imagery relies on the removal of solar and satellite geometry and of atmospheric effects and is generally referred to as atmospheric correction. Atmospheric correction of remotely sensed imagery to ground reflectance has been widely applied to only a few systems. The ability to obtain atmospherically corrected imagery and products from various satellites is essential to enable wide-scale use of remotely sensed, multitemporal imagery for a variety of applications. An atmospheric correction approach derived from the Moderate Resolution Imaging Spectroradiometer (MODIS) that can be applied to high-spatial-resolution satellite imagery under many conditions was evaluated to demonstrate a reliable, effective reflectance-map generation method. Additional information is included in the original extended abstract.
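The NDVI mentioned in this abstract is a simple normalized ratio of band reflectances, which is precisely why it is only meaningful after atmospheric correction to surface reflectance:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index from NIR and red band surface
    reflectances; ranges from -1 to 1, with dense vegetation toward 1."""
    return (nir - red) / (nir + red)
```

Healthy vegetation reflects strongly in the NIR and absorbs red light, so e.g. reflectances of 0.5 (NIR) and 0.1 (red) give an NDVI of about 0.67, while bare soil or water sits near or below zero.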
Using self-organizing maps to classify humpback whale song units and quantify their similarity.
Allen, Jenny A; Murray, Anita; Noad, Michael J; Dunlop, Rebecca A; Garland, Ellen C
2017-10-01
Classification of vocal signals can be undertaken using a wide variety of qualitative and quantitative techniques. Using east Australian humpback whale song from 2002 to 2014, a subset of vocal signals was acoustically measured and then classified using a Self-Organizing Map (SOM). The SOM created (1) an acoustic dictionary of units representing the song's repertoire, and (2) Cartesian distance measurements among all unit types (SOM nodes). Utilizing the SOM dictionary as a guide, additional song recordings from east Australia were rapidly (manually) transcribed. To assess the similarity in song sequences, the Cartesian distance output from the SOM was applied in Levenshtein distance similarity analyses as a weighting factor to better incorporate unit similarity in the calculation (previously a qualitative process). SOMs provide a more robust and repeatable means of categorizing acoustic signals along with a clear quantitative measurement of sound type similarity based on acoustic features. This method can be utilized for a wide variety of acoustic databases especially those containing very large datasets and can be applied across the vocalization research community to help address concerns surrounding inconsistency in manual classification.
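Weighting Levenshtein substitutions by an acoustic distance between unit types, as the abstract describes with SOM inter-node distances, can be sketched with a user-supplied cost function (our stand-in for the normalized SOM distances):

```python
def weighted_levenshtein(a, b, sub_cost):
    """Levenshtein distance between sequences a and b where the substitution
    cost between two unit types is given by sub_cost(u, v), e.g. a normalized
    SOM inter-node distance; insertions and deletions cost 1."""
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = float(i)
    for j in range(1, m + 1):
        D[0][j] = float(j)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = min(D[i - 1][j] + 1,                        # deletion
                          D[i][j - 1] + 1,                        # insertion
                          D[i - 1][j - 1] + sub_cost(a[i - 1], b[j - 1]))
    return D[n][m]
```

With a 0/1 cost function this reduces to the classic edit distance; graded costs make substitutions between acoustically similar song units cheaper than those between dissimilar ones, so similar sequences score as closer.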
Reconstructing Past Admixture Processes from Local Genomic Ancestry Using Wavelet Transformation
Sanderson, Jean; Sudoyo, Herawati; Karafet, Tatiana M.; Hammer, Michael F.; Cox, Murray P.
2015-01-01
Admixture between long-separated populations is a defining feature of the genomes of many species. The mosaic block structure of admixed genomes can provide information about past contact events, including the time and extent of admixture. Here, we describe an improved wavelet-based technique that better characterizes ancestry block structure from observed genomic patterns. Principal components analysis is first applied to genomic data to identify the primary population structure, followed by wavelet decomposition to develop a new characterization of local ancestry information along the chromosomes. For testing purposes, this method is applied to human genome-wide genotype data from Indonesia, as well as virtual genetic data generated using genome-scale sequential coalescent simulations under a wide range of admixture scenarios. Time of admixture is inferred using an approximate Bayesian computation framework, providing robust estimates of both admixture times and their associated levels of uncertainty. Crucially, we demonstrate that this revised wavelet approach, which we have released as the R package adwave, provides improved statistical power over existing wavelet-based techniques and can be used to address a broad range of admixture questions. PMID:25852078
Lee, Rebecca E.; Galavíz, Karla I.; Soltero, Erica G.; Rosales Chavez, Jose; Jauregui, Edtna; Lévesque, Lucie; Hernández, Luis Ortiz; Lopez y Taylor, Juan; Estabrooks, Paul A.
2017-01-01
ABSTRACT Objective: The RE-AIM framework has been widely used to evaluate the internal and external validity of interventions aimed at promoting physical activity, helping to provide comprehensive evaluation of the reach, efficacy, adoption, implementation and maintenance of research and programming. Despite this progress, the RE-AIM framework has not been used widely in Latin America. The purpose of this manuscript is to describe the RE-AIM framework, the process and materials developed for a one-day workshop in Guadalajara, and the acceptability and satisfaction of participants who attended the workshop. Methods: A lecture, interactive examples and an agenda were developed for a one-day RE-AIM workshop over a three-month period. Results: Thirty-two health care practitioners (M age = 30.6, SD = 9.9 years) attended the workshop. Most rated the workshop as credible (100%) and useful (100%), and intended to apply it in current or future research (95%). Conclusion: The results suggest the intuitive appeal of the RE-AIM framework, and provide a strategy for introducing the utility and practical application of the framework in practice settings in Mexico and Latin America.
Compatible Spatial Discretizations for Partial Differential Equations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arnold, Douglas N., ed.
From May 11--15, 2004, the Institute for Mathematics and its Applications held a hot topics workshop on Compatible Spatial Discretizations for Partial Differential Equations. The numerical solution of partial differential equations (PDE) is a fundamental task in science and engineering. The goal of the workshop was to bring together a spectrum of scientists at the forefront of the research in the numerical solution of PDEs to discuss compatible spatial discretizations. We define compatible spatial discretizations as those that inherit or mimic fundamental properties of the PDE such as topology, conservation, symmetries, and positivity structures and maximum principles. A wide variety of discretization methods applied across a wide range of scientific and engineering applications have been designed to or found to inherit or mimic intrinsic spatial structure and reproduce fundamental properties of the solution of the continuous PDE model at the finite dimensional level. A profusion of such methods and concepts relevant to understanding them have been developed and explored: mixed finite element methods, mimetic finite differences, support operator methods, control volume methods, discrete differential forms, Whitney forms, conservative differencing, discrete Hodge operators, discrete Helmholtz decomposition, finite integration techniques, staggered grid and dual grid methods, etc. The workshop sought to foster communication among the diverse groups of researchers designing, applying, and studying such methods as well as researchers involved in practical solution of large scale problems that may benefit from advancements in such discretizations; to help elucidate the relations between the different methods and concepts; and to generally advance our understanding in the area of compatible spatial discretization methods for PDE.
Particular points of emphasis included: + Identification of intrinsic properties of PDE models that are critical for the fidelity of numerical simulations. + Identification and design of compatible spatial discretizations of PDEs, their classification, analysis, and relations. + Relationships between different compatible spatial discretization methods and concepts which have been developed. + Impact of compatible spatial discretizations upon physical fidelity, verification and validation of simulations, especially in large-scale, multiphysics settings. + How solvers address the demands placed upon them by compatible spatial discretizations. This report provides information about the program and abstracts of all the presentations.
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the system's transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper.
As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provide a way of handling structural identifiability in mixed-effects models that was previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Browning, Brian L.; Yu, Zhaoxia
2009-01-01
We present a novel method for simultaneous genotype calling and haplotype-phase inference. Our method employs the computationally efficient BEAGLE haplotype-frequency model, which can be applied to large-scale studies with millions of markers and thousands of samples. We compare genotype calls made with our method to genotype calls made with the BIRDSEED, CHIAMO, GenCall, and ILLUMINUS genotype-calling methods, using genotype data from the Illumina 550K and Affymetrix 500K arrays. We show that our method has higher genotype-call accuracy and yields fewer uncalled genotypes than competing methods. We perform single-marker analysis of data from the Wellcome Trust Case Control Consortium bipolar disorder and type 2 diabetes studies. For bipolar disorder, the genotype calls in the original study yield 25 markers with apparent false-positive association with bipolar disorder at a p < 10⁻⁷ significance level, whereas genotype calls made with our method yield no associated markers at this significance threshold. Conversely, for markers with replicated association with type 2 diabetes, there is good concordance between genotype calls used in the original study and calls made by our method. Results from single-marker and haplotypic analysis of our method's genotype calls for the bipolar disorder study indicate that our method is highly effective at eliminating genotyping artifacts that cause false-positive associations in genome-wide association studies. Our new genotype-calling methods are implemented in the BEAGLE and BEAGLECALL software packages. PMID:19931040
NASA Astrophysics Data System (ADS)
Wang, Gaochao; Tse, Peter W.; Yuan, Maodan
2018-02-01
Visual inspection and assessment of the condition of metal structures are essential for safety. Pulse thermography produces visible infrared images, which have been widely applied to detect and characterize defects in structures and materials. When active thermography, a non-destructive testing tool, is applied, the need for considerable manual checking can be avoided. However, detecting an internal crack with active thermography remains difficult, since it is usually invisible in the collected sequence of infrared images, which makes the automatic detection of internal cracks even harder. In addition, the detection of an internal crack can be hindered by a complicated inspection environment. With the purpose of putting forward a robust and automatic visual inspection method, a computer vision-based thresholding method is proposed. In this paper, the image signals are a sequence of infrared images collected from the experimental setup with a thermal camera and two flash lamps as stimulus. The contrast of pixels in each frame is enhanced by the Canny operator and then reconstructed by a triple-threshold system. Two features, the mean value in the time domain and the maximal amplitude in the frequency domain, are extracted from the reconstructed signal to help distinguish the crack pixels from others. Finally, a binary image indicating the location of the internal crack is generated by a K-means clustering method. The proposed procedure has been applied to an iron pipe containing two internal cracks and surface abrasion, improving on existing computer vision-based automatic crack detection methods. In the future, the proposed method can be applied to the automatic detection of internal cracks from large numbers of infrared images in industrial settings.
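The per-pixel feature extraction and clustering steps described in this abstract can be sketched as follows. This is a simplified stand-in, not the authors' pipeline: it uses numpy only (no Canny or triple-threshold stage), synthetic frames instead of real thermograms, and a hand-rolled two-cluster k-means with deterministic initialization.

```python
import numpy as np

def pixel_features(frames):
    """frames: (T, H, W) infrared sequence. Per-pixel features: time-domain
    mean and peak non-DC FFT amplitude (stand-ins for the paper's two features)."""
    mean_t = frames.mean(axis=0)
    spec = np.abs(np.fft.rfft(frames, axis=0))
    peak_f = spec[1:].max(axis=0)              # skip the DC bin
    return np.stack([mean_t.ravel(), peak_f.ravel()], axis=1)

def two_means(X, iters=50):
    """Minimal 2-cluster k-means with deterministic extreme-point init."""
    centers = np.stack([X[X[:, 0].argmin()], X[X[:, 0].argmax()]]).astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = X[labels == k].mean(axis=0)
    return labels

# Toy sequence: uniform background oscillation plus a 2x2 patch that stays
# hotter, playing the role of a subsurface defect.
t = np.arange(16, dtype=float)
frames = np.tile(0.1 * np.sin(t)[:, None, None], (1, 8, 8))
frames[:, 2:4, 2:4] += 5.0
labels = two_means(pixel_features(frames)).reshape(8, 8)
```

The resulting label map plays the role of the binary crack image; on real data the Canny/triple-threshold reconstruction would precede the feature step.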
CisMiner: Genome-Wide In-Silico Cis-Regulatory Module Prediction by Fuzzy Itemset Mining
Navarro, Carmen; Lopez, Francisco J.; Cano, Carlos; Garcia-Alcalde, Fernando; Blanco, Armando
2014-01-01
Eukaryotic gene control regions are known to be spread throughout non-coding DNA sequences which may appear distant from the gene promoter. Transcription factors are proteins that coordinately bind to these regions at transcription factor binding sites to regulate gene expression. Several tools detect significant co-occurrences of closely located binding sites (cis-regulatory modules, CRMs). However, these tools present at least one of the following limitations: 1) scope limited to promoter or conserved regions of the genome; 2) inability to identify combinations involving more than two motifs; 3) a requirement for prior information about target motifs. In this work we present CisMiner, a novel methodology to detect putative CRMs by means of a fuzzy itemset mining approach able to operate at genome-wide scale. CisMiner performs a blind search of CRMs without any prior information about target CRMs and without limitation in the number of motifs. CisMiner tackles the combinatorial complexity of genome-wide cis-regulatory module extraction using a natural representation of motif combinations as itemsets and applying the Top-Down Fuzzy Frequent-Pattern Tree algorithm to identify significant itemsets. Fuzzy technology allows CisMiner to better handle the imprecision and noise inherent to regulatory processes. Results obtained for a set of well-known binding sites in the S. cerevisiae genome show that our method yields highly reliable predictions. Furthermore, CisMiner was also applied to putative in-silico predicted transcription factor binding sites to identify significant combinations in S. cerevisiae and D. melanogaster, proving that our approach can be further applied genome-wide to more complex genomes. CisMiner is freely accessible at: http://genome2.ugr.es/cisminer.
CisMiner can be queried for the results presented in this work and can also perform a customized cis-regulatory module prediction on a query set of transcription factor binding sites provided by the user. PMID:25268582
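The itemset view of motif combinations that CisMiner builds on can be sketched with a crisp (non-fuzzy) support count. This is a toy illustration, not CisMiner's Top-Down Fuzzy Frequent-Pattern Tree: motif labels and windows are hypothetical, and fuzzy memberships are replaced by plain co-occurrence fractions.

```python
from itertools import combinations

def frequent_motif_sets(windows, min_support):
    """Crisp itemset mining over motif co-occurrence windows: return every
    2- or 3-motif combination whose support (fraction of windows containing
    it) reaches min_support."""
    counts = {}
    for motifs in windows:
        for r in (2, 3):
            for combo in combinations(sorted(set(motifs)), r):
                counts[combo] = counts.get(combo, 0) + 1
    n = len(windows)
    return {c: k / n for c, k in counts.items() if k / n >= min_support}

# Hypothetical motif labels; each set is one genomic window's detected motifs.
windows = [{'A', 'B'}, {'A', 'B', 'C'}, {'A', 'B'}, {'C'}]
modules = frequent_motif_sets(windows, min_support=0.5)
```

Here the pair ('A', 'B') survives the support threshold and would be a candidate CRM; a fuzzy variant would weight each window's contribution by motif-match confidence instead of counting 0/1.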
2009-01-01
In high-dimensional studies such as genome-wide association studies, the correction for multiple testing in order to control total type I error results in decreased power to detect modest effects. We present a new analytical approach based on the higher criticism statistic that allows identification of the presence of modest effects. We apply our method to the genome-wide study of rheumatoid arthritis provided in the Genetic Analysis Workshop 16 Problem 1 data set. There is evidence for unknown bias in this study that could be explained by the presence of undetected modest effects. We compared the asymptotic and empirical thresholds for the higher criticism statistic. Using the asymptotic threshold we detected the presence of modest effects genome-wide. We also detected modest effects using the 90th percentile of the empirical null distribution as a threshold; however, there is no such evidence when the 95th and 99th percentiles were used. While the higher criticism method suggests that there is some evidence for modest effects, interpreting individual single-nucleotide polymorphisms with significant higher criticism statistics is of undetermined value. The goal of higher criticism is to alert the researcher that genetic effects remain to be discovered and to promote the use of more targeted and powerful studies to detect the remaining effects. PMID:20018032
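The higher criticism statistic at the center of the abstract above has a compact closed form (the Donoho-Jin formulation); a minimal sketch follows. The asymptotic and empirical thresholds the study compares are not reproduced here, and the p-value arrays are synthetic.

```python
import numpy as np

def higher_criticism(pvals, alpha0=0.5):
    """Donoho-Jin higher criticism: the maximum standardized excess of small
    observed p-values over the uniform expectation, scanned over the smallest
    alpha0 fraction of the sorted p-values."""
    p = np.sort(np.asarray(pvals, dtype=float))
    p = np.clip(p, 1e-12, 1 - 1e-12)            # guard the denominator
    n = len(p)
    i = np.arange(1, n + 1)
    hc = np.sqrt(n) * (i / n - p) / np.sqrt(p * (1 - p))
    return hc[: max(1, int(alpha0 * n))].max()

# Null: perfectly uniform p-values.  Alternative: 10% of tests carry signal.
uniform = (np.arange(1, 101) - 0.5) / 100
enriched = uniform.copy()
enriched[:10] = 1e-6
```

A large HC value signals only that modest effects exist somewhere in the scan, matching the abstract's caution that individual significant SNPs are not thereby validated.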
1978-01-17
approach to designing computers: Formal mathematical methods were applied and computers themselves began to be widely used in designing other... capital, labor resources and the funds of consumers. Analysis of the model indicates that at the present time the average complexity of production of... ALGORITHMIC COMPLETENESS AND COMPLEXITY OF MICROPROGRAMS, Kiev KIBERNETIKA in Russian No 3, May/Jun 77, pp 1-15, manuscript received 22 Dec 76, GOLUNKOV
Two Paradoxes in Linear Regression Analysis
FENG, Ge; PENG, Jing; TU, Dongke; ZHENG, Julia Z.; FENG, Changyong
2016-01-01
Summary Regression is one of the favorite tools in applied statistics. However, misuse and misinterpretation of results from regression analysis are common in biomedical research. In this paper we use statistical theory and simulation studies to clarify some paradoxes around this popular statistical method. In particular, we show that a widely used model selection procedure employed in many publications in top medical journals is wrong. Formal procedures based on solid statistical theory should be used in model selection. PMID:28638214
Native sulfur/chlorine SAD phasing for serial femtosecond crystallography.
Nakane, Takanori; Song, Changyong; Suzuki, Mamoru; Nango, Eriko; Kobayashi, Jun; Masuda, Tetsuya; Inoue, Shigeyuki; Mizohata, Eiichi; Nakatsu, Toru; Tanaka, Tomoyuki; Tanaka, Rie; Shimamura, Tatsuro; Tono, Kensuke; Joti, Yasumasa; Kameshima, Takashi; Hatsui, Takaki; Yabashi, Makina; Nureki, Osamu; Iwata, So; Sugahara, Michihiro
2015-12-01
Serial femtosecond crystallography (SFX) allows structures to be determined with minimal radiation damage. However, phasing native crystals in SFX is not very common. Here, the structure determination of native lysozyme from single-wavelength anomalous diffraction (SAD) by utilizing the anomalous signal of sulfur and chlorine at a wavelength of 1.77 Å is successfully demonstrated. This sulfur SAD method can be applied to a wide range of proteins, which will improve the determination of native crystal structures.
NASA Astrophysics Data System (ADS)
Martin, Guillermo; Heidmann, Samuel; Rauch, Jean-Yves; Jocou, Laurent; Courjal, Nadège
2014-03-01
We present an optimization process to improve the rejection ratio in integrated beam combiners by locking the dark fringe and then monitoring its intensity. The method proposed here uses the electro-optic effect of lithium niobate in order to lock the dark fringe and to balance the photometric flux in real time by means of a two-stage Mach-Zehnder interferometer waveguide. By applying a control voltage on the output Y-junction, we are able to lock the phase and stay in the dark fringe, while an independent second voltage is applied on the first-stage intensity modulator to finely balance the photometries. We have obtained a rejection ratio of 4600 (36.6 dB) at 3.39 μm in transverse electric polarization, corresponding to 99.98% fringe contrast, and shown that the system can compensate external phase perturbations (a piston variation of 100 nm) up to around 1 kHz. We also show the preliminary results of this process on wide-band modulation, where a contrast of 38% in the 3.25- to 3.65-μm spectral range is obtained. These preliminary wide-band results need further optimization, in particular to reduce scattered light at the Y-junction of the device. We expect this active method to be useful in high-contrast interferometry, in particular for astronomical space missions currently under study.
Kang, Hyun-Wook
2012-01-01
Tissue engineering, which is the study of generating biological substitutes to restore or replace tissues or organs, has the potential to meet current needs for organ transplantation and medical interventions. Various approaches have been attempted to apply three-dimensional (3D) solid freeform fabrication technologies to tissue engineering for scaffold fabrication. Among these, the stereolithography (SL) technology not only has the highest resolution, but also offers quick fabrication. However, a lack of suitable biomaterials is a barrier to applying the SL technology to tissue engineering. In this study, an indirect SL method that combines the SL technology and a sacrificial molding process was developed to address this challenge. A sacrificial mold with an inverse porous shape was fabricated from an alkali-soluble photopolymer by the SL technology. A sacrificial molding process was then developed for scaffold construction using a variety of biomaterials. The results indicated a wide range of biomaterial selectivity and a high resolution. Achievable minimum pore and strut sizes were as small as 50 and 65 μm, respectively. This technology can also be used to fabricate three-dimensional organ shapes, and combined with traditional fabrication methods to construct a new type of scaffold with a dual-pore size. Cytotoxicity tests, as well as nuclear magnetic resonance and gel permeation chromatography analyses, showed that this technology has great potential for tissue engineering applications. PMID:22443315
ICA model order selection of task co-activation networks.
Ray, Kimberly L; McKay, D Reese; Fox, Peter M; Riedel, Michael C; Uecker, Angela M; Beckmann, Christian F; Smith, Stephen M; Fox, Peter T; Laird, Angela R
2013-01-01
Independent component analysis (ICA) has become a widely used method for extracting functional networks in the brain during rest and task. Historically, preferred ICA dimensionality has widely varied within the neuroimaging community, but typically varies between 20 and 100 components. This can be problematic when comparing results across multiple studies because of the impact ICA dimensionality has on the topology of its resultant components. Recent studies have demonstrated that ICA can be applied to peak activation coordinates archived in a large neuroimaging database (i.e., BrainMap Database) to yield whole-brain task-based co-activation networks. A strength of applying ICA to BrainMap data is that the vast amount of metadata in BrainMap can be used to quantitatively assess tasks and cognitive processes contributing to each component. In this study, we investigated the effect of model order on the distribution of functional properties across networks as a method for identifying the most informative decompositions of BrainMap-based ICA components. Our findings suggest a dimensionality of 20 for low model order ICA to examine large-scale brain networks, and a dimensionality of 70 to provide insight into how large-scale networks fractionate into sub-networks. We also provide a functional and organizational assessment of visual, motor, emotion, and interoceptive task co-activation networks as they fractionate from low to high model-orders.
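"Model order" in the ICA abstract above is simply the number of components requested from the decomposition. A minimal numpy FastICA (symmetric orthogonalization, tanh contrast) makes that knob explicit; this is a generic textbook sketch on synthetic signals, not the BrainMap analysis pipeline.

```python
import numpy as np

def fastica(X, n_components, iters=200, seed=0):
    """Minimal symmetric FastICA (tanh contrast). X: (samples, mixtures).
    The model order is n_components, the number of sources asked for."""
    rng = np.random.default_rng(seed)
    Xc = X - X.mean(axis=0)
    # Whiten via SVD so retained components have identity covariance.
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    K = (Vt[:n_components] / s[:n_components, None]) * np.sqrt(len(X))
    Z = Xc @ K.T
    W = rng.standard_normal((n_components, n_components))
    for _ in range(iters):
        G = np.tanh(Z @ W.T)
        W = G.T @ Z / len(Z) - np.diag((1 - G**2).mean(axis=0)) @ W
        u, _, vt = np.linalg.svd(W)            # symmetric decorrelation
        W = u @ vt
    return Z @ W.T

# Recover a sine and a square wave from two linear mixtures.
t = np.linspace(0, 8 * np.pi, 2000)
S = np.stack([np.sin(t), np.sign(np.sin(3 * t + 1))], axis=1)
X = S @ np.array([[1.0, 0.5], [0.4, 1.0]]).T
est = fastica(X, n_components=2)
corr = np.abs(np.corrcoef(np.hstack([S, est]).T)[:2, 2:])
```

Raising n_components on richer data is what "fractionates" broad components into sub-networks; the study's contribution is choosing that number by the functional properties of the resulting components.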
An evaluation of two-channel ChIP-on-chip and DNA methylation microarray normalization strategies
2012-01-01
Background The combination of chromatin immunoprecipitation with two-channel microarray technology enables genome-wide mapping of binding sites of DNA-interacting proteins (ChIP-on-chip) or sites with methylated CpG di-nucleotides (DNA methylation microarray). These powerful tools are the gateway to understanding gene transcription regulation. Since the goals of such studies, the sample preparation procedures, the microarray content and study design are all different from transcriptomics microarrays, the data pre-processing strategies traditionally applied to transcriptomics microarrays may not be appropriate. Particularly, the main challenge of the normalization of "regulation microarrays" is (i) to make the data of individual microarrays quantitatively comparable and (ii) to keep the signals of the enriched probes, representing DNA sequences from the precipitate, as distinguishable as possible from the signals of the un-enriched probes, representing DNA sequences largely absent from the precipitate. Results We compare several widely used normalization approaches (VSN, LOWESS, quantile, T-quantile, Tukey's biweight scaling, Peng's method) applied to a selection of regulation microarray datasets, ranging from DNA methylation to transcription factor binding and histone modification studies. Through comparison of the data distributions of control probes and gene promoter probes before and after normalization, and assessment of the power to identify known enriched genomic regions after normalization, we demonstrate that there are clear differences in performance between normalization procedures. Conclusion T-quantile normalization applied separately on the channels and Tukey's biweight scaling outperform other methods in terms of the conservation of enriched and un-enriched signal separation, as well as in identification of genomic regions known to be enriched. T-quantile normalization is preferable as it additionally improves comparability between microarrays. 
In contrast, popular normalization approaches like quantile, LOWESS, Peng's method and VSN normalization alter the data distributions of regulation microarrays to such an extent that using these approaches will impact the reliability of the downstream analysis substantially. PMID:22276688
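For reference, plain quantile normalization, one of the approaches the comparison above warns about for regulation microarrays, fits in a few lines of numpy. This sketch is illustrative only; the better-performing T-quantile and Tukey's biweight variants are not reproduced here.

```python
import numpy as np

def quantile_normalize(X):
    """Plain quantile normalization: give every column (array/channel) the
    same empirical distribution, namely the mean of the sorted columns.
    This distribution-forcing step is exactly what can blur the enriched
    vs. un-enriched probe separation on regulation microarrays."""
    order = np.argsort(X, axis=0)
    ranks = np.argsort(order, axis=0)          # rank of each entry per column
    ref = np.sort(X, axis=0).mean(axis=1)      # shared reference distribution
    return ref[ranks]

X = np.array([[5.0, 2.0],
              [1.0, 4.0],
              [3.0, 3.0]])
Xn = quantile_normalize(X)
```

Within-column rank order is preserved, but all columns end up with identical distributions, which is desirable for transcriptomics yet can distort enrichment signal in ChIP-on-chip data.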
NASA Astrophysics Data System (ADS)
Klomp, Sander; van der Sommen, Fons; Swager, Anne-Fré; Zinger, Svitlana; Schoon, Erik J.; Curvers, Wouter L.; Bergman, Jacques J.; de With, Peter H. N.
2017-03-01
Volumetric Laser Endomicroscopy (VLE) is a promising technique for the detection of early neoplasia in Barrett's Esophagus (BE). VLE generates hundreds of high resolution, grayscale, cross-sectional images of the esophagus. However, at present, classifying these images is a time consuming and cumbersome effort performed by an expert using a clinical prediction model. This paper explores the feasibility of using computer vision techniques to accurately predict the presence of dysplastic tissue in VLE BE images. Our contribution is threefold. First, a benchmark of widely applied machine learning techniques and feature extraction methods is performed. Second, three new features based on the clinical detection model are proposed, having superior classification accuracy and speed compared to earlier work. Third, we evaluate automated parameter tuning by applying simple grid search and feature selection methods. The results are evaluated on a clinically validated dataset of 30 dysplastic and 30 non-dysplastic VLE images. Optimal classification accuracy is obtained by applying a support vector machine and using our modified Haralick features and optimal image cropping, obtaining an area under the receiver operating characteristic curve of 0.95, compared to the clinical prediction model at 0.81. Optimal execution time is achieved using a proposed mean and median feature, which is extracted at least a factor of 2.5 faster than alternative features with comparable performance.
Rapid assessment procedures in injury control.
Klevens, Joanne; Anderson, Mark
2004-03-01
Injuries are among the leading causes of death and disability worldwide. The burden caused by injuries is even greater among the poorer nations and is projected to increase. Very often the lack of technical and financial resources, as well as the urgency of the problem, preclude applying sophisticated surveillance and research methods for generating relevant information to develop effective interventions. In these settings, it is necessary to consider more rapid and less costly methods in applying the public health approach to the problem of injury prevention and control. Rapid Assessment Procedures (RAP), developed within the fields of epidemiology, anthropology and health administration, can provide valid information in a manner that is quicker, simpler, and less costly than standard data collection methods. RAP have been applied widely and successfully to infectious and chronic disease issues, but have not been used extensively, if at all, as tools in injury control. This paper describes Rapid Assessment Procedures that (1) are useful for understanding the scope of the problem and for identifying potential risk factors, (2) can assist practitioners in determining intervention priorities, (3) can provide in-depth knowledge about a specific injury-related problem, and (4) can be used in surveillance systems to monitor outcomes. Finally, the paper describes some of the caveats in using RAP.
Multivariate generalized multifactor dimensionality reduction to detect gene-gene interactions
2013-01-01
Background In recent years, one of the greatest challenges in genome-wide association studies has been to detect gene-gene and/or gene-environment interactions for common complex human diseases. Ritchie et al. (2001) proposed the multifactor dimensionality reduction (MDR) method for interaction analysis. MDR is a combinatorial approach to reduce multi-locus genotypes into high-risk and low-risk groups. Although MDR has been widely used for case-control studies with binary phenotypes, several extensions have been proposed. One of these methods, the generalized MDR (GMDR) proposed by Lou et al. (2007), allows adjustment for covariates and application to both dichotomous and continuous phenotypes. GMDR uses the residual score of a generalized linear model of phenotypes to assign either high-risk or low-risk group membership, while MDR uses the ratio of cases to controls. Methods In this study, we propose multivariate GMDR, an extension of GMDR for multivariate phenotypes. Jointly analysing correlated multivariate phenotypes may have more power to detect susceptible genes and gene-gene interactions. We construct generalized estimating equations (GEE) with multivariate phenotypes to extend generalized linear models. Using the score vectors from GEE we discriminate high-risk from low-risk groups. We applied the multivariate GMDR method to the blood pressure data of the 7,546 subjects from the Korean Association Resource study: systolic blood pressure (SBP) and diastolic blood pressure (DBP). We compare the results of multivariate GMDR for SBP and DBP to the results from separate univariate GMDR for SBP and DBP, respectively. We also applied the multivariate GMDR method to the repeatedly measured hypertension status from 5,466 subjects and compared its result with those of univariate GMDR at each time point.
Results Results from the univariate GMDR and multivariate GMDR in two-locus model with both blood pressures and hypertension phenotypes indicate best combinations of SNPs whose interaction has significant association with risk for high blood pressures or hypertension. Although the test balanced accuracy (BA) of multivariate analysis was not always greater than that of univariate analysis, the multivariate BAs were more stable with smaller standard deviations. Conclusions In this study, we have developed multivariate GMDR method using GEE approach. It is useful to use multivariate GMDR with correlated multiple phenotypes of interests. PMID:24565370
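The core MDR step that GMDR generalizes, pooling multilocus genotype cells into high- and low-risk groups by the case:control ratio and scoring the pooled classifier by balanced accuracy, can be sketched as follows. This covers the classic binary-phenotype case only; GMDR would substitute GLM residual scores for the raw ratio, and the SNP data here are synthetic.

```python
import numpy as np

def mdr_two_locus(g1, g2, status, threshold=1.0):
    """Classic (binary-phenotype) MDR for one SNP pair: label each of the
    3x3 genotype cells high-risk when its case:control ratio exceeds
    `threshold`, then score the pooled classifier by balanced accuracy."""
    high = np.zeros((3, 3), dtype=bool)
    for a in range(3):
        for b in range(3):
            cell = (g1 == a) & (g2 == b)
            cases = np.sum(cell & (status == 1))
            ctrls = np.sum(cell & (status == 0))
            high[a, b] = cases > threshold * ctrls
    pred = high[g1, g2]
    sensitivity = np.mean(pred[status == 1])
    specificity = np.mean(~pred[status == 0])
    return 0.5 * (sensitivity + specificity)

# Pure epistasis: disease iff the two genotypes match, so neither SNP has a
# marginal effect, yet the pair together is perfectly informative.
rng = np.random.default_rng(1)
g1 = rng.integers(0, 3, 300)
g2 = rng.integers(0, 3, 300)
status = (g1 == g2).astype(int)
ba = mdr_two_locus(g1, g2, status)
```

The balanced accuracy returned here corresponds to the test BA the study reports when comparing univariate and multivariate GMDR fits.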
Ultrafast and Wide Range Analysis of DNA Molecules Using Rigid Network Structure of Solid Nanowires
Rahong, Sakon; Yasui, Takao; Yanagida, Takeshi; Nagashima, Kazuki; Kanai, Masaki; Klamchuen, Annop; Meng, Gang; He, Yong; Zhuge, Fuwei; Kaji, Noritada; Kawai, Tomoji; Baba, Yoshinobu
2014-01-01
Analyzing sizes of DNA via electrophoresis using a gel has played an important role in the recent, rapid progress of biology and biotechnology. Although analyzing DNA over a wide range of sizes in a short time is desired, no existing electrophoresis methods have been able to fully satisfy these two requirements. Here we propose a novel method using a rigid 3D network structure composed of solid nanowires within a microchannel. This rigid network structure enables analysis of DNA under applied DC electric fields for a large DNA size range (100 bp–166 kbp) within 13 s, which are much wider and faster conditions than those of any existing methods. The network density is readily varied for the targeted DNA size range by tailoring the number of cycles of the nanowire growth only at the desired spatial position within the microchannel. The rigid dense 3D network structure with spatial density control plays an important role in determining the capability for analyzing DNA. Since the present method allows the spatial location and density of the nanostructure within the microchannels to be defined, this unique controllability offers a new strategy to develop an analytical method not only for DNA but also for other biological molecules. PMID:24918865
Dakos, Vasilis; Carpenter, Stephen R.; Brock, William A.; Ellison, Aaron M.; Guttal, Vishwesha; Ives, Anthony R.; Kéfi, Sonia; Livina, Valerie; Seekell, David A.; van Nes, Egbert H.; Scheffer, Marten
2012-01-01
Many dynamical systems, including lakes, organisms, ocean circulation patterns, or financial markets, are now thought to have tipping points where critical transitions to a contrasting state can happen. Because critical transitions can occur unexpectedly and are difficult to manage, there is a need for methods that can be used to identify when a critical transition is approaching. Recent theory shows that we can identify the proximity of a system to a critical transition using a variety of so-called ‘early warning signals’, and successful empirical examples suggest a potential for practical applicability. However, while the range of proposed methods for predicting critical transitions is rapidly expanding, opinions on their practical use differ widely, and there is no comparative study that tests the limitations of the different methods to identify approaching critical transitions using time-series data. Here, we summarize a range of currently available early warning methods and apply them to two simulated time series that are typical of systems undergoing a critical transition. In addition to a methodological guide, our work offers a practical toolbox that may be used in a wide range of fields to help detect early warning signals of critical transitions in time series data. PMID:22815897
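Two of the generic early-warning indicators such comparisons cover, rolling variance and lag-1 autocorrelation, are straightforward to compute. The sketch below is illustrative and not the authors' released toolbox; the AR(1) simulation stands in for a system approaching a tipping point.

```python
import numpy as np

def rolling_ews(x, window):
    """Rolling variance and lag-1 autocorrelation: two generic indicators
    of 'critical slowing down' ahead of a transition."""
    var, ac1 = [], []
    for i in range(len(x) - window + 1):
        w = x[i:i + window]
        var.append(np.var(w))
        ac1.append(np.corrcoef(w[:-1], w[1:])[0, 1])
    return np.array(var), np.array(ac1)

# AR(1) process whose memory (phi) ramps up over time, mimicking the loss
# of resilience expected on the approach to a critical transition.
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.1, 0.97, n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + rng.standard_normal()
var, ac1 = rolling_ews(x, window=200)
```

Both indicators trend upward as phi approaches 1, which is the qualitative signature these methods look for in empirical time series; in practice the trend is usually summarized with a rank statistic such as Kendall's tau.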
Accurate prediction of protein–protein interactions from sequence alignments using a Bayesian method
Burger, Lukas; van Nimwegen, Erik
2008-01-01
Accurate and large-scale prediction of protein–protein interactions directly from amino-acid sequences is one of the great challenges in computational biology. Here we present a new Bayesian network method that predicts interaction partners using only multiple alignments of amino-acid sequences of interacting protein domains, without tunable parameters, and without the need for any training examples. We first apply the method to bacterial two-component systems and comprehensively reconstruct two-component signaling networks across all sequenced bacteria. Comparisons of our predictions with known interactions show that our method infers interaction partners genome-wide with high accuracy. To demonstrate the general applicability of our method we show that it also accurately predicts interaction partners in a recent dataset of polyketide synthases. Analysis of the predicted genome-wide two-component signaling networks shows that cognates (interacting kinase/regulator pairs, which lie adjacent on the genome) and orphans (which lie isolated) form two relatively independent components of the signaling network in each genome. In addition, while most genes are predicted to have only a small number of interaction partners, we find that 10% of orphans form a separate class of ‘hub' nodes that distribute and integrate signals to and from up to tens of different interaction partners. PMID:18277381
Molitor, John
2012-03-01
Bayesian methods have seen an increase in popularity in a wide variety of scientific fields, including epidemiology. One of the main reasons for their widespread application is the power of the Markov chain Monte Carlo (MCMC) techniques generally used to fit these models. As a result, researchers often implicitly associate Bayesian models with MCMC estimation procedures. However, Bayesian models do not always require Markov-chain-based methods for parameter estimation. This is important, as MCMC estimation methods, while generally quite powerful, are complex and computationally expensive and suffer from convergence problems related to the manner in which they generate correlated samples used to estimate probability distributions for parameters of interest. In this issue of the Journal, Cole et al. (Am J Epidemiol. 2012;175(5):368-375) present an interesting paper that discusses non-Markov-chain-based approaches to fitting Bayesian models. These methods, though limited, can overcome some of the problems associated with MCMC techniques and promise to provide simpler approaches to fitting Bayesian models. Applied researchers will find these estimation approaches intuitively appealing and will gain a deeper understanding of Bayesian models through their use. However, readers should be aware that other non-Markov-chain-based methods are currently in active development and have been widely published in other fields.
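The non-Markov-chain-based fitting the commentary describes is easiest to see with a conjugate model, where the posterior is available in closed form and no sampling is needed at all. The sketch below is a minimal illustration of that general point, not the methods of Cole et al.; the Beta-Binomial model and the counts are hypothetical.

```python
from math import sqrt

# Conjugate Beta-Binomial update: the posterior is known in closed
# form, so no MCMC sampling (and no convergence diagnostics) is needed.
alpha0, beta0 = 1.0, 1.0   # uniform Beta(1, 1) prior on a risk p
cases, n = 12, 100         # hypothetical observed cases out of n subjects

# Posterior is Beta(alpha0 + cases, beta0 + n - cases).
alpha_post = alpha0 + cases
beta_post = beta0 + (n - cases)

post_mean = alpha_post / (alpha_post + beta_post)
post_var = (alpha_post * beta_post) / (
    (alpha_post + beta_post) ** 2 * (alpha_post + beta_post + 1))
print(round(post_mean, 4), round(sqrt(post_var), 4))
```

An MCMC sampler applied to the same model would only approximate this exact Beta posterior, which is the trade-off the commentary highlights.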
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bridge, Joanna S.; Zeimann, Gregory R.; Trump, Jonathan R.
2016-08-01
Differentiating between active galactic nucleus (AGN) activity and star formation in z ∼ 2 galaxies is difficult because traditional methods, such as line-ratio diagnostics, change with redshift, while multi-wavelength methods (X-ray, radio, IR) are sensitive to only the brightest AGNs. We have developed a new method for spatially resolving emission lines using the Hubble Space Telescope/Wide Field Camera 3 G141 grism spectra and quantifying AGN activity through the spatial gradient of the [O iii]/H β line ratio. Through detailed simulations, we show that our novel line-ratio gradient approach identifies ∼40% more low-mass and obscured AGNs than obtained by classical methods. Based on our simulations, we developed a relationship that maps the stellar mass, star formation rate, and measured [O iii]/H β gradient to the AGN Eddington ratio. We apply our technique to previously studied stacked samples of galaxies at z ∼ 2 and find that our results are consistent with these studies. This gradient method will also be able to inform other areas of galaxy evolution science, such as inside-out quenching and metallicity gradients, and will be widely applicable to future spatially resolved James Webb Space Telescope data.
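The core of a line-ratio gradient diagnostic can be reduced to fitting the slope of [O iii]/Hβ against radius: an AGN boosts [O iii] in the nucleus, so the ratio falls outward, while distributed star formation yields a flat or rising profile. The sketch below is an illustrative toy computation only; the surface-brightness profiles are invented, and the paper's actual measurement works on 2-D grism spectra with detailed simulations.

```python
import numpy as np

# Toy 1-D surface-brightness profiles (arbitrary units, hypothetical):
# a central AGN makes [O III] more concentrated than Hbeta.
radius = np.linspace(0.0, 1.0, 20)          # arcsec from the nucleus
oiii = 5.0 * np.exp(-radius / 0.2) + 1.0    # centrally peaked [O III]
hbeta = 2.0 * np.exp(-radius / 0.5) + 1.0   # more extended Hbeta

ratio = oiii / hbeta
slope, intercept = np.polyfit(radius, ratio, 1)
print(slope)  # negative gradient: nuclear (AGN-like) excitation
```

A star-formation-dominated galaxy, modeled with matching profile shapes for the two lines, would instead give a slope near zero under this toy setup.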
Ultrafast and Wide Range Analysis of DNA Molecules Using Rigid Network Structure of Solid Nanowires
NASA Astrophysics Data System (ADS)
Rahong, Sakon; Yasui, Takao; Yanagida, Takeshi; Nagashima, Kazuki; Kanai, Masaki; Klamchuen, Annop; Meng, Gang; He, Yong; Zhuge, Fuwei; Kaji, Noritada; Kawai, Tomoji; Baba, Yoshinobu
2014-06-01
Analyzing sizes of DNA via electrophoresis using a gel has played an important role in the recent, rapid progress of biology and biotechnology. Although analyzing DNA over a wide range of sizes in a short time is desired, no existing electrophoresis methods have been able to fully satisfy these two requirements. Here we propose a novel method using a rigid 3D network structure composed of solid nanowires within a microchannel. This rigid network structure enables analysis of DNA under applied DC electric fields over a large size range (100 bp-166 kbp) within 13 s, a much wider range and shorter analysis time than any existing method achieves. The network density is readily varied for the targeted DNA size range by tailoring the number of cycles of the nanowire growth only at the desired spatial position within the microchannel. The rigid dense 3D network structure with spatial density control plays an important role in determining the capability for analyzing DNA. Since the present method allows the spatial location and density of the nanostructure within the microchannels to be defined, this unique controllability offers a new strategy to develop an analytical method not only for DNA but also for other biological molecules.
An Analysis of the Origin and Propagation of the Multiple Coronal Mass Ejections of 2010 August 1
NASA Technical Reports Server (NTRS)
Harrison, R. A.; Davies, J. A.; Moestl, C.; Liu, Y.; Temmer, M.; Bisi, M. M.; Eastwood, J. P.; DeKoning, C. A.; Nitta, N.; Rollett, T.;
2012-01-01
On 2010 August 1, the northern solar hemisphere underwent significant activity that involved a complex set of active regions near central meridian with, nearby, two large prominences and other more distant active regions. This activity culminated in the eruption of four major coronal mass ejections (CMEs), effects of which were detected at Earth and other solar system bodies. Recognizing the unprecedented wealth of data from the wide range of spacecraft that were available-providing the potential for us to explore methods for CME identification and tracking, and to assess issues regarding onset and planetary impact-we present a comprehensive analysis of this sequence of CMEs. We show that, for three of the four major CMEs, onset is associated with prominence eruption, while the remaining CME appears to be closely associated with a flare. Using instrumentation on board the Solar Terrestrial Relations Observatory spacecraft, three of the CMEs could be tracked out to elongations beyond 50°; their directions and speeds have been determined by various methods, not least to assess their potential for Earth impact. The analysis techniques that can be applied to the other CME, the first to erupt, are more limited since that CME was obscured by the subsequent, much faster event before it had propagated far from the Sun; we discuss the speculation that these two CMEs interact. The consistency of the results, derived from the wide variety of methods applied to such an extraordinarily complete data set, has allowed us to converge on robust interpretations of the CME onsets and their arrivals at 1 AU.