Karasawa, N; Mitsutake, A; Takano, H
2017-12-01
Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n]polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μs molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.
NASA Astrophysics Data System (ADS)
Karasawa, N.; Mitsutake, A.; Takano, H.
2017-12-01
Proteins implement their functionalities when folded into specific three-dimensional structures, and their functions are related to the protein structures and dynamics. Previously, we applied a relaxation mode analysis (RMA) method to protein systems; this method approximately estimates the slow relaxation modes and times via simulation and enables investigation of the dynamic properties underlying the protein structural fluctuations. Recently, two-step RMA with multiple evolution times has been proposed and applied to a slightly complex homopolymer system, i.e., a single [n ] polycatenane. This method can be applied to more complex heteropolymer systems, i.e., protein systems, to estimate the relaxation modes and times more accurately. In two-step RMA, we first perform RMA and obtain rough estimates of the relaxation modes and times. Then, we apply RMA with multiple evolution times to a small number of the slowest relaxation modes obtained in the previous calculation. Herein, we apply this method to the results of principal component analysis (PCA). First, PCA is applied to a 2-μ s molecular dynamics simulation of hen egg-white lysozyme in aqueous solution. Then, the two-step RMA method with multiple evolution times is applied to the obtained principal components. The slow relaxation modes and corresponding relaxation times for the principal components are much improved by the second RMA.
2009-06-01
3. Previous Navy CRM Assessments ....................................................24 4. Applying Kirkpatrick’s Topology of Evaluation...development within each aviation community. Kirkpatrick’s (1976) hierarchy of training evaluation technique was applied to examine three levels of... Applying methods and techniques used in previous CRM evaluation research, this thesis provided an updated evaluation of the Naval CRM program to fill
The application of contraction theory to an iterative formulation of electromagnetic scattering
NASA Technical Reports Server (NTRS)
Brand, J. C.; Kauffman, J. F.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
3D temporal subtraction on multislice CT images using nonlinear warping technique
NASA Astrophysics Data System (ADS)
Ishida, Takayuki; Katsuragawa, Shigehiko; Kawashita, Ikuo; Kim, Hyounseop; Itai, Yoshinori; Awai, Kazuo; Li, Qiang; Doi, Kunio
2007-03-01
The detection of very subtle lesions and/or lesions overlapped with vessels on CT images is a time consuming and difficult task for radiologists. In this study, we have developed a 3D temporal subtraction method to enhance interval changes between previous and current multislice CT images based on a nonlinear image warping technique. Our method provides a subtraction CT image which is obtained by subtraction of a previous CT image from a current CT image. Reduction of misregistration artifacts is important in the temporal subtraction method. Therefore, our computerized method includes global and local image matching techniques for accurate registration of current and previous CT images. For global image matching, we selected the corresponding previous section image for each current section image by using 2D cross-correlation between a blurred low-resolution current CT image and a blurred previous CT image. For local image matching, we applied the 3D template matching technique with translation and rotation of volumes of interests (VOIs) which were selected in the current and the previous CT images. The local shift vector for each VOI pair was determined when the cross-correlation value became the maximum in the 3D template matching. The local shift vectors at all voxels were determined by interpolation of shift vectors of VOIs, and then the previous CT image was nonlinearly warped according to the shift vector for each voxel. Finally, the warped previous CT image was subtracted from the current CT image. The 3D temporal subtraction method was applied to 19 clinical cases. The normal background structures such as vessels, ribs, and heart were removed without large misregistration artifacts. Thus, interval changes due to lung diseases were clearly enhanced as white shadows on subtraction CT images.
Passive wireless strain monitoring of tyres using capacitance and tuning frequency changes
NASA Astrophysics Data System (ADS)
Matsuzaki, Ryosuke; Todoroki, Akira
2005-08-01
In-service strain monitoring of tyres of automobiles is quite effective for improving the reliability of tyres and anti-lock braking systems (ABS). Conventional strain gauges have high stiffness and require lead wires. Therefore, they are cumbersome for tyre strain measurements. In a previous study, the authors proposed a new wireless strain monitoring method that adopts the tyre itself as a sensor, with an oscillating circuit. This method is very simple and useful, but it requires a battery to activate the oscillating circuit. In the present study, the previous method for wireless tyre monitoring is improved to produce a passive wireless sensor. A specimen made from a commercially available tyre is connected to a tuning circuit comprising an inductance and a capacitance as a condenser. The capacitance change of the tyre alters the tuning frequency. This change of the tuned radio wave facilitates wireless measurement of the applied strain of the specimen without any power supply. This passive wireless method is applied to a specimen and the static applied strain is measured. Experiments demonstrate that the method is effective for passive wireless strain monitoring of tyres.
Methodes entropiques appliquees au probleme inverse en magnetoencephalographie
NASA Astrophysics Data System (ADS)
Lapalme, Ervig
2005-07-01
This thesis is devoted to biomagnetic source localization using magnetoencephalography. This problem is known to have an infinite number of solutions. So methods are required to take into account anatomical and functional information on the solution. The work presented in this thesis uses the maximum entropy on the mean method to constrain the solution. This method originates from statistical mechanics and information theory. This thesis is divided into two main parts containing three chapters each. The first part reviews the magnetoencephalographic inverse problem: the theory needed to understand its context and the hypotheses for simplifying the problem. In the last chapter of this first part, the maximum entropy on the mean method is presented: its origins are explained and also how it is applied to our problem. The second part is the original work of this thesis presenting three articles; one of them already published and two others submitted for publication. In the first article, a biomagnetic source model is developed and applied in a theoretical con text but still demonstrating the efficiency of the method. In the second article, we go one step further towards a realistic modelization of the cerebral activation. The main priors are estimated using the magnetoencephalographic data. This method proved to be very efficient in realistic simulations. In the third article, the previous method is extended to deal with time signals thus exploiting the excellent time resolution offered by magnetoencephalography. Compared with our previous work, the temporal method is applied to real magnetoencephalographic data coming from a somatotopy experience and results agree with previous physiological knowledge about this kind of cognitive process.
NASA Astrophysics Data System (ADS)
Zia, Haider
2017-06-01
This paper describes an updated exponential Fourier based split-step method that can be applied to a greater class of partial differential equations than previous methods would allow. These equations arise in physics and engineering, a notable example being the generalized derivative non-linear Schrödinger equation that arises in non-linear optics with self-steepening terms. These differential equations feature terms that were previously inaccessible to model accurately with low computational resources. The new method maintains a 3rd order error even with these additional terms and models the equation in all three spatial dimensions and time. The class of non-linear differential equations that this method applies to is shown. The method is fully derived and implementation of the method in the split-step architecture is shown. This paper lays the mathematical ground work for an upcoming paper employing this method in white-light generation simulations in bulk material.
Linear algebraic methods applied to intensity modulated radiation therapy.
Crooks, S M; Xing, L
2001-10-01
Methods of linear algebra are applied to the choice of beam weights for intensity modulated radiation therapy (IMRT). It is shown that the physical interpretation of the beam weights, target homogeneity and ratios of deposited energy can be given in terms of matrix equations and quadratic forms. The methodology of fitting using linear algebra as applied to IMRT is examined. Results are compared with IMRT plans that had been prepared using a commercially available IMRT treatment planning system and previously delivered to cancer patients.
Laterally constrained inversion for CSAMT data interpretation
NASA Astrophysics Data System (ADS)
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it hasn't been yet applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method for CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to Jacobian to balance the sensitivity of model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this can improve the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of LCI applications to CSAMT data, for the noise free data, the results show that the LCI method can recover the true model better compared to the traditional single-station inversion; and for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent noise insensitive. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison with the previous inversion in a coal mine shows that LCI method delivers smoother layer interfaces that well correlate to seismic data, while comparison with a global searching algorithm of simulated annealing (SA) in a watershed shows that though both methods deliver very similar good results, however, LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
Using cluster ensemble and validation to identify subtypes of pervasive developmental disorders.
Shen, Jess J; Lee, Phil-Hyoun; Holden, Jeanette J A; Shatkay, Hagit
2007-10-11
Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior. Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV.
Using Cluster Ensemble and Validation to Identify Subtypes of Pervasive Developmental Disorders
Shen, Jess J.; Lee, Phil Hyoun; Holden, Jeanette J.A.; Shatkay, Hagit
2007-01-01
Pervasive Developmental Disorders (PDD) are neurodevelopmental disorders characterized by impairments in social interaction, communication and behavior.1 Given the diversity and varying severity of PDD, diagnostic tools attempt to identify homogeneous subtypes within PDD. Identifying subtypes can lead to targeted etiology studies and to effective type-specific intervention. Cluster analysis can suggest coherent subsets in data; however, different methods and assumptions lead to different results. Several previous studies applied clustering to PDD data, varying in number and characteristics of the produced subtypes19. Most studies used a relatively small dataset (fewer than 150 subjects), and all applied only a single clustering method. Here we study a relatively large dataset (358 PDD patients), using an ensemble of three clustering methods. The results are evaluated using several validation methods, and consolidated through an integration step. Four clusters are identified, analyzed and compared to subtypes previously defined by the widely used diagnostic tool DSM-IV.2 PMID:18693920
Matsumoto, Hirotaka; Kiryu, Hisanori
2016-06-08
Single-cell technologies make it possible to quantify the comprehensive states of individual cells, and have the power to shed light on cellular differentiation in particular. Although several methods have been developed to fully analyze the single-cell expression data, there is still room for improvement in the analysis of differentiation. In this paper, we propose a novel method SCOUP to elucidate differentiation process. Unlike previous dimension reduction-based approaches, SCOUP describes the dynamics of gene expression throughout differentiation directly, including the degree of differentiation of a cell (in pseudo-time) and cell fate. SCOUP is superior to previous methods with respect to pseudo-time estimation, especially for single-cell RNA-seq. SCOUP also successfully estimates cell lineage more accurately than previous method, especially for cells at an early stage of bifurcation. In addition, SCOUP can be applied to various downstream analyses. As an example, we propose a novel correlation calculation method for elucidating regulatory relationships among genes. We apply this method to a single-cell RNA-seq data and detect a candidate of key regulator for differentiation and clusters in a correlation network which are not detected with conventional correlation analysis. We develop a stochastic process-based method SCOUP to analyze single-cell expression data throughout differentiation. SCOUP can estimate pseudo-time and cell lineage more accurately than previous methods. We also propose a novel correlation calculation method based on SCOUP. SCOUP is a promising approach for further single-cell analysis and available at https://github.com/hmatsu1226/SCOUP.
Passive wireless strain monitoring of tire using capacitance change
NASA Astrophysics Data System (ADS)
Matsuzaki, Ryosuke; Todoroki, Akira
2004-07-01
In-service strain monitoring of tires of automobile is quite effective for improving the reliability of tires and Anti-lock Braking System (ABS). Since conventional strain gages have high stiffness and require lead wires, the conventional strain gages are cumbersome for the strain measurements of the tires. In a previous study, the authors proposed a new wireless strain monitoring method that adopts the tire itself as a sensor, with an oscillating circuit. This method is very simple and useful, but it requires a battery to activate the oscillating circuit. In the present study, the previous method for wireless tire monitoring is improved to produce a passive wireless sensor. A specimen made from a commercially available tire is connected to a tuning circuit comprising an inductance and a capacitance as a condenser. The capacitance change of tire causes change of the tuning frequency. This change of the tuned radio wave enables us to measure the applied strain of the specimen wirelessly, without any power supply from outside. This new passive wireless method is applied to a specimen and the static applied strain is measured. As a result, the method is experimentally shown to be effective as a passive wireless strain monitoring of tires.
Experiences Using Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1996-01-01
This paper describes three cases studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, the formal modeling provided a cost effective enhancement of the existing verification and validation processes. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
NASA Astrophysics Data System (ADS)
Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.
2015-07-01
In this paper we present improved methods for discriminating and quantifying Primary Biological Aerosol Particles (PBAP) by applying hierarchical agglomerative cluster analysis to multi-parameter ultra violet-light induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1×106 points on a desktop computer, allowing for each fluorescent particle in a dataset to be explicitly clustered. This reduces the potential for misattribution found in subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient dataset. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4) where the optical size, asymmetry factor and fluorescent measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best performing methods were applied to the BEACHON-RoMBAS ambient dataset where it was found that the z-score and range normalisation methods yield similar results with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP) where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of bacterial aerosol concentration by a factor of 5. We suggest that this likely due to errors arising from misatrribution due to poor centroid definition and failure to assign particles to a cluster as a result of the subsampling and comparative attribution method employed by WASP. The methods used here allow for the entire fluorescent population of particles to be analysed yielding an explict cluster attribution for each particle, improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
A Study on AR 3D Objects Shading Method Using Electronic Compass Sensor
NASA Astrophysics Data System (ADS)
Jung, Sungmo; Kim, Seoksoo
More effective communications can be offered to users by applying NPR (Non-Photorealistic Rendering) methods to 3D graphics. Thus, there has been much research on how to apply NPR to mobile contents. However, previous studies only propose cartoon rendering for pre-treatment with no consideration for directions of light in the surrounding environment. In this study, therefore, ECS(Electronic Compass Sensor) is applied to AR 3D objects shading in order to define directions of light as per time slots for assimilation with the surrounding environment.
Experiences Using Lightweight Formal Methods for Requirements Modeling
NASA Technical Reports Server (NTRS)
Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David
1997-01-01
This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.
THE CHEMICAL ANALYSIS OF TERNARY ALLOYS OF PLUTONIUM WITH MOLYBDENUM AND URANIUM
DOE Office of Scientific and Technical Information (OSTI.GOV)
Phillips, G.; Woodhead, J.; Jenkins, E.N.
1958-09-01
It is shown that the absorptiometric determination of molybdenum as thiocyanate may be used in the presence of plutonium. Molybdenum interferes with previously published methods for determining uranium and plutonium but conditlons have been established for its complete removal by solvent extraction of the compound with alpha -benzoin oxime. The previous methods for uranium and plutonium are satisfactory when applied to the residual aqueous phase following this solvent extraction. (auth)
We previously described our collective judgment methods to engage expert stakeholders in the Comprehensive Environmental Assessment (CEA) workshop process applied to nano-TiO2 and nano-Ag research planning. We identified several lessons learned in engaging stakeholders to identif...
Thomas, Freddy; Jamin, Eric
2009-09-01
An international collaborative study of isotopic methods applied to control the authenticity of vinegar was organized in order to support the recognition of these procedures as official methods. The determination of the 2H/1H ratio of the methyl site of acetic acid by SNIF-NMR (site-specific natural isotopic fractionation-nuclear magnetic resonance) and the determination of the 13C/12C ratio, by IRMS (isotope ratio mass spectrometry) provide complementary information to characterize the botanical origin of acetic acid and to detect adulterations of vinegar using synthetic acetic acid. Both methods use the same initial steps to recover pure acetic acid from vinegar. In the case of wine vinegar, the determination of the 18O/16O ratio of water by IRMS allows to differentiate wine vinegar from vinegars made from dried grapes. The same set of vinegar samples was used to validate these three determinations. The precision parameters of the method for measuring delta13C (carbon isotopic deviation) were found to be similar to the values previously obtained for similar methods applied to wine ethanol or sugars extracted from fruit juices: the average repeatability (r) was 0.45 per thousand, and the average reproducibility (R) was 0.91 per thousand. As expected from previous in-house study of the uncertainties, the precision parameters of the method for measuring the 2H/1H ratio of the methyl site were found to be slightly higher than the values previously obtained for similar methods applied to wine ethanol or fermentation ethanol in fruit juices: the average repeatability was 1.34 ppm, and the average reproducibility was 1.62 ppm. This precision is still significantly smaller than the differences between various acetic acid sources (delta13C and delta18O) and allows a satisfactory discrimination of vinegar types. The precision parameters of the method for measuring delta18O were found to be similar to the values previously obtained for other methods applied to wine and fruit juices: the average repeatability was 0.15 per thousand, and the average reproducibility was 0.59 per thousand. The above values are proposed as repeatability and reproducibility limits in the current state of the art. On the basis of this satisfactory inter-laboratory precision and on the accuracy demonstrated by a spiking experiment, the authors recommend the adoption of the three isotopic determinations included in this study as official methods for controlling the authenticity of vinegar.
Photoactivated methods for enabling cartilage-to-cartilage tissue fixation
NASA Astrophysics Data System (ADS)
Sitterle, Valerie B.; Roberts, David W.
2003-06-01
The present study investigates whether photoactivated attachment of cartilage can provide a viable method for more effective repair of damaged articular surfaces by providing an alternative to sutures, barbs, or fibrin glues for initial fixation. Unlike artificial materials, biological constructs do not possess the initial strength for press-fitting and are instead sutured or pinned in place, typically inducing even more tissue trauma. A possible alternative involves the application of a photosensitive material, which is then photoactivated with a laser source to attach the implant and host tissues together in either a photothermal or photochemical process. The photothermal version of this method shows potential, but has been almost entirely applied to vascularized tissues. Cartilage, however, exhibits several characteristics that produce appreciable differences between applying and refining these techniques when compared to previous efforts involving vascularized tissues. Preliminary investigations involving photochemical photosensitizers based on singlet oxygen and electron transfer mechanisms are discussed, and characterization of the photodynamic effects on bulk collagen gels as a simplified model system using FTIR is performed. Previous efforts using photothermal welding applied to cartilaginous tissues are reviewed.
Out, Astrid A; van Minderhout, Ivonne J H M; van der Stoep, Nienke; van Bommel, Lysette S R; Kluijt, Irma; Aalfs, Cora; Voorendt, Marsha; Vossen, Rolf H A M; Nielsen, Maartje; Vasen, Hans F A; Morreau, Hans; Devilee, Peter; Tops, Carli M J; Hes, Frederik J
2015-06-01
Familial adenomatous polyposis is most frequently caused by pathogenic variants in either the APC gene or the MUTYH gene. The detection rate of pathogenic variants depends on the severity of the phenotype and sensitivity of the screening method, including sensitivity for mosaic variants. For 171 patients with multiple colorectal polyps without previously detectable pathogenic variant, APC was reanalyzed in leukocyte DNA by one uniform technique: high-resolution melting (HRM) analysis. Serial dilution of heterozygous DNA resulted in a lowest detectable allelic fraction of 6% for the majority of variants. HRM analysis and subsequent sequencing detected pathogenic fully heterozygous APC variants in 10 (6%) of the patients and pathogenic mosaic variants in 2 (1%). All these variants were previously missed by various conventional scanning methods. In parallel, HRM APC scanning was applied to DNA isolated from polyp tissue of two additional patients with apparently sporadic polyposis and without detectable pathogenic APC variant in leukocyte DNA. In both patients a pathogenic mosaic APC variant was present in multiple polyps. The detection of pathogenic APC variants in 7% of the patients, including mosaics, illustrates the usefulness of a complete APC gene reanalysis of previously tested patients, by a supplementary scanning method. HRM is a sensitive and fast pre-screening method for reliable detection of heterozygous and mosaic variants, which can be applied to leukocyte and polyp derived DNA.
Applying Item Response Theory Methods to Examine the Impact of Different Response Formats
ERIC Educational Resources Information Center
Hohensinn, Christine; Kubinger, Klaus D.
2011-01-01
In aptitude and achievement tests, different response formats are usually used. A fundamental distinction must be made between the class of multiple-choice formats and the constructed response formats. Previous studies have examined the impact of different response formats applying traditional statistical approaches, but these influences can also…
Updated generalized biomass equations for North American tree species
David C. Chojnacky; Linda S. Heath; Jennifer C. Jenkins
2014-01-01
Historically, tree biomass at large scales has been estimated by applying dimensional analysis techniques and field measurements such as diameter at breast height (dbh) in allometric regression equations. Equations often have been developed using differing methods and applied only to certain species or isolated areas. We previously had compiled and combined (in meta-...
NASA Technical Reports Server (NTRS)
Brand, J. C.
1985-01-01
Contraction theory is applied to an iterative formulation of electromagnetic scattering from periodic structures and a computational method for insuring convergence is developed. A short history of spectral (or k-space) formulation is presented with an emphasis on application to periodic surfaces. The mathematical background for formulating an iterative equation is covered using straightforward single variable examples including an extension to vector spaces. To insure a convergent solution of the iterative equation, a process called the contraction corrector method is developed. Convergence properties of previously presented iterative solutions to one-dimensional problems are examined utilizing contraction theory and the general conditions for achieving a convergent solution are explored. The contraction corrector method is then applied to several scattering problems including an infinite grating of thin wires with the solution data compared to previous works.
Sequence of eruptive events in the Vesuvio area recorded in shallow-water Ionian Sea sediments
NASA Astrophysics Data System (ADS)
Taricco, C.; Alessio, S.; Vivaldo, G.
2008-01-01
The dating of the cores we drilled from the Gallipoli terrace in the Gulf of Taranto (Ionian Sea), previously obtained by tephroanalysis, is checked by applying a method to objectively recognize volcanic events. This automatic statistical procedure allows identifying pulse-like features in a series and evaluating quantitatively the confidence level at which the significant peaks are detected. We applied it to the 2000-years-long pyroxenes series of the GT89-3 core, on which the dating is based. The method confirms the dating previously performed by detecting at a high confidence level the peaks originally used and indicates a few possible undocumented eruptions. Moreover, a spectral analysis, focussed on the long-term variability of the pyroxenes series and performed by several advanced methods, reveals that the volcanic pulses are superimposed to a millennial trend and a 400 years oscillation.
A Thermodynamically-consistent FBA-based Approach to Biogeochemical Reaction Modeling
NASA Astrophysics Data System (ADS)
Shapiro, B.; Jin, Q.
2015-12-01
Microbial rates are critical to understanding biogeochemical processes in natural environments. Recently, flux balance analysis (FBA) has been applied to predict microbial rates in aquifers and other settings. FBA is a genome-scale constraint-based modeling approach that computes metabolic rates and other phenotypes of microorganisms. This approach requires a prior knowledge of substrate uptake rates, which is not available for most natural microbes. Here we propose to constrain substrate uptake rates on the basis of microbial kinetics. Specifically, we calculate rates of respiration (and fermentation) using a revised Monod equation; this equation accounts for both the kinetics and thermodynamics of microbial catabolism. Substrate uptake rates are then computed from the rates of respiration, and applied to FBA to predict rates of microbial growth. We implemented this method by linking two software tools, PHREEQC and COBRA Toolbox. We applied this method to acetotrophic methanogenesis by Methanosarcina barkeri, and compared the simulation results to previous laboratory observations. The new method constrains acetate uptake by accounting for the kinetics and thermodynamics of methanogenesis, and predicted well the observations of previous experiments. In comparison, traditional methods of dynamic-FBA constrain acetate uptake on the basis of enzyme kinetics, and failed to reproduce the experimental results. These results show that microbial rate laws may provide a better constraint than enzyme kinetics for applying FBA to biogeochemical reaction modeling.
Optimal Stratification of Item Pools in a-Stratified Computerized Adaptive Testing.
ERIC Educational Resources Information Center
Chang, Hua-Hua; van der Linden, Wim J.
2003-01-01
Developed a method based on 0-1 linear programming to stratify an item pool optimally for use in alpha-stratified adaptive testing. Applied the method to a previous item pool from the computerized adaptive test of the Graduate Record Examinations. Results show the new method performs well in practical situations. (SLD)
Photoionization of Atoms and Molecules using a Configuration-Average Distorted-Wave Method
NASA Astrophysics Data System (ADS)
Pindzola, M. S.; Balance, C. P.; Loch, S. D.; Ludlow, J. A.
2011-05-01
A configuration-average distorted-wave method is applied to calculate the photoionization cross section for the outer subshells of the C atom and the C2 diatomic molecule. Comparisions are made with previous R-matrix and Hartree- Fock distorted-wave calculations.
An improved method of measuring heart rate using a webcam
NASA Astrophysics Data System (ADS)
Liu, Yi; Ouyang, Jianfei; Yan, Yonggang
2014-09-01
Measuring heart rate traditionally requires special equipment and physical contact with the subject. Reliable non-contact and low-cost measurements are highly desirable for convenient and comfortable physiological self-assessment. Previous work has shown that consumer-grade cameras can provide useful signals for remote heart rate measurements. In this paper a simple and robust method of measuring the heart rate using low-cost webcam is proposed. Blood volume pulse is extracted by proper Region of Interest (ROI) and color channel selection from image sequences of human faces without complex computation. Heart rate is subsequently quantified by spectrum analysis. The method is successfully applied under natural lighting conditions. Results of experiments show that it takes less time, is much simpler, and has similar accuracy to the previously published and widely used method of Independent Component Analysis (ICA). Benefitting from non-contact, convenience, and low-costs, it provides great promise for popularization of home healthcare and can further be applied to biomedical research.
A strategy for evaluating pathway analysis methods.
Yu, Chenggang; Woo, Hyung Jun; Yu, Xueping; Oyama, Tatsuya; Wallqvist, Anders; Reifman, Jaques
2017-10-13
Researchers have previously developed a multitude of methods designed to identify biological pathways associated with specific clinical or experimental conditions of interest, with the aim of facilitating biological interpretation of high-throughput data. Before practically applying such pathway analysis (PA) methods, we must first evaluate their performance and reliability, using datasets where the pathways perturbed by the conditions of interest have been well characterized in advance. However, such 'ground truths' (or gold standards) are often unavailable. Furthermore, previous evaluation strategies that have focused on defining 'true answers' are unable to systematically and objectively assess PA methods under a wide range of conditions. In this work, we propose a novel strategy for evaluating PA methods independently of any gold standard, either established or assumed. The strategy involves the use of two mutually complementary metrics, recall and discrimination. Recall measures the consistency of the perturbed pathways identified by applying a particular analysis method to an original large dataset and those identified by the same method to a sub-dataset of the original dataset. In contrast, discrimination measures specificity-the degree to which the perturbed pathways identified by a particular method to a dataset from one experiment differ from those identifying by the same method to a dataset from a different experiment. We used these metrics and 24 datasets to evaluate six widely used PA methods. The results highlighted the common challenge in reliably identifying significant pathways from small datasets. Importantly, we confirmed the effectiveness of our proposed dual-metric strategy by showing that previous comparative studies corroborate the performance evaluations of the six methods obtained by our strategy. Unlike any previously proposed strategy for evaluating the performance of PA methods, our dual-metric strategy does not rely on any ground truth, either established or assumed, of the pathways perturbed by a specific clinical or experimental condition. As such, our strategy allows researchers to systematically and objectively evaluate pathway analysis methods by employing any number of datasets for a variety of conditions.
NASA Astrophysics Data System (ADS)
Crawford, I.; Ruske, S.; Topping, D. O.; Gallagher, M. W.
2015-11-01
In this paper we present improved methods for discriminating and quantifying primary biological aerosol particles (PBAPs) by applying hierarchical agglomerative cluster analysis to multi-parameter ultraviolet-light-induced fluorescence (UV-LIF) spectrometer data. The methods employed in this study can be applied to data sets in excess of 1 × 106 points on a desktop computer, allowing for each fluorescent particle in a data set to be explicitly clustered. This reduces the potential for misattribution found in subsampling and comparative attribution methods used in previous approaches, improving our capacity to discriminate and quantify PBAP meta-classes. We evaluate the performance of several hierarchical agglomerative cluster analysis linkages and data normalisation methods using laboratory samples of known particle types and an ambient data set. Fluorescent and non-fluorescent polystyrene latex spheres were sampled with a Wideband Integrated Bioaerosol Spectrometer (WIBS-4) where the optical size, asymmetry factor and fluorescent measurements were used as inputs to the analysis package. It was found that the Ward linkage with z-score or range normalisation performed best, correctly attributing 98 and 98.1 % of the data points respectively. The best-performing methods were applied to the BEACHON-RoMBAS (Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics and Nitrogen-Rocky Mountain Biogenic Aerosol Study) ambient data set, where it was found that the z-score and range normalisation methods yield similar results, with each method producing clusters representative of fungal spores and bacterial aerosol, consistent with previous results. The z-score result was compared to clusters generated with previous approaches (WIBS AnalysiS Program, WASP) where we observe that the subsampling and comparative attribution method employed by WASP results in the overestimation of the fungal spore concentration by a factor of 1.5 and the underestimation of bacterial aerosol concentration by a factor of 5. We suggest that this likely due to errors arising from misattribution due to poor centroid definition and failure to assign particles to a cluster as a result of the subsampling and comparative attribution method employed by WASP. The methods used here allow for the entire fluorescent population of particles to be analysed, yielding an explicit cluster attribution for each particle and improving cluster centroid definition and our capacity to discriminate and quantify PBAP meta-classes compared to previous approaches.
Indispensable finite time corrections for Fokker-Planck equations from time series data.
Ragwitz, M; Kantz, H
2001-12-17
The reconstruction of Fokker-Planck equations from observed time series data suffers strongly from finite sampling rates. We show that previously published results are degraded considerably by such effects. We present correction terms which yield a robust estimation of the diffusion terms, together with a novel method for one-dimensional problems. We apply these methods to time series data of local surface wind velocities, where the dependence of the diffusion constant on the state variable shows a different behavior than previously suggested.
A Lagrangian meshfree method applied to linear and nonlinear elasticity.
Walker, Wade A
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code.
A Lagrangian meshfree method applied to linear and nonlinear elasticity
2017-01-01
The repeated replacement method (RRM) is a Lagrangian meshfree method which we have previously applied to the Euler equations for compressible fluid flow. In this paper we present new enhancements to RRM, and we apply the enhanced method to both linear and nonlinear elasticity. We compare the results of ten test problems to those of analytic solvers, to demonstrate that RRM can successfully simulate these elastic systems without many of the requirements of traditional numerical methods such as numerical derivatives, equation system solvers, or Riemann solvers. We also show the relationship between error and computational effort for RRM on these systems, and compare RRM to other methods to highlight its strengths and weaknesses. And to further explain the two elastic equations used in the paper, we demonstrate the mathematical procedure used to create Riemann and Sedov-Taylor solvers for them, and detail the numerical techniques needed to embody those solvers in code. PMID:29045443
Applied Use Value of Scientific Information for Management of Ecosystem Services
NASA Astrophysics Data System (ADS)
Raunikar, R. P.; Forney, W.; Bernknopf, R.; Mishra, S.
2012-12-01
The U.S. Geological Survey has developed and applied methods for quantifying the value of scientific information (VOI) that are based on the applied use value of the information. In particular the applied use value of U.S. Geological Survey information often includes efficient management of ecosystem services. The economic nature of U.S. Geological Survey scientific information is largely equivalent to that of any information, but we focus application of our VOI quantification methods on the information products provided freely to the public by the U.S. Geological Survey. We describe VOI economics in general and illustrate by referring to previous studies that use the evolving applied use value methods, which includes examples of the siting of landfills in Louden County, the mineral exploration efficiencies of finer resolution geologic maps in Canada, and improved agricultural production and groundwater protection in Eastern Iowa possible with Landsat moderate resolution satellite imagery. Finally, we describe the adaptation of the applied use value method to the case of streamgage information used to improve the efficiency of water markets in New Mexico.
Simulation-Based Rule Generation Considering Readability
Yahagi, H.; Shimizu, S.; Ogata, T.; Hara, T.; Ota, J.
2015-01-01
Rule generation method is proposed for an aircraft control problem in an airport. Designing appropriate rules for motion coordination of taxiing aircraft in the airport is important, which is conducted by ground control. However, previous studies did not consider readability of rules, which is important because it should be operated and maintained by humans. Therefore, in this study, using the indicator of readability, we propose a method of rule generation based on parallel algorithm discovery and orchestration (PADO). By applying our proposed method to the aircraft control problem, the proposed algorithm can generate more readable and more robust rules and is found to be superior to previous methods. PMID:27347501
Dynamic estimator for determining operating conditions in an internal combustion engine
Hellstrom, Erik; Stefanopoulou, Anna; Jiang, Li; Larimore, Jacob
2016-01-05
Methods and systems are provided for estimating engine performance information for a combustion cycle of an internal combustion engine. Estimated performance information for a previous combustion cycle is retrieved from memory. The estimated performance information includes an estimated value of at least one engine performance variable. Actuator settings applied to engine actuators are also received. The performance information for the current combustion cycle is then estimated based, at least in part, on the estimated performance information for the previous combustion cycle and the actuator settings applied during the previous combustion cycle. The estimated performance information for the current combustion cycle is then stored to the memory to be used in estimating performance information for a subsequent combustion cycle.
New technology in postfire rehab
Joe Sabel
2007-01-01
PAM-12⢠is a recycled office paper byproduct made into a spreadable mulch with added Water Soluble Polyacrylamide (WSPAM), a previously difficult polymer to apply. PAM-12 is extremely versatile and can be applied through several methods. In a field test, PAM-12 outperformed straw in every targeted performance area: erosion control, improving soil hydrophobicity, and...
A comparison of high-frequency cross-correlation measures
NASA Astrophysics Data System (ADS)
Precup, Ovidiu V.; Iori, Giulia
2004-12-01
On a high-frequency scale the time series are not homogeneous, therefore standard correlation measures cannot be directly applied to the raw data. There are two ways to deal with this problem. The time series can be homogenised through an interpolation method (An Introduction to High-Frequency Finance, Academic Press, NY, 2001) (linear or previous tick) and then the Pearson correlation statistic computed. Recently, methods that can handle raw non-synchronous time series have been developed (Int. J. Theor. Appl. Finance 6(1) (2003) 87; J. Empirical Finance 4 (1997) 259). This paper compares two traditional methods that use interpolation with an alternative method applied directly to the actual time series.
Improvement of the Owner Distinction Method for Healing-Type Pet Robots
NASA Astrophysics Data System (ADS)
Nambo, Hidetaka; Kimura, Haruhiko; Hara, Mirai; Abe, Koji; Tajima, Takuya
In order to decrease human stress, Animal Assisted Therapy which applies pets to heal humans is attracted. However, since animals are insanitary and unsafe, it is difficult to practically apply animal pets in hospitals. For the reason, on behalf of animal pets, pet robots have been attracted. Since pet robots would have no problems in sanitation and safety, they are able to be applied as a substitute for animal pets in the therapy. In our previous study where pet robots distinguish their owners like an animal pet, we used a puppet type pet robot which has pressure type touch sensors. However, the accuracy of our method was not sufficient to practical use. In this paper, we propose a method to improve the accuracy of the distinction. The proposed method can be applied for capacitive touch sensors such as installed in AIBO in addition to pressure type touch sensors. Besides, this paper shows performance of the proposed method from experimental results and confirms the proposed method has improved performance of the distinction in the conventional method.
A Particle Batch Smoother Approach to Snow Water Equivalent Estimation
NASA Technical Reports Server (NTRS)
Margulis, Steven A.; Girotto, Manuela; Cortes, Gonzalo; Durand, Michael
2015-01-01
This paper presents a newly proposed data assimilation method for historical snow water equivalent SWE estimation using remotely sensed fractional snow-covered area fSCA. The newly proposed approach consists of a particle batch smoother (PBS), which is compared to a previously applied Kalman-based ensemble batch smoother (EnBS) approach. The methods were applied over the 27-yr Landsat 5 record at snow pillow and snow course in situ verification sites in the American River basin in the Sierra Nevada (United States). This basin is more densely vegetated and thus more challenging for SWE estimation than the previous applications of the EnBS. Both data assimilation methods provided significant improvement over the prior (modeling only) estimates, with both able to significantly reduce prior SWE biases. The prior RMSE values at the snow pillow and snow course sites were reduced by 68%-82% and 60%-68%, respectively, when applying the data assimilation methods. This result is encouraging for a basin like the American where the moderate to high forest cover will necessarily obscure more of the snow-covered ground surface than in previously examined, less-vegetated basins. The PBS generally outperformed the EnBS: for snow pillows the PBSRMSE was approx.54%of that seen in the EnBS, while for snow courses the PBSRMSE was approx.79%of the EnBS. Sensitivity tests show relative insensitivity for both the PBS and EnBS results to ensemble size and fSCA measurement error, but a higher sensitivity for the EnBS to the mean prior precipitation input, especially in the case where significant prior biases exist.
Pressure algorithm for elliptic flow calculations with the PDF method
NASA Technical Reports Server (NTRS)
Anand, M. S.; Pope, S. B.; Mongia, H. C.
1991-01-01
An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.
Comparing ensemble learning methods based on decision tree classifiers for protein fold recognition.
Bardsiri, Mahshid Khatibi; Eftekhari, Mahdi
2014-01-01
In this paper, some methods for ensemble learning of protein fold recognition based on a decision tree (DT) are compared and contrasted against each other over three datasets taken from the literature. According to previously reported studies, the features of the datasets are divided into some groups. Then, for each of these groups, three ensemble classifiers, namely, random forest, rotation forest and AdaBoost.M1 are employed. Also, some fusion methods are introduced for combining the ensemble classifiers obtained in the previous step. After this step, three classifiers are produced based on the combination of classifiers of types random forest, rotation forest and AdaBoost.M1. Finally, the three different classifiers achieved are combined to make an overall classifier. Experimental results show that the overall classifier obtained by the genetic algorithm (GA) weighting fusion method, is the best one in comparison to previously applied methods in terms of classification accuracy.
NASA Astrophysics Data System (ADS)
Modegi, Toshio
We are developing audio watermarking techniques which enable extraction of embedded data by cell phones. For that we have to embed data onto frequency ranges, where our auditory response is prominent, therefore data embedding will cause much auditory noises. Previously we have proposed applying a two-channel stereo play-back feature, where noises generated by a data embedded left-channel signal will be reduced by the other right-channel signal. However, this proposal has practical problems of restricting extracting terminal location. In this paper, we propose synthesizing the noise reducing right-channel signal with the left-signal and reduces noises completely by generating an auditory stream segregation phenomenon to users. This newly proposed makes the noise reducing right-channel signal unnecessary and supports monaural play-back operations. Moreover, we propose a wide-band embedding method causing dual auditory stream segregation phenomena, which enables data embedding on whole public phone frequency ranges and stable extractions with 3-G mobile phones. From these proposals, extraction precisions become higher than those by the previously proposed method whereas the quality damages of embedded signals become smaller. In this paper we present an abstract of our newly proposed method and experimental results comparing with those by the previously proposed method.
The Effect of Explanations on Mathematical Reasoning Tasks
ERIC Educational Resources Information Center
Norqvist, Mathias
2018-01-01
Studies in mathematics education often point to the necessity for students to engage in more cognitively demanding activities than just solving tasks by applying given solution methods. Previous studies have shown that students that engage in creative mathematically founded reasoning to construct a solution method, perform significantly better in…
Integrative eQTL analysis of tumor and host omics data in individuals with bladder cancer.
Pineda, Silvia; Van Steen, Kristel; Malats, Núria
2017-09-01
Integrative analyses of several omics data are emerging. The data are usually generated from the same source material (i.e., tumor sample) representing one level of regulation. However, integrating different regulatory levels (i.e., blood) with those from tumor may also reveal important knowledge about the human genetic architecture. To model this multilevel structure, an integrative-expression quantitative trait loci (eQTL) analysis applying two-stage regression (2SR) was proposed. This approach first regressed tumor gene expression levels with tumor markers and the adjusted residuals from the previous model were then regressed with the germline genotypes measured in blood. Previously, we demonstrated that penalized regression methods in combination with a permutation-based MaxT method (Global-LASSO) is a promising tool to fix some of the challenges that high-throughput omics data analysis imposes. Here, we assessed whether Global-LASSO can also be applied when tumor and blood omics data are integrated. We further compared our strategy with two 2SR-approaches, one using multiple linear regression (2SR-MLR) and other using LASSO (2SR-LASSO). We applied the three models to integrate genomic, epigenomic, and transcriptomic data from tumor tissue with blood germline genotypes from 181 individuals with bladder cancer included in the TCGA Consortium. Global-LASSO provided a larger list of eQTLs than the 2SR methods, identified a previously reported eQTLs in prostate stem cell antigen (PSCA), and provided further clues on the complexity of APBEC3B loci, with a minimal false-positive rate not achieved by 2SR-MLR. It also represents an important contribution for omics integrative analysis because it is easy to apply and adaptable to any type of data. © 2017 WILEY PERIODICALS, INC.
12 CFR Appendix G to Part 226 - Open-End Model Forms and Clauses
Code of Federal Regulations, 2010 CFR
2010-01-01
... the “adjusted balance” by taking the balance you owed at the end of the previous billing cycle and... cycle. (b) Previous balance method We figure [a portion of] the finance charge on your account by applying the periodic rate to the amount you owe at the beginning of each billing cycle [minus any unpaid...
ERIC Educational Resources Information Center
Barroso-Hurtado, Domingo; Mendo-Lázaro, Santiago
2016-01-01
Introduction: The present study analyzes differences in university students' opinions towards persons with mental disorder, as a function of whether they have had previous contact with them and whether they have received training about them. Method: The Opinions about Mental Illness Scale for Spanish population (OMI-S) was applied to a sample of…
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-06-17
We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as "our previous method") using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions which involve poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as "our new method"). Our new method detects vehicles based on tires' thermal energy reflection. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detection is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to traffic flow automatic monitoring, and show the effectiveness of our proposal.
Local regression type methods applied to the study of geophysics and high frequency financial data
NASA Astrophysics Data System (ADS)
Mariani, M. C.; Basu, K.
2014-09-01
In this work we applied locally weighted scatterplot smoothing techniques (Lowess/Loess) to Geophysical and high frequency financial data. We first analyze and apply this technique to the California earthquake geological data. A spatial analysis was performed to show that the estimation of the earthquake magnitude at a fixed location is very accurate up to the relative error of 0.01%. We also applied the same method to a high frequency data set arising in the financial sector and obtained similar satisfactory results. The application of this approach to the two different data sets demonstrates that the overall method is accurate and efficient, and the Lowess approach is much more desirable than the Loess method. The previous works studied the time series analysis; in this paper our local regression models perform a spatial analysis for the geophysics data providing different information. For the high frequency data, our models estimate the curve of best fit where data are dependent on time.
NASA Astrophysics Data System (ADS)
Zhou, Yu; Walker, Richard T.; Elliott, John R.; Parsons, Barry
2016-04-01
Fault dips are usually measured from outcrops in the field or inferred through geodetic or seismological modeling. Here we apply the classic structural geology approach of calculating dip from a fault's 3-D surface trace using recent, high-resolution topography. A test study applied to the 2010 El Mayor-Cucapah earthquake shows very good agreement between our results and those previously determined from field measurements. To obtain a reliable estimate, a fault segment ≥120 m long with a topographic variation ≥15 m is suggested. We then applied this method to the 2013 Balochistan earthquake, getting dips similar to previous estimates. Our dip estimates show a switch from north to south dipping at the southern end of the main trace, which appears to be a response to local extension within a stepover. We suggest that this previously unidentified geometrical complexity may act as the endpoint of earthquake ruptures for the southern end of the Hoshab fault.
NASA Astrophysics Data System (ADS)
Ogawa, Kazuhisa; Kobayashi, Hirokazu; Tomita, Akihisa
2018-02-01
The quantum interference of entangled photons forms a key phenomenon underlying various quantum-optical technologies. It is known that the quantum interference patterns of entangled photon pairs can be reconstructed classically by the time-reversal method; however, the time-reversal method has been applied only to time-frequency-entangled two-photon systems in previous experiments. Here, we apply the time-reversal method to the position-wave-vector-entangled two-photon systems: the two-photon Young interferometer and the two-photon beam focusing system. We experimentally demonstrate that the time-reversed systems classically reconstruct the same interference patterns as the position-wave-vector-entangled two-photon systems.
Determining the Depth of Infinite Horizontal Cylindrical Sources from Spontaneous Polarization Data
NASA Astrophysics Data System (ADS)
Cooper, G. R. J.; Stettler, E. H.
2017-03-01
Previously published semi-automatic interpretation methods that use ratios of analytic signal amplitudes of orders that differ by one to determine the distance to potential field sources are shown also to apply to self-potential (S.P.) data when the source is a horizontal cylinder. Local minima of the distance (when it becomes closest to zero) give the source depth. The method was applied to an S.P. anomaly from the Bourkes Luck potholes district in Mpumalanga Province, South Africa, and gave results that were confirmed by drilling.
A method of extracting speed-dependent vector correlations from 2 + 1 REMPI ion images.
Wei, Wei; Wallace, Colin J; Grubb, Michael P; North, Simon W
2017-07-07
We present analytical expressions for extracting Dixon's bipolar moments in the semi-classical limit from experimental anisotropy parameters of sliced or reconstructed non-sliced images. The current method focuses on images generated by 2 + 1 REMPI (Resonance Enhanced Multi-photon Ionization) and is a necessary extension of our previously published 1 + 1 REMPI equations. Two approaches for applying the new equations, direct inversion and forward convolution, are presented. As demonstration of the new method, bipolar moments were extracted from images of carbonyl sulfide (OCS) photodissociation at 230 nm and NO 2 photodissociation at 355 nm, and the results are consistent with previous publications.
Huh, Yong; Yu, Kiyun; Park, Woojin
2016-01-01
This paper proposes a method to detect corresponding vertex pairs between planar tessellation datasets. Applying an agglomerative hierarchical co-clustering, the method finds geometrically corresponding cell-set pairs from which corresponding vertex pairs are detected. Then, the map transformation is performed with the vertex pairs. Since these pairs are independently detected for each corresponding cell-set pairs, the method presents improved matching performance regardless of locally uneven positional discrepancies between dataset. The proposed method was applied to complicated synthetic cell datasets assumed as a cadastral map and a topographical map, and showed an improved result with the F-measures of 0.84 comparing to a previous matching method with the F-measure of 0.48.
Porra, Luke; Swan, Hans; Ho, Chien
2015-08-01
Introduction: Acoustic Radiation Force Impulse (ARFI) Quantification measures shear wave velocities (SWVs) within the liver. It is a reliable method for predicting the severity of liver fibrosis and has the potential to assess fibrosis in any part of the liver, but previous research has found ARFI quantification in the right lobe more accurate than in the left lobe. A lack of standardised applied transducer force when performing ARFI quantification in the left lobe of the liver may account for some of this inaccuracy. The research hypothesis of this present study predicted that an increase in applied transducer force would result in an increase in SWVs measured. Methods: ARFI quantification within the left lobe of the liver was performed within a group of healthy volunteers (n = 28). During each examination, each participant was subjected to ARFI quantification at six different levels of transducer force applied to the epigastric abdominal wall. Results: A repeated measures ANOVA test showed that ARFI quantification was significantly affected by applied transducer force (p = 0.002). Significant pairwise comparisons using Bonferroni correction for multiple comparisons showed that with an increase in applied transducer force, there was a decrease in SWVs. Conclusion: Applied transducer force has a significant effect on SWVs within the left lobe of the liver and it may explain some of the less accurate and less reliable results in previous studies where transducer force was not taken into consideration. Future studies in the left lobe of the liver should take this into account and control for applied transducer force.
puma: a Bioconductor package for propagating uncertainty in microarray analysis.
Pearson, Richard D; Liu, Xuejun; Sanguinetti, Guido; Milo, Marta; Lawrence, Neil D; Rattray, Magnus
2009-07-09
Most analyses of microarray data are based on point estimates of expression levels and ignore the uncertainty of such estimates. By determining uncertainties from Affymetrix GeneChip data and propagating these uncertainties to downstream analyses it has been shown that we can improve results of differential expression detection, principal component analysis and clustering. Previously, implementations of these uncertainty propagation methods have only been available as separate packages, written in different languages. Previous implementations have also suffered from being very costly to compute, and in the case of differential expression detection, have been limited in the experimental designs to which they can be applied. puma is a Bioconductor package incorporating a suite of analysis methods for use on Affymetrix GeneChip data. puma extends the differential expression detection methods of previous work from the 2-class case to the multi-factorial case. puma can be used to automatically create design and contrast matrices for typical experimental designs, which can be used both within the package itself but also in other Bioconductor packages. The implementation of differential expression detection methods has been parallelised leading to significant decreases in processing time on a range of computer architectures. puma incorporates the first R implementation of an uncertainty propagation version of principal component analysis, and an implementation of a clustering method based on uncertainty propagation. All of these techniques are brought together in a single, easy-to-use package with clear, task-based documentation. For the first time, the puma package makes a suite of uncertainty propagation methods available to a general audience. These methods can be used to improve results from more traditional analyses of microarray data. puma also offers improvements in terms of scope and speed of execution over previously available methods. puma is recommended for anyone working with the Affymetrix GeneChip platform for gene expression analysis and can also be applied more generally.
50 CFR 224.101 - Enumeration of endangered marine and anadromous species.
Code of Federal Regulations, 2012 CFR
2012-10-01
... institutions) and which are identified as fish belonging to the NYB DPS based on genetics analyses, previously... genetics analyses, previously applied tags, previously applied marks, or documentation to verify that the... Carolina DPS based on genetics analyses, previously applied tags, previously applied marks, or...
50 CFR 224.101 - Enumeration of endangered marine and anadromous species.
Code of Federal Regulations, 2013 CFR
2013-10-01
... institutions) and which are identified as fish belonging to the NYB DPS based on genetics analyses, previously... genetics analyses, previously applied tags, previously applied marks, or documentation to verify that the... Carolina DPS based on genetics analyses, previously applied tags, previously applied marks, or...
Three novel approaches to structural identifiability analysis in mixed-effects models.
Janzén, David L I; Jirstrand, Mats; Chappell, Michael J; Evans, Neil D
2016-05-06
Structural identifiability is a concept that considers whether the structure of a model together with a set of input-output relations uniquely determines the model parameters. In the mathematical modelling of biological systems, structural identifiability is an important concept since biological interpretations are typically made from the parameter estimates. For a system defined by ordinary differential equations, several methods have been developed to analyse whether the model is structurally identifiable or otherwise. Another well-used modelling framework, which is particularly useful when the experimental data are sparsely sampled and the population variance is of interest, is mixed-effects modelling. However, established identifiability analysis techniques for ordinary differential equations are not directly applicable to such models. In this paper, we present and apply three different methods that can be used to study structural identifiability in mixed-effects models. The first method, called the repeated measurement approach, is based on applying a set of previously established statistical theorems. The second method, called the augmented system approach, is based on augmenting the mixed-effects model to an extended state-space form. The third method, called the Laplace transform mixed-effects extension, is based on considering the moment invariants of the systems transfer function as functions of random variables. To illustrate, compare and contrast the application of the three methods, they are applied to a set of mixed-effects models. Three structural identifiability analysis methods applicable to mixed-effects models have been presented in this paper. As method development of structural identifiability techniques for mixed-effects models has been given very little attention, despite mixed-effects models being widely used, the methods presented in this paper provides a way of handling structural identifiability in mixed-effects models previously not possible. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Transonic Unsteady Aerodynamics and Aeroelasticity 1987, part 1
NASA Technical Reports Server (NTRS)
Bland, Samuel R. (Compiler)
1989-01-01
Computational fluid dynamics methods have been widely accepted for transonic aeroelastic analysis. Previously, calculations with the TSD methods were used for 2-D airfoils, but now the TSD methods are applied to the aeroelastic analysis of the complete aircraft. The Symposium papers are grouped into five subject areas, two of which are covered in this part: (1) Transonic Small Disturbance (TSD) theory for complete aircraft configurations; and (2) Full potential and Euler equation methods.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
NASA Technical Reports Server (NTRS)
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
Image restoration by the method of convex projections: part 2 applications and numerical results.
Sezan, M I; Stark, H
1982-01-01
The image restoration theory discussed in a previous paper by Youla and Webb [1] is applied to a simulated image and the results compared with the well-known method known as the Gerchberg-Papoulis algorithm. The results show that the method of image restoration by projection onto convex sets, by providing a convenient technique for utilizing a priori information, performs significantly better than the Gerchberg-Papoulis method.
A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy
NASA Astrophysics Data System (ADS)
Bennun, Leonardo
2017-07-01
A new smoothing method for the improvement on the identification and quantification of spectral functions based on the previous knowledge of the signals that are expected to be quantified, is presented. These signals are used as weighted coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies preferably to these techniques where net counts are proportional to acquisition time, such as particle induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods, etc. This algorithm, when properly applied, does not distort the form nor the intensity of the signal, so it is well suited for all kind of spectroscopic techniques. This method is extremely effective at reducing high-frequency noise in the signal much more efficient than a single rectangular smooth of the same width. As all of smoothing techniques, the proposed method improves the precision of the results, but in this case we found also a systematic improvement on the accuracy of the results. We still have to evaluate the improvement on the quality of the results when this method is applied over real experimental results. We expect better characterization of the net area quantification of the peaks, and smaller Detection and Quantification Limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria, it could be applied to time series. In a general case, when this algorithm is applied over experimental results, also it would be required that the sought characteristic functions, required for this weighted smoothing method, should be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be carefully applied
An Evaluation of a Computer-Based Training on the Visual Analysis of Single-Subject Data
ERIC Educational Resources Information Center
Snyder, Katie
2013-01-01
Visual analysis is the primary method of analyzing data in single-subject methodology, which is the predominant research method used in the fields of applied behavior analysis and special education. Previous research on the reliability of visual analysis suggests that judges often disagree about what constitutes an intervention effect. Considering…
Figure-ground segmentation based on class-independent shape priors
NASA Astrophysics Data System (ADS)
Li, Yang; Liu, Yang; Liu, Guojun; Guo, Maozu
2018-01-01
We propose a method to generate figure-ground segmentation by incorporating shape priors into the graph-cuts algorithm. Given an image, we first obtain a linear representation of an image and then apply directional chamfer matching to generate class-independent, nonparametric shape priors, which provide shape clues for the graph-cuts algorithm. We then enforce shape priors in a graph-cuts energy function to produce object segmentation. In contrast to previous segmentation methods, the proposed method shares shape knowledge for different semantic classes and does not require class-specific model training. Therefore, the approach obtains high-quality segmentation for objects. We experimentally validate that the proposed method outperforms previous approaches using the challenging PASCAL VOC 2010/2012 and Berkeley (BSD300) segmentation datasets.
Matched-filtering line search methods applied to Suzaku data
NASA Astrophysics Data System (ADS)
Miyazaki, Naoto; Yamada, Shin'ya; Enoto, Teruaki; Axelsson, Magnus; Ohashi, Takaya
2016-12-01
A detailed search for emission and absorption lines and an assessment of their upper limits are performed for Suzaku data. The method utilizes a matched-filtering approach to maximize the signal-to-noise ratio for a given energy resolution, which could be applicable to many types of line search. We first applied it to well-known active galactic nuclei spectra that have been reported to have ultra-fast outflows, and find that our results are consistent with previous findings at the ˜3σ level. We proceeded to search for emission and absorption features in two bright magnetars 4U 0142+61 and 1RXS J1708-4009, applying the filtering method to Suzaku data. We found that neither source showed any significant indication of line features, even using long-term Suzaku observations or dividing their spectra into spin phases. The upper limits on the equivalent width of emission/absorption lines are constrained to be a few eV at ˜1 keV and a few hundreds of eV at ˜10 keV. This strengthens previous reports that persistently bright magnetars do not show proton cyclotron absorption features in soft X-rays and, even if they exist, they would be broadened or much weaker than below the detection limit of X-ray CCD.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shadid, John Nicolas; Fish, Jacob; Waisman, Haim
Two heuristic strategies intended to enhance the performance of the generalized global basis (GGB) method [H. Waisman, J. Fish, R.S. Tuminaro, J. Shadid, The Generalized Global Basis (GGB) method, International Journal for Numerical Methods in Engineering 61(8), 1243-1269] applied to nonlinear systems are presented. The standard GGB accelerates a multigrid scheme by an additional coarse grid correction that filters out slowly converging modes. This correction requires a potentially costly eigen calculation. This paper considers reusing previously computed eigenspace information. The GGB? scheme enriches the prolongation operator with new eigenvectors while the modified method (MGGB) selectively reuses the same prolongation. Bothmore » methods use the criteria of principal angles between subspaces spanned between the previous and current prolongation operators. Numerical examples clearly indicate significant time savings in particular for the MGGB scheme.« less
NASA Astrophysics Data System (ADS)
Peñaloza-Murillo, Marcos A.; Pasachoff, Jay M.
2015-04-01
We analyze mathematically air temperature measurements made near the ground by the Williams College expedition to observe the first total occultation of the Sun [TOS (commonly known as a total solar eclipse)] of the 21st century in Lusaka, Zambia, in the afternoon of June 21, 2001. To do so, we have revisited some earlier and contemporary methods to test their usefulness for this analysis. Two of these methods, based on a radiative scheme for solar radiation modeling and that has been originally applied to a morning occultation, have successfully been combined to obtain the delay function for an afternoon occultation, via derivation of the so-called instantaneous temperature profiles. For this purpose, we have followed the suggestion given by the third of these previously applied methods to calculate this function, although by itself it failed to do so at least for this occultation. The analysis has taken into account the limb-darkening, occultation and obscuration functions. The delay function obtained describes quite fairly the lag between the solar radiation variation and the delayed air temperature measured. Also, in this investigation, a statistical study has been carried out to get information on the convection activity produced during this event. For that purpose, the fluctuations generated by turbulence has been studied by analyzing variance and residuals. The results, indicating an irreversible steady decrease of this activity, are consistent with those published by other studies. Finally, the air temperature drop due to this event is well estimated by applying the empirical scheme given by the fourth of the previously applied methods, based on the daily temperature amplitude and the standardized middle time of the occultation. It is demonstrated then that by using a simple set of air temperature measurements obtained during solar occultations, along with some supplementary data, a simple mathematical analysis can be achieved by applying of the four methods reviewed here.
Adult Learning Principles and Presentation Pearls
Palis, Ana G.; Quiros, Peter A.
2014-01-01
Although lectures are one of the most common methods of knowledge transfer in medicine, their effectiveness has been questioned. Passive formats, lack of relevance and disconnection from the student's needs are some of the arguments supporting this apparent lack of efficacy. However, many authors have suggested that applying adult learning principles (i.e., relevance, congruence with student's needs, interactivity, connection to student's previous knowledge and experience) to this method increases learning by lectures and the effectiveness of lectures. This paper presents recommendations for applying adult learning principles during planning, creation and development of lectures to make them more effective. PMID:24791101
The Schwinger Variational Method
NASA Technical Reports Server (NTRS)
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. For collisional problems they can be grouped into two types: those based on the Schroedinger equation and those based on the Lippmann-Schwinger equation. The application of the Schwinger variational (SV) method to e-molecule collisions and photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gao Yajun
A previously established Hauser-Ernst-type extended double-complex linear system is slightly modified and used to develop an inverse scattering method for the stationary axisymmetric general symplectic gravity model. The reduction procedures in this inverse scattering method are found to be fairly simple, which makes the inverse scattering method applied fine and effective. As an application, a concrete family of soliton double solutions for the considered theory is obtained.
Water quality assessment with hierarchical cluster analysis based on Mahalanobis distance.
Du, Xiangjun; Shao, Fengjing; Wu, Shunyao; Zhang, Hanlin; Xu, Si
2017-07-01
Water quality assessment is crucial for assessment of marine eutrophication, prediction of harmful algal blooms, and environment protection. Previous studies have developed many numeric modeling methods and data driven approaches for water quality assessment. The cluster analysis, an approach widely used for grouping data, has also been employed. However, there are complex correlations between water quality variables, which play important roles in water quality assessment but have always been overlooked. In this paper, we analyze correlations between water quality variables and propose an alternative method for water quality assessment with hierarchical cluster analysis based on Mahalanobis distance. Further, we cluster water quality data collected form coastal water of Bohai Sea and North Yellow Sea of China, and apply clustering results to evaluate its water quality. To evaluate the validity, we also cluster the water quality data with cluster analysis based on Euclidean distance, which are widely adopted by previous studies. The results show that our method is more suitable for water quality assessment with many correlated water quality variables. To our knowledge, it is the first attempt to apply Mahalanobis distance for coastal water quality assessment.
Lee, Won-Joon; Wilkinson, Caroline M; Hwang, Hyeon-Shik; Lee, Sang-Mi
2015-05-01
Accuracy is the most important factor supporting the reliability of forensic facial reconstruction (FFR) comparing to the corresponding actual face. A number of methods have been employed to evaluate objective accuracy of FFR. Recently, it has been attempted that the degree of resemblance between computer-generated FFR and actual face is measured by geometric surface comparison method. In this study, three FFRs were produced employing live adult Korean subjects and three-dimensional computerized modeling software. The deviations of the facial surfaces between the FFR and the head scan CT of the corresponding subject were analyzed in reverse modeling software. The results were compared with those from a previous study which applied the same methodology as this study except average facial soft tissue depth dataset. Three FFRs of this study that applied updated dataset demonstrated lesser deviation errors between the facial surfaces of the FFR and corresponding subject than those from the previous study. The results proposed that appropriate average tissue depth data are important to increase quantitative accuracy of FFR. © 2015 American Academy of Forensic Sciences.
NASA Astrophysics Data System (ADS)
Li, Xiongwei; Wang, Zhe; Lui, Siu-Lung; Fu, Yangting; Li, Zheng; Liu, Jianming; Ni, Weidou
2013-10-01
A bottleneck of the wide commercial application of laser-induced breakdown spectroscopy (LIBS) technology is its relatively high measurement uncertainty. A partial least squares (PLS) based normalization method was proposed to improve pulse-to-pulse measurement precision for LIBS based on our previous spectrum standardization method. The proposed model utilized multi-line spectral information of the measured element and characterized the signal fluctuations due to the variation of plasma characteristic parameters (plasma temperature, electron number density, and total number density) for signal uncertainty reduction. The model was validated by the application of copper concentration prediction in 29 brass alloy samples. The results demonstrated an improvement on both measurement precision and accuracy over the generally applied normalization as well as our previously proposed simplified spectrum standardization method. The average relative standard deviation (RSD), average of the standard error (error bar), the coefficient of determination (R2), the root-mean-square error of prediction (RMSEP), and average value of the maximum relative error (MRE) were 1.80%, 0.23%, 0.992, 1.30%, and 5.23%, respectively, while those for the generally applied spectral area normalization were 3.72%, 0.71%, 0.973, 1.98%, and 14.92%, respectively.
Iwasaki, Yoichiro; Misumi, Masato; Nakamiya, Toshiyuki
2013-01-01
We have already proposed a method for detecting vehicle positions and their movements (henceforth referred to as “our previous method”) using thermal images taken with an infrared thermal camera. Our experiments have shown that our previous method detects vehicles robustly under four different environmental conditions which involve poor visibility conditions in snow and thick fog. Our previous method uses the windshield and its surroundings as the target of the Viola-Jones detector. Some experiments in winter show that the vehicle detection accuracy decreases because the temperatures of many windshields approximate those of the exterior of the windshields. In this paper, we propose a new vehicle detection method (henceforth referred to as “our new method”). Our new method detects vehicles based on tires' thermal energy reflection. We have done experiments using three series of thermal images for which the vehicle detection accuracies of our previous method are low. Our new method detects 1,417 vehicles (92.8%) out of 1,527 vehicles, and the number of false detection is 52 in total. Therefore, by combining our two methods, high vehicle detection accuracies are maintained under various environmental conditions. Finally, we apply the traffic information obtained by our two methods to traffic flow automatic monitoring, and show the effectiveness of our proposal. PMID:23774988
Application of the Probabilistic Dynamic Synthesis Method to Realistic Structures
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. In previous work, the feasibility of the PDS method applied to a simple seven degree-of-freedom spring-mass system was verified. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
Limited-memory trust-region methods for sparse relaxation
NASA Astrophysics Data System (ADS)
Adhikari, Lasith; DeGuchy, Omar; Erway, Jennifer B.; Lockhart, Shelby; Marcia, Roummel F.
2017-08-01
In this paper, we solve the l2-l1 sparse recovery problem by transforming the objective function of this problem into an unconstrained differentiable function and applying a limited-memory trust-region method. Unlike gradient projection-type methods, which uses only the current gradient, our approach uses gradients from previous iterations to obtain a more accurate Hessian approximation. Numerical experiments show that our proposed approach eliminates spurious solutions more effectively while improving computational time.
Jet production in the CoLoRFulNNLO method: Event shapes in electron-positron collisions
NASA Astrophysics Data System (ADS)
Del Duca, Vittorio; Duhr, Claude; Kardos, Adam; Somogyi, Gábor; Szőr, Zoltán; Trócsányi, Zoltán; Tulipánt, Zoltán
2016-10-01
We present the CoLoRFulNNLO method to compute higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the computation of event shape observables in electron-positron collisions at NNLO accuracy and validate our code by comparing our predictions to previous results in the literature. We also calculate for the first time jet cone energy fraction at NNLO.
NASA Astrophysics Data System (ADS)
Fujimoto, Kazuhiro J.
2012-07-01
A transition-density-fragment interaction (TDFI) combined with a transfer integral (TI) method is proposed. The TDFI method was previously developed for describing electronic Coulomb interaction, which was applied to excitation-energy transfer (EET) [K. J. Fujimoto and S. Hayashi, J. Am. Chem. Soc. 131, 14152 (2009)] and exciton-coupled circular dichroism spectra [K. J. Fujimoto, J. Chem. Phys. 133, 124101 (2010)]. In the present study, the TDFI method is extended to the exchange interaction, and hence it is combined with the TI method for applying to the EET via charge-transfer (CT) states. In this scheme, the overlap correction is also taken into account. To check the TDFI-TI accuracy, several test calculations are performed to an ethylene dimer. As a result, the TDFI-TI method gives a much improved description of the electronic coupling, compared with the previous TDFI method. Based on the successful description of the electronic coupling, the decomposition analysis is also performed with the TDFI-TI method. The present analysis clearly shows a large contribution from the Coulomb interaction in most of the cases, and a significant influence of the CT states at the small separation. In addition, the exchange interaction is found to be small in this system. The present approach is useful for analyzing and understanding the mechanism of EET.
Image enhancement in positron emission mammography
NASA Astrophysics Data System (ADS)
Slavine, Nikolai V.; Seiler, Stephen; McColl, Roderick W.; Lenkinski, Robert E.
2017-02-01
Purpose: To evaluate an efficient iterative deconvolution method (RSEMD) for improving the quantitative accuracy of previously reconstructed breast images by commercial positron emission mammography (PEM) scanner. Materials and Methods: The RSEMD method was tested on breast phantom data and clinical PEM imaging data. Data acquisition was performed on a commercial Naviscan Flex Solo II PEM camera. This method was applied to patient breast images previously reconstructed with Naviscan software (MLEM) to determine improvements in resolution, signal to noise ratio (SNR) and contrast to noise ratio (CNR.) Results: In all of the patients' breast studies the post-processed images proved to have higher resolution and lower noise as compared with images reconstructed by conventional methods. In general, the values of SNR reached a plateau at around 6 iterations with an improvement factor of about 2 for post-processed Flex Solo II PEM images. Improvements in image resolution after the application of RSEMD have also been demonstrated. Conclusions: A rapidly converging, iterative deconvolution algorithm with a novel resolution subsets-based approach RSEMD that operates on patient DICOM images has been used for quantitative improvement in breast imaging. The RSEMD method can be applied to clinical PEM images to improve image quality to diagnostically acceptable levels and will be crucial in order to facilitate diagnosis of tumor progression at the earliest stages. The RSEMD method can be considered as an extended Richardson-Lucy algorithm with multiple resolution levels (resolution subsets).
Ballistics-Electron-Microscopy and Spectroscopy of Metal/GaN Interfaces
NASA Technical Reports Server (NTRS)
Bell, L. D.; Smith, R. P.; McDermott, B. T.; Gertner, E. R.; Pittman, R.; Pierson, R. L.; Sullivan, G. J.
1997-01-01
BEEM spectroscopy and imaging have been applied to the Au/GaN interface. In contrast to previous BEEM measurements, spectra yield a Schottky barrier height of 1.04eV that agrees well with the highest values measured by conventional methods.
Local discretization method for overdamped Brownian motion on a potential with multiple deep wells.
Nguyen, P T T; Challis, K J; Jack, M W
2016-11-01
We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells to a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with varying multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that potential wells that are deep compared to five times the thermal energy can be associated with a discrete localized state while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.
NASA Technical Reports Server (NTRS)
Stolzer, Alan J.; Halford, Carl
2007-01-01
In a previous study, multiple regression techniques were applied to Flight Operations Quality Assurance-derived data to develop parsimonious model(s) for fuel consumption on the Boeing 757 airplane. The present study examined several data mining algorithms, including neural networks, on the fuel consumption problem and compared them to the multiple regression results obtained earlier. Using regression methods, parsimonious models were obtained that explained approximately 85% of the variation in fuel flow. In general data mining methods were more effective in predicting fuel consumption. Classification and Regression Tree methods reported correlation coefficients of .91 to .92, and General Linear Models and Multilayer Perceptron neural networks reported correlation coefficients of about .99. These data mining models show great promise for use in further examining large FOQA databases for operational and safety improvements.
Local discretization method for overdamped Brownian motion on a potential with multiple deep wells
NASA Astrophysics Data System (ADS)
Nguyen, P. T. T.; Challis, K. J.; Jack, M. W.
2016-11-01
We present a general method for transforming the continuous diffusion equation describing overdamped Brownian motion on a time-independent potential with multiple deep wells to a discrete master equation. The method is based on an expansion in localized basis states of local metastable potentials that match the full potential in the region of each potential well. Unlike previous basis methods for discretizing Brownian motion on a potential, this approach is valid for periodic potentials with varying multiple deep wells per period and can also be applied to nonperiodic systems. We apply the method to a range of potentials and find that potential wells that are deep compared to five times the thermal energy can be associated with a discrete localized state while shallower wells are better incorporated into the local metastable potentials of neighboring deep potential wells.
1984-03-01
difference calcula- tion, would result in erroneously lower productivity ratios. Only two topics are not adequately addressed with the method --the first...for determination of this term. 3. CAIDQS P.22li y §easuI~gZ._ Jelbo The next method differs significantly from the previous two in that it deals with...Chasen’s Method (as applied by Lonq Beach M.S.) . . . . . . . o . o . . . . . 31 2. Shah G Yans Method . . . . . . . . . . . 34 3. CARDOS Productivity
Some practical observations on the predictor jump method for solving the Laplace equation
NASA Astrophysics Data System (ADS)
Duque-Carrillo, J. F.; Vega-Fernández, J. M.; Peña-Bernal, J. J.; Rossell-Bueno, M. A.
1986-01-01
The best conditions for the application of the predictor jump (PJ) method in the solution of the Laplace equation are discussed and some practical considerations for applying this new iterative technique are presented. The PJ method was remarked on in a previous article entitled ``A new way for solving Laplace's problem (the predictor jump method)'' [J. M. Vega-Fernández, J. F. Duque-Carrillo, and J. J. Peña-Bernal, J. Math. Phys. 26, 416 (1985)].
TLE uncertainty estimation using robust weighted differencing
NASA Astrophysics Data System (ADS)
Geul, Jacco; Mooij, Erwin; Noomen, Ron
2017-05-01
Accurate knowledge of satellite orbit errors is essential for many types of analyses. Unfortunately, for two-line elements (TLEs) this is not available. This paper presents a weighted differencing method using robust least-squares regression for estimating many important error characteristics. The method is applied to both classic and enhanced TLEs, compared to previous implementations, and validated using Global Positioning System (GPS) solutions for the GOCE satellite in Low-Earth Orbit (LEO), prior to its re-entry. The method is found to be more accurate than previous TLE differencing efforts in estimating initial uncertainty, as well as error growth. The method also proves more reliable and requires no data filtering (such as outlier removal). Sensitivity analysis shows a strong relationship between argument of latitude and covariance (standard deviations and correlations), which the method is able to approximate. Overall, the method proves accurate, computationally fast, and robust, and is applicable to any object in the satellite catalogue (SATCAT).
Som, Nicholas A.; Goodman, Damon H.; Perry, Russell W.; Hardy, Thomas B.
2016-01-01
Previous methods for constructing univariate habitat suitability criteria (HSC) curves have ranged from professional judgement to kernel-smoothed density functions or combinations thereof. We present a new method of generating HSC curves that applies probability density functions as the mathematical representation of the curves. Compared with previous approaches, benefits of our method include (1) estimation of probability density function parameters directly from raw data, (2) quantitative methods for selecting among several candidate probability density functions, and (3) concise methods for expressing estimation uncertainty in the HSC curves. We demonstrate our method with a thorough example using data collected on the depth of water used by juvenile Chinook salmon (Oncorhynchus tschawytscha) in the Klamath River of northern California and southern Oregon. All R code needed to implement our example is provided in the appendix. Published 2015. This article is a U.S. Government work and is in the public domain in the USA.
Risk analysis theory applied to fishing operations: A new approach on the decision-making problem
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cunha, J.C.S.
1994-12-31
In the past the decisions concerning whether to continue or interrupt a fishing operation were based primarily on the operator`s previous experience. This procedure often led to wrong decisions and unnecessary loss of money and time. This paper describes a decision-making method based on risk analysis theory and previous operation results from a field under study. The method leads to more accurate decisions on a daily basis allowing the operator to verify each day of the operation if the decision being carried out is the one with the highest probability to conduct to the best economical result. An example ofmore » the method application is provided at the end of the paper.« less
ERIC Educational Resources Information Center
Rubenking, Bridget; Dodd, Melissa
2018-01-01
Previous research suggests that undergraduate research methods students doubt the utility of course content and experience math and research anxiety. Research also suggests involving students in hands-on, applied research activities, although empirical data on the scope and nature of these activities are lacking. This study compared academic…
Space vehicle engine and heat shield environment review. Volume 1: Engineering analysis
NASA Technical Reports Server (NTRS)
Mcanelly, W. B.; Young, C. T. K.
1973-01-01
Methods for predicting the base heating characteristics of a multiple rocket engine installation are discussed. The environmental data is applied to the design of adequate protection system for the engine components. The methods for predicting the base region thermal environment are categorized as: (1) scale model testing, (2) extrapolation of previous and related flight test results, and (3) semiempirical analytical techniques.
Advanced Feedback Methods in Information Retrieval.
ERIC Educational Resources Information Center
Salton, G.; And Others
1985-01-01
In this study, automatic feedback techniques are applied to Boolean query statements in online information retrieval to generate improved query statements based on information contained in previously retrieved documents. Feedback operations are carried out using conventional Boolean logic and extended logic. Experimental output is included to…
NASA Astrophysics Data System (ADS)
Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul
2017-09-01
In this paper, we focus on pupil center detection in various video sequences that include head poses and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye by using cascade local regression based on a regression forest. Based on the rough location of the pupil, a fast radial symmetric transform is applied using the previously found pupil location to rearrange the fine pupil center. As the final step, the pupil displacement is estimated between the previous frame and the current frame to maintain the level of accuracy against a false locating result occurring in a particular frame. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that its pupil center detection capability is better than that of other methods and with a shorter processing time.
A Bayesian estimate of the concordance correlation coefficient with skewed data.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
Concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that data is normally distributed. This assumption, however, does not apply to skewed data sets. While methods for the estimation of the CCC of skewed data sets have been introduced and studied, the Bayesian approach and its comparison with the previous methods has been lacking. In this study, we propose a Bayesian method for the estimation of the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages. It tends to outperform the best method studied before when the variation of the data is mainly from the random subject effect instead of error. Furthermore, it allows for greater flexibility in application by enabling incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets used in an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
Gaussian basis functions for highly oscillatory scattering wavefunctions
NASA Astrophysics Data System (ADS)
Mant, B. P.; Law, M. M.
2018-04-01
We have applied a basis set of distributed Gaussian functions within the S-matrix version of the Kohn variational method to scattering problems involving deep potential energy wells. The Gaussian positions and widths are tailored to the potential using the procedure of Bačić and Light (1986 J. Chem. Phys. 85 4594) which has previously been applied to bound-state problems. The placement procedure is shown to be very efficient and gives scattering wavefunctions and observables in agreement with direct numerical solutions. We demonstrate the basis function placement method with applications to hydrogen atom–hydrogen atom scattering and antihydrogen atom–hydrogen atom scattering.
Text mining by Tsallis entropy
NASA Astrophysics Data System (ADS)
Jamaati, Maryam; Mehri, Ali
2018-01-01
Long-range correlations between the elements of natural languages enable them to convey very complex information. Complex structure of human language, as a manifestation of natural languages, motivates us to apply nonextensive statistical mechanics in text mining. Tsallis entropy appropriately ranks the terms' relevance to document subject, taking advantage of their spatial correlation length. We apply this statistical concept as a new powerful word ranking metric in order to extract keywords of a single document. We carry out an experimental evaluation, which shows capability of the presented method in keyword extraction. We find that, Tsallis entropy has reliable word ranking performance, at the same level of the best previous ranking methods.
NASA Technical Reports Server (NTRS)
Tsang, L.; Brown, R.; Kong, J. A.; Simmons, G.
1974-01-01
Two numerical methods are used to evaluate the integrals that express the em fields due to dipole antennas radiating in the presence of a stratified medium. The first method is a direct integration by means of Simpson's rule. The second method is indirect and approximates the kernel of the integral by means of the fast Fourier transform. In contrast to previous analytical methods that applied only to two-layer cases the numerical methods can be used for any arbitrary number of layers with general properties.
Separable Ernst-Shakin-Thaler expansions of local potentials
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bund, G.W.
The boundary condition Ernst-Shakin-Thaler method, introduced previously to generate separable expansions of local potentials of finite range, is applied to the study of the triplet s-wave Malfliet-Tjon potential. The effect of varying the radius where the boundary condition is applied on the T matrix is analyzed. Further, we compare the convergence of the n-d scattering cross sections in the quartet state below the breakup threshold for expansions corresponding to two different boundaries.
NASA Astrophysics Data System (ADS)
Kim, D.; Lee, H.; Yu, H.; Beighley, E.; Durand, M. T.; Alsdorf, D. E.; Hwang, E.
2017-12-01
River discharge is a prerequisite for an understanding of flood hazard and water resource management, yet we have poor knowledge of it, especially over remote basins. Previous studies have successfully used a classic hydraulic geometry, at-many-stations hydraulic geometry (AMHG), and Manning's equation to estimate the river discharge. Theoretical bases of these empirical methods were introduced by Leopold and Maddock (1953) and Manning (1889), and those have been long used in the field of hydrology, water resources, and geomorphology. However, the methods to estimate the river discharge from remotely sensed data essentially require bathymetric information of the river or are not applicable to braided rivers. Furthermore, the methods used in the previous studies adopted assumptions of river conditions to be steady and uniform. Consequently, those methods have limitations in estimating the river discharge in complex and unsteady flow in nature. In this study, we developed a novel approach to estimating river discharges by applying the weak learner method (here termed WLQ), which is one of the ensemble methods using multiple classifiers, to the remotely sensed measurements of water levels from Envisat altimetry, effective river widths from PALSAR images, and multi-temporal surface water slopes over a part of the mainstem Congo. Compared with the methods used in the previous studies, the root mean square error (RMSE) decreased from 5,089 m3s-1 to 3,701 m3s-1, and the relative RMSE (RRMSE) improved from 12% to 8%. It is expected that our method can provide improved estimates of river discharges in complex and unsteady flow conditions based on the data-driven prediction model by machine learning (i.e. WLQ), even when the bathymetric data is not available or in case of the braided rivers. Moreover, it is also expected that the WLQ can be applied to the measurements of river levels, slopes and widths from the future Surface Water Ocean Topography (SWOT) mission to be launched in 2021.
A pseudospectral Legendre method for hyperbolic equations with an improved stability condition
NASA Technical Reports Server (NTRS)
Tal-Ezer, Hillel
1986-01-01
A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact the grid points are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points 1/N rather than to 1/n(2) (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.
A pseudospectral Legendre method for hyperbolic equations with an improved stability condition
NASA Technical Reports Server (NTRS)
Tal-Ezer, H.
1984-01-01
A new pseudospectral method is introduced for solving hyperbolic partial differential equations. This method uses different grid points than previously used pseudospectral methods: in fact the grid are related to the zeroes of the Legendre polynomials. The main advantage of this method is that the allowable time step is proportional to the inverse of the number of grid points 1/N rather than to 1/n(2) (as in the case of other pseudospectral methods applied to mixed initial boundary value problems). A highly accurate time discretization suitable for these spectral methods is discussed.
NASA Astrophysics Data System (ADS)
Mundis, Nathan L.; Mavriplis, Dimitri J.
2017-09-01
The time-spectral method applied to the Euler and coupled aeroelastic equations theoretically offers significant computational savings for purely periodic problems when compared to standard time-implicit methods. However, attaining superior efficiency with time-spectral methods over traditional time-implicit methods hinges on the ability rapidly to solve the large non-linear system resulting from time-spectral discretizations which become larger and stiffer as more time instances are employed or the period of the flow becomes especially short (i.e. the maximum resolvable wave-number increases). In order to increase the efficiency of these solvers, and to improve robustness, particularly for large numbers of time instances, the Generalized Minimal Residual Method (GMRES) is used to solve the implicit linear system over all coupled time instances. The use of GMRES as the linear solver makes time-spectral methods more robust, allows them to be applied to a far greater subset of time-accurate problems, including those with a broad range of harmonic content, and vastly improves the efficiency of time-spectral methods. In previous work, a wave-number independent preconditioner that mitigates the increased stiffness of the time-spectral method when applied to problems with large resolvable wave numbers has been developed. This preconditioner, however, directly inverts a large matrix whose size increases in proportion to the number of time instances. As a result, the computational time of this method scales as the cube of the number of time instances. In the present work, this preconditioner has been reworked to take advantage of an approximate-factorization approach that effectively decouples the spatial and temporal systems. Once decoupled, the time-spectral matrix can be inverted in frequency space, where it has entries only on the main diagonal and therefore can be inverted quite efficiently. This new GMRES/preconditioner combination is shown to be over an order of magnitude more efficient than the previous wave-number independent preconditioner for problems with large numbers of time instances and/or large reduced frequencies.
Stepwise and stagewise approaches for spatial cluster detection
Xu, Jiale
2016-01-01
Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either hypothesis testing framework or Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic area. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power of detections. These methods are applied to the the well-known New York leukemia data as well as Indiana poverty data. PMID:27246273
Stepwise and stagewise approaches for spatial cluster detection.
Xu, Jiale; Gangnon, Ronald E
2016-05-01
Spatial cluster detection is an important tool in many areas such as sociology, botany and public health. Previous work has mostly taken either a hypothesis testing framework or a Bayesian framework. In this paper, we propose a few approaches under a frequentist variable selection framework for spatial cluster detection. The forward stepwise methods search for multiple clusters by iteratively adding currently most likely cluster while adjusting for the effects of previously identified clusters. The stagewise methods also consist of a series of steps, but with a tiny step size in each iteration. We study the features and performances of our proposed methods using simulations on idealized grids or real geographic areas. From the simulations, we compare the performance of the proposed methods in terms of estimation accuracy and power. These methods are applied to the the well-known New York leukemia data as well as Indiana poverty data. Copyright © 2016 Elsevier Ltd. All rights reserved.
Applying Uncertainty Analysis to a Risk Assessment for the Pesticide Permethrin
We discuss the application of methods of uncertainty analysis from our previous poster to the problem of a risk assessment for exposure to the food-use pesticide permethrin resulting from residential pesticide crack and crevice application. Exposures are simulated by the SHEDS (S...
Biomechanics and Developmental Neuromotor Control.
ERIC Educational Resources Information Center
Zernicke, Ronald F.; Schneider, Klaus
1993-01-01
By applying the principles and methods of mechanics to the musculoskeletal system, new insights can be discovered about control of human limb dynamics in both adults and infants. Reviews previous research on how infants gain control of their limbs and learn to reach in the first year of life. (MDM)
Constructing networks from a dynamical system perspective for multivariate nonlinear time series.
Nakamura, Tomomichi; Tanizawa, Toshihiro; Small, Michael
2016-03-01
We describe a method for constructing networks for multivariate nonlinear time series. We approach the interaction between the various scalar time series from a deterministic dynamical system perspective and provide a generic and algorithmic test for whether the interaction between two measured time series is statistically significant. The method can be applied even when the data exhibit no obvious qualitative similarity: a situation in which the naive method utilizing the cross correlation function directly cannot correctly identify connectivity. To establish the connectivity between nodes we apply the previously proposed small-shuffle surrogate (SSS) method, which can investigate whether there are correlation structures in short-term variabilities (irregular fluctuations) between two data sets from the viewpoint of deterministic dynamical systems. The procedure to construct networks based on this idea is composed of three steps: (i) each time series is considered as a basic node of a network, (ii) the SSS method is applied to verify the connectivity between each pair of time series taken from the whole multivariate time series, and (iii) the pair of nodes is connected with an undirected edge when the null hypothesis cannot be rejected. The network constructed by the proposed method indicates the intrinsic (essential) connectivity of the elements included in the system or the underlying (assumed) system. The method is demonstrated for numerical data sets generated by known systems and applied to several experimental time series.
Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung
2017-07-08
Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods.
Cosmic strings and the microwave sky. I - Anisotropy from moving strings
NASA Technical Reports Server (NTRS)
Stebbins, Albert
1988-01-01
A method is developed for calculating the component of the microwave anisotropy around cosmic string loops due to their rapidly changing gravitational fields. The method is only valid for impact parameters from the string much smaller than the horizon size at the time the photon passes the string. The method makes it possible to calculate the temperature pattern around arbitrary string configurations numerically in terms of one-dimensional integrals. This method is applied to temperature jump across a string, confirming and extending previous work. It is also applied to cusps and kinks on strings, and to determining the temperature pattern far from a strong loop. The temperature pattern around a few loop configurations is explicitly calculated. Comparisons with the work of Brandenberger et al. (1986) indicates that they have overestimated the MBR anisotropy from gravitational radiation emitted from loops.
Pham, Tuyen Danh; Lee, Dong Eun; Park, Kang Ryoung
2017-01-01
Automatic recognition of banknotes is applied in payment facilities, such as automated teller machines (ATMs) and banknote counters. Besides the popular approaches that focus on studying the methods applied to various individual types of currencies, there have been studies conducted on simultaneous classification of banknotes from multiple countries. However, their methods were conducted with limited numbers of banknote images, national currencies, and denominations. To address this issue, we propose a multi-national banknote classification method based on visible-light banknote images captured by a one-dimensional line sensor and classified by a convolutional neural network (CNN) considering the size information of each denomination. Experiments conducted on the combined banknote image database of six countries with 62 denominations gave a classification accuracy of 100%, and results show that our proposed algorithm outperforms previous methods. PMID:28698466
A discussion on the origin of quantum probabilities
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holik, Federico, E-mail: olentiev2@gmail.com; Departamento de Matemática - Ciclo Básico Común, Universidad de Buenos Aires - Pabellón III, Ciudad Universitaria, Buenos Aires; Sáenz, Manuel
We study the origin of quantum probabilities as arising from non-Boolean propositional-operational structures. We apply the method developed by Cox to non distributive lattices and develop an alternative formulation of non-Kolmogorovian probability measures for quantum mechanics. By generalizing the method presented in previous works, we outline a general framework for the deduction of probabilities in general propositional structures represented by lattices (including the non-distributive case). -- Highlights: •Several recent works use a derivation similar to that of R.T. Cox to obtain quantum probabilities. •We apply Cox’s method to the lattice of subspaces of the Hilbert space. •We obtain a derivationmore » of quantum probabilities which includes mixed states. •The method presented in this work is susceptible to generalization. •It includes quantum mechanics and classical mechanics as particular cases.« less
Determining attenuation properties of interfering fast and slow ultrasonic waves in cancellous bone.
Nelson, Amber M; Hoffman, Joseph J; Anderson, Christian C; Holland, Mark R; Nagatani, Yoshiki; Mizuno, Katsunori; Matsukawa, Mami; Miller, James G
2011-10-01
Previous studies have shown that interference between fast waves and slow waves can lead to observed negative dispersion in cancellous bone. In this study, the effects of overlapping fast and slow waves on measurements of the apparent attenuation as a function of propagation distance are investigated along with methods of analysis used to determine the attenuation properties. Two methods are applied to simulated data that were generated based on experimentally acquired signals taken from a bovine specimen. The first method uses a time-domain approach that was dictated by constraints imposed by the partial overlap of fast and slow waves. The second method uses a frequency-domain log-spectral subtraction technique on the separated fast and slow waves. Applying the time-domain analysis to the broadband data yields apparent attenuation behavior that is larger in the early stages of propagation and decreases as the wave travels deeper. In contrast, performing frequency-domain analysis on the separated fast waves and slow waves results in attenuation coefficients that are independent of propagation distance. Results suggest that features arising from the analysis of overlapping two-mode data may represent an alternate explanation for the previously reported apparent dependence on propagation distance of the attenuation coefficient of cancellous bone. © 2011 Acoustical Society of America
Determining attenuation properties of interfering fast and slow ultrasonic waves in cancellous bone
Nelson, Amber M.; Hoffman, Joseph J.; Anderson, Christian C.; Holland, Mark R.; Nagatani, Yoshiki; Mizuno, Katsunori; Matsukawa, Mami; Miller, James G.
2011-01-01
Previous studies have shown that interference between fast waves and slow waves can lead to observed negative dispersion in cancellous bone. In this study, the effects of overlapping fast and slow waves on measurements of the apparent attenuation as a function of propagation distance are investigated along with methods of analysis used to determine the attenuation properties. Two methods are applied to simulated data that were generated based on experimentally acquired signals taken from a bovine specimen. The first method uses a time-domain approach that was dictated by constraints imposed by the partial overlap of fast and slow waves. The second method uses a frequency-domain log-spectral subtraction technique on the separated fast and slow waves. Applying the time-domain analysis to the broadband data yields apparent attenuation behavior that is larger in the early stages of propagation and decreases as the wave travels deeper. In contrast, performing frequency-domain analysis on the separated fast waves and slow waves results in attenuation coefficients that are independent of propagation distance. Results suggest that features arising from the analysis of overlapping two-mode data may represent an alternate explanation for the previously reported apparent dependence on propagation distance of the attenuation coefficient of cancellous bone. PMID:21973378
Nunes, Rita G; Hajnal, Joseph V
2018-06-01
Point spread function (PSF) mapping enables estimating the displacement fields required for distortion correction of echo planar images. Recently, a highly accelerated approach was introduced for estimating displacements from the phase slope of under-sampled PSF mapping data. Sampling schemes with varying spacing were proposed requiring stepwise phase unwrapping. To avoid unwrapping errors, an alternative approach applying the concept of finite rate of innovation to PSF mapping (FRIP) is introduced, using a pattern search strategy to locate the PSF peak, and the two methods are compared. Fully sampled PSF data was acquired in six subjects at 3.0 T, and distortion maps were estimated after retrospective under-sampling. The two methods were compared for both previously published and newly optimized sampling patterns. Prospectively under-sampled data were also acquired. Shift maps were estimated and deviations relative to the fully sampled reference map were calculated. The best performance was achieved when using FRIP with a previously proposed sampling scheme. The two methods were comparable for the remaining schemes. The displacement field errors tended to be lower as the number of samples or their spacing increased. A robust method for estimating the position of the PSF peak has been introduced.
NASA Astrophysics Data System (ADS)
Huang, Zhijiong; Hu, Yongtao; Zheng, Junyu; Zhai, Xinxin; Huang, Ran
2018-05-01
Lateral boundary conditions (LBCs) are essential for chemical transport models to simulate regional transport; however they often contain large uncertainties. This study proposes an optimized data fusion approach to reduce the bias of LBCs by fusing gridded model outputs, from which the daughter domain's LBCs are derived, with ground-level measurements. The optimized data fusion approach follows the framework of a previous interpolation-based fusion method but improves it by using a bias kriging method to correct the spatial bias in gridded model outputs. Cross-validation shows that the optimized approach better estimates fused fields in areas with a large number of observations compared to the previous interpolation-based method. The optimized approach was applied to correct LBCs of PM2.5 concentrations for simulations in the Pearl River Delta (PRD) region as a case study. Evaluations show that the LBCs corrected by data fusion improve in-domain PM2.5 simulations in terms of the magnitude and temporal variance. Correlation increases by 0.13-0.18 and fractional bias (FB) decreases by approximately 3%-15%. This study demonstrates the feasibility of applying data fusion to improve regional air quality modeling.
Human Splice-Site Prediction with Deep Neural Networks.
Naito, Tatsuhiko
2018-04-18
Accurate splice-site prediction is essential to delineate gene structures from sequence data. Several computational techniques have been applied to create a system to predict canonical splice sites. For classification tasks, deep neural networks (DNNs) have achieved record-breaking results and often outperformed other supervised learning techniques. In this study, a new method of splice-site prediction using DNNs was proposed. The proposed system receives an input sequence data and returns an answer as to whether it is splice site. The length of input is 140 nucleotides, with the consensus sequence (i.e., "GT" and "AG" for the donor and acceptor sites, respectively) in the middle. Each input sequence model is applied to the pretrained DNN model that determines the probability that an input is a splice site. The model consists of convolutional layers and bidirectional long short-term memory network layers. The pretraining and validation were conducted using the data set tested in previously reported methods. The performance evaluation results showed that the proposed method can outperform the previous methods. In addition, the pattern learned by the DNNs was visualized as position frequency matrices (PFMs). Some of PFMs were very similar to the consensus sequence. The trained DNN model and the brief source code for the prediction system are uploaded. Further improvement will be achieved following the further development of DNNs.
Kalinina, Elizabeth A
2013-08-01
The explicit Euler's method is known to be very easy and effective in implementation for many applications. This article extends results previously obtained for the systems of linear differential equations with constant coefficients to arbitrary systems of ordinary differential equations. Optimal (providing minimum total error) step size is calculated at each step of Euler's method. Several examples of solving stiff systems are included. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
Analysis of time-of-flight spectra
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gibson, E.M.; Foxon, C.T.; Zhang, J.
1990-07-01
A simplified method of data analysis for time of flight measurements of the velocity of molecular beams sources is described. This method does not require the complex data fitting previously used in such studies. The method is applied to the study of Pb molecular beams from a true Knudsen source and has been used to show that a VG Quadrupoles SXP300H mass spectrometer, when fitted with an open cross-beam ionizer, acts as an ideal density detector over a wide range of operating conditions.
[Mammals' camera-trapping in Sierra Nanchititla, Mexico: relative abundance and activity patterns].
Monroy-Vilchis, Octavio; Zarco-González, Martha M; Rodríguez-Soto, Clarita; Soria-Díaz, Leroy; Urios, Vicente
2011-03-01
Species conservation and their management depend on the availability of their population behavior and changes in time. This way, population studies include aspects such as species abundance and activity pattern, among others, with the advantage that nowadays new technologies can be applied, in addition to common methods. In this study, we used camera-traps to obtain the index of relative abundance and to establish activity pattern of medium and large mammals in Sierra Nanchititla, Mexico. The study was conducted from December 2003 to May 2006, with a total sampling effort of 4 305 trap-days. We obtained 897 photographs of 19 different species. Nasua narica, Sylvilagus floridanus and Urocyon cinereoargenteus were the most abundant, in agreement with the relative abundance index (RAI, number of independent records/100 trap-days), and according to previous studies with indirect methods in the area. The activity patterns of the species showed that 67% of them are nocturnal, except Odocoileus virginianus, Nasua narica and others. Some species showed differences with previously reported patterns, which are related with seasonality, resources availability, organism sex, principally. The applied method contributed with reliable data about relative abundance and activity patterns.
Estimation of tiger densities in India using photographic captures and recaptures
Karanth, U.; Nichols, J.D.
1998-01-01
Previously applied methods for estimating tiger (Panthera tigris) abundance using total counts based on tracks have proved unreliable. In this paper we use a field method proposed by Karanth (1995), combining camera-trap photography to identify individual tigers based on stripe patterns, with capture-recapture estimators. We developed a sampling design for camera-trapping and used the approach to estimate tiger population size and density in four representative tiger habitats in different parts of India. The field method worked well and provided data suitable for analysis using closed capture-recapture models. The results suggest the potential for applying this methodology for estimating abundances, survival rates and other population parameters in tigers and other low density, secretive animal species with distinctive coat patterns or other external markings. Estimated probabilities of photo-capturing tigers present in the study sites ranged from 0.75 - 1.00. The estimated mean tiger densities ranged from 4.1 (SE hat= 1.31) to 11.7 (SE hat= 1.93) tigers/100 km2. The results support the previous suggestions of Karanth and Sunquist (1995) that densities of tigers and other large felids may be primarily determined by prey community structure at a given site.
Point pattern match-based change detection in a constellation of previously detected objects
Paglieroni, David W.
2016-06-07
A method and system is provided that applies attribute- and topology-based change detection to objects that were detected on previous scans of a medium. The attributes capture properties or characteristics of the previously detected objects, such as location, time of detection, detection strength, size, elongation, orientation, etc. The locations define a three-dimensional network topology forming a constellation of previously detected objects. The change detection system stores attributes of the previously detected objects in a constellation database. The change detection system detects changes by comparing the attributes and topological consistency of newly detected objects encountered during a new scan of the medium to previously detected objects in the constellation database. The change detection system may receive the attributes of the newly detected objects as the objects are detected by an object detection system in real time.
The Long-Term Sustainability of Different Item Response Theory Scaling Methods
ERIC Educational Resources Information Center
Keller, Lisa A.; Keller, Robert R.
2011-01-01
This article investigates the accuracy of examinee classification into performance categories and the estimation of the theta parameter for several item response theory (IRT) scaling techniques when applied to six administrations of a test. Previous research has investigated only two administrations; however, many testing programs equate tests…
48 CFR 52.216-15 - Predetermined Indirect Cost Rates.
Code of Federal Regulations, 2010 CFR
2010-10-01
.... (c) Allowability of costs and acceptability of cost allocation methods shall be determined in...) the period for which the rates apply, and (4) the specific items treated as direct costs or any changes in the items previously agreed to be direct costs. The indirect cost rate agreement shall not...
48 CFR 3452.216-71 - Negotiated overhead rates-fixed.
Code of Federal Regulations, 2010 CFR
2010-10-01
... acceptability of cost allocation methods shall be determined in accordance with part 31 of the Federal... different period, for which the rates apply, and (4) the specific items treated as direct costs or any changes in the items previously agreed to be direct costs. (e) Pending establishment of fixed overhead...
Discriminative Projection Selection Based Face Image Hashing
NASA Astrophysics Data System (ADS)
Karabat, Cagatay; Erdogan, Hakan
Face image hashing is an emerging method used in biometric verification systems. In this paper, we propose a novel face image hashing method based on a new technique called discriminative projection selection. We apply the Fisher criterion for selecting the rows of a random projection matrix in a user-dependent fashion. Moreover, another contribution of this paper is to employ a bimodal Gaussian mixture model at the quantization step. Our simulation results on three different databases demonstrate that the proposed method has superior performance in comparison to previously proposed random projection based methods.
Absolute order-of-magnitude reasoning applied to a social multi-criteria evaluation framework
NASA Astrophysics Data System (ADS)
Afsordegan, A.; Sánchez, M.; Agell, N.; Aguado, J. C.; Gamboa, G.
2016-03-01
A social multi-criteria evaluation framework for solving a real-case problem of selecting a wind farm location in the regions of Urgell and Conca de Barberá in Catalonia (northeast of Spain) is studied. This paper applies a qualitative multi-criteria decision analysis approach based on linguistic labels assessment able to address uncertainty and deal with different levels of precision. This method is based on qualitative reasoning as an artificial intelligence technique for assessing and ranking multi-attribute alternatives with linguistic labels in order to handle uncertainty. This method is suitable for problems in the social framework such as energy planning which require the construction of a dialogue process among many social actors with high level of complexity and uncertainty. The method is compared with an existing approach, which has been applied previously in the wind farm location problem. This approach, consisting of an outranking method, is based on Condorcet's original method. The results obtained by both approaches are analysed and their performance in the selection of the wind farm location is compared in aggregation procedures. Although results show that both methods conduct to similar alternatives rankings, the study highlights both their advantages and drawbacks.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albeverio, Sergio; Chen Kai; Fei Shaoming
A necessary separability criterion that relates the structures of the total density matrix and its reductions is given. The method used is based on the realignment method [K. Chen and L. A. Wu, Quant. Inf. Comput. 3, 193 (2003)]. The separability criterion naturally generalizes the reduction separability criterion introduced independently in the previous work [M. Horodecki and P. Horodecki, Phys. Rev. A 59, 4206 (1999) and N. J. Cerf, C. Adami, and R. M. Gingrich, Phys. Rev. A 60, 898 (1999)]. In special cases, it recovers the previous reduction criterion and the recent generalized partial transposition criterion [K. Chen andmore » L. A. Wu, Phys. Lett. A 306, 14 (2002)]. The criterion involves only simple matrix manipulations and can therefore be easily applied.« less
Bayesian approach for counting experiment statistics applied to a neutrino point source analysis
NASA Astrophysics Data System (ADS)
Bose, D.; Brayeur, L.; Casier, M.; de Vries, K. D.; Golup, G.; van Eijndhoven, N.
2013-12-01
In this paper we present a model independent analysis method following Bayesian statistics to analyse data from a generic counting experiment and apply it to the search for neutrinos from point sources. We discuss a test statistic defined following a Bayesian framework that will be used in the search for a signal. In case no signal is found, we derive an upper limit without the introduction of approximations. The Bayesian approach allows us to obtain the full probability density function for both the background and the signal rate. As such, we have direct access to any signal upper limit. The upper limit derivation directly compares with a frequentist approach and is robust in the case of low-counting observations. Furthermore, it allows also to account for previous upper limits obtained by other analyses via the concept of prior information without the need of the ad hoc application of trial factors. To investigate the validity of the presented Bayesian approach, we have applied this method to the public IceCube 40-string configuration data for 10 nearby blazars and we have obtained a flux upper limit, which is in agreement with the upper limits determined via a frequentist approach. Furthermore, the upper limit obtained compares well with the previously published result of IceCube, using the same data set.
NASA Astrophysics Data System (ADS)
Kim, Seoksoo; Jung, Sungmo; Song, Jae-Gu; Kang, Byong-Ho
As augmented reality and a gravity sensor is of growing interest, siginificant developement is being made on related technology, which allows application of the technology in a variety of areas with greater expectations. In applying Context-aware to augmented reality, it can make useful programs. A traning system suggested in this study helps a user to understand an effcienct training method using augmented reality and make sure if his exercise is being done propery based on the data collected by a gravity sensor. Therefore, this research aims to suggest an efficient training environment that can enhance previous training methods by applying augmented reality and a gravity sensor.
An investigation into exoplanet transits and uncertainties
NASA Astrophysics Data System (ADS)
Ji, Y.; Banks, T.; Budding, E.; Rhodes, M. D.
2017-06-01
A simple transit model is described along with tests of this model against published results for 4 exoplanet systems (Kepler-1, 2, 8, and 77). Data from the Kepler mission are used. The Markov Chain Monte Carlo (MCMC) method is applied to obtain realistic error estimates. Optimisation of limb darkening coefficients is subject to data quality. It is more likely for MCMC to derive an empirical limb darkening coefficient for light curves with S/N (signal to noise) above 15. Finally, the model is applied to Kepler data for 4 Kepler candidate systems (KOI 760.01, 767.01, 802.01, and 824.01) with previously unpublished results. Error estimates for these systems are obtained via the MCMC method.
Towards the estimation of effect measures in studies using respondent-driven sampling.
Rotondi, Michael A
2014-06-01
Respondent-driven sampling (RDS) is an increasingly common sampling technique to recruit hidden populations. Statistical methods for RDS are not straightforward due to the correlation between individual outcomes and subject weighting; thus, analyses are typically limited to estimation of population proportions. This manuscript applies the method of variance estimates recovery (MOVER) to construct confidence intervals for effect measures such as risk difference (difference of proportions) or relative risk in studies using RDS. To illustrate the approach, MOVER is used to construct confidence intervals for differences in the prevalence of demographic characteristics between an RDS study and convenience study of injection drug users. MOVER is then applied to obtain a confidence interval for the relative risk between education levels and HIV seropositivity and current infection with syphilis, respectively. This approach provides a simple method to construct confidence intervals for effect measures in RDS studies. Since it only relies on a proportion and appropriate confidence limits, it can also be applied to previously published manuscripts.
Community Detection in Complex Networks via Clique Conductance.
Lu, Zhenqi; Wahlström, Johan; Nehorai, Arye
2018-04-13
Network science plays a central role in understanding and modeling complex systems in many areas including physics, sociology, biology, computer science, economics, politics, and neuroscience. One of the most important features of networks is community structure, i.e., clustering of nodes that are locally densely interconnected. Communities reveal the hierarchical organization of nodes, and detecting communities is of great importance in the study of complex systems. Most existing community-detection methods consider low-order connection patterns at the level of individual links. But high-order connection patterns, at the level of small subnetworks, are generally not considered. In this paper, we develop a novel community-detection method based on cliques, i.e., local complete subnetworks. The proposed method overcomes the deficiencies of previous similar community-detection methods by considering the mathematical properties of cliques. We apply the proposed method to computer-generated graphs and real-world network datasets. When applied to networks with known community structure, the proposed method detects the structure with high fidelity and sensitivity. When applied to networks with no a priori information regarding community structure, the proposed method yields insightful results revealing the organization of these complex networks. We also show that the proposed method is guaranteed to detect near-optimal clusters in the bipartition case.
Sonar Imaging of Elastic Fluid-Filled Cylindrical Shells.
NASA Astrophysics Data System (ADS)
Dodd, Stirling Scott
1995-01-01
Previously a method of describing spherical acoustic waves in cylindrical coordinates was applied to the problem of point source scattering by an elastic infinite fluid -filled cylindrical shell (S. Dodd and C. Loeffler, J. Acoust. Soc. Am. 97, 3284(A) (1995)). This method is applied to numerically model monostatic oblique incidence scattering from a truncated cylinder by a narrow-beam high-frequency imaging sonar. The narrow beam solution results from integrating the point source solution over the spatial extent of a line source and line receiver. The cylinder truncation is treated by the method of images, and assumes that the reflection coefficient at the truncation is unity. The scattering form functions, calculated using this method, are applied as filters to a narrow bandwidth, high ka pulse to find the time domain scattering response. The time domain pulses are further processed and displayed in the form of a sonar image. These images compare favorably to experimentally obtained images (G. Kaduchak and C. Loeffler, J. Acoust. Soc. Am. 97, 3289(A) (1995)). The impact of the s_{ rm o} and a_{rm o} Lamb waves is vividly apparent in the images.
Smoothing of climate time series revisited
NASA Astrophysics Data System (ADS)
Mann, Michael E.
2008-08-01
We present an easily implemented method for smoothing climate time series, generalizing upon an approach previously described by Mann (2004). The method adaptively weights the three lowest order time series boundary constraints to optimize the fit with the raw time series. We apply the method to the instrumental global mean temperature series from 1850-2007 and to various surrogate global mean temperature series from 1850-2100 derived from the CMIP3 multimodel intercomparison project. These applications demonstrate that the adaptive method systematically out-performs certain widely used default smoothing methods, and is more likely to yield accurate assessments of long-term warming trends.
NASA Astrophysics Data System (ADS)
Takahashi, Hiroki; Hasegawa, Hideyuki; Kanai, Hiroshi
2013-07-01
For the facilitation of analysis and elimination of the operator dependence in estimating the myocardial function in echocardiography, we have previously developed a method for automated identification of the heart wall. However, there are misclassified regions because the magnitude-squared coherence (MSC) function of echo signals, which is one of the features in the previous method, is sensitively affected by the clutter components such as multiple reflection and off-axis echo from external tissue or the nearby myocardium. The objective of the present study is to improve the performance of automated identification of the heart wall. For this purpose, we proposed a method to suppress the effect of the clutter components on the MSC of echo signals by applying an adaptive moving target indicator (MTI) filter to echo signals. In vivo experimental results showed that the misclassified regions were significantly reduced using our proposed method in the longitudinal axis view of the heart.
Color extended visual cryptography using error diffusion.
Kang, InKoo; Arce, Gonzalo R; Lee, Heung-Kyu
2011-01-01
Color visual cryptography (VC) encrypts a color secret message into n color halftone image shares. Previous methods in the literature show good results for black and white or gray scale VC schemes, however, they are not sufficient to be applied directly to color shares due to different color structures. Some methods for color visual cryptography are not satisfactory in terms of producing either meaningless shares or meaningful shares with low visual quality, leading to suspicion of encryption. This paper introduces the concept of visual information pixel (VIP) synchronization and error diffusion to attain a color visual cryptography encryption method that produces meaningful color shares with high visual quality. VIP synchronization retains the positions of pixels carrying visual information of original images throughout the color channels and error diffusion generates shares pleasant to human eyes. Comparisons with previous approaches show the superior performance of the new method.
Method and system for efficient video compression with low-complexity encoder
NASA Technical Reports Server (NTRS)
Chen, Jun (Inventor); He, Dake (Inventor); Sheinin, Vadim (Inventor); Jagmohan, Ashish (Inventor); Lu, Ligang (Inventor)
2012-01-01
Disclosed are a method and system for video compression, wherein the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a video decoder, wherein the method for encoding includes the steps of converting a source frame into a space-frequency representation; estimating conditional statistics of at least one vector of space-frequency coefficients; estimating encoding rates based on the said conditional statistics; and applying Slepian-Wolf codes with the said computed encoding rates. The preferred method for decoding includes the steps of; generating a side-information vector of frequency coefficients based on previously decoded source data, encoder statistics, and previous reconstructions of the source frequency vector; and performing Slepian-Wolf decoding of at least one source frequency vector based on the generated side-information, the Slepian-Wolf code bits and the encoder statistics.
Efficient path-based computations on pedigree graphs with compact encodings
2012-01-01
A pedigree is a diagram of family relationships, and it is often used to determine the mode of inheritance (dominant, recessive, etc.) of genetic diseases. Along with rapidly growing knowledge of genetics and accumulation of genealogy information, pedigree data is becoming increasingly important. In large pedigree graphs, path-based methods for efficiently computing genealogical measurements, such as inbreeding and kinship coefficients of individuals, depend on efficient identification and processing of paths. In this paper, we propose a new compact path encoding scheme on large pedigrees, accompanied by an efficient algorithm for identifying paths. We demonstrate the utilization of our proposed method by applying it to the inbreeding coefficient computation. We present time and space complexity analysis, and also manifest the efficiency of our method for evaluating inbreeding coefficients as compared to previous methods by experimental results using pedigree graphs with real and synthetic data. Both theoretical and experimental results demonstrate that our method is more scalable and efficient than previous methods in terms of time and space requirements. PMID:22536898
Extraction of memory colors for preferred color correction in digital TVs
NASA Astrophysics Data System (ADS)
Ryu, Byong Tae; Yeom, Jee Young; Kim, Choon-Woo; Ahn, Ji-Young; Kang, Dong-Woo; Shin, Hyun-Ho
2009-01-01
Subjective image quality is one of the most important performance indicators for digital TVs. In order to improve subjective image quality, preferred color correction is often employed. More specifically, areas of memory colors such as skin, grass, and sky are modified to generate pleasing impression to viewers. Before applying the preferred color correction, tendency of preference for memory colors should be identified. It is often accomplished by off-line human visual tests. Areas containing the memory colors should be extracted then color correction is applied to the extracted areas. These processes should be performed on-line. This paper presents a new method for area extraction of three types of memory colors. Performance of the proposed method is evaluated by calculating the correct and false detection ratios. Experimental results indicate that proposed method outperform previous methods proposed for the memory color extraction.
Spotting the difference in molecular dynamics simulations of biomolecules
NASA Astrophysics Data System (ADS)
Sakuraba, Shun; Kono, Hidetoshi
2016-08-01
Comparing two trajectories from molecular simulations conducted under different conditions is not a trivial task. In this study, we apply a method called Linear Discriminant Analysis with ITERative procedure (LDA-ITER) to compare two molecular simulation results by finding the appropriate projection vectors. Because LDA-ITER attempts to determine a projection such that the projections of the two trajectories do not overlap, the comparison does not suffer from a strong anisotropy, which is an issue in protein dynamics. LDA-ITER is applied to two test cases: the T4 lysozyme protein simulation with or without a point mutation and the allosteric protein PDZ2 domain of hPTP1E with or without a ligand. The projection determined by the method agrees with the experimental data and previous simulations. The proposed procedure, which complements existing methods, is a versatile analytical method that is specialized to find the "difference" between two trajectories.
UK audit of glomerular filtration rate measurement from plasma sampling in 2013.
Murray, Anthony W; Lawson, Richard S; Cade, Sarah C; Hall, David O; Kenny, Bob; O'Shaughnessy, Emma; Taylor, Jon; Towey, David; White, Duncan; Carson, Kathryn
2014-11-01
An audit was carried out into UK glomerular filtration rate (GFR) calculation. The results were compared with an identical 2001 audit. Participants used their routine method to calculate GFR for 20 data sets (four plasma samples) in millilitres per minute and also the GFR normalized for body surface area. Some unsound data sets were included to analyse the applied quality control (QC) methods. Variability between centres was assessed for each data set, compared with the national median and a reference value calculated using the method recommended in the British Nuclear Medicine Society guidelines. The influence of the number of samples on variability was studied. Supplementary data were requested on workload and methodology. The 59 returns showed widespread standardization. The applied early exponential clearance correction was the main contributor to the observed variability. These corrections were applied by 97% of centres (50% - 2001) with 80% using the recommended averaged Brochner-Mortenson correction. Approximately 75% applied the recommended Haycock body surface area formula for adults (78% for children). The effect of the number of samples used was not significant. There was wide variability in the applied QC techniques, especially in terms of the use of the volume of distribution. The widespread adoption of the guidelines has harmonized national GFR calculation compared with the previous audit. Further standardization could further reduce variability. This audit has highlighted the need to address the national standardization of QC methods. Radionuclide techniques are confirmed as the preferred method for GFR measurement when an unequivocal result is required.
Citation of previous meta-analyses on the same topic: a clue to perpetuation of incorrect methods?
Li, Tianjing; Dickersin, Kay
2013-06-01
Systematic reviews and meta-analyses serve as a basis for decision-making and clinical practice guidelines and should be carried out using appropriate methodology to avoid incorrect inferences. We describe the characteristics, statistical methods used for meta-analyses, and citation patterns of all 21 glaucoma systematic reviews we identified pertaining to the effectiveness of prostaglandin analog eye drops in treating primary open-angle glaucoma, published between December 2000 and February 2012. We abstracted data, assessed whether appropriate statistical methods were applied in meta-analyses, and examined citation patterns of included reviews. We identified two forms of problematic statistical analyses in 9 of the 21 systematic reviews examined. Except in 1 case, none of the 9 reviews that used incorrect statistical methods cited a previously published review that used appropriate methods. Reviews that used incorrect methods were cited 2.6 times more often than reviews that used appropriate statistical methods. We speculate that by emulating the statistical methodology of previous systematic reviews, systematic review authors may have perpetuated incorrect approaches to meta-analysis. The use of incorrect statistical methods, perhaps through emulating methods described in previous research, calls conclusions of systematic reviews into question and may lead to inappropriate patient care. We urge systematic review authors and journal editors to seek the advice of experienced statisticians before undertaking or accepting for publication a systematic review and meta-analysis. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Spoof Detection for Finger-Vein Recognition System Using NIR Camera.
Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung
2017-10-01
Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods.
Spoof Detection for Finger-Vein Recognition System Using NIR Camera
Nguyen, Dat Tien; Yoon, Hyo Sik; Pham, Tuyen Danh; Park, Kang Ryoung
2017-01-01
Finger-vein recognition, a new and advanced biometrics recognition method, is attracting the attention of researchers because of its advantages such as high recognition performance and lesser likelihood of theft and inaccuracies occurring on account of skin condition defects. However, as reported by previous researchers, it is possible to attack a finger-vein recognition system by using presentation attack (fake) finger-vein images. As a result, spoof detection, named as presentation attack detection (PAD), is necessary in such recognition systems. Previous attempts to establish PAD methods primarily focused on designing feature extractors by hand (handcrafted feature extractor) based on the observations of the researchers about the difference between real (live) and presentation attack finger-vein images. Therefore, the detection performance was limited. Recently, the deep learning framework has been successfully applied in computer vision and delivered superior results compared to traditional handcrafted methods on various computer vision applications such as image-based face recognition, gender recognition and image classification. In this paper, we propose a PAD method for near-infrared (NIR) camera-based finger-vein recognition system using convolutional neural network (CNN) to enhance the detection ability of previous handcrafted methods. Using the CNN method, we can derive a more suitable feature extractor for PAD than the other handcrafted methods using a training procedure. We further process the extracted image features to enhance the presentation attack finger-vein image detection ability of the CNN method using principal component analysis method (PCA) for dimensionality reduction of feature space and support vector machine (SVM) for classification. Through extensive experimental results, we confirm that our proposed method is adequate for presentation attack finger-vein image detection and it can deliver superior detection results compared to CNN-based methods and other previous handcrafted methods. PMID:28974031
The Expanding Role of Applications in the Development and Validation of CFD at NASA
NASA Technical Reports Server (NTRS)
Schuster, David M.
2010-01-01
This paper focuses on the recent escalation in application of CFD to manned and unmanned flight projects at NASA and the need to often apply these methods to problems for which little or no previous validation data directly applies. The paper discusses the evolution of NASA.s CFD development from a strict Develop, Validate, Apply strategy to sometimes allowing for a Develop, Apply, Validate approach. The risks of this approach and some of its unforeseen benefits are discussed and tied to specific operational examples. There are distinct advantages for the CFD developer that is able to operate in this paradigm, and recommendations are provided for those inclined and willing to work in this environment.
Militello, L G; Hutton, R J
1998-11-01
Cognitive task analysis (CTA) is a set of methods for identifying cognitive skills, or mental demands, needed to perform a task proficiently. The product of the task analysis can be used to inform the design of interfaces and training systems. However, CTA is resource intensive and has previously been of limited use to design practitioners. A streamlined method of CTA, Applied Cognitive Task Analysis (ACTA), is presented in this paper. ACTA consists of three interview methods that help the practitioner to extract information about the cognitive demands and skills required for a task. ACTA also allows the practitioner to represent this information in a format that will translate more directly into applied products, such as improved training scenarios or interface recommendations. This paper will describe the three methods, an evaluation study conducted to assess the usability and usefulness of the methods, and some directions for future research for making cognitive task analysis accessible to practitioners. ACTA techniques were found to be easy to use, flexible, and to provide clear output. The information and training materials developed based on ACTA interviews were found to be accurate and important for training purposes.
Cognitive Support in Teaching Football Techniques
ERIC Educational Resources Information Center
Duda, Henryk
2009-01-01
Study aim: To improve the teaching of football techniques by applying cognitive and imagery techniques. Material and methods: Four groups of subjects, n = 32 each, were studied: male and female physical education students aged 20-21 years, not engaged previously in football training; male juniors and minors, aged 16 and 13 years, respectively,…
Analytic Solutions of the Vector Burgers Equation
NASA Technical Reports Server (NTRS)
Nerney, Steven; Schmahl, Edward J.; Musielak, Z. E.
1996-01-01
The well-known analytical solution of Burgers' equation is extended to curvilinear coordinate systems in three dimensions by a method that is much simpler and more suitable to practical applications than that previously used. The results obtained are applied to incompressible flow with cylindrical symmetry, and also to the decay of an initially linearly increasing wind.
Measuring Disorientation Based on the Needleman-Wunsch Algorithm
ERIC Educational Resources Information Center
Güyer, Tolga; Atasoy, Bilal; Somyürek, Sibel
2015-01-01
This study offers a new method to measure navigation disorientation in web based systems which is powerful learning medium for distance and open education. The Needleman-Wunsch algorithm is used to measure disorientation in a more precise manner. The process combines theoretical and applied knowledge from two previously distinct research areas,…
Airflow Resistance of Loose-Fill Mineral Fiber Insulations in Retrofit Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumacher, C. J.; Fox, M. J.; Lstiburek, J.
2015-02-01
This report expands on Building America Report 1109 by applying the experimental apparatus and test method to dense-pack retrofit applications using mineral fiber insulation materials. Three fiber glass insulation materials and one stone wool insulation material were tested, and the results compared to the cellulose results from the previous study.
ERIC Educational Resources Information Center
Mursu, Anja; Luukkonen, Irmeli; Toivanen, Marika; Korpela, Mikko
2007-01-01
Introduction: The purpose of information systems is to facilitate work activities: here we consider how Activity Theory can be applied in information systems development. Method. The requirements for an analytical model for emancipatory, work-oriented information systems research and practice are specified. Previous research work in Activity…
Assessing Fidelity of Implementation of an Unprescribed, Diagnostic Mathematics Intervention
ERIC Educational Resources Information Center
Munter, Charles; Wilhelm, Anne Garrison; Cobb, Paul; Cordray, David S.
2014-01-01
This article draws on previously employed methods for conducting fidelity studies and applies them to an evaluation of an unprescribed intervention. We document the process of assessing the fidelity of implementation of the Math Recovery first-grade tutoring program, an unprescribed, diagnostic intervention. We describe how we drew on recent…
The Fun Culture in Seniors' Online Communities
ERIC Educational Resources Information Center
Nimrod, Galit
2011-01-01
Purpose of the study: Previous research found that "fun on line" is the most dominant content in seniors' online communities. The present study aimed to further explore the "fun culture" in these communities and to discover its unique qualities. Design and Methods: The study applied an online ethnography (netnography) approach, utilizing a full…
The Influence of Trust in Principals' Mentoring Experiences across Different Career Phases
ERIC Educational Resources Information Center
Bakioglu, Aysen; Hacifazlioglu, Ozge; Ozcan, Kenan
2010-01-01
The purpose of this study is to examine the perceptions of primary school principals about the influence of "trust" in their mentoring experiences. Both quantitative and qualitative methods were used in the study. The Primary School Principals' Mentoring Questionnaire previously developed by the researchers was applied to 1462 primary…
Time series modeling of human operator dynamics in manual control tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
Skeletal Mechanism Generation of Surrogate Jet Fuels for Aeropropulsion Modeling
NASA Astrophysics Data System (ADS)
Sung, Chih-Jen; Niemeyer, Kyle E.
2010-05-01
A novel implementation for the skeletal reduction of large detailed reaction mechanisms using the directed relation graph with error propagation and sensitivity analysis (DRGEPSA) is developed and presented with skeletal reductions of two important hydrocarbon components, n-heptane and n-decane, relevant to surrogate jet fuel development. DRGEPSA integrates two previously developed methods, directed relation graph-aided sensitivity analysis (DRGASA) and directed relation graph with error propagation (DRGEP), by first applying DRGEP to efficiently remove many unimportant species prior to sensitivity analysis to further remove unimportant species, producing an optimally small skeletal mechanism for a given error limit. It is illustrated that the combination of the DRGEP and DRGASA methods allows the DRGEPSA approach to overcome the weaknesses of each previous method, specifically that DRGEP cannot identify all unimportant species and that DRGASA shields unimportant species from removal.
Time Series Modeling of Human Operator Dynamics in Manual Control Tasks
NASA Technical Reports Server (NTRS)
Biezad, D. J.; Schmidt, D. K.
1984-01-01
A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that was previously modeled to demonstrate the strengths of the method.
Molinos-Senante, María; Donoso, Guillermo; Sala-Garrido, Ramon; Villegas, Andrés
2018-03-01
Benchmarking the efficiency of water companies is essential to set water tariffs and to promote their sustainability. In doing so, most of the previous studies have applied conventional data envelopment analysis (DEA) models. However, it is a deterministic method that does not allow to identify environmental factors influencing efficiency scores. To overcome this limitation, this paper evaluates the efficiency of a sample of Chilean water and sewerage companies applying a double-bootstrap DEA model. Results evidenced that the ranking of water and sewerage companies changes notably whether efficiency scores are computed applying conventional or double-bootstrap DEA models. Moreover, it was found that the percentage of non-revenue water and customer density are factors influencing the efficiency of Chilean water and sewerage companies. This paper illustrates the importance of using a robust and reliable method to increase the relevance of benchmarking tools.
Parallel Implicit Runge-Kutta Methods Applied to Coupled Orbit/Attitude Propagation
NASA Astrophysics Data System (ADS)
Hatten, Noble; Russell, Ryan P.
2017-12-01
A variable-step Gauss-Legendre implicit Runge-Kutta (GLIRK) propagator is applied to coupled orbit/attitude propagation. Concepts previously shown to improve efficiency in 3DOF propagation are modified and extended to the 6DOF problem, including the use of variable-fidelity dynamics models. The impact of computing the stage dynamics of a single step in parallel is examined using up to 23 threads and 22 associated GLIRK stages; one thread is reserved for an extra dynamics function evaluation used in the estimation of the local truncation error. Efficiency is found to peak for typical examples when using approximately 8 to 12 stages for both serial and parallel implementations. Accuracy and efficiency compare favorably to explicit Runge-Kutta and linear-multistep solvers for representative scenarios. However, linear-multistep methods are found to be more efficient for some applications, particularly in a serial computing environment, or when parallelism can be applied across multiple trajectories.
Percolation analysis of nonlinear structures in scale-free two-dimensional simulations
NASA Technical Reports Server (NTRS)
Dominik, Kurt G.; Shandarin, Sergei F.
1992-01-01
Results are presented of applying percolation analysis to several two-dimensional N-body models which simulate the formation of large-scale structure. Three parameters are estimated: total area (a(c)), total mass (M(C)), and percolation density (rho(c)) of the percolating structure at the percolation threshold for both unsmoothed and smoothed (with different scales L(s)) nonlinear with filamentary structures, confirming early speculations that this type of model has several features of filamentary-type distributions. Also, it is shown that, by properly applying smoothing techniques, many problems previously considered detrimental can be dealt with and overcome. Possible difficulties and prospects with the use of this method are discussed, specifically relating to techniques and methods already applied to CfA deep sky surveys. The success of this test in two dimensions and the potential for extrapolation to three dimensions is also discussed.
ELM: an Algorithm to Estimate the Alpha Abundance from Low-resolution Spectra
NASA Astrophysics Data System (ADS)
Bu, Yude; Zhao, Gang; Pan, Jingchang; Bharat Kumar, Yerra
2016-01-01
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSS spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.
NASA Astrophysics Data System (ADS)
Yin, K.; Belonoshko, A. B.; Zhou, H.; Lu, X.
2016-12-01
The melting temperatures of materials in the interior of the Earth has significant implications in many areas of geophysics. The direct calculations of the melting point by atomic simulations would face substantial hysteresis problem. To overcome the hysteresis encountered in the atomic simulations there are a few different melting-point determination methods available nowadays, which are founded independently, such as the free energy method, the two-phase or coexistence method, and the Z method, etc. In this study, we provide a theoretical understanding the relations of these methods from a geometrical perspective based on a quantitative construction of the volume-entropy-energy thermodynamic surface, a model first proposed by J. Willard Gibbs in 1873. Then combining with an experimental data and/or a previous melting-point determination method, we apply this model to derive the high-pressure melting curves for several lower mantle minerals with less computational efforts relative to using previous methods only. Through this way, some polyatomic minerals at extreme pressures which are almost unsolvable before are calculated fully from first principles now.
Airburst height computation method of Sea-Impact Test
NASA Astrophysics Data System (ADS)
Kim, Jinho; Kim, Hyungsup; Chae, Sungwoo; Park, Sungho
2017-05-01
This paper describes the ways how to measure the airburst height of projectiles and rockets. In general, the airburst height could be determined by using triangulation method or the images from the camera installed on the radar. There are some limitations in these previous methods when the missiles impact the sea surface. To apply triangulation method, the cameras should be installed so that the lines of sight intersect at angles from 60 to 120 degrees. There could be no effective observation towers to install the optical system. In case the range of the missile is more than 50km, the images from the camera of the radar could be useless. This paper proposes the method to measure the airburst height of sea impact projectile by using a single camera. The camera would be installed on the island near to the impact area and the distance could be computed by using the position and attitude of camera and sea level. To demonstrate the proposed method, the results from the proposed method are compared with that from the previous method.
Hidden marker position estimation during sit-to-stand with walker.
Yoon, Sang Ho; Jun, Hong Gul; Dan, Byung Ju; Jo, Byeong Rim; Min, Byung Hoon
2012-01-01
Motion capture analysis of sit-to-stand task with assistive device is hard to achieve due to obstruction on reflective makers. Previously developed robotic system, Smart Mobile Walker, is used as an assistive device to perform motion capture analysis in sit-to-stand task. All lower limb markers except hip markers are invisible through whole session. The link-segment and regression method is applied to estimate the marker position during sit-to-stand. Applying a new method, the lost marker positions are restored and the biomechanical evaluation of the sit-to-stand movement with a Smart Mobile Walker could be carried out. The accuracy of the marker position estimation is verified with normal sit-to-stand data from more than 30 clinical trials. Moreover, further research on improving the link segment and regression method is addressed.
Virtual fringe projection system with nonparallel illumination based on iteration
NASA Astrophysics Data System (ADS)
Zhou, Duo; Wang, Zhangying; Gao, Nan; Zhang, Zonghua; Jiang, Xiangqian
2017-06-01
Fringe projection profilometry has been widely applied in many fields. To set up an ideal measuring system, a virtual fringe projection technique has been studied to assist in the design of hardware configurations. However, existing virtual fringe projection systems use parallel illumination and have a fixed optical framework. This paper presents a virtual fringe projection system with nonparallel illumination. Using an iterative method to calculate intersection points between rays and reference planes or object surfaces, the proposed system can simulate projected fringe patterns and captured images. A new explicit calibration method has been presented to validate the precision of the system. Simulated results indicate that the proposed iterative method outperforms previous systems. Our virtual system can be applied to error analysis, algorithm optimization, and help operators to find ideal system parameter settings for actual measurements.
Andrianakis, I; Vernon, I; McCreesh, N; McKinley, T J; Oakley, J E; Nsubuga, R N; Goldstein, M; White, R G
2017-08-01
Complex stochastic models are commonplace in epidemiology, but their utility depends on their calibration to empirical data. History matching is a (pre)calibration method that has been applied successfully to complex deterministic models. In this work, we adapt history matching to stochastic models, by emulating the variance in the model outputs, and therefore accounting for its dependence on the model's input values. The method proposed is applied to a real complex epidemiological model of human immunodeficiency virus in Uganda with 22 inputs and 18 outputs, and is found to increase the efficiency of history matching, requiring 70% of the time and 43% fewer simulator evaluations compared with a previous variant of the method. The insight gained into the structure of the human immunodeficiency virus model, and the constraints placed on it, are then discussed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Elia, M.; Edwards, H. C.; Hu, J.
Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble whenmore » applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.« less
D'Elia, M.; Edwards, H. C.; Hu, J.; ...
2018-01-18
Previous work has demonstrated that propagating groups of samples, called ensembles, together through forward simulations can dramatically reduce the aggregate cost of sampling-based uncertainty propagation methods [E. Phipps, M. D'Elia, H. C. Edwards, M. Hoemmen, J. Hu, and S. Rajamanickam, SIAM J. Sci. Comput., 39 (2017), pp. C162--C193]. However, critical to the success of this approach when applied to challenging problems of scientific interest is the grouping of samples into ensembles to minimize the total computational work. For example, the total number of linear solver iterations for ensemble systems may be strongly influenced by which samples form the ensemble whenmore » applying iterative linear solvers to parameterized and stochastic linear systems. In this paper we explore sample grouping strategies for local adaptive stochastic collocation methods applied to PDEs with uncertain input data, in particular canonical anisotropic diffusion problems where the diffusion coefficient is modeled by truncated Karhunen--Loève expansions. Finally, we demonstrate that a measure of the total anisotropy of the diffusion coefficient is a good surrogate for the number of linear solver iterations for each sample and therefore provides a simple and effective metric for grouping samples.« less
Takabatake, Reona; Masubuchi, Tomoko; Futo, Satoshi; Minegishi, Yasutaka; Noguchi, Akio; Kondo, Kazunari; Teshima, Reiko; Kurashima, Takeyo; Mano, Junichi; Kitta, Kazumi
2016-01-01
A novel real-time PCR-based analytical method was developed for the event-specific quantification of a genetically modified (GM) maize, 3272. We first attempted to obtain genome DNA from this maize using a DNeasy Plant Maxi kit and a DNeasy Plant Mini kit, which have been widely utilized in our previous studies, but DNA extraction yields from 3272 were markedly lower than those from non-GM maize seeds. However, lowering of DNA extraction yields was not observed with GM quicker or Genomic-tip 20/G. We chose GM quicker for evaluation of the quantitative method. We prepared a standard plasmid for 3272 quantification. The conversion factor (Cf), which is required to calculate the amount of a genetically modified organism (GMO), was experimentally determined for two real-time PCR instruments, the Applied Biosystems 7900HT (the ABI 7900) and the Applied Biosystems 7500 (the ABI7500). The determined Cf values were 0.60 and 0.59 for the ABI 7900 and the ABI 7500, respectively. To evaluate the developed method, a blind test was conducted as part of an interlaboratory study. The trueness and precision were evaluated as the bias and reproducibility of the relative standard deviation (RSDr). The determined values were similar to those in our previous validation studies. The limit of quantitation for the method was estimated to be 0.5% or less, and we concluded that the developed method would be suitable and practical for detection and quantification of 3272.
A rapid and rational approach to generating isomorphous heavy-atom phasing derivatives
Lu, Jinghua; Sun, Peter D.
2014-01-01
In attempts to replace the conventional trial-and-error heavy-atom derivative search method with a rational approach, we previously defined heavy metal compound reactivity against peptide ligands. Here, we assembled a composite pH and buffer-dependent peptide reactivity profile for each heavy metal compound to guide rational heavy-atom derivative search. When knowledge of the best-reacting heavy-atom compound is combined with mass spectrometry-assisted derivatization, and with a quick-soak method to optimize phasing, it is likely that the traditional heavy-atom compounds could meet the demand of modern high-throughput X-ray crystallography. As an example, we applied this rational heavy-atom phasing approach to determine a previously unknown mouse serum amyloid A2 crystal structure. PMID:25040395
Applications of asynoptic space - Time Fourier transform methods to scanning satellite measurements
NASA Technical Reports Server (NTRS)
Lait, Leslie R.; Stanford, John L.
1988-01-01
A method proposed by Salby (1982) for computing the zonal space-time Fourier transform of asynoptically acquired satellite data is discussed. The method and its relationship to other techniques are briefly described, and possible problems in applying it to real data are outlined. Examples of results obtained using this technique are given which demonstrate its sensitivity to small-amplitude signals. A number of waves are found which have previously been observed as well as two not heretofore reported. A possible extension of the method which could increase temporal and longitudinal resolution is described.
Top down, bottom up structured programming and program structuring
NASA Technical Reports Server (NTRS)
Hamilton, M.; Zeldin, S.
1972-01-01
New design and programming techniques for shuttle software. Based on previous Apollo experience, recommendations are made to apply top-down structured programming techniques to shuttle software. New software verification techniques for large software systems are recommended. HAL, the higher order language selected for the shuttle flight code, is discussed and found to be adequate for implementing these techniques. Recommendations are made to apply the workable combination of top-down, bottom-up methods in the management of shuttle software. Program structuring is discussed relevant to both programming and management techniques.
On the computation of steady Hopper flows. II: von Mises materials in various geometries
NASA Astrophysics Data System (ADS)
Gremaud, Pierre A.; Matthews, John V.; O'Malley, Meghan
2004-11-01
Similarity solutions are constructed for the flow of granular materials through hoppers. Unlike previous work, the present approach applies to nonaxisymmetric containers. The model involves ten unknowns (stresses, velocity, and plasticity function) determined by nine nonlinear first order partial differential equations together with a quadratic algebraic constraint (yield condition). A pseudospectral discretization is applied; the resulting problem is solved with a trust region method. The important role of the hopper geometry on the flow is illustrated by several numerical experiments of industrial relevance.
Development of the Ion Exchange-Gravimetric Method for Sodium in Serum as a Definitive Method
Moody, John R.; Vetter, Thomas W.
1996-01-01
An ion exchange-gravimetric method, previously developed as a National Committee for Clinical Laboratory Standards (NCCLS) reference method for the determination of sodium in human serum, has been re-evaluated and improved. Sources of analytical error in this method have been examined more critically and the overall uncertainties decreased. Additionally, greater accuracy and repeatability have been achieved by the application of this definitive method to a sodium chloride reference material. In this method sodium in serum is ion-exchanged, selectively eluted and converted to a weighable precipitate as Na2SO4. Traces of sodium eluting before or after the main fraction, and precipitate contaminants are determined instrumentally. Co-precipitating contaminants contribute less than 0.1 % while the analyte lost to other eluted ion-exchange fractions contributes less than 0.02 % to the total precipitate mass. With improvements, the relative expanded uncertainty (k = 2) of the method, as applied to serum, is 0.3 % to 0.4 % and is less than 0.1 % when applied to a sodium chloride reference material. PMID:27805122
An automated method for tracking clouds in planetary atmospheres
NASA Astrophysics Data System (ADS)
Luz, D.; Berry, D. L.; Roos-Serote, M.
2008-05-01
We present an automated method for cloud tracking which can be applied to planetary images. The method is based on a digital correlator which compares two or more consecutive images and identifies patterns by maximizing correlations between image blocks. This approach bypasses the problem of feature detection. Four variations of the algorithm are tested on real cloud images of Jupiter's white ovals from the Galileo mission, previously analyzed in Vasavada et al. [Vasavada, A.R., Ingersoll, A.P., Banfield, D., Bell, M., Gierasch, P.J., Belton, M.J.S., Orton, G.S., Klaasen, K.P., Dejong, E., Breneman, H.H., Jones, T.J., Kaufman, J.M., Magee, K.P., Senske, D.A. 1998. Galileo imaging of Jupiter's atmosphere: the great red spot, equatorial region, and white ovals. Icarus, 135, 265, doi:10.1006/icar.1998.5984]. Direct correlation, using the sum of squared differences between image radiances as a distance estimator (baseline case), yields displacement vectors very similar to this previous analysis. Combining this distance estimator with the method of order ranks results in a technique which is more robust in the presence of outliers and noise and of better quality. Finally, we introduce a distance metric which, combined with order ranks, provides results of similar quality to the baseline case and is faster. The new approach can be applied to data from a number of space-based imaging instruments with a non-negligible gain in computing time.
A two-stage linear discriminant analysis via QR-decomposition.
Ye, Jieping; Li, Qi
2005-06-01
Linear Discriminant Analysis (LDA) is a well-known method for feature extraction and dimension reduction. It has been used widely in many applications involving high-dimensional data, such as image and text classification. An intrinsic limitation of classical LDA is the so-called singularity problems; that is, it fails when all scatter matrices are singular. Many LDA extensions were proposed in the past to overcome the singularity problems. Among these extensions, PCA+LDA, a two-stage method, received relatively more attention. In PCA+LDA, the LDA stage is preceded by an intermediate dimension reduction stage using Principal Component Analysis (PCA). Most previous LDA extensions are computationally expensive, and not scalable, due to the use of Singular Value Decomposition or Generalized Singular Value Decomposition. In this paper, we propose a two-stage LDA method, namely LDA/QR, which aims to overcome the singularity problems of classical LDA, while achieving efficiency and scalability simultaneously. The key difference between LDA/QR and PCA+LDA lies in the first stage, where LDA/QR applies QR decomposition to a small matrix involving the class centroids, while PCA+LDA applies PCA to the total scatter matrix involving all training data points. We further justify the proposed algorithm by showing the relationship among LDA/QR and previous LDA methods. Extensive experiments on face images and text documents are presented to show the effectiveness of the proposed algorithm.
A forward model-based validation of cardiovascular system identification
NASA Technical Reports Server (NTRS)
Mukkamala, R.; Cohen, R. J.
2001-01-01
We present a theoretical evaluation of a cardiovascular system identification method that we previously developed for the analysis of beat-to-beat fluctuations in noninvasively measured heart rate, arterial blood pressure, and instantaneous lung volume. The method provides a dynamical characterization of the important autonomic and mechanical mechanisms responsible for coupling the fluctuations (inverse modeling). To carry out the evaluation, we developed a computational model of the cardiovascular system capable of generating realistic beat-to-beat variability (forward modeling). We applied the method to data generated from the forward model and compared the resulting estimated dynamics with the actual dynamics of the forward model, which were either precisely known or easily determined. We found that the estimated dynamics corresponded to the actual dynamics and that this correspondence was robust to forward model uncertainty. We also demonstrated the sensitivity of the method in detecting small changes in parameters characterizing autonomic function in the forward model. These results provide confidence in the performance of the cardiovascular system identification method when applied to experimental data.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis.
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-12-10
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis.
Colon Stem Cell and Crypt Dynamics Exposed by Cell Lineage Reconstruction
Itzkovitz, Shalev; Elbaz, Judith; Maruvka, Yosef E.; Segev, Elad; Shlush, Liran I.; Dekel, Nava; Shapiro, Ehud
2011-01-01
Stem cell dynamics in vivo are often being studied by lineage tracing methods. Our laboratory has previously developed a retrospective method for reconstructing cell lineage trees from somatic mutations accumulated in microsatellites. This method was applied here to explore different aspects of stem cell dynamics in the mouse colon without the use of stem cell markers. We first demonstrated the reliability of our method for the study of stem cells by confirming previously established facts, and then we addressed open questions. Our findings confirmed that colon crypts are monoclonal and that, throughout adulthood, the process of monoclonal conversion plays a major role in the maintenance of crypts. The absence of immortal strand mechanism in crypts stem cells was validated by the age-dependent accumulation of microsatellite mutations. In addition, we confirmed the positive correlation between physical and lineage proximity of crypts, by showing that the colon is separated into small domains that share a common ancestor. We gained new data demonstrating that colon epithelium is clustered separately from hematopoietic and other cell types, indicating that the colon is constituted of few progenitors and ruling out significant renewal of colonic epithelium from hematopoietic cells during adulthood. Overall, our study demonstrates the reliability of cell lineage reconstruction for the study of stem cell dynamics, and it further addresses open questions in colon stem cells. In addition, this method can be applied to study stem cell dynamics in other systems. PMID:21829376
NASA Technical Reports Server (NTRS)
Hartung, Lin C.
1991-01-01
A method for predicting radiation adsorption and emission coefficients in thermochemical nonequilibrium flows is developed. The method is called the Langley optimized radiative nonequilibrium code (LORAN). It applies the smeared band approximation for molecular radiation to produce moderately detailed results and is intended to fill the gap between detailed but costly prediction methods and very fast but highly approximate methods. The optimization of the method to provide efficient solutions allowing coupling to flowfield solvers is discussed. Representative results are obtained and compared to previous nonequilibrium radiation methods, as well as to ground- and flight-measured data. Reasonable agreement is found in all cases. A multidimensional radiative transport method is also developed for axisymmetric flows. Its predictions for wall radiative flux are 20 to 25 percent lower than those of the tangent slab transport method, as expected, though additional investigation of the symmetry and outflow boundary conditions is indicated. The method was applied to the peak heating condition of the aeroassist flight experiment (AFE) trajectory, with results comparable to predictions from other methods. The LORAN method was also applied in conjunction with the computational fluid dynamics (CFD) code LAURA to study the sensitivity of the radiative heating prediction to various models used in nonequilibrium CFD. This study suggests that radiation measurements can provide diagnostic information about the detailed processes occurring in a nonequilibrium flowfield because radiation phenomena are very sensitive to these processes.
Electron beams scanning: A novel method
NASA Astrophysics Data System (ADS)
Askarbioki, M.; Zarandi, M. B.; Khakshournia, S.; Shirmardi, S. P.; Sharifian, M.
2018-06-01
In this research, a spatial electron beam scanning is reported. There are various methods for ion and electron beam scanning. The best known of these methods is the wire scanning wherein the parameters of beam are measured by one or more conductive wires. This article suggests a novel method for e-beam scanning without the previous errors of old wire scanning. In this method, the techniques of atomic physics are applied so that a knife edge has a scanner role and the wires have detector roles. It will determine the 2D e-beam profile readily when the positions of the scanner and detectors are specified.
Researcher’s Perspective of Substitution Method on Text Steganography
NASA Astrophysics Data System (ADS)
Zamir Mansor, Fawwaz; Mustapha, Aida; Azah Samsudin, Noor
2017-08-01
The linguistic steganography studies are still in the stage of development and empowerment practices. This paper will present several text steganography on substitution methods based on the researcher’s perspective, all scholar paper will analyse and compared. The objective of this paper is to give basic information in the substitution method of text domain steganography that has been applied by previous researchers. The typical ways of this method also will be identified in this paper to reveal the most effective method in text domain steganography. Finally, the advantage of the characteristic and drawback on these techniques in generally also presented in this paper.
System analysis in forest resources: proceedings of the 2003 symposium.
Michael Bevers; Tara M. Barrett
2005-01-01
The 2003 symposium of systems analysis in forest resources brought together researchers and practitioners who apply methods of optimization, simulation, management science, and systems analysis to forestry problems. This was the 10th symposium in the series, with previous conferences held in 1975, 1985, 1988, 1991, 1993, 1994, 1997, 2000, and 2002. The forty-two papers...
Direct measurement of carbon-14 in carbon dioxide by liquid scintillation counting
NASA Technical Reports Server (NTRS)
Horrocks, D. L.
1969-01-01
Liquid scintillation counting technique is applied to the direct measurement of carbon-14 in carbon dioxide. This method has high counting efficiency and eliminates many of the basic problems encountered with previous techniques. The technique can be used to achieve a percent substitution reaction and is of interest as an analytical technique.
ERIC Educational Resources Information Center
Marty, Laurence; Venturini, Patrice; Almqvist, Jonas
2018-01-01
Classroom actions rely, among other things, on teaching habits and traditions. Previous research has clarified three different teaching traditions in science education: the academic tradition builds on the idea that simply the products and methods of science are worth teaching; the applied tradition focuses on students' ability to use scientific…
Airflow Resistance of Loose-Fill Mineral Fiber Insulations in Retrofit Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schumacher, C. J.; Fox, M. J.; Lstiburek, J.
2015-02-01
This report expands on Building America Report 1109 by applying the experimental apparatus and test method to dense-pack retrofit applications using mineral fiber insulation materials. Three (3) fiber glass insulation materials and one (1) stone wool insulation material were tested, and the results compared to the cellulose results from the previous study.
Applying Knowledge-Based Methods to Design and Implement an Air Quality Workshop
Daniel L. Schmoldt; David L. Peterson
1991-01-01
In response to protection needs in class I wilderness areas, forest land managers of the USDA Forest Service must provide input to regulatory agencies regarding air pollutant impacts on air quality-related values. Regional workshops have been convened for land managers and scientists to discuss the aspects and extent of wilderness protection needs. Previous experience...
Local blur analysis and phase error correction method for fringe projection profilometry systems.
Rao, Li; Da, Feipeng
2018-05-20
We introduce a flexible error correction method for fringe projection profilometry (FPP) systems in the presence of local blur phenomenon. Local blur caused by global light transport such as camera defocus, projector defocus, and subsurface scattering will cause significant systematic errors in FPP systems. Previous methods, which adopt high-frequency patterns to separate the direct and global components, fail when the global light phenomenon occurs locally. In this paper, the influence of local blur on phase quality is thoroughly analyzed, and a concise error correction method is proposed to compensate the phase errors. For defocus phenomenon, this method can be directly applied. With the aid of spatially varying point spread functions and local frontal plane assumption, experiments show that the proposed method can effectively alleviate the system errors and improve the final reconstruction accuracy in various scenes. For a subsurface scattering scenario, if the translucent object is dominated by multiple scattering, the proposed method can also be applied to correct systematic errors once the bidirectional scattering-surface reflectance distribution function of the object material is measured.
McKeever, P E; Letica, L H; Shakui, P; Averill, D R
1988-09-01
Multiple wells (M-wells) have been made over tissue sections on single microscopic slides to simultaneously localize binding specificity of many antibodies. More than 20 individual 4-microliter wells over tissue have been applied/slide, representing more than a 5-fold improvement in wells/slide and a 25-fold reduction in reagent volume over previous methods. More than 30 wells/slide have been applied over cellular monolayers. To produce the improvement, previous strategies of placing specimens into wells were changed to instead create wells over the specimen. We took advantage of the hydrophobic properties of paint to surround the wells and to segregate the various different primary antibodies. Segregation was complete on wells alternating with and without primary monoclonal antibody. The procedure accommodates both frozen and paraffin sections, yielding slides which last more than a year. After monoclonal antibody detection, standard histologic stains can be applied as counterstains. M-wells are suitable for localizing binding of multiple reagents or sample unknowns (polyclonal or monoclonal antibodies, hybridoma supernatants, body fluids, lectins) to either tissues or cells. Their small sample volume and large number of sample wells/slide could be particularly useful for early screening of hybridoma supernatants and for titration curves in immunohistochemistry (McKeever PE, Shakui P, Letica LH, Averill DR: J Histochem Cytochem 36:931, 1988).
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-01-01
Background We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods can not solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without loosing robustness. Results We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Conclusion Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems. PMID:17081289
Novel metaheuristic for parameter estimation in nonlinear dynamic biological systems.
Rodriguez-Fernandez, Maria; Egea, Jose A; Banga, Julio R
2006-11-02
We consider the problem of parameter estimation (model calibration) in nonlinear dynamic models of biological systems. Due to the frequent ill-conditioning and multi-modality of many of these problems, traditional local methods usually fail (unless initialized with very good guesses of the parameter vector). In order to surmount these difficulties, global optimization (GO) methods have been suggested as robust alternatives. Currently, deterministic GO methods can not solve problems of realistic size within this class in reasonable computation times. In contrast, certain types of stochastic GO methods have shown promising results, although the computational cost remains large. Rodriguez-Fernandez and coworkers have presented hybrid stochastic-deterministic GO methods which could reduce computation time by one order of magnitude while guaranteeing robustness. Our goal here was to further reduce the computational effort without loosing robustness. We have developed a new procedure based on the scatter search methodology for nonlinear optimization of dynamic models of arbitrary (or even unknown) structure (i.e. black-box models). In this contribution, we describe and apply this novel metaheuristic, inspired by recent developments in the field of operations research, to a set of complex identification problems and we make a critical comparison with respect to the previous (above mentioned) successful methods. Robust and efficient methods for parameter estimation are of key importance in systems biology and related areas. The new metaheuristic presented in this paper aims to ensure the proper solution of these problems by adopting a global optimization approach, while keeping the computational effort under reasonable values. This new metaheuristic was applied to a set of three challenging parameter estimation problems of nonlinear dynamic biological systems, outperforming very significantly all the methods previously used for these benchmark problems.
Balla, Anusha; Cho, Kwan Hyung; Kim, Yu Chul; Maeng, Han-Joo
2018-03-30
A simple, sensitive, and reliable reversed-phase, Ultra-High-Pressure Liquid Chromatography (UHPLC) coupled with a Diode Array Detector (DAD) method for the simultaneous determination of Procainamide (PA) and its major metabolite, N -acetylprocainamide (NAPA), in rat plasma was developed and validated. A simple deproteinization method with methanol was applied to the rat plasma samples, which were analyzed using UHPLC equipped with DAD at 280 nm, and a Synergi™ 4 µm polar, reversed-phase column using 1% acetic acid (pH 5.5) and methanol (76:24, v / v ) as eluent in isocratic mode at a flow rate 0.2 mL/min. The method showed good linearity ( r ² > 0.998) over the concentration range of 20-100,000 and 20-10,000 ng/mL for PA and NAPA, respectively. Intra- and inter-day accuracies ranged from 97.7 to 110.9%, and precision was <10.5% for PA and 99.7 to 109.2 and <10.5%, respectively, for NAPA. The lower limit of quantification was 20 ng/mL for both compounds. This is the first report of the UHPLC-DAD bioanalytical method for simultaneous measurement of PA and NAPA. The most obvious advantage of this method over previously reported HPLC methods is that it requires small sample and injection volumes, with a straightforward, one-step sample preparation. It overcomes the limitations of previous methods, which use large sample volume and complex sample preparation. The devised method was successfully applied to the quantification of PA and NAPA after an intravenous bolus administration of 10 mg/kg procainamide hydrochloride to rats.
NASA Astrophysics Data System (ADS)
Luo, Wei; Jasiewicz, Jaroslaw; Stepinski, Tomasz; Wang, Jinfeng; Xu, Chengdong; Cang, Xuezhi
2016-01-01
Previous studies of land dissection density (D) often find contradictory results regarding factors controlling its spatial variation. We hypothesize that the dominant controlling factors (and the interactions between them) vary from region to region due to differences in each region's local characteristics and geologic history. We test this hypothesis by applying a geographical detector method to eight physiographic divisions of the conterminous United States and identify the dominant factor(s) in each. The geographical detector method computes the power of determinant (q) that quantitatively measures the affinity between the factor considered and D. Results show that the factor (or factor combination) with the largest q value is different for physiographic regions with different characteristics and geologic histories. For example, lithology dominates in mountainous regions, curvature dominates in plains, and glaciation dominates in previously glaciated areas. The geographical detector method offers an objective framework for revealing factors controlling Earth surface processes.
Effects of bioirrigation of non-biting midges (Diptera: Chironomidae) on lake sediment respiration
Baranov, Viktor; Lewandowski, Jörg; Romeijn, Paul; Singer, Gabriel; Krause, Stefan
2016-01-01
Bioirrigation or the transport of fluids into the sediment matrix due to the activities of organisms such as bloodworms (larvae of Diptera, Chironomidae), has substantial impacts on sediment respiration in lakes. However, previous quantifications of bioirrigation impacts of Chironomidae have been limited by technical challenges such as the difficulty to separate faunal and bacterial respiration. This paper describes a novel method based on the bioreactive tracer resazurin for measuring respiration in-situ in non-sealed systems with constant oxygen supply. Applying this new method in microcosm experiments revealed that bioirrigation enhanced sediment respiration by up to 2.5 times. The new method is yielding lower oxygen consumption than previously reported, as it is only sensitive to aerobic heterotrophous respiration and not to other processes causing oxygen decrease. Hence it decouples the quantification of respiration of animals and inorganic oxygen consumption from microbe respiration in sediment. PMID:27256514
Effects of bioirrigation of non-biting midges (Diptera: Chironomidae) on lake sediment respiration.
Baranov, Viktor; Lewandowski, Jörg; Romeijn, Paul; Singer, Gabriel; Krause, Stefan
2016-06-03
Bioirrigation or the transport of fluids into the sediment matrix due to the activities of organisms such as bloodworms (larvae of Diptera, Chironomidae), has substantial impacts on sediment respiration in lakes. However, previous quantifications of bioirrigation impacts of Chironomidae have been limited by technical challenges such as the difficulty to separate faunal and bacterial respiration. This paper describes a novel method based on the bioreactive tracer resazurin for measuring respiration in-situ in non-sealed systems with constant oxygen supply. Applying this new method in microcosm experiments revealed that bioirrigation enhanced sediment respiration by up to 2.5 times. The new method is yielding lower oxygen consumption than previously reported, as it is only sensitive to aerobic heterotrophous respiration and not to other processes causing oxygen decrease. Hence it decouples the quantification of respiration of animals and inorganic oxygen consumption from microbe respiration in sediment.
Effects of bioirrigation of non-biting midges (Diptera: Chironomidae) on lake sediment respiration
NASA Astrophysics Data System (ADS)
Baranov, Viktor; Lewandowski, Jörg; Romeijn, Paul; Singer, Gabriel; Krause, Stefan
2016-06-01
Bioirrigation or the transport of fluids into the sediment matrix due to the activities of organisms such as bloodworms (larvae of Diptera, Chironomidae), has substantial impacts on sediment respiration in lakes. However, previous quantifications of bioirrigation impacts of Chironomidae have been limited by technical challenges such as the difficulty to separate faunal and bacterial respiration. This paper describes a novel method based on the bioreactive tracer resazurin for measuring respiration in-situ in non-sealed systems with constant oxygen supply. Applying this new method in microcosm experiments revealed that bioirrigation enhanced sediment respiration by up to 2.5 times. The new method is yielding lower oxygen consumption than previously reported, as it is only sensitive to aerobic heterotrophous respiration and not to other processes causing oxygen decrease. Hence it decouples the quantification of respiration of animals and inorganic oxygen consumption from microbe respiration in sediment.
Prediction of axial limit capacity of stone columns using dimensional analysis
NASA Astrophysics Data System (ADS)
Nazaruddin A., T.; Mohamed, Zainab; Mohd Azizul, L.; Hafez M., A.
2017-08-01
Stone column is the most favorable method used by engineers in designing work for stabilization of soft ground for road embankment, and foundation for liquid structure. Easy installation and cheaper cost are among the factors that make stone column more preferable than other method. Furthermore, stone column also can acts as vertical drain to increase the rate of consolidation during preloading stage before construction work started. According to previous studied there are several parameters that influence the capacity of stone column. Among of them are angle friction of among the stones, arrangement of column (two pattern arrangement most applied triangular and square), spacing center to center between columns, shear strength of soil, and physical size of column (diameter and length). Dimensional analysis method (Buckingham-Pi Theorem) has used to carry out the new formula for prediction of load capacity stone columns. Experimental data from two previous studies was used for analysis of study.
Parser Combinators: a Practical Application for Generating Parsers for NMR Data
Fenwick, Matthew; Weatherby, Gerard; Ellis, Heidi JC; Gryk, Michael R.
2013-01-01
Nuclear Magnetic Resonance (NMR) spectroscopy is a technique for acquiring protein data at atomic resolution and determining the three-dimensional structure of large protein molecules. A typical structure determination process results in the deposition of a large data sets to the BMRB (Bio-Magnetic Resonance Data Bank). This data is stored and shared in a file format called NMR-Star. This format is syntactically and semantically complex making it challenging to parse. Nevertheless, parsing these files is crucial to applying the vast amounts of biological information stored in NMR-Star files, allowing researchers to harness the results of previous studies to direct and validate future work. One powerful approach for parsing files is to apply a Backus-Naur Form (BNF) grammar, which is a high-level model of a file format. Translation of the grammatical model to an executable parser may be automatically accomplished. This paper will show how we applied a model BNF grammar of the NMR-Star format to create a free, open-source parser, using a method that originated in the functional programming world known as “parser combinators”. This paper demonstrates the effectiveness of a principled approach to file specification and parsing. This paper also builds upon our previous work [1], in that 1) it applies concepts from Functional Programming (which is relevant even though the implementation language, Java, is more mainstream than Functional Programming), and 2) all work and accomplishments from this project will be made available under standard open source licenses to provide the community with the opportunity to learn from our techniques and methods. PMID:24352525
Kent, R M; Guinane, C M; O'Connor, P M; Fitzgerald, G F; Hill, C; Stanton, C; Ross, R P
2012-08-01
The aim of this study was to identify Bacillus isolates capable of degrading sodium caseinate and subsequently to generate bioactive peptides with antimicrobial activity. Sodium caseinate (2.5% w/v) was inoculated separately with 16 Bacillus isolates and allowed to ferment overnight. Protein breakdown in the fermentates was analysed using gel permeation-HPLC (GP-HPLC) and screened for peptides (<3-kDa) with MALDI-TOF mass spectrometry. Caseicin A (IKHQGLPQE) and caseicin B (VLNENLLR), two previously characterized antimicrobial peptides, were identified in the fermentates of both Bacillus cereus and Bacillus thuringiensis isolates. The caseicin peptides were subsequently purified by RP-HPLC and antimicrobial assays indicated that the peptides maintained the previously identified inhibitory activity against the infant formula pathogen Cronobacter sakazakii. We report a new method using Bacillus sp. to generate two previously characterized antimicrobial peptides from casein. This study highlights the potential to exploit Bacillus sp. or the enzymes they produce for the generation of bioactive antimicrobial peptides from bovine casein. © 2012 The Authors. Letters in Applied Microbiology © 2012 The Society for Applied Microbiology.
An incremental DPMM-based method for trajectory clustering, modeling, and retrieval.
Hu, Weiming; Li, Xi; Tian, Guodong; Maybank, Stephen; Zhang, Zhongfei
2013-05-01
Trajectory analysis is the basis for many applications, such as indexing of motion events in videos, activity recognition, and surveillance. In this paper, the Dirichlet process mixture model (DPMM) is applied to trajectory clustering, modeling, and retrieval. We propose an incremental version of a DPMM-based clustering algorithm and apply it to cluster trajectories. An appropriate number of trajectory clusters is determined automatically. When trajectories belonging to new clusters arrive, the new clusters can be identified online and added to the model without any retraining using the previous data. A time-sensitive Dirichlet process mixture model (tDPMM) is applied to each trajectory cluster for learning the trajectory pattern which represents the time-series characteristics of the trajectories in the cluster. Then, a parameterized index is constructed for each cluster. A novel likelihood estimation algorithm for the tDPMM is proposed, and a trajectory-based video retrieval model is developed. The tDPMM-based probabilistic matching method and the DPMM-based model growing method are combined to make the retrieval model scalable and adaptable. Experimental comparisons with state-of-the-art algorithms demonstrate the effectiveness of our algorithm.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderton, Christopher R.; Chu, Rosalie K.; Tolic, Nikola
The ability to visualize biochemical interactions between microbial communities using MALDI MSI has provided tremendous insights into a variety of biological fields. Matrix application using a sieve proved to be incredibly useful, but it had many limitations that include uneven matrix coverage and limitation in the types of matrices one could employ in their studies. Recently, there has been a concerted effort to improve matrix application for studying agar plated microbial cultures, many of which utilized automated matrix sprayers. Here, we describe the usefulness of using a robotic sprayer for matrix application. The robotic sprayer has two-dimensional control over wheremore » matrix is applied and a heated capillary that allows for rapid drying of the applied matrix. This method provided a significant increase in MALDI sensitivity over the sieve method, as demonstrated by FT-ICR MS analysis, facilitating the ability to gain higher lateral resolution MS images of Bacillus Subtilis than previously reported. This method also allowed for the use of different matrices to be applied to the culture surfaces.« less
White, Cynthia; Mao, Zhiyuan; Savage, Van M.
2016-01-01
Interactions among drugs play a critical role in the killing efficacy of multi-drug treatments. Recent advances in theory and experiment for three-drug interactions enable the search for emergent interactions—ones not predictable from pairwise interactions. Previous work has shown it is easier to detect synergies and antagonisms among pairwise interactions when a rescaling method is applied to the interaction metric. However, no study has carefully examined whether new types of normalization might be needed for emergence. Here, we propose several rescaling methods for enhancing the classification of the higher order drug interactions based on our conceptual framework. To choose the rescaling that best separates synergism, antagonism and additivity, we conducted bacterial growth experiments in the presence of single, pairwise and triple-drug combinations among 14 antibiotics. We found one of our rescaling methods is far better at distinguishing synergistic and antagonistic emergent interactions than any of the other methods. Using our new method, we find around 50% of emergent interactions are additive, much less than previous reports of greater than 90% additivity. We conclude that higher order emergent interactions are much more common than previously believed, and we argue these findings for drugs suggest that appropriate rescaling is crucial to infer higher order interactions. PMID:27278366
NASA Technical Reports Server (NTRS)
Balasubramanian, R.; Norrie, D. H.; De Vries, G.
1979-01-01
Abel's integral equation is the governing equation for certain problems in physics and engineering, such as radiation from distributed sources. The finite element method for the solution of this non-linear equation is presented for problems with cylindrical symmetry and the extension to more general integral equations is indicated. The technique was applied to an axisymmetric glow discharge problem and the results show excellent agreement with previously obtained solutions
NASA Technical Reports Server (NTRS)
Johnson, R. A.; Wehrly, T.
1976-01-01
Population models for dependence between two angular measurements and for dependence between an angular and a linear observation are proposed. The method of canonical correlations first leads to new population and sample measures of dependence in this latter situation. An example relating wind direction to the level of a pollutant is given. Next, applied to pairs of angular measurements, the method yields previously proposed sample measures in some special cases and a new sample measure in general.
Fourier-space combination of Planck and Herschel images
NASA Astrophysics Data System (ADS)
Abreu-Vicente, J.; Stutz, A.; Henning, Th.; Keto, E.; Ballesteros-Paredes, J.; Robitaille, T.
2017-08-01
Context. Herschel has revolutionized our ability to measure column densities (NH) and temperatures (T) of molecular clouds thanks to its far infrared multiwavelength coverage. However, the lack of a well defined background intensity level in the Herschel data limits the accuracy of the NH and T maps. Aims: We aim to provide a method that corrects the missing Herschel background intensity levels using the Planck model for foreground Galactic thermal dust emission. For the Herschel/PACS data, both the constant-offset as well as the spatial dependence of the missing background must be addressed. For the Herschel/SPIRE data, the constant-offset correction has already been applied to the archival data so we are primarily concerned with the spatial dependence, which is most important at 250 μm. Methods: We present a Fourier method that combines the publicly available Planck model on large angular scales with the Herschel images on smaller angular scales. Results: We have applied our method to two regions spanning a range of Galactic environments: Perseus and the Galactic plane region around l = 11deg (HiGal-11). We post-processed the combined dust continuum emission images to generate column density and temperature maps. We compared these to previously adopted constant-offset corrections. We find significant differences (≳20%) over significant ( 15%) areas of the maps, at low column densities (NH ≲ 1022 cm-2) and relatively high temperatures (T ≳ 20 K). We have also applied our method to synthetic observations of a simulated molecular cloud to validate our method. Conclusions: Our method successfully corrects the Herschel images, including both the constant-offset intensity level and the scale-dependent background variations measured by Planck. Our method improves the previous constant-offset corrections, which did not account for variations in the background emission levels. The image FITS files used in this paper are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/604/A65
Self-calibrating models for dynamic monitoring and diagnosis
NASA Technical Reports Server (NTRS)
Kuipers, Benjamin
1994-01-01
The present goal in qualitative reasoning is to develop methods for automatically building qualitative and semiquantitative models of dynamic systems and to use them for monitoring and fault diagnosis. The qualitative approach to modeling provides a guarantee of coverage while our semiquantitative methods support convergence toward a numerical model as observations are accumulated. We have developed and applied methods for automatic creation of qualitative models, developed two methods for obtaining tractable results on problems that were previously intractable for qualitative simulation, and developed more powerful methods for learning semiquantitative models from observations and deriving semiquantitative predictions from them. With these advances, qualitative reasoning comes significantly closer to realizing its aims as a practical engineering method.
Where do Students Go Wrong in Applying the Scientific Method?
NASA Astrophysics Data System (ADS)
Rubbo, Louis; Moore, Christopher
2015-04-01
Non-science majors completing a liberal arts degree are frequently required to take a science course. Ideally with the completion of a required science course, liberal arts students should demonstrate an improved capability in the application of the scientific method. In previous work we have demonstrated that this is possible if explicit instruction is spent on the development of scientific reasoning skills. However, even with explicit instruction, students still struggle to apply the scientific process. Counter to our expectations, the difficulty is not isolated to a single issue such as stating a testable hypothesis, designing an experiment, or arriving at a supported conclusion. Instead students appear to struggle with every step in the process. This talk summarizes our work looking at and identifying where students struggle in the application of the scientific method. This material is based upon work supported by the National Science Foundation under Grant No. 1244801.
A Lyapunov method for stability analysis of piecewise-affine systems over non-invariant domains
NASA Astrophysics Data System (ADS)
Rubagotti, Matteo; Zaccarian, Luca; Bemporad, Alberto
2016-05-01
This paper analyses stability of discrete-time piecewise-affine systems, defined on possibly non-invariant domains, taking into account the possible presence of multiple dynamics in each of the polytopic regions of the system. An algorithm based on linear programming is proposed, in order to prove exponential stability of the origin and to find a positively invariant estimate of its region of attraction. The results are based on the definition of a piecewise-affine Lyapunov function, which is in general discontinuous on the boundaries of the regions. The proposed method is proven to lead to feasible solutions in a broader range of cases as compared to a previously proposed approach. Two numerical examples are shown, among which a case where the proposed method is applied to a closed-loop system, to which model predictive control was applied without a-priori guarantee of stability.
Extrapolation techniques applied to matrix methods in neutron diffusion problems
NASA Technical Reports Server (NTRS)
Mccready, Robert R
1956-01-01
A general matrix method is developed for the solution of characteristic-value problems of the type arising in many physical applications. The scheme employed is essentially that of Gauss and Seidel with appropriate modifications needed to make it applicable to characteristic-value problems. An iterative procedure produces a sequence of estimates to the answer; and extrapolation techniques, based upon previous behavior of iterants, are utilized in speeding convergence. Theoretically sound limits are placed on the magnitude of the extrapolation that may be tolerated. This matrix method is applied to the problem of finding criticality and neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron fluxes in a nuclear reactor with control rods. The two-dimensional finite-difference approximation to the two-group neutron-diffusion equations is treated. Results for this example are indicated.
Islam, M T; Trevorah, R M; Appadoo, D R T; Best, S P; Chantler, C T
2017-04-15
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χ r 2 analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are applied to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d 10 ) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7K through 353K. Copyright © 2017 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Islam, M. T.; Trevorah, R. M.; Appadoo, D. R. T.; Best, S. P.; Chantler, C. T.
2017-04-01
We present methodology for the first FTIR measurements of ferrocene using dilute wax solutions for dispersion and to preserve non-crystallinity; a new method for removal of channel spectra interference for high quality data; and a consistent approach for the robust estimation of a defined uncertainty for advanced structural χr2 analysis and mathematical hypothesis testing. While some of these issues have been investigated previously, the combination of novel approaches gives markedly improved results. Methods for addressing these in the presence of a modest signal and how to quantify the quality of the data irrespective of preprocessing for subsequent hypothesis testing are applied to the FTIR spectra of Ferrocene (Fc) and deuterated ferrocene (dFc, Fc-d10) collected at the THz/Far-IR beam-line of the Australian Synchrotron at operating temperatures of 7 K through 353 K.
The colorimetric analysis of anti-tuberculosis fixed-dose combination tablets and capsules.
Ellard, G A
1999-11-01
The perceived need to demonstrate whether or not the actual amounts of rifampicin, isoniazid and pyrazinamide in fixed-dose combination tablets or capsules correspond to their stated drug contents. To adapt specific, robust and simple colorimetric methods that have been previously applied to measuring plasma and urinary rifampicin, isoniazid, pyrazinamide and ethambutol concentrations to estimate tablet and capsule drug contents. The methods were applied to the analysis of 14 commercially manufactured fixed-dose combinations: two capsule and three tablet formulations containing rifampicin and isoniazid; seven tablet formulations containing rifampicin, isoniazid and pyrazinamide; and two tablet formulations containing rifampicin, isoniazid, pyrazinamide and ethambutol. All the combined formulations contained near to their stated drug contents. Replicate analyses confirmed the excellent precision of the drug analyses. Such methods are not only rapid to perform but should be practical in many Third World situations with relatively modest laboratory facilities.
Reconstructing signals from noisy data with unknown signal and noise covariance.
Oppermann, Niels; Robbers, Georg; Ensslin, Torsten A
2011-10-01
We derive a method to reconstruct Gaussian signals from linear measurements with Gaussian noise. This new algorithm is intended for applications in astrophysics and other sciences. The starting point of our considerations is the principle of minimum Gibbs free energy, which was previously used to derive a signal reconstruction algorithm handling uncertainties in the signal covariance. We extend this algorithm to simultaneously uncertain noise and signal covariances using the same principles in the derivation. The resulting equations are general enough to be applied in many different contexts. We demonstrate the performance of the algorithm by applying it to specific example situations and compare it to algorithms not allowing for uncertainties in the noise covariance. The results show that the method we suggest performs very well under a variety of circumstances and is indeed qualitatively superior to the other methods in cases where uncertainty in the noise covariance is present.
Incremental Transductive Learning Approaches to Schistosomiasis Vector Classification
NASA Astrophysics Data System (ADS)
Fusco, Terence; Bi, Yaxin; Wang, Haiying; Browne, Fiona
2016-08-01
The key issues pertaining to collection of epidemic disease data for our analysis purposes are that it is a labour intensive, time consuming and expensive process resulting in availability of sparse sample data which we use to develop prediction models. To address this sparse data issue, we present the novel Incremental Transductive methods to circumvent the data collection process by applying previously acquired data to provide consistent, confidence-based labelling alternatives to field survey research. We investigated various reasoning approaches for semi-supervised machine learning including Bayesian models for labelling data. The results show that using the proposed methods, we can label instances of data with a class of vector density at a high level of confidence. By applying the Liberal and Strict Training Approaches, we provide a labelling and classification alternative to standalone algorithms. The methods in this paper are components in the process of reducing the proliferation of the Schistosomiasis disease and its effects.
Automated anatomical labeling method for abdominal arteries extracted from 3D abdominal CT images
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Hoang, Bui Huy; Kitasaka, Takayuki; Misawa, Kazunari; Fujiwara, Michitaka; Mori, Kensaku
2012-02-01
This paper presents an automated anatomical labeling method of abdominal arteries. In abdominal surgery, understanding of blood vessel structure concerning with a target organ is very important. Branching pattern of blood vessels differs among individuals. It is required to develop a system that can assist understanding of a blood vessel structure and anatomical names of blood vessels of a patient. Previous anatomical labbeling methods for abdominal arteries deal with either of the upper or lower abdominal arteries. In this paper, we present an automated anatomical labeling method of both of the upper and lower abdominal arteries extracted from CT images. We obtain a tree structure of artery regions and calculate feature values for each branch. These feature values include the diameter, curvature, direction, and running vectors of a branch. Target arteries of this method are grouped based on branching conditions. The following processes are separately applied for each group. We compute candidate artery names by using classifiers that are trained to output artery names. A correction process of the candidate anatomical names based on the rule of majority is applied to determine final names. We applied the proposed method to 23 cases of 3D abdominal CT images. Experimental results showed that the proposed method is able to perform nomenclature of entire major abdominal arteries. The recall and the precision rates of labeling are 79.01% and 80.41%, respectively.
Spreco, A; Eriksson, O; Dahlström, Ö; Timpka, T
2017-07-01
Methods for the detection of influenza epidemics and prediction of their progress have seldom been comparatively evaluated using prospective designs. This study aimed to perform a prospective comparative trial of algorithms for the detection and prediction of increased local influenza activity. Data on clinical influenza diagnoses recorded by physicians and syndromic data from a telenursing service were used. Five detection and three prediction algorithms previously evaluated in public health settings were calibrated and then evaluated over 3 years. When applied on diagnostic data, only detection using the Serfling regression method and prediction using the non-adaptive log-linear regression method showed acceptable performances during winter influenza seasons. For the syndromic data, none of the detection algorithms displayed a satisfactory performance, while non-adaptive log-linear regression was the best performing prediction method. We conclude that evidence was found for that available algorithms for influenza detection and prediction display satisfactory performance when applied on local diagnostic data during winter influenza seasons. When applied on local syndromic data, the evaluated algorithms did not display consistent performance. Further evaluations and research on combination of methods of these types in public health information infrastructures for 'nowcasting' (integrated detection and prediction) of influenza activity are warranted.
Suleimanov, Yury V; Green, William H
2015-09-08
We present a simple protocol which allows fully automated discovery of elementary chemical reaction steps using in cooperation double- and single-ended transition-state optimization algorithms--the freezing string and Berny optimization methods, respectively. To demonstrate the utility of the proposed approach, the reactivity of several single-molecule systems of combustion and atmospheric chemistry importance is investigated. The proposed algorithm allowed us to detect without any human intervention not only "known" reaction pathways, manually detected in the previous studies, but also new, previously "unknown", reaction pathways which involve significant atom rearrangements. We believe that applying such a systematic approach to elementary reaction path finding will greatly accelerate the discovery of new chemistry and will lead to more accurate computer simulations of various chemical processes.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Subekti, M.; Center for Development of Reactor Safety Technology, National Nuclear Energy Agency of Indonesia, Puspiptek Complex BO.80, Serpong-Tangerang, 15340; Ohno, T.
2006-07-01
The neuro-expert has been utilized in previous monitoring-system research of Pressure Water Reactor (PWR). The research improved the monitoring system by utilizing neuro-expert, conventional noise analysis and modified neural networks for capability extension. The parallel method applications required distributed architecture of computer-network for performing real-time tasks. The research aimed to improve the previous monitoring system, which could detect sensor degradation, and to perform the monitoring demonstration in High Temperature Engineering Tested Reactor (HTTR). The developing monitoring system based on some methods that have been tested using the data from online PWR simulator, as well as RSG-GAS (30 MW research reactormore » in Indonesia), will be applied in HTTR for more complex monitoring. (authors)« less
ELM: AN ALGORITHM TO ESTIMATE THE ALPHA ABUNDANCE FROM LOW-RESOLUTION SPECTRA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bu, Yude; Zhao, Gang; Kumar, Yerra Bharat
We have investigated a novel methodology using the extreme learning machine (ELM) algorithm to determine the α abundance of stars. Applying two methods based on the ELM algorithm—ELM+spectra and ELM+Lick indices—to the stellar spectra from the ELODIE database, we measured the α abundance with a precision better than 0.065 dex. By applying these two methods to the spectra with different signal-to-noise ratios (S/Ns) and different resolutions, we found that ELM+spectra is more robust against degraded resolution and ELM+Lick indices is more robust against variation in S/N. To further validate the performance of ELM, we applied ELM+spectra and ELM+Lick indices to SDSSmore » spectra and estimated α abundances with a precision around 0.10 dex, which is comparable to the results given by the SEGUE Stellar Parameter Pipeline. We further applied ELM to the spectra of stars in Galactic globular clusters (M15, M13, M71) and open clusters (NGC 2420, M67, NGC 6791), and results show good agreement with previous studies (within 1σ). A comparison of the ELM with other widely used methods including support vector machine, Gaussian process regression, artificial neural networks, and linear least-squares regression shows that ELM is efficient with computational resources and more accurate than other methods.« less
Goodarzi, Mohammad; Jensen, Richard; Vander Heyden, Yvan
2012-12-01
A Quantitative Structure-Retention Relationship (QSRR) is proposed to estimate the chromatographic retention of 83 diverse drugs on a Unisphere poly butadiene (PBD) column, using isocratic elutions at pH 11.7. Previous work has generated QSRR models for them using Classification And Regression Trees (CART). In this work, Ant Colony Optimization is used as a feature selection method to find the best molecular descriptors from a large pool. In addition, several other selection methods have been applied, such as Genetic Algorithms, Stepwise Regression and the Relief method, not only to evaluate Ant Colony Optimization as a feature selection method but also to investigate its ability to find the important descriptors in QSRR. Multiple Linear Regression (MLR) and Support Vector Machines (SVMs) were applied as linear and nonlinear regression methods, respectively, giving excellent correlation between the experimental, i.e. extrapolated to a mobile phase consisting of pure water, and predicted logarithms of the retention factors of the drugs (logk(w)). The overall best model was the SVM one built using descriptors selected by ACO. Copyright © 2012 Elsevier B.V. All rights reserved.
Conformal mapping for multiple terminals
Wang, Weimin; Ma, Wenying; Wang, Qiang; Ren, Hao
2016-01-01
Conformal mapping is an important mathematical tool that can be used to solve various physical and engineering problems in many fields, including electrostatics, fluid mechanics, classical mechanics, and transformation optics. It is an accurate and convenient way to solve problems involving two terminals. However, when faced with problems involving three or more terminals, which are more common in practical applications, existing conformal mapping methods apply assumptions or approximations. A general exact method does not exist for a structure with an arbitrary number of terminals. This study presents a conformal mapping method for multiple terminals. Through an accurate analysis of boundary conditions, additional terminals or boundaries are folded into the inner part of a mapped region. The method is applied to several typical situations, and the calculation process is described for two examples of an electrostatic actuator with three electrodes and of a light beam splitter with three ports. Compared with previously reported results, the solutions for the two examples based on our method are more precise and general. The proposed method is helpful in promoting the application of conformal mapping in analysis of practical problems. PMID:27830746
NASA Technical Reports Server (NTRS)
Ahn, Kyung H.
1994-01-01
The RNG-based algebraic turbulence model, with a new method of solving the cubic equation and applying new length scales, is introduced. An analysis is made of the RNG length scale which was previously reported and the resulting eddy viscosity is compared with those from other algebraic turbulence models. Subsequently, a new length scale is introduced which actually uses the two previous RNG length scales in a systematic way to improve the model performance. The performance of the present RNG model is demonstrated by simulating the boundary layer flow over a flat plate and the flow over an airfoil.
Development of a model for predicting NASA/MSFC program success
NASA Technical Reports Server (NTRS)
Riggs, Jeffrey; Miller, Tracy; Finley, Rosemary
1990-01-01
Research conducted during the execution of a previous contract (NAS8-36955/0039) firmly established the feasibility of developing a tool to aid decision makers in predicting the potential success of proposed projects. The final report from that investigation contains an outline of the method to be applied in developing this Project Success Predictor Model. As a follow-on to the previous study, this report describes in detail the development of this model and includes full explanation of the data-gathering techniques used to poll expert opinion. The report includes the presentation of the model code itself.
Yamashita, Kunihiko; Shinoda, Shinsuke; Hagiwara, Saori; Itagaki, Hiroshi
2015-04-01
To date, there has been no well-established local lymph node assay (LLNA) that includes an elicitation phase. Therefore, we developed a modified local lymph node assay with an elicitation phase (LLNA:DAE) to discriminate true skin sensitizers from chemicals that gave borderline positive results and previously reported this assay. To develop the LLNA:DAE method as a useful stand-alone testing method, we investigated the complete procedure for the LLNA:DAE method using hexyl cinnamic aldehyde (HCA), isoeugenol, and 2,4-dinitrochlorobenzene (DNCB) as test compounds. We defined the LLNA:DAE procedure as follows: in the dose-finding test, four concentrations of chemical applied to dorsum of the right ear on days 1, 2, and 3 and dorsum of both ears on day 10. Ear thickness and skin irritation score were measured on days 1, 3, 5, 10, and 12. Local lymph nodes were excised and weighed on day 12. The test dose for the primary LLNA:DAE study was selected as the dose that gave the highest left ear lymph node weight in the dose-finding study, or the lowest dose that produced a left ear lymph node of over 4 mg. This procedure was validated using nine different chemicals. Furthermore, qualitative relationship was observed between the degree of elicitation response in the left ear lymph node and the skin sensitizing potency of 32 chemicals tested in this study and the previous study. These results indicated that LLNA:DAE method was as first LLNA method that was able to evaluate the skin sensitizing potential and potency in elicitation response.
Detection and 3D representation of pulmonary air bubbles in HRCT volumes
NASA Astrophysics Data System (ADS)
Silva, Jose S.; Silva, Augusto F.; Santos, Beatriz S.; Madeira, Joaquim
2003-05-01
Bubble emphysema is a disease characterized by the presence of air bubbles within the lungs. With the purpose of identifying pulmonary air bubbles, two alternative methods were developed, using High Resolution Computer Tomography (HRCT) exams. The search volume is confined to the pulmonary volume through a previously developed pulmonary contour detection algorithm. The first detection method follows a slice by slice approach and uses selection criteria based on the Hounsfield levels, dimensions, shape and localization of the bubbles. Candidate regions that do not exhibit axial coherence along at least two sections are excluded. Intermediate sections are interpolated for a more realistic representation of lungs and bubbles. The second detection method, after the pulmonary volume delimitation, follows a fully 3D approach. A global threshold is applied to the entire lung volume returning candidate regions. 3D morphologic operators are used to remove spurious structures and to circumscribe the bubbles. Bubble representation is accomplished by two alternative methods. The first generates bubble surfaces based on the voxel volumes previously detected; the second method assumes that bubbles are approximately spherical. In order to obtain better 3D representations, fits super-quadrics to bubble volume. The fitting process is based on non-linear least squares optimization method, where a super-quadric is adapted to a regular grid of points defined on each bubble. All methods were applied to real and semi-synthetical data where artificial and randomly deformed bubbles were embedded in the interior of healthy lungs. Quantitative results regarding bubble geometric features are either similar to a priori known values used in simulation tests, or indicate clinically acceptable dimensions and locations when dealing with real data.
Real-Time Frequency Response Estimation Using Joined-Wing SensorCraft Aeroelastic Wind-Tunnel Data
NASA Technical Reports Server (NTRS)
Grauer, Jared A; Heeg, Jennifer; Morelli, Eugene A
2012-01-01
A new method is presented for estimating frequency responses and their uncertainties from wind-tunnel data in real time. The method uses orthogonal phase-optimized multi- sine excitation inputs and a recursive Fourier transform with a least-squares estimator. The method was first demonstrated with an F-16 nonlinear flight simulation and results showed that accurate short period frequency responses were obtained within 10 seconds. The method was then applied to wind-tunnel data from a previous aeroelastic test of the Joined- Wing SensorCraft. Frequency responses describing bending strains from simultaneous control surface excitations were estimated in a time-efficient manner.
Validation of catchment models for predicting land-use and climate change impacts. 1. Method
NASA Astrophysics Data System (ADS)
Ewen, J.; Parkin, G.
1996-02-01
Computer simulation models are increasingly being proposed as tools capable of giving water resource managers accurate predictions of the impact of changes in land-use and climate. Previous validation testing of catchment models is reviewed, and it is concluded that the methods used do not clearly test a model's fitness for such a purpose. A new generally applicable method is proposed. This involves the direct testing of fitness for purpose, uses established scientific techniques, and may be implemented within a quality assured programme of work. The new method is applied in Part 2 of this study (Parkin et al., J. Hydrol., 175:595-613, 1996).
Tugwell, Peter; Pottie, Kevin; Welch, Vivian; Ueffing, Erin; Chambers, Andrea; Feightner, John
2011-01-01
Background: This article describes the evidence review and guideline development method developed for the Clinical Preventive Guidelines for Immigrants and Refugees in Canada by the Canadian Collaboration for Immigrant and Refugee Health Guideline Committee. Methods: The Appraisal of Guidelines for Research and Evaluation (AGREE) best-practice framework was combined with the recently developed Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to produce evidence-based clinical guidelines for immigrants and refugees in Canada. Results: A systematic approach was designed to produce the evidence reviews and apply the GRADE approach, including building on evidence from previous systematic reviews, searching for and comparing evidence between general and specific immigrant populations, and applying the GRADE criteria for making recommendations. This method was used for priority health conditions that had been selected by practitioners caring for immigrants and refugees in Canada. Interpretation: This article outlines the 14-step method that was defined to standardize the guideline development process for each priority health condition. PMID:20573711
Torres, Ana M; Lopez, Jose J; Pueo, Basilio; Cobos, Maximo
2013-04-01
Plane-wave decomposition (PWD) methods using microphone arrays have been shown to be a very useful tool within the applied acoustics community for their multiple applications in room acoustics analysis and synthesis. While many theoretical aspects of PWD have been previously addressed in the literature, the practical advantages of the PWD method to assess the acoustic behavior of real rooms have been barely explored so far. In this paper, the PWD method is employed to analyze the sound field inside a selected set of real rooms having a well-defined purpose. To this end, a circular microphone array is used to capture and process a number of impulse responses at different spatial positions, providing angle-dependent data for both direct and reflected wavefronts. The detection of reflected plane waves is performed by means of image processing techniques applied over the raw array response data and over the PWD data, showing the usefulness of image-processing-based methods for room acoustics analysis.
Valtierra, Robert D; Glynn Holt, R; Cholewiak, Danielle; Van Parijs, Sofie M
2013-09-01
Multipath localization techniques have not previously been applied to baleen whale vocalizations due to difficulties in application to tonal vocalizations. Here it is shown that an autocorrelation method coupled with the direct reflected time difference of arrival localization technique can successfully resolve location information. A derivation was made to model the autocorrelation of a direct signal and its overlapping reflections to illustrate that an autocorrelation may be used to extract reflection information from longer duration signals containing a frequency sweep, such as some calls produced by baleen whales. An analysis was performed to characterize the difference in behavior of the autocorrelation when applied to call types with varying parameters (sweep rate, call duration). The method's feasibility was tested using data from playback transmissions to localize an acoustic transducer at a known depth and location. The method was then used to estimate the depth and range of a single North Atlantic right whale (Eubalaena glacialis) and humpback whale (Megaptera novaeangliae) from two separate experiments.
Lattice hydrodynamic model based traffic control: A transportation cyber-physical system approach
NASA Astrophysics Data System (ADS)
Liu, Hui; Sun, Dihua; Liu, Weining
2016-11-01
Lattice hydrodynamic model is a typical continuum traffic flow model, which describes the jamming transition of traffic flow properly. Previous studies in lattice hydrodynamic model have shown that the use of control method has the potential to improve traffic conditions. In this paper, a new control method is applied in lattice hydrodynamic model from a transportation cyber-physical system approach, in which only one lattice site needs to be controlled in this control scheme. The simulation verifies the feasibility and validity of this method, which can ensure the efficient and smooth operation of the traffic flow.
An analytical method to predict efficiency of aircraft gearboxes
NASA Technical Reports Server (NTRS)
Anderson, N. E.; Loewenthal, S. H.; Black, J. D.
1984-01-01
A spur gear efficiency prediction method previously developed by the authors was extended to include power loss of planetary gearsets. A friction coefficient model was developed for MIL-L-7808 oil based on disc machine data. This combined with the recent capability of predicting losses in spur gears of nonstandard proportions allows the calculation of power loss for complete aircraft gearboxes that utilize spur gears. The method was applied to the T56/501 turboprop gearbox and compared with measured test data. Bearing losses were calculated with large scale computer programs. Breakdowns of the gearbox losses point out areas for possible improvement.
NASA Astrophysics Data System (ADS)
Chen, Ming-Chih; Hsiao, Shen-Fu
In this paper, we propose an area-efficient design of Advanced Encryption Standard (AES) processor by applying a new common-expression-elimination (CSE) method to the sub-functions of various transformations required in AES. The proposed method reduces the area cost of realizing the sub-functions by extracting the common factors in the bit-level XOR/AND-based sum-of-product expressions of these sub-functions using a new CSE algorithm. Cell-based implementation results show that the AES processor with our proposed CSE method has significant area improvement compared with previous designs.
2014-01-01
Expression quantitative trait loci (eQTL) mapping is a tool that can systematically identify genetic variation affecting gene expression. eQTL mapping studies have shown that certain genomic locations, referred to as regulatory hotspots, may affect the expression levels of many genes. Recently, studies have shown that various confounding factors may induce spurious regulatory hotspots. Here, we introduce a novel statistical method that effectively eliminates spurious hotspots while retaining genuine hotspots. Applied to simulated and real datasets, we validate that our method achieves greater sensitivity while retaining low false discovery rates compared to previous methods. PMID:24708878
A Complex Systems Approach to Causal Discovery in Psychiatry.
Saxe, Glenn N; Statnikov, Alexander; Fenyo, David; Ren, Jiwen; Li, Zhiguo; Prasad, Meera; Wall, Dennis; Bergman, Nora; Briggs, Ernestine C; Aliferis, Constantin
2016-01-01
Conventional research methodologies and data analytic approaches in psychiatric research are unable to reliably infer causal relations without experimental designs, or to make inferences about the functional properties of the complex systems in which psychiatric disorders are embedded. This article describes a series of studies to validate a novel hybrid computational approach--the Complex Systems-Causal Network (CS-CN) method-designed to integrate causal discovery within a complex systems framework for psychiatric research. The CS-CN method was first applied to an existing dataset on psychopathology in 163 children hospitalized with injuries (validation study). Next, it was applied to a much larger dataset of traumatized children (replication study). Finally, the CS-CN method was applied in a controlled experiment using a 'gold standard' dataset for causal discovery and compared with other methods for accurately detecting causal variables (resimulation controlled experiment). The CS-CN method successfully detected a causal network of 111 variables and 167 bivariate relations in the initial validation study. This causal network had well-defined adaptive properties and a set of variables was found that disproportionally contributed to these properties. Modeling the removal of these variables resulted in significant loss of adaptive properties. The CS-CN method was successfully applied in the replication study and performed better than traditional statistical methods, and similarly to state-of-the-art causal discovery algorithms in the causal detection experiment. The CS-CN method was validated, replicated, and yielded both novel and previously validated findings related to risk factors and potential treatments of psychiatric disorders. The novel approach yields both fine-grain (micro) and high-level (macro) insights and thus represents a promising approach for complex systems-oriented research in psychiatry.
On the ground state energy of the delta-function Fermi gas
NASA Astrophysics Data System (ADS)
Tracy, Craig A.; Widom, Harold
2016-10-01
The weak coupling asymptotics to order γ of the ground state energy of the delta-function Fermi gas, derived heuristically in the literature, is here made rigorous. Further asymptotics are in principle computable. The analysis applies to the Gaudin integral equation, a method previously used by one of the authors for the asymptotics of large Toeplitz matrices.
Polarimetric glucose sensing using Brewster reflection applying a rotating retarder analyzer
NASA Astrophysics Data System (ADS)
Boeckle, Stefan; Rovati, Luigi L.; Ansari, Rafat R.
2003-10-01
Previously, we proposed a polarimetric method, that exploits the Brewster-reflection with the final goal of application to the human eye (reflection off the eye lens) for non-invasive glucose sensing. The linearly polarized reflected light of this optical scheme is rotated by the glucose molecules present in the aqueous humor, thus carries the blood glucose concentration information. A proof-of-concept experimental bench-top setup is presented, applying a multi-wavelength true phase measurement approach and a rotating phase retarder as an analyzer to measure the very small rotation angles and the complete polarization state of the measurement light.
Analytical and Numerical Results for an Adhesively Bonded Joint Subjected to Pure Bending
NASA Technical Reports Server (NTRS)
Smeltzer, Stanley S., III; Lundgren, Eric
2006-01-01
A one-dimensional, semi-analytical methodology that was previously developed for evaluating adhesively bonded joints composed of anisotropic adherends and adhesives that exhibit inelastic material behavior is further verified in the present paper. A summary of the first-order differential equations and applied joint loading used to determine the adhesive response from the methodology are also presented. The method was previously verified against a variety of single-lap joint configurations from the literature that subjected the joints to cases of axial tension and pure bending. Using the same joint configuration and applied bending load presented in a study by Yang, the finite element analysis software ABAQUS was used to further verify the semi-analytical method. Linear static ABAQUS results are presented for two models, one with a coarse and one with a fine element meshing, that were used to verify convergence of the finite element analyses. Close agreement between the finite element results and the semi-analytical methodology were determined for both the shear and normal stress responses of the adhesive bondline. Thus, the semi-analytical methodology was successfully verified using the ABAQUS finite element software and a single-lap joint configuration subjected to pure bending.
NASA Astrophysics Data System (ADS)
Renkoski, Timothy E.; Hatch, Kenneth D.; Utzinger, Urs
2012-03-01
With no sufficient screening test for ovarian cancer, a method to evaluate the ovarian disease state quickly and nondestructively is needed. The authors have applied a wide-field spectral imager to freshly resected ovaries of 30 human patients in a study believed to be the first of its magnitude. Endogenous fluorescence was excited with 365-nm light and imaged in eight emission bands collectively covering the 400- to 640-nm range. Linear discriminant analysis was used to classify all image pixels and generate diagnostic maps of the ovaries. Training the classifier with previously collected single-point autofluorescence measurements of a spectroscopic probe enabled this novel classification. The process by which probe-collected spectra were transformed for comparison with imager spectra is described. Sensitivity of 100% and specificity of 51% were obtained in classifying normal and cancerous ovaries using autofluorescence data alone. Specificity increased to 69% when autofluorescence data were divided by green reflectance data to correct for spatial variation in tissue absorption properties. Benign neoplasm ovaries were also found to classify as nonmalignant using the same algorithm. Although applied ex vivo, the method described here appears useful for quick assessment of cancer presence in the human ovary.
Retreatment Predictions in Odontology by means of CBR Systems.
Campo, Livia; Aliaga, Ignacio J; De Paz, Juan F; García, Alvaro Enrique; Bajo, Javier; Villarubia, Gabriel; Corchado, Juan M
2016-01-01
The field of odontology requires an appropriate adjustment of treatments according to the circumstances of each patient. A follow-up treatment for a patient experiencing problems from a previous procedure such as endodontic therapy, for example, may not necessarily preclude the possibility of extraction. It is therefore necessary to investigate new solutions aimed at analyzing data and, with regard to the given values, determine whether dental retreatment is required. In this work, we present a decision support system which applies the case-based reasoning (CBR) paradigm, specifically designed to predict the practicality of performing or not performing a retreatment. Thus, the system uses previous experiences to provide new predictions, which is completely innovative in the field of odontology. The proposed prediction technique includes an innovative combination of methods that minimizes false negatives to the greatest possible extent. False negatives refer to a prediction favoring a retreatment when in fact it would be ineffective. The combination of methods is performed by applying an optimization problem to reduce incorrect classifications and takes into account different parameters, such as precision, recall, and statistical probabilities. The proposed system was tested in a real environment and the results obtained are promising.
Retreatment Predictions in Odontology by means of CBR Systems
Campo, Livia; Aliaga, Ignacio J.; García, Alvaro Enrique; Villarubia, Gabriel; Corchado, Juan M.
2016-01-01
The field of odontology requires an appropriate adjustment of treatments according to the circumstances of each patient. A follow-up treatment for a patient experiencing problems from a previous procedure such as endodontic therapy, for example, may not necessarily preclude the possibility of extraction. It is therefore necessary to investigate new solutions aimed at analyzing data and, with regard to the given values, determine whether dental retreatment is required. In this work, we present a decision support system which applies the case-based reasoning (CBR) paradigm, specifically designed to predict the practicality of performing or not performing a retreatment. Thus, the system uses previous experiences to provide new predictions, which is completely innovative in the field of odontology. The proposed prediction technique includes an innovative combination of methods that minimizes false negatives to the greatest possible extent. False negatives refer to a prediction favoring a retreatment when in fact it would be ineffective. The combination of methods is performed by applying an optimization problem to reduce incorrect classifications and takes into account different parameters, such as precision, recall, and statistical probabilities. The proposed system was tested in a real environment and the results obtained are promising. PMID:26884749
NASA Astrophysics Data System (ADS)
Yildiz, Nihat; San, Sait Eren; Okutan, Mustafa; Kaya, Hüseyin
2010-04-01
Among other significant obstacles, inherent nonlinearity in experimental physical response data poses severe difficulty in empirical physical formula (EPF) construction. In this paper, we applied a novel method (namely layered feedforward neural network (LFNN) approach) to produce explicit nonlinear EPFs for experimental nonlinear electro-optical responses of doped nematic liquid crystals (NLCs). Our motivation was that, as we showed in a previous theoretical work, an appropriate LFNN, due to its exceptional nonlinear function approximation capabilities, is highly relevant to EPF construction. Therefore, in this paper, we obtained excellently produced LFNN approximation functions as our desired EPFs for above-mentioned highly nonlinear response data of NLCs. In other words, by using suitable LFNNs, we successfully fitted the experimentally measured response and predicted the new (yet-to-be measured) response data. The experimental data (response versus input) were diffraction and dielectric properties versus bias voltage; and they were all taken from our previous experimental work. We conclude that in general, LFNN can be applied to construct various types of EPFs for the corresponding various nonlinear physical perturbation (thermal, electronic, molecular, electric, optical, etc.) data of doped NLCs.
Curtin, C; Kennedy, E; Henschke, P A
2012-07-01
The aim of this study was to determine sulphite tolerance for a large number of Dekkera bruxellensis isolates and evaluate the relationship between this phenotype and previously assigned genotype markers. A published microplate-based method for evaluation of yeast growth in the presence of sulphite was benchmarked against culturability following sulphite treatment, for the D. bruxellensis type strain (CBS 74) and a reference wine isolate (AWRI 1499). This method was used to estimate maximal sulphite tolerance for 41 D. bruxellensis isolates, which was found to vary over a fivefold range. Significant differences in sulphite tolerance were observed when isolates were grouped according to previously assigned genotypes and ribotypes. Variable sulphite tolerance for the wine spoilage yeast D. bruxellensis can be linked to genotype markers. Strategies to minimize risk of wine spoilage by D. bruxellensis must take into account at least a threefold range in effective sulphite concentration that is dependent upon the genotype group(s) present. The isolates characterized in this study will be a useful resource for establishing the mechanisms conferring sulphite tolerance for this industrially important yeast species. © 2012 The Authors. Letters in Applied Microbiology © 2012 The Society for Applied Microbiology.
Seol, Ye-In; Kim, Young-Kuk
2014-01-01
Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with pinwheel task model which is known as static and predictable task model and could be applied to various embedded or ubiquitous systems. In pinwheel task model, each task's priority is static and its execution sequence could be predetermined. There have been many static approaches to power-aware scheduling in pinwheel task model. But, in this paper, we will show that the dynamic priority scheduling results in power-aware scheduling could be applied to pinwheel task model. This method is more effective than adopting the previous static priority scheduling methods in saving energy consumption and, for the system being still static, it is more tractable and applicable to small sized embedded or ubiquitous computing. Also, we introduce a novel power-aware scheduling algorithm which exploits all slacks under preemptive earliest-deadline first scheduling which is optimal in uniprocessor system. The dynamic priority method presented in this paper could be applied directly to static systems of pinwheel task model. The simulation results show that the proposed algorithm with the algorithmic complexity of O(n) reduces the energy consumption by 10-80% over the existing algorithms.
2014-01-01
Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with pinwheel task model which is known as static and predictable task model and could be applied to various embedded or ubiquitous systems. In pinwheel task model, each task's priority is static and its execution sequence could be predetermined. There have been many static approaches to power-aware scheduling in pinwheel task model. But, in this paper, we will show that the dynamic priority scheduling results in power-aware scheduling could be applied to pinwheel task model. This method is more effective than adopting the previous static priority scheduling methods in saving energy consumption and, for the system being still static, it is more tractable and applicable to small sized embedded or ubiquitous computing. Also, we introduce a novel power-aware scheduling algorithm which exploits all slacks under preemptive earliest-deadline first scheduling which is optimal in uniprocessor system. The dynamic priority method presented in this paper could be applied directly to static systems of pinwheel task model. The simulation results show that the proposed algorithm with the algorithmic complexity of O(n) reduces the energy consumption by 10–80% over the existing algorithms. PMID:25121126
Image-Subtraction Photometry of Variable Stars in the Globular Clusters NGC 6388 and NGC 6441
NASA Technical Reports Server (NTRS)
Corwin, Michael T.; Sumerel, Andrew N.; Pritzl, Barton J.; Smith, Horace A.; Catelan, M.; Sweigart, Allen V.; Stetson, Peter B.
2006-01-01
We have applied Alard's image subtraction method (ISIS v2.1) to the observations of the globular clusters NGC 6388 and NGC 6441 previously analyzed using standard photometric techniques (DAOPHOT, ALLFRAME). In this reanalysis of observations obtained at CTIO, besides recovering the variables previously detected on the basis of our ground-based images, we have also been able to recover most of the RR Lyrae variables previously detected only in the analysis of Hubble Space Telescope WFPC2 observations of the inner region of NGC 6441. In addition, we report five possible new variables not found in the analysis of the EST observations of NGC 6441. This dramatically illustrates the capabilities of image subtraction techniques applied to ground-based data to recover variables in extremely crowded fields. We have also detected twelve new variables and six possible variables in NGC 6388 not found in our previous groundbased studies. Revised mean periods for RRab stars in NGC 6388 and NGC 6441 are 0.676 day and 0.756 day, respectively. These values are among the largest known for any galactic globular cluster. Additional probable type II Cepheids were identified in NGC 6388, confirming its status as a metal-rich globular cluster rich in Cepheids.
Rapid-estimation method for assessing scour at highway bridges
Holnbeck, Stephen R.
1998-01-01
A method was developed by the U.S. Geological Survey for rapid estimation of scour at highway bridges using limited site data and analytical procedures to estimate pier, abutment, and contraction scour depths. The basis for the method was a procedure recommended by the Federal Highway Administration for conducting detailed scour investigations, commonly referred to as the Level 2 method. Using pier, abutment, and contraction scour results obtained from Level 2 investigations at 122 sites in 10 States, envelope curves and graphical relations were developed that enable determination of scour-depth estimates at most bridge sites in a matter of a few hours. Rather than using complex hydraulic variables, surrogate variables more easily obtained in the field were related to calculated scour-depth data from Level 2 studies. The method was tested by having several experienced individuals apply the method in the field, and results were compared among the individuals and with previous detailed analyses performed for the sites. Results indicated that the variability in predicted scour depth among individuals applying the method generally was within an acceptable range, and that conservatively greater scour depths generally were obtained by the rapid-estimation method compared to the Level 2 method. The rapid-estimation method is considered most applicable for conducting limited-detail scour assessments and as a screening tool to determine those bridge sites that may require more detailed analysis. The method is designed to be applied only by a qualified professional possessing knowledge and experience in the fields of bridge scour, hydraulics, and flood hydrology, and having specific expertise with the Level 2 method.
Computation of Kinetics for the Hydrogen/Oxygen System Using the Thermodynamic Method
NASA Technical Reports Server (NTRS)
Marek, C. John
1996-01-01
A new method for predicting chemical rate constants using thermodynamics has been applied to the hydrogen/oxygen system. This method is based on using the gradient of the Gibbs free energy and a single proportionality constant D to determine the kinetic rate constants. Using this method the rate constants for any gas phase reaction can be computed from thermodynamic properties. A modified reaction set for the H/O system is determined. A11 of the third body efficiencies M are taken to be unity. Good agreement was obtained between the thermodynamic method and the experimental shock tube data. In addition, the hydrogen bromide experimental data presented in previous work is recomputed with M's of unity.
Classification of burn wounds using support vector machines
NASA Astrophysics Data System (ADS)
Acha, Begona; Serrano, Carmen; Palencia, Sergio; Murillo, Juan Jose
2004-05-01
The purpose of this work is to improve a previous method developed by the authors for the classification of burn wounds into their depths. The inputs of the system are color and texture information, as these are the characteristics observed by physicians in order to give a diagnosis. Our previous work consisted in segmenting the burn wound from the rest of the image and classifying the burn into its depth. In this paper we focus on the classification problem only. We already proposed to use a Fuzzy-ARTMAP neural network (NN). However, we may take advantage of new powerful classification tools such as Support Vector Machines (SVM). We apply the five-folded cross validation scheme to divide the database into training and validating sets. Then, we apply a feature selection method for each classifier, which will give us the set of features that yields the smallest classification error for each classifier. Features used to classify are first-order statistical parameters extracted from the L*, u* and v* color components of the image. The feature selection algorithms used are the Sequential Forward Selection (SFS) and the Sequential Backward Selection (SBS) methods. As data of the problem faced here are not linearly separable, the SVM was trained using some different kernels. The validating process shows that the SVM method, when using a Gaussian kernel of variance 1, outperforms classification results obtained with the rest of the classifiers, yielding an error classification rate of 0.7% whereas the Fuzzy-ARTMAP NN attained 1.6 %.
Accuracy of meteoroid speeds determined using a Fresnel transform procedure
NASA Astrophysics Data System (ADS)
Campbell, L.; Elford, W. G.
2006-03-01
New methods of determining meteor speeds using radar are giving results with an accuracy of better that 1%. It is anticipated that this degree of precision will allow determinations of pre-atmospheric speeds of shower meteors as well as estimates of the density of the meteoroids. The next step is to determine under what conditions these new measurements are reliable. Errors in meteoroid speeds determined using a Fresnel transform procedure applied to radar meteor data are investigated. The procedure determines the reflectivity of a meteor trail as a function of position, by application of the Fresnel transform to the time series of a radar reflection from the trail observed at a single detection station. It has previously been shown that this procedure can be used to determine the speed of the meteoroid, by finding the assumed speed that gives a reflectivity image that best meets physical expectations. It has also been shown that speeds determined by this method agree with those from the well established "pre-t o phase" method when applied to reflections with a high signal to noise ratio. However, there is a discrepancy between the two methods for weaker reflections. A method to investigate the discrepancy is described and applied, with the finding that the speed determined by using the Fresnel transform procedure is more accurate for weaker reflections than that given by the "pre-t o phase" method.
Astrometric Research of Asteroidal Satellites
NASA Astrophysics Data System (ADS)
Kikwaya, J.-B.; Thuillot, W.; Rocher, P.; Vieira Martins, R.; Arlot, J.-E.; Angeli, Cl.
2002-09-01
Several observational methods have been applied in order to detect asteroidal satellites. Some of them were rather successful, such as the stellar occultations and mutual eclipse methods. Recently other techniques such as the space imaging, the adaptive optics and the radar imaging inferred a great improvement in the search for these objects. However several limitations appear in the type of data that each of them allow us to access. We propose to apply an astrometric method in order as well to detect new asteroidal satellites as to get complementary data of some already detected objects (mainly their orbital period). This method is founded on the search of the reflex effect of the primary object due to the orbital motion of a possible satellite. Such an astrometric signature, already searched by Monet & Monet (1998), may reach several tens of MAS. Only a spectral analysis could then detect this signal under good conditions of signal/noise ratio and thanks to high quality astrometric measurements and coverage by different sites of observation. We have applied such a method for several asteroids. A preliminary result is obtained thanks to 377 CCD observations of 146 Lucina made at the Haute-Provence Observatory in South of France. A periodical signal appears in this analysis, leading to data compatible with a first detection of a probable satellite made previously (Arlot et al. 1985) by the occultation method.
Extraction of membrane structure in eyeball from MR volumes
NASA Astrophysics Data System (ADS)
Oda, Masahiro; Kin, Taichi; Mori, Kensaku
2017-03-01
This paper presents an accurate extraction method of spherical shaped membrane structures in the eyeball from MR volumes. In ophthalmic surgery, operation field is limited to a small region. Patient specific surgical simulation is useful to reduce complications. Understanding of tissue structure in the eyeball of a patient is required to achieve patient specific surgical simulations. Previous extraction methods of tissue structure in the eyeball use optical coherence tomography (OCT) images. Although OCT images have high resolution, imaging regions are limited to very small. Global structure extraction of the eyeball is difficult from OCT images. We propose an extraction method of spherical shaped membrane structures including the sclerotic coat, choroid, and retina. This method is applied to a T2 weighted MR volume of the head region. MR volume can capture tissue structure of whole eyeball. Because we use MR volumes, out method extracts whole membrane structures in the eyeball. We roughly extract membrane structures by applying a sheet structure enhancement filter. The rough extraction result includes parts of the membrane structures. Then, we apply the Hough transform to extract a sphere structure from the voxels set of the rough extraction result. The Hough transform finds a sphere structure from the rough extraction result. An experimental result using a T2 weighted MR volume of the head region showed that the proposed method can extract spherical shaped membrane structures accurately.
Adaptive Wiener filter super-resolution of color filter array images.
Karch, Barry K; Hardie, Russell C
2013-08-12
Digital color cameras using a single detector array with a Bayer color filter array (CFA) require interpolation or demosaicing to estimate missing color information and provide full-color images. However, demosaicing does not specifically address fundamental undersampling and aliasing inherent in typical camera designs. Fast non-uniform interpolation based super-resolution (SR) is an attractive approach to reduce or eliminate aliasing and its relatively low computational load is amenable to real-time applications. The adaptive Wiener filter (AWF) SR algorithm was initially developed for grayscale imaging and has not previously been applied to color SR demosaicing. Here, we develop a novel fast SR method for CFA cameras that is based on the AWF SR algorithm and uses global channel-to-channel statistical models. We apply this new method as a stand-alone algorithm and also as an initialization image for a variational SR algorithm. This paper presents the theoretical development of the color AWF SR approach and applies it in performance comparisons to other SR techniques for both simulated and real data.
Adaptive partially hidden Markov models with application to bilevel image coding.
Forchhammer, S; Rasmussen, T S
1999-01-01
Partially hidden Markov models (PHMMs) have previously been introduced. The transition and emission/output probabilities from hidden states, as known from the HMMs, are conditioned on the past. This way, the HMM may be applied to images introducing the dependencies of the second dimension by conditioning. In this paper, the PHMM is extended to multiple sequences with a multiple token version and adaptive versions of PHMM coding are presented. The different versions of the PHMM are applied to lossless bilevel image coding. To reduce and optimize the model cost and size, the contexts are organized in trees and effective quantization of the parameters is introduced. The new coding methods achieve results that are better than the JBIG standard on selected test images, although at the cost of increased complexity. By the minimum description length principle, the methods presented for optimizing the code length may apply as guidance for training (P)HMMs for, e.g., segmentation or recognition purposes. Thereby, the PHMM models provide a new approach to image modeling.
Scott, Brandon L; Hoppe, Adam D
2016-01-01
Fluorescence resonance energy transfer (FRET) microscopy is a powerful tool for imaging the interactions between fluorescently tagged proteins in two-dimensions. For FRET microscopy to reach its full potential, it must be able to image more than one pair of interacting molecules and image degradation from out-of-focus light must be reduced. Here we extend our previous work on the application of maximum likelihood methods to the 3-dimensional reconstruction of 3-way FRET interactions within cells. We validated the new method (3D-3Way FRET) by simulation and fluorescent protein test constructs expressed in cells. In addition, we improved the computational methods to create a 2-log reduction in computation time over our previous method (3DFSR). We applied 3D-3Way FRET to image the 3D subcellular distributions of HIV Gag assembly. Gag fused to three different FPs (CFP, YFP, and RFP), assembled into viral-like particles and created punctate FRET signals that become visible on the cell surface when 3D-3Way FRET was applied to the data. Control experiments in which YFP-Gag, RFP-Gag and free CFP were expressed, demonstrated localized FRET between YFP and RFP at sites of viral assembly that were not associated with CFP. 3D-3Way FRET provides the first approach for quantifying multiple FRET interactions while improving the 3D resolution of FRET microscopy data without introducing bias into the reconstructed estimates. This method should allow improvement of widefield, confocal and superresolution FRET microscopy data.
Analysis of dental root apical morphology: a new method for dietary reconstructions in primates.
Hamon, NoÉmie; Emonet, Edouard-Georges; Chaimanee, Yaowalak; Guy, Franck; Tafforeau, Paul; Jaeger, Jean-Jacques
2012-06-01
The reconstruction of paleo-diets is an important task in the study of fossil primates. Previously, paleo-diet reconstructions were performed using different methods based on extant primate models. In particular, dental microwear or isotopic analyses provided accurate reconstructions for some fossil primates. However, there is sometimes difficult or impossible to apply these methods to fossil material. Therefore, the development of new, independent methods of diet reconstructions is crucial to improve our knowledge of primates paleobiology and paleoecology. This study aims to investigate the correlation between tooth root apical morphology and diet in primates, and its potential for paleo-diet reconstructions. Dental roots are composed of two portions: the eruptive portion with a smooth and regular surface, and the apical penetrative portion which displays an irregular and corrugated surface. Here, the angle formed by these two portions (aPE), and the ratio of penetrative portion over total root length (PPI), are calculated for each mandibular tooth root. A strong correlation between these two variables and the proportion of some food types (fruits, leaves, seeds, animal matter, and vertebrates) in diet is found, allowing the use of tooth root apical morphology as a tool for dietary reconstructions in primates. The method was then applied to the fossil hominoid Khoratpithecus piriyai, from the Late Miocene of Thailand. The paleo-diet deduced from aPE and PPI is dominated by fruits (>50%), associated with animal matter (1-25%). Leaves, vertebrates and most probably seeds were excluded from the diet of Khoratpithecus, which is consistent with previous studies. Copyright © 2012 Wiley Periodicals, Inc.
The applied technologies to access clean water for remote communities
NASA Astrophysics Data System (ADS)
Rabindra, I. B.
2018-01-01
A lot of research is done to overcome the remote communities to access clean water, yet very little is utilized and implemented by the community. Various reasons can probably be made for, which is the application of research results is assessed less practical. The aims of this paper is seeking a practical approach, how to establish criteria for the design can be easier applied, at the proper locations, the simple construction, effectively producing a volume and quality of clean water designation. The methods used in this paper is a technological model assessment of treatment/filtering clean water produced a variety of previous research, to establish a model of appropriate technology for remote communities. Various research results collected from the study of literature, while the identification of opportunities and threats to its application is done using a SWOT analysis. This article discussion is looking for alternative models of clean water filtration technology from the previous research results, to be selected as appropriate technology, easily applied and bring of many benefits to the remote communities. The conclusions resulting from the discussion in this paper, expected to be used as the basic criteria of design model of clean water filtration technologies that can be accepted and applied effectively by the remote communities.
A Model-Based Approach for Identifying Signatures of Ancient Balancing Selection in Genetic Data
DeGiorgio, Michael; Lohmueller, Kirk E.; Nielsen, Rasmus
2014-01-01
While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates. PMID:25144706
A model-based approach for identifying signatures of ancient balancing selection in genetic data.
DeGiorgio, Michael; Lohmueller, Kirk E; Nielsen, Rasmus
2014-08-01
While much effort has focused on detecting positive and negative directional selection in the human genome, relatively little work has been devoted to balancing selection. This lack of attention is likely due to the paucity of sophisticated methods for identifying sites under balancing selection. Here we develop two composite likelihood ratio tests for detecting balancing selection. Using simulations, we show that these methods outperform competing methods under a variety of assumptions and demographic models. We apply the new methods to whole-genome human data, and find a number of previously-identified loci with strong evidence of balancing selection, including several HLA genes. Additionally, we find evidence for many novel candidates, the strongest of which is FANK1, an imprinted gene that suppresses apoptosis, is expressed during meiosis in males, and displays marginal signs of segregation distortion. We hypothesize that balancing selection acts on this locus to stabilize the segregation distortion and negative fitness effects of the distorter allele. Thus, our methods are able to reproduce many previously-hypothesized signals of balancing selection, as well as discover novel interesting candidates.
NASA Astrophysics Data System (ADS)
Itoh, Masato; Hagimori, Yuki; Nonaka, Kenichiro; Sekiguchi, Kazuma
2016-09-01
In this study, we apply a hierarchical model predictive control to omni-directional mobile vehicle, and improve the tracking performance. We deal with an independent four-wheel driving/steering vehicle (IFWDS) equipped with four coaxial steering mechanisms (CSM). The coaxial steering mechanism is a special one composed of two steering joints on the same axis. In our previous study with respect to IFWDS with ideal steering, we proposed a model predictive tracking control. However, this method did not consider constraints of the coaxial steering mechanism which causes delay of steering. We also proposed a model predictive steering control considering constraints of this mechanism. In this study, we propose a hierarchical system combining above two control methods for IFWDS. An upper controller, which deals with vehicle kinematics, runs a model predictive tracking control, and a lower controller, which considers constraints of coaxial steering mechanism, runs a model predictive steering control which tracks the predicted steering angle optimized an upper controller. We verify the superiority of this method by comparing this method with the previous method.
Development and evaluation of modified envelope correlation method for deep tectonic tremor
NASA Astrophysics Data System (ADS)
Mizuno, N.; Ide, S.
2017-12-01
We develop a new location method for deep tectonic tremors, as an improvement of widely used envelope correlation method, and applied it to construct a tremor catalog in western Japan. Using the cross-correlation functions as objective functions and weighting components of data by the inverse of error variances, the envelope cross-correlation method is redefined as a maximum likelihood method. This method is also capable of multiple source detection, because when several events occur almost simultaneously, they appear as local maxima of likelihood.The average of weighted cross-correlation functions, defined as ACC, is a nonlinear function whose variable is a position of deep tectonic tremor. The optimization method has two steps. First, we fix the source depth to 30 km and use a grid search with 0.2 degree intervals to find the maxima of ACC, which are candidate event locations. Then, using each of the candidate locations as initial values, we apply a gradient method to determine horizontal and vertical components of a hypocenter. Sometimes, several source locations are determined in a time window of 5 minutes. We estimate the resolution, which is defined as a distance of sources to be detected separately by the location method, is about 100 km. The validity of this estimation is confirmed by a numerical test using synthetic waveforms. Applying to continuous seismograms in western Japan for over 10 years, the new method detected 27% more tremors than a previous method, owing to the multiple detection and improvement of accuracy by appropriate weighting scheme.
Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W
2014-10-01
A possibility to simulate the mechanical behaviour of the human spine is given by modelling the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the softer intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite-element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included into the MBS in two different ways. They can either be computed online in a so-called co-simulation of a MBS and a FEM or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD needs to be defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step needs to be applied to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper was to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. Following this, the main challenge is to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as was used in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between. In a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Herein, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.
Random-breakage mapping method applied to human DNA sequences
NASA Technical Reports Server (NTRS)
Lobrich, M.; Rydberg, B.; Cooper, P. K.; Chatterjee, A. (Principal Investigator)
1996-01-01
The random-breakage mapping method [Game et al. (1990) Nucleic Acids Res., 18, 4453-4461] was applied to DNA sequences in human fibroblasts. The methodology involves NotI restriction endonuclease digestion of DNA from irradiated calls, followed by pulsed-field gel electrophoresis, Southern blotting and hybridization with DNA probes recognizing the single copy sequences of interest. The Southern blots show a band for the unbroken restriction fragments and a smear below this band due to radiation induced random breaks. This smear pattern contains two discontinuities in intensity at positions that correspond to the distance of the hybridization site to each end of the restriction fragment. By analyzing the positions of those discontinuities we confirmed the previously mapped position of the probe DXS1327 within a NotI fragment on the X chromosome, thus demonstrating the validity of the technique. We were also able to position the probes D21S1 and D21S15 with respect to the ends of their corresponding NotI fragments on chromosome 21. A third chromosome 21 probe, D21S11, has previously been reported to be close to D21S1, although an uncertainty about a second possible location existed. Since both probes D21S1 and D21S11 hybridized to a single NotI fragment and yielded a similar smear pattern, this uncertainty is removed by the random-breakage mapping method.
Robust Path Planning and Feedback Design Under Stochastic Uncertainty
NASA Technical Reports Server (NTRS)
Blackmore, Lars
2008-01-01
Autonomous vehicles require optimal path planning algorithms to achieve mission goals while avoiding obstacles and being robust to uncertainties. The uncertainties arise from exogenous disturbances, modeling errors, and sensor noise, which can be characterized via stochastic models. Previous work defined a notion of robustness in a stochastic setting by using the concept of chance constraints. This requires that mission constraint violation can occur with a probability less than a prescribed value.In this paper we describe a novel method for optimal chance constrained path planning with feedback design. The approach optimizes both the reference trajectory to be followed and the feedback controller used to reject uncertainty. Our method extends recent results in constrained control synthesis based on convex optimization to solve control problems with nonconvex constraints. This extension is essential for path planning problems, which inherently have nonconvex obstacle avoidance constraints. Unlike previous approaches to chance constrained path planning, the new approach optimizes the feedback gain as wellas the reference trajectory.The key idea is to couple a fast, nonconvex solver that does not take into account uncertainty, with existing robust approaches that apply only to convex feasible regions. By alternating between robust and nonrobust solutions, the new algorithm guarantees convergence to a global optimum. We apply the new method to an unmanned aircraft and show simulation results that demonstrate the efficacy of the approach.
Classification of Uxo by Principal Dipole Polarizability
NASA Astrophysics Data System (ADS)
Kappler, K. N.
2010-12-01
Data acquired by multiple-Transmitter, multiple-receiver time-domain electromagnetic devices show great potential for determining the geometric and compositional information relating to near surface conductive targets. Here is presented an analysis of data from one such system; the Berkeley Unexploded-ordnance Discriminator (BUD) system. BUD data are succinctly reduced by processing the multi-static data matrices to obtain magnetic dipole polarizability matrices for data from each time gate. When viewed over all time gates, the projections of the data onto the principal polar axes yield so-called polarizability curves. These curves are especially well suited to discriminating between subsurface conductivity anomalies which correspond to objects of rotational symmetry and irregularly shaped objects. The curves have previously been successfully employed as library elements in a pattern recognition scheme aimed at discriminating harmless scrap metal from dangerous intact unexploded ordnance. However, previous polarizability-curve matching methods have only been applied at field sites which are known a priori to be contaminated by a single type of ordnance, and furthermore, the particular ordnance present in the subsurface was known to be large. Thus signal amplitude was a key element in the discrimination process. The work presented here applies feature-based pattern classification techniques to BUD field data where more than 20 categories of object are present. Data soundings from a calibration grid at the Yuma, AZ proving ground are used in a cross validation study to calibrate the pattern recognition method. The resultant method is then applied to a Blind Test Grid. Results indicate that when lone UXO are present and SNR is reasonably high, Polarizability Curve Matching successfully discriminates UXO from scrap metal when a broad range of objects are present.
Kuipers, Jeroen; Kalicharan, Ruby D; Wolters, Anouk H G; van Ham, Tjakko J; Giepmans, Ben N G
2016-05-25
Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. Others and we previously applied large scale EM to human skin pancreatic islets, tissue culture and whole zebrafish larvae(1-7). Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: Fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin-sections on one-hole grids, followed by post staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large area scan generator using scanning transmission EM (STEM). Large scale EM images are typically ~ 5 - 50 G pixels in size, and best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals as well as tissue culture(1-5). Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)(8) on the same tissue, as large surface areas previously imaged using fluorescent microscopy, can subsequently be subjected to large area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at EM level in a tissue-wide quantifiable manner.
Kuipers, Jeroen; Kalicharan, Ruby D.; Wolters, Anouk H. G.
2016-01-01
Large-scale 2D electron microscopy (EM), or nanotomy, is the tissue-wide application of nanoscale resolution electron microscopy. Others and we previously applied large scale EM to human skin pancreatic islets, tissue culture and whole zebrafish larvae1-7. Here we describe a universally applicable method for tissue-scale scanning EM for unbiased detection of sub-cellular and molecular features. Nanotomy was applied to investigate the healthy and a neurodegenerative zebrafish brain. Our method is based on standardized EM sample preparation protocols: Fixation with glutaraldehyde and osmium, followed by epoxy-resin embedding, ultrathin sectioning and mounting of ultrathin-sections on one-hole grids, followed by post staining with uranyl and lead. Large-scale 2D EM mosaic images are acquired using a scanning EM connected to an external large area scan generator using scanning transmission EM (STEM). Large scale EM images are typically ~ 5 - 50 G pixels in size, and best viewed using zoomable HTML files, which can be opened in any web browser, similar to online geographical HTML maps. This method can be applied to (human) tissue, cross sections of whole animals as well as tissue culture1-5. Here, zebrafish brains were analyzed in a non-invasive neuronal ablation model. We visualize within a single dataset tissue, cellular and subcellular changes which can be quantified in various cell types including neurons and microglia, the brain's macrophages. In addition, nanotomy facilitates the correlation of EM with light microscopy (CLEM)8 on the same tissue, as large surface areas previously imaged using fluorescent microscopy, can subsequently be subjected to large area EM, resulting in the nano-anatomy (nanotomy) of tissues. In all, nanotomy allows unbiased detection of features at EM level in a tissue-wide quantifiable manner. PMID:27285162
DOE Office of Scientific and Technical Information (OSTI.GOV)
Foreman-Mackey, Daniel; Hogg, David W.; Morton, Timothy D., E-mail: danfm@nyu.edu
No true extrasolar Earth analog is known. Hundreds of planets have been found around Sun-like stars that are either Earth-sized but on shorter periods, or else on year-long orbits but somewhat larger. Under strong assumptions, exoplanet catalogs have been used to make an extrapolated estimate of the rate at which Sun-like stars host Earth analogs. These studies are complicated by the fact that every catalog is censored by non-trivial selection effects and detection efficiencies, and every property (period, radius, etc.) is measured noisily. Here we present a general hierarchical probabilistic framework for making justified inferences about the population of exoplanets,more » taking into account survey completeness and, for the first time, observational uncertainties. We are able to make fewer assumptions about the distribution than previous studies; we only require that the occurrence rate density be a smooth function of period and radius (employing a Gaussian process). By applying our method to synthetic catalogs, we demonstrate that it produces more accurate estimates of the whole population than standard procedures based on weighting by inverse detection efficiency. We apply the method to an existing catalog of small planet candidates around G dwarf stars. We confirm a previous result that the radius distribution changes slope near Earth's radius. We find that the rate density of Earth analogs is about 0.02 (per star per natural logarithmic bin in period and radius) with large uncertainty. This number is much smaller than previous estimates made with the same data but stronger assumptions.« less
Yu, Jingkai; Finley, Russell L
2009-01-01
High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
Sauwen, Nicolas; Acou, Marjan; Bharath, Halandur N; Sima, Diana M; Veraart, Jelle; Maes, Frederik; Himmelreich, Uwe; Achten, Eric; Van Huffel, Sabine
2017-01-01
Non-negative matrix factorization (NMF) has become a widely used tool for additive parts-based analysis in a wide range of applications. As NMF is a non-convex problem, the quality of the solution will depend on the initialization of the factor matrices. In this study, the successive projection algorithm (SPA) is proposed as an initialization method for NMF. SPA builds on convex geometry and allocates endmembers based on successive orthogonal subspace projections of the input data. SPA is a fast and reproducible method, and it aligns well with the assumptions made in near-separable NMF analyses. SPA was applied to multi-parametric magnetic resonance imaging (MRI) datasets for brain tumor segmentation using different NMF algorithms. Comparison with common initialization methods shows that SPA achieves similar segmentation quality and it is competitive in terms of convergence rate. Whereas SPA was previously applied as a direct endmember extraction tool, we have shown improved segmentation results when using SPA as an initialization method, as it allows further enhancement of the sources during the NMF iterative procedure.
An IMU-to-Body Alignment Method Applied to Human Gait Analysis
Vargas-Valencia, Laura Susana; Elias, Arlindo; Rocon, Eduardo; Bastos-Filho, Teodiano; Frizera, Anselmo
2016-01-01
This paper presents a novel calibration procedure as a simple, yet powerful, method to place and align inertial sensors with body segments. The calibration can be easily replicated without the need of any additional tools. The proposed method is validated in three different applications: a computer mathematical simulation; a simplified joint composed of two semi-spheres interconnected by a universal goniometer; and a real gait test with five able-bodied subjects. Simulation results demonstrate that, after the calibration method is applied, the joint angles are correctly measured independently of previous sensor placement on the joint, thus validating the proposed procedure. In the cases of a simplified joint and a real gait test with human volunteers, the method also performs correctly, although secondary plane errors appear when compared with the simulation results. We believe that such errors are caused by limitations of the current inertial measurement unit (IMU) technology and fusion algorithms. In conclusion, the presented calibration procedure is an interesting option to solve the alignment problem when using IMUs for gait analysis. PMID:27973406
NASA Astrophysics Data System (ADS)
Metwally, Fadia H.
2008-02-01
The quantitative predictive abilities of the new and simple bivariate spectrophotometric method are compared with the results obtained by the use of multivariate calibration methods [the classical least squares (CLS), principle component regression (PCR) and partial least squares (PLS)], using the information contained in the absorption spectra of the appropriate solutions. Mixtures of the two drugs Nifuroxazide (NIF) and Drotaverine hydrochloride (DRO) were resolved by application of the bivariate method. The different chemometric approaches were applied also with previous optimization of the calibration matrix, as they are useful in simultaneous inclusion of many spectral wavelengths. The results found by application of the bivariate, CLS, PCR and PLS methods for the simultaneous determinations of mixtures of both components containing 2-12 μg ml -1 of NIF and 2-8 μg ml -1 of DRO are reported. Both approaches were satisfactorily applied to the simultaneous determination of NIF and DRO in pure form and in pharmaceutical formulation. The results were in accordance with those given by the EVA Pharma reference spectrophotometric method.
Using Grounded Theory Method to Capture and Analyze Health Care Experiences
Foley, Geraldine; Timonen, Virpi
2015-01-01
Objective Grounded theory (GT) is an established qualitative research method, but few papers have encapsulated the benefits, limits, and basic tenets of doing GT research on user and provider experiences of health care services. GT can be used to guide the entire study method, or it can be applied at the data analysis stage only. Methods We summarize key components of GT and common GT procedures used by qualitative researchers in health care research. We draw on our experience of conducting a GT study on amyotrophic lateral sclerosis patients’ experiences of health care services. Findings We discuss why some approaches in GT research may work better than others, particularly when the focus of study is hard-to-reach population groups. We highlight the flexibility of procedures in GT to build theory about how people engage with health care services. Conclusion GT enables researchers to capture and understand health care experiences. GT methods are particularly valuable when the topic of interest has not previously been studied. GT can be applied to bring structure and rigor to the analysis of qualitative data. PMID:25523315
González-Vidal, Juan José; Pérez-Pueyo, Rosanna; Soneira, María José; Ruiz-Moreno, Sergio
2015-03-01
A new method has been developed to automatically identify Raman spectra, whether they correspond to single- or multicomponent spectra. The method requires no user input or judgment. There are thus no parameters to be tweaked. Furthermore, it provides a reliability factor on the resulting identification, with the aim of becoming a useful support tool for the analyst in the decision-making process. The method relies on the multivariate techniques of principal component analysis (PCA) and independent component analysis (ICA), and on some metrics. It has been developed for the application of automated spectral analysis, where the analyzed spectrum is provided by a spectrometer that has no previous knowledge of the analyzed sample, meaning that the number of components in the sample is unknown. We describe the details of this method and demonstrate its efficiency by identifying both simulated spectra and real spectra. The method has been applied to artistic pigment identification. The reliable and consistent results that were obtained make the methodology a helpful tool suitable for the identification of pigments in artwork or in paint in general.
Prediction and analysis of beta-turns in proteins by support vector machine.
Pham, Tho Hoan; Satou, Kenji; Ho, Tu Bao
2003-01-01
Tight turn has long been recognized as one of the three important features of proteins after the alpha-helix and beta-sheet. Tight turns play an important role in globular proteins from both the structural and functional points of view. More than 90% tight turns are beta-turns. Analysis and prediction of beta-turns in particular and tight turns in general are very useful for the design of new molecules such as drugs, pesticides, and antigens. In this paper, we introduce a support vector machine (SVM) approach to prediction and analysis of beta-turns. We have investigated two aspects of applying SVM to the prediction and analysis of beta-turns. First, we developed a new SVM method, called BTSVM, which predicts beta-turns of a protein from its sequence. The prediction results on the dataset of 426 non-homologous protein chains by sevenfold cross-validation technique showed that our method is superior to the other previous methods. Second, we analyzed how amino acid positions support (or prevent) the formation of beta-turns based on the "multivariable" classification model of a linear SVM. This model is more general than the other ones of previous statistical methods. Our analysis results are more comprehensive and easier to use than previously published analysis results.
Samejima, Keijiro; Otani, Masahiro; Murakami, Yasuko; Oka, Takami; Kasai, Misao; Tsumoto, Hiroki; Kohda, Kohfuku
2007-10-01
A sensitive method for the determination of polyamines in mammalian cells was described using electrospray ionization and time-of-flight mass spectrometer. This method was 50-fold more sensitive than the previous method using ionspray ionization and quadrupole mass spectrometer. The method employed the partial purification and derivatization of polyamines, but allowed a measurement of multiple samples which contained picomol amounts of polyamines. Time required for data acquisition of one sample was approximately 2 min. The method was successfully applied for the determination of reduced spermidine and spermine contents in cultured cells under the inhibition of aminopropyltransferases. In addition, a new proper internal standard was proposed for the tracer experiment using (15)N-labeled polyamines.
An improved 2D MoF method by using high order derivatives
NASA Astrophysics Data System (ADS)
Chen, Xiang; Zhang, Xiong
2017-11-01
The MoF (Moment of Fluid) method is one of the most accurate approaches among various interface reconstruction algorithms. Alike other second order methods, the MoF method needs to solve an implicit optimization problem to obtain the optimal approximate interface, so an iteration process is inevitable under most circumstances. In order to solve the optimization efficiently, the properties of the objective function are worthy of studying. In 2D problems, the first order derivative has been deduced and applied in the previous researches. In this paper, the high order derivatives of the objective function are deduced on the convex polygon. We show that the nth (n ≥ 2) order derivatives are discontinuous, and the number of the discontinuous points is two times the number of the polygon edge. A rotation algorithm is proposed to successively calculate these discontinuous points, thus the target interval where the optimal solution is located can be determined. Since the high order derivatives of the objective function are continuous in the target interval, the iteration schemes based on high order derivatives can be used to improve the convergence rate. Moreover, when iterating in the target interval, the value of objective function and its derivatives can be directly updated without explicitly solving the volume conservation equation. The direct update makes a further improvement of the efficiency especially when the number of edges of the polygon is increasing. The Halley's method, which is based on the first three order derivatives, is applied as the iteration scheme in this paper and the numerical results indicate that the CPU time is about half of the previous method on the quadrilateral cell and is about one sixth on the decagon cell.
Single Image Super-Resolution Using Global Regression Based on Multiple Local Linear Mappings.
Choi, Jae-Seok; Kim, Munchurl
2017-03-01
Super-resolution (SR) has become more vital, because of its capability to generate high-quality ultra-high definition (UHD) high-resolution (HR) images from low-resolution (LR) input images. Conventional SR methods entail high computational complexity, which makes them difficult to be implemented for up-scaling of full-high-definition input images into UHD-resolution images. Nevertheless, our previous super-interpolation (SI) method showed a good compromise between Peak-Signal-to-Noise Ratio (PSNR) performances and computational complexity. However, since SI only utilizes simple linear mappings, it may fail to precisely reconstruct HR patches with complex texture. In this paper, we present a novel SR method, which inherits the large-to-small patch conversion scheme from SI but uses global regression based on local linear mappings (GLM). Thus, our new SR method is called GLM-SI. In GLM-SI, each LR input patch is divided into 25 overlapped subpatches. Next, based on the local properties of these subpatches, 25 different local linear mappings are applied to the current LR input patch to generate 25 HR patch candidates, which are then regressed into one final HR patch using a global regressor. The local linear mappings are learned cluster-wise in our off-line training phase. The main contribution of this paper is as follows: Previously, linear-mapping-based conventional SR methods, including SI only used one simple yet coarse linear mapping to each patch to reconstruct its HR version. On the contrary, for each LR input patch, our GLM-SI is the first to apply a combination of multiple local linear mappings, where each local linear mapping is found according to local properties of the current LR patch. Therefore, it can better approximate nonlinear LR-to-HR mappings for HR patches with complex texture. Experiment results show that the proposed GLM-SI method outperforms most of the state-of-the-art methods, and shows comparable PSNR performance with much lower computational complexity when compared with a super-resolution method based on convolutional neural nets (SRCNN15). Compared with the previous SI method that is limited with a scale factor of 2, GLM-SI shows superior performance with average 0.79 dB higher in PSNR, and can be used for scale factors of 3 or higher.
Zhang, Jian; Suo, Yan; Liu, Min; Xu, Xun
2018-06-01
Proliferative diabetic retinopathy (PDR) is one of the most common complications of diabetes and can lead to blindness. Proteomic studies have provided insight into the pathogenesis of PDR and a series of PDR-related genes has been identified but are far from fully characterized because the experimental methods are expensive and time consuming. In our previous study, we successfully identified 35 candidate PDR-related genes through the shortest-path algorithm. In the current study, we developed a computational method using the random walk with restart (RWR) algorithm and the protein-protein interaction (PPI) network to identify potential PDR-related genes. After some possible genes were obtained by the RWR algorithm, a three-stage filtration strategy, which includes the permutation test, interaction test and enrichment test, was applied to exclude potential false positives caused by the structure of PPI network, the poor interaction strength, and the limited similarity on gene ontology (GO) terms and biological pathways. As a result, 36 candidate genes were discovered by the method which was different from the 35 genes reported in our previous study. A literature review showed that 21 of these 36 genes are supported by previous experiments. These findings suggest the robustness and complementary effects of both our efforts using different computational methods, thus providing an alternative method to study PDR pathogenesis. Copyright © 2017 Elsevier B.V. All rights reserved.
Wei, Dacheng; Liu, Yunqi; Cao, Lingchao; Fu, Lei; Li, Xianglong; Wang, Yu; Yu, Gui; Zhu, Daoben
2006-02-01
Here we develop a simple method by using flow fluctuation to synthesize arrays of multi-branched carbon nanotubes (CNTs) that are far more complex than those previously reported. The architectures and compositions can be well controlled, thus avoiding any template or additive. A branching mechanism of fluctuation-promoted coalescence of catalyst particles is proposed. This finding will provide a hopeful approach to the goal of CNT-based integrated circuits and be valuable for applying branched junctions in nanoelectronics and producing branched junctions of other materials.
Review of basic medical results of the Salyut-7-Soyuz-T 8-month manned flight
NASA Astrophysics Data System (ADS)
Gazenko, O. G.; Schulzhenko, E. B.; Grigoriev, A. I.; Atkov, O. Yu.; Egorov, A. D.
This paper presents the results of medical investigations performed in the Salyut-7 8-month mission in which a professional physician took part. The paper contains anthropometric measurements, results of investigating the vestibular function, cardiovascular function at rest and in response to multi-step tests (with emphasis on echocardiographic measurements), metabolic parameters and hormonal status. It also discusses medical aspects of the extravehicular activity. The medical investigations, although some new methods were applied, provided the continuity of methodical approaches and data accumulated in previous missions.
Inference of Time-Evolving Coupled Dynamical Systems in the Presence of Noise
NASA Astrophysics Data System (ADS)
Stankovski, Tomislav; Duggento, Andrea; McClintock, Peter V. E.; Stefanovska, Aneta
2012-07-01
A new method is introduced for analysis of interactions between time-dependent coupled oscillators, based on the signals they generate. It distinguishes unsynchronized dynamics from noise-induced phase slips and enables the evolution of the coupling functions and other parameters to be followed. It is based on phase dynamics, with Bayesian inference of the time-evolving parameters achieved by shaping the prior densities to incorporate knowledge of previous samples. The method is tested numerically and applied to reveal and quantify the time-varying nature of cardiorespiratory interactions.
Sakata, Shinichiro; Hallett, Kerrod B; Brandon, Matthew S; McBride, Craig A
2009-11-01
Endotracheal tube stabilization in patients with facial burns is crucial and often challenging. We present a simple method of securing an endotracheal tube using two orthodontic brackets bonded to the maxillary central incisor teeth and a 0.08'' stainless steel ligature wire. Our technique is less traumatic, and is easier to maintain oral hygiene than with previously described methods. This anchorage system takes 5 min to apply and can be removed on the ward without the need for a general anaesthetic.
NASA Technical Reports Server (NTRS)
Choe, C. Y.; Tapley, B. D.
1975-01-01
A method proposed by Potter of applying the Kalman-Bucy filter to the problem of estimating the state of a dynamic system is described, in which the square root of the state error covariance matrix is used to process the observations. A new technique which propagates the covariance square root matrix in lower triangular form is given for the discrete observation case. The technique is faster than previously proposed algorithms and is well-adapted for use with the Carlson square root measurement algorithm.
Comprehensive European dietary exposure model (CEDEM) for food additives.
Tennant, David R
2016-05-01
European methods for assessing dietary exposures to nutrients, additives and other substances in food are limited by the availability of detailed food consumption data for all member states. A proposed comprehensive European dietary exposure model (CEDEM) applies summary data published by the European Food Safety Authority (EFSA) in a deterministic model based on an algorithm from the EFSA intake method for food additives. The proposed approach can predict estimates of food additive exposure provided in previous EFSA scientific opinions that were based on the full European food consumption database.
Mo Zhou; Joseph Buongiorno
2011-01-01
Most economic studies of forest decision making under risk assume a fixed interest rate. This paper investigated some implications of this stochastic nature of interest rates. Markov decision process (MDP) models, used previously to integrate stochastic stand growth and prices, can be extended to include variable interest rates as well. This method was applied to...
Biodiversity and hypervirulence of Listeria monocytogenes.
Grad, Yonatan H; Fortune, Sarah M
2016-03-01
The integration of large, well-sampled collections of bacterial isolates with genomics and experimental methods provides opportunities for 'top-down' discovery of the genetic basis of phenotypes of interest. In a new report, the authors apply this approach to investigate the heterogeneity in manifestations of disease caused by Listeria monocytogenes and demonstrate that a previously uncharacterized cellobiose PTS system is involved in central nervous system infection.
USDA-ARS?s Scientific Manuscript database
Field studies were conducted in 2006 and 2007 to evaluate the tolerance of autumn-planted cabbage and turnip green to halosulfuron applied the previous spring to cantaloupe. Main plots were three levels of soil pH; maintained at a natural pH level, pH raised with Ca(OH)2, and pH lowered with Al2(SO...
A minimal multiconfigurational technique.
Fernández Rico, J; Paniagua, M; GarcíA De La Vega, J M; Fernández-Alonso, J I; Fantucci, P
1986-04-01
A direct minimization method previously presented by the authors is applied here to biconfigurational wave functions. A very moderate increasing in the time by iteration with respect to the one-determinant calculation and good convergence properties have been found. So qualitatively correct studies on singlet systems with strong biradical character can be performed with a cost similar to that required by Hartree-Fock calculations. Copyright © 1986 John Wiley & Sons, Inc.
NASA Astrophysics Data System (ADS)
Haaf, Ezra; Barthel, Roland
2018-04-01
Classification and similarity based methods, which have recently received major attention in the field of surface water hydrology, namely through the PUB (prediction in ungauged basins) initiative, have not yet been applied to groundwater systems. However, it can be hypothesised, that the principle of "similar systems responding similarly to similar forcing" applies in subsurface hydrology as well. One fundamental prerequisite to test this hypothesis and eventually to apply the principle to make "predictions for ungauged groundwater systems" is efficient methods to quantify the similarity of groundwater system responses, i.e. groundwater hydrographs. In this study, a large, spatially extensive, as well as geologically and geomorphologically diverse dataset from Southern Germany and Western Austria was used, to test and compare a set of 32 grouping methods, which have previously only been used individually in local-scale studies. The resulting groupings are compared to a heuristic visual classification, which serves as a baseline. A performance ranking of these classification methods is carried out and differences in homogeneity of grouping results were shown, whereby selected groups were related to hydrogeological indices and geological descriptors. This exploratory empirical study shows that the choice of grouping method has a large impact on the object distribution within groups, as well as on the homogeneity of patterns captured in groups. The study provides a comprehensive overview of a large number of grouping methods, which can guide researchers when attempting similarity-based groundwater hydrograph classification.
An Iterative Approach for the Optimization of Pavement Maintenance Management at the Network Level
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. Holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach. PMID:24741352
An iterative approach for the optimization of pavement maintenance management at the network level.
Torres-Machí, Cristina; Chamorro, Alondra; Videla, Carlos; Pellicer, Eugenio; Yepes, Víctor
2014-01-01
Pavement maintenance is one of the major issues of public agencies. Insufficient investment or inefficient maintenance strategies lead to high economic expenses in the long term. Under budgetary restrictions, the optimal allocation of resources becomes a crucial aspect. Two traditional approaches (sequential and holistic) and four classes of optimization methods (selection based on ranking, mathematical optimization, near optimization, and other methods) have been applied to solve this problem. They vary in the number of alternatives considered and how the selection process is performed. Therefore, a previous understanding of the problem is mandatory to identify the most suitable approach and method for a particular network. This study aims to assist highway agencies, researchers, and practitioners on when and how to apply available methods based on a comparative analysis of the current state of the practice. Holistic approach tackles the problem considering the overall network condition, while the sequential approach is easier to implement and understand, but may lead to solutions far from optimal. Scenarios defining the suitability of these approaches are defined. Finally, an iterative approach gathering the advantages of traditional approaches is proposed and applied in a case study. The proposed approach considers the overall network condition in a simpler and more intuitive manner than the holistic approach.
Occupancy mapping and surface reconstruction using local Gaussian processes with Kinect sensors.
Kim, Soohwan; Kim, Jonghyuk
2013-10-01
Although RGB-D sensors have been successfully applied to visual SLAM and surface reconstruction, most of the applications aim at visualization. In this paper, we propose a noble method of building continuous occupancy maps and reconstructing surfaces in a single framework for both navigation and visualization. Particularly, we apply a Bayesian nonparametric approach, Gaussian process classification, to occupancy mapping. However, it suffers from high-computational complexity of O(n(3))+O(n(2)m), where n and m are the numbers of training and test data, respectively, limiting its use for large-scale mapping with huge training data, which is common with high-resolution RGB-D sensors. Therefore, we partition both training and test data with a coarse-to-fine clustering method and apply Gaussian processes to each local clusters. In addition, we consider Gaussian processes as implicit functions, and thus extract iso-surfaces from the scalar fields, continuous occupancy maps, using marching cubes. By doing that, we are able to build two types of map representations within a single framework of Gaussian processes. Experimental results with 2-D simulated data show that the accuracy of our approximated method is comparable to previous work, while the computational time is dramatically reduced. We also demonstrate our method with 3-D real data to show its feasibility in large-scale environments.
NASA Astrophysics Data System (ADS)
Wiebe, D. M.; Cox, D. T.; Chen, Y.; Weber, B. A.; Chen, Y.
2012-12-01
Building damage from a hypothetical Cascadia Subduction Zone tsunami was estimated using two methods and applied at the community scale. The first method applies proposed guidelines for a new ASCE 7 standard to calculate the flow depth, flow velocity, and momentum flux from a known runup limit and estimate of the total tsunami energy at the shoreline. This procedure is based on a potential energy budget, uses the energy grade line, and accounts for frictional losses. The second method utilized numerical model results from previous studies to determine maximum flow depth, velocity, and momentum flux throughout the inundation zone. The towns of Seaside and Canon Beach, Oregon, were selected for analysis due to the availability of existing data from previously published works. Fragility curves, based on the hydrodynamic features of the tsunami flow (inundation depth, flow velocity, and momentum flux) and proposed design standards from ASCE 7 were used to estimate the probability of damage to structures located within the inundations zone. The analysis proceeded at the parcel level, using tax-lot data to identify construction type (wood, steel, and reinforced-concrete) and age, which was used as a performance measure when applying the fragility curves and design standards. The overall probability of damage to civil buildings was integrated for comparison between the two methods, and also analyzed spatially for damage patterns, which could be controlled by local bathymetric features. The two methods were compared to assess the sensitivity of the results to the uncertainty in the input hydrodynamic conditions and fragility curves, and the potential advantages of each method discussed. On-going work includes coupling the results of building damage and vulnerability to an economic input output model. This model assesses trade between business sectors located inside and outside the induction zone, and is used to measure the impact to the regional economy. Results highlight critical businesses sectors and infrastructure critical to the economic recovery effort, which could be retrofitted or relocated to survive the event. The results of this study improve community understanding of the tsunami hazard for civil buildings.
Exact simulation of integrate-and-fire models with exponential currents.
Brette, Romain
2007-10-01
Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next spike. It applies to neuron models for which we have (1) an explicit expression for the evolution of the state variables between spikes and (2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In a previous work, we proposed a method that allows exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. In this note, we propose a method, based on polynomial root finding, that applies to integrate-and-fire models with exponential currents, with possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents.
Signal processing methods for in-situ creep specimen monitoring
NASA Astrophysics Data System (ADS)
Guers, Manton J.; Tittmann, Bernhard R.
2018-04-01
Previous work investigated using guided waves for monitoring creep deformation during accelerated life testing. The basic objective was to relate observed changes in the time-of-flight to changes in the environmental temperature and specimen gage length. The work presented in this paper investigated several signal processing strategies for possible application in the in-situ monitoring system. Signal processing methods for both group velocity (wave-packet envelope) and phase velocity (peak tracking) time-of-flight were considered. Although the Analytic Envelope found via the Hilbert transform is commonly applied for group velocity measurements, erratic behavior in the indicated time-of-flight was observed when this technique was applied to the in-situ data. The peak tracking strategies tested had generally linear trends, and tracking local minima in the raw waveform ultimately showed the most consistent results.
A Maneuvering Flight Noise Model for Helicopter Mission Planning
NASA Technical Reports Server (NTRS)
Greenwood, Eric; Rau, Robert; May, Benjamin; Hobbs, Christopher
2015-01-01
A new model for estimating the noise radiation during maneuvering flight is developed in this paper. The model applies the Quasi-Static Acoustic Mapping (Q-SAM) method to a database of acoustic spheres generated using the Fundamental Rotorcraft Acoustics Modeling from Experiments (FRAME) technique. A method is developed to generate a realistic flight trajectory from a limited set of waypoints and is used to calculate the quasi-static operating condition and corresponding acoustic sphere for the vehicle throughout the maneuver. By using a previously computed database of acoustic spheres, the acoustic impact of proposed helicopter operations can be rapidly predicted for use in mission-planning. The resulting FRAME-QS model is applied to near-horizon noise measurements collected for the Bell 430 helicopter undergoing transient pitch up and roll maneuvers, with good agreement between the measured data and the FRAME-QS model.
Automatic evaluation of skin histopathological images for melanocytic features
NASA Astrophysics Data System (ADS)
Koosha, Mohaddeseh; Hoseini Alinodehi, S. Pourya; Nicolescu, Mircea; Safaei Naraghi, Zahra
2017-03-01
Successfully detecting melanocyte cells in the skin epidermis has great significance in skin histopathology. Because of the existence of cells with similar appearance to melanocytes in hematoxylin and eosin (HE) images of the epidermis, detecting melanocytes becomes a challenging task. This paper proposes a novel technique for the detection of melanocytes in HE images of the epidermis, based on the melanocyte color features, in the HSI color domain. Initially, an effective soft morphological filter is applied to the HE images in the HSI color domain to remove noise. Then a novel threshold-based technique is applied to distinguish the candidate melanocytes' nuclei. Similarly, the method is applied to find the candidate surrounding halos of the melanocytes. The candidate nuclei are associated with their surrounding halos using the suggested logical and statistical inferences. Finally, a fuzzy inference system is proposed, based on the HSI color information of a typical melanocyte in the epidermis, to calculate the similarity ratio of each candidate cell to a melanocyte. As our review on the literature shows, this is the first method evaluating epidermis cells for melanocyte similarity ratio. Experimental results on various images with different zooming factors show that the proposed method improves the results of previous works.
Yu, Zhen; Chung, Woon-Gye; Sloat, Brian R.; Löhr, Christiane V.; Weiss, Richard; Rodriguez, B. Leticia; Li, Xinran; Cui, Zhengrong
2011-01-01
Objectives Non-invasive immunization by applying plasmid DNA topically onto the skin is an attractive immunization approach. However, the immune responses induced are generally weak. Previously, we showed that the antibody responses induced by topical DNA vaccine were significantly enhanced when hair follicles in the application area were induced into anagen (growth) stage by hair plucking. In the present study, we further investigated the mechanism of immune enhancement. Methods Three different methods, hair plucking or treatment with retinoic acid (RA) or O- tetradecanoylphorbol-13-acetate (TPA), were used to induce hair follicles into anagen stage before mice were dosed with a β-galactosidase-encoding plasmid, and the specific antibody responses induced were evaluated. Key findings The hair plucking method was more effective at enhancing the resultant antibody responses. Treatment with RA or TPA caused more damages to the skin and induced more severe local inflammations than hair plucking. However, hair plucking was most effective at enhancing the uptake or retention of the DNA in the application area. Conclusions The uptake of plasmid DNA in the application area correlated with the antibody responses induced by a topically applied DNA. PMID:21235583
Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David
2003-12-31
This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits.
Knight, Jo; North, Bernard V; Sham, Pak C; Curtis, David
2003-01-01
This paper presents a method of performing model-free LOD-score based linkage analysis on quantitative traits. It is implemented in the QMFLINK program. The method is used to perform a genome screen on the Framingham Heart Study data. A number of markers that show some support for linkage in our study coincide substantially with those implicated in other linkage studies of hypertension. Although the new method needs further testing on additional real and simulated data sets we can already say that it is straightforward to apply and may offer a useful complementary approach to previously available methods for the linkage analysis of quantitative traits. PMID:14975142
Computation of Pressurized Gas Bearings Using CE/SE Method
NASA Technical Reports Server (NTRS)
Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.
2003-01-01
The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. the numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data were obtained, and thus validate the CE/SE method to solve such problems.
A new method of Curie depth evaluation from magnetic data: Theory
NASA Technical Reports Server (NTRS)
Won, I. J. (Principal Investigator)
1981-01-01
An approach to estimating the Curie point isotherm uses the classical Gauss method inverting a system of nonlinear equations. The method, slightly modified by a differential correction technique, directly inverts filtered Magsat data to calculate the crustal structure above the Curie depth, which is modeled as a magnetized layer of varying thickness and susceptibility. Since the depth below the layer is assumed to be nonmagnetic, the bottom of the layer is interpreted as the Curie depth. The method, once fully developed, tested, and compared with previous work by others, is to be applied to a portion of the eastern U.S. when sufficient Magsat data are accumulated for the region.
Large Scale Brownian Dynamics of Confined Suspensions of Rigid Particles
NASA Astrophysics Data System (ADS)
Donev, Aleksandar; Sprinkle, Brennan; Balboa, Florencio; Patankar, Neelesh
2017-11-01
We introduce new numerical methods for simulating the dynamics of passive and active Brownian colloidal suspensions of particles of arbitrary shape sedimented near a bottom wall. The methods also apply for periodic (bulk) suspensions. Our methods scale linearly in the number of particles, and enable previously unprecedented simulations of tens to hundreds of thousands of particles. We demonstrate the accuracy and efficiency of our methods on a suspension of boomerang-shaped colloids. We also model recent experiments on active dynamics of uniform suspensions of spherical microrollers. This work was supported in part by the National Science Foundation under award DMS-1418706, and by the U.S. Department of Energy under award DE-SC0008271.
Zaazaa, Hala E; Elzanfaly, Eman S; Soudi, Aya T; Salem, Maissa Y
2015-05-15
Ratio difference spectrophotometric method was developed for the determination of ibuprofen and famotidine in their mixture form. Ibuprofen and famotidine were determined in the presence of each other by the ratio difference spectrophotometric (RD) method where linearity was obtained from 50 to 600μg/mL and 2.5 to 25μg/mL for ibuprofen and famotidine, respectively. The suggested method was validated according to ICH guidelines and successfully applied for the analysis of ibuprofen and famotidine in their pharmaceutical dosage forms without interference from any additives or excipients. Copyright © 2015 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Abedi, Maysam
2015-06-01
This reply discusses the results of two previously developed approaches in mineral prospectivity/potential mapping (MPM), i.e., ELECTRE III and PROMETHEE II as well-known methods in multi-criteria decision-making (MCDM) problems. Various geo-data sets are integrated to prepare MPM in which generated maps have acceptable matching with the drilled boreholes. Equal performance of the applied methods is indicated in the studied case. Complementary information of these methods is also provided in order to help interested readers to implement them in MPM process.
Multispectral processing without spectra.
Drew, Mark S; Finlayson, Graham D
2003-07-01
It is often the case that multiplications of whole spectra, component by component, must be carried out,for example when light reflects from or is transmitted through materials. This leads to particularly taxing calculations, especially in spectrally based ray tracing or radiosity in graphics, making a full-spectrum method prohibitively expensive. Nevertheless, using full spectra is attractive because of the many important phenomena that can be modeled only by using all the physics at hand. We apply to the task of spectral multiplication a method previously used in modeling RGB-based light propagation. We show that we can often multiply spectra without carrying out spectral multiplication. In previous work [J. Opt. Soc. Am. A 11, 1553 (1994)] we developed a method called spectral sharpening, which took camera RGBs to a special sharp basis that was designed to render illuminant change simple to model. Specifically, in the new basis, one can effectively model illuminant change by using a diagonal matrix rather than the 3 x 3 linear transform that results from a three-component finite-dimensional model [G. Healey and D. Slater, J. Opt. Soc. Am. A 11, 3003 (1994)]. We apply this idea of sharpening to the set of principal components vectors derived from a representative set of spectra that might reasonably be encountered in a given application. With respect to the sharp spectral basis, we show that spectral multiplications can be modeled as the multiplication of the basis coefficients. These new product coefficients applied to the sharp basis serve to accurately reconstruct the spectral product. Although the method is quite general, we show how to use spectral modeling by taking advantage of metameric surfaces, ones that match under one light but not another, for tasks such as volume rendering. The use of metamers allows a user to pick out or merge different volume structures in real time simply by changing the lighting.
Multispectral processing without spectra
NASA Astrophysics Data System (ADS)
Drew, Mark S.; Finlayson, Graham D.
2003-07-01
It is often the case that multiplications of whole spectra, component by component, must be carried out, for example when light reflects from or is transmitted through materials. This leads to particularly taxing calculations, especially in spectrally based ray tracing or radiosity in graphics, making a full-spectrum method prohibitively expensive. Nevertheless, using full spectra is attractive because of the many important phenomena that can be modeled only by using all the physics at hand. We apply to the task of spectral multiplication a method previously used in modeling RGB-based light propagation. We show that we can often multiply spectra without carrying out spectral multiplication. In previous work J. Opt. Soc. Am. A 11 , 1553 (1994) we developed a method called spectral sharpening, which took camera RGBs to a special sharp basis that was designed to render illuminant change simple to model. Specifically, in the new basis, one can effectively model illuminant change by using a diagonal matrix rather than the 33 linear transform that results from a three-component finite-dimensional model G. Healey and D. Slater, J. Opt. Soc. Am. A 11 , 3003 (1994). We apply this idea of sharpening to the set of principal components vectors derived from a representative set of spectra that might reasonably be encountered in a given application. With respect to the sharp spectral basis, we show that spectral multiplications can be modeled as the multiplication of the basis coefficients. These new product coefficients applied to the sharp basis serve to accurately reconstruct the spectral product. Although the method is quite general, we show how to use spectral modeling by taking advantage of metameric surfaces, ones that match under one light but not another, for tasks such as volume rendering. The use of metamers allows a user to pick out or merge different volume structures in real time simply by changing the lighting. 2003 Optical Society of America
Nonlinear calculations of the time evolution of black hole accretion disks
NASA Technical Reports Server (NTRS)
Luo, C.
1994-01-01
Based on previous works on black hole accretion disks, I continue to explore the disk dynamics using the finite difference method to solve the highly nonlinear problem of time-dependent alpha disk equations. Here a radially zoned model is used to develop a computational scheme in order to accommodate functional dependence of the viscosity parameter alpha on the disk scale height and/or surface density. This work is based on the author's previous work on the steady disk structure and the linear analysis of disk dynamics to try to apply to x-ray emissions from black candidates (i.e., multiple-state spectra, instabilities, QPO's, etc.).
Torsional anharmonicity in the conformational thermodynamics of flexible molecules
NASA Astrophysics Data System (ADS)
Miller, Thomas F., III; Clary, David C.
We present an algorithm for calculating the conformational thermodynamics of large, flexible molecules that combines ab initio electronic structure theory calculations with a torsional path integral Monte Carlo (TPIMC) simulation. The new algorithm overcomes the previous limitations of the TPIMC method by including the thermodynamic contributions of non-torsional vibrational modes and by affordably incorporating the ab initio calculation of conformer electronic energies, and it improves the conventional ab initio treatment of conformational thermodynamics by accounting for the anharmonicity of the torsional modes. Using previously published ab initio results and new TPIMC calculations, we apply the algorithm to the conformers of the adrenaline molecule.
Automated cloud screening of AVHRR imagery using split-and-merge clustering
NASA Technical Reports Server (NTRS)
Gallaudet, Timothy C.; Simpson, James J.
1991-01-01
Previous methods to segment clouds from ocean in AVHRR imagery have shown varying degrees of success, with nighttime approaches being the most limited. An improved method of automatic image segmentation, the principal component transformation split-and-merge clustering (PCTSMC) algorithm, is presented and applied to cloud screening of both nighttime and daytime AVHRR data. The method combines spectral differencing, the principal component transformation, and split-and-merge clustering to sample objectively the natural classes in the data. This segmentation method is then augmented by supervised classification techniques to screen clouds from the imagery. Comparisons with other nighttime methods demonstrate its improved capability in this application. The sensitivity of the method to clustering parameters is presented; the results show that the method is insensitive to the split-and-merge thresholds.
Identifying the binding mode of a molecular scaffold
NASA Astrophysics Data System (ADS)
Chema, Doron; Eren, Doron; Yayon, Avner; Goldblum, Amiram; Zaliani, Andrea
2004-01-01
We describe a method for docking of a scaffold-based series and present its advantages over docking of individual ligands, for determining the binding mode of a molecular scaffold in a binding site. The method has been applied to eight different scaffolds of protein kinase inhibitors (PKI). A single analog of each of these eight scaffolds was previously crystallized with different protein kinases. We have used FlexX to dock a set of molecules that share the same scaffold, rather than docking a single molecule. The main mode of binding is determined by the mode of binding of the largest cluster among the docked molecules that share a scaffold. Clustering is based on our `nearest single neighbor' method [J. Chem. Inf. Comput. Sci., 43 (2003) 208-217]. Additional criteria are applied in those cases in which more than one significant binding mode is found. Using the proposed method, most of the crystallographic binding modes of these scaffolds were reconstructed. Alternative modes, that have not been detected yet by experiments, could also be identified. The method was applied to predict the binding mode of an additional molecular scaffold that was not yet reported and the predicted binding mode has been found to be very similar to experimental results for a closely related scaffold. We suggest that this approach be used as a virtual screening tool for scaffold-based design processes.
Xia, Qiangwei; Wang, Tiansong; Park, Yoonsuk; Lamont, Richard J.; Hackett, Murray
2009-01-01
Differential analysis of whole cell proteomes by mass spectrometry has largely been applied using various forms of stable isotope labeling. While metabolic stable isotope labeling has been the method of choice, it is often not possible to apply such an approach. Four different label free ways of calculating expression ratios in a classic “two-state” experiment are compared: signal intensity at the peptide level, signal intensity at the protein level, spectral counting at the peptide level, and spectral counting at the protein level. The quantitative data were mined from a dataset of 1245 qualitatively identified proteins, about 56% of the protein encoding open reading frames from Porphyromonas gingivalis, a Gram-negative intracellular pathogen being studied under extracellular and intracellular conditions. Two different control populations were compared against P. gingivalis internalized within a model human target cell line. The q-value statistic, a measure of false discovery rate previously applied to transcription microarrays, was applied to proteomics data. For spectral counting, the most logically consistent estimate of random error came from applying the locally weighted scatter plot smoothing procedure (LOWESS) to the most extreme ratios generated from a control technical replicate, thus setting upper and lower bounds for the region of experimentally observed random error. PMID:19337574
NASA Astrophysics Data System (ADS)
Vidya Sagar, R.; Raghu Prasad, B. K.
2012-03-01
This article presents a review of recent developments in parametric based acoustic emission (AE) techniques applied to concrete structures. It recapitulates the significant milestones achieved by previous researchers including various methods and models developed in AE testing of concrete structures. The aim is to provide an overview of the specific features of parametric based AE techniques of concrete structures carried out over the years. Emphasis is given to traditional parameter-based AE techniques applied to concrete structures. A significant amount of research on AE techniques applied to concrete structures has already been published and considerable attention has been given to those publications. Some recent studies such as AE energy analysis and b-value analysis used to assess damage of concrete bridge beams have also been discussed. The formation of fracture process zone and the AE energy released during the fracture process in concrete beam specimens have been summarised. A large body of experimental data on AE characteristics of concrete has accumulated over the last three decades. This review of parametric based AE techniques applied to concrete structures may be helpful to the concerned researchers and engineers to better understand the failure mechanism of concrete and evolve more useful methods and approaches for diagnostic inspection of structural elements and failure prediction/prevention of concrete structures.
Inductive matrix completion for predicting gene-disease associations.
Natarajan, Nagarajan; Dhillon, Inderjit S
2014-06-15
Most existing methods for predicting causal disease genes rely on specific type of evidence, and are therefore limited in terms of applicability. More often than not, the type of evidence available for diseases varies-for example, we may know linked genes, keywords associated with the disease obtained by mining text, or co-occurrence of disease symptoms in patients. Similarly, the type of evidence available for genes varies-for example, specific microarray probes convey information only for certain sets of genes. In this article, we apply a novel matrix-completion method called Inductive Matrix Completion to the problem of predicting gene-disease associations; it combines multiple types of evidence (features) for diseases and genes to learn latent factors that explain the observed gene-disease associations. We construct features from different biological sources such as microarray expression data and disease-related textual data. A crucial advantage of the method is that it is inductive; it can be applied to diseases not seen at training time, unlike traditional matrix-completion approaches and network-based inference methods that are transductive. Comparison with state-of-the-art methods on diseases from the Online Mendelian Inheritance in Man (OMIM) database shows that the proposed approach is substantially better-it has close to one-in-four chance of recovering a true association in the top 100 predictions, compared to the recently proposed Catapult method (second best) that has <15% chance. We demonstrate that the inductive method is particularly effective for a query disease with no previously known gene associations, and for predicting novel genes, i.e. genes that are previously not linked to diseases. Thus the method is capable of predicting novel genes even for well-characterized diseases. We also validate the novelty of predictions by evaluating the method on recently reported OMIM associations and on associations recently reported in the literature. Source code and datasets can be downloaded from http://bigdata.ices.utexas.edu/project/gene-disease. © The Author 2014. Published by Oxford University Press.
Braschel, Melissa C; Svec, Ivana; Darlington, Gerarda A; Donner, Allan
2016-04-01
Many investigators rely on previously published point estimates of the intraclass correlation coefficient rather than on their associated confidence intervals to determine the required size of a newly planned cluster randomized trial. Although confidence interval methods for the intraclass correlation coefficient that can be applied to community-based trials have been developed for a continuous outcome variable, fewer methods exist for a binary outcome variable. The aim of this study is to evaluate confidence interval methods for the intraclass correlation coefficient applied to binary outcomes in community intervention trials enrolling a small number of large clusters. Existing methods for confidence interval construction are examined and compared to a new ad hoc approach based on dividing clusters into a large number of smaller sub-clusters and subsequently applying existing methods to the resulting data. Monte Carlo simulation is used to assess the width and coverage of confidence intervals for the intraclass correlation coefficient based on Smith's large sample approximation of the standard error of the one-way analysis of variance estimator, an inverted modified Wald test for the Fleiss-Cuzick estimator, and intervals constructed using a bootstrap-t applied to a variance-stabilizing transformation of the intraclass correlation coefficient estimate. In addition, a new approach is applied in which clusters are randomly divided into a large number of smaller sub-clusters with the same methods applied to these data (with the exception of the bootstrap-t interval, which assumes large cluster sizes). These methods are also applied to a cluster randomized trial on adolescent tobacco use for illustration. When applied to a binary outcome variable in a small number of large clusters, existing confidence interval methods for the intraclass correlation coefficient provide poor coverage. However, confidence intervals constructed using the new approach combined with Smith's method provide nominal or close to nominal coverage when the intraclass correlation coefficient is small (<0.05), as is the case in most community intervention trials. This study concludes that when a binary outcome variable is measured in a small number of large clusters, confidence intervals for the intraclass correlation coefficient may be constructed by dividing existing clusters into sub-clusters (e.g. groups of 5) and using Smith's method. The resulting confidence intervals provide nominal or close to nominal coverage across a wide range of parameters when the intraclass correlation coefficient is small (<0.05). Application of this method should provide investigators with a better understanding of the uncertainty associated with a point estimator of the intraclass correlation coefficient used for determining the sample size needed for a newly designed community-based trial. © The Author(s) 2015.
NASA Astrophysics Data System (ADS)
Saracco, Ginette; Moreau, Frédérique; Mathé, Pierre-Etienne; Hermitte, Daniel; Michel, Jean-Marie
2007-10-01
We have previously developed a method for characterizing and localizing `homogeneous' buried sources, from the measure of potential anomalies at a fixed height above ground (magnetic, electric and gravity). This method is based on potential theory and uses the properties of the Poisson kernel (real by definition) and the continuous wavelet theory. Here, we relax the assumption on sources and introduce a method that we call the `multiscale tomography'. Our approach is based on the harmonic extension of the observed magnetic field to produce a complex source by use of a complex Poisson kernel solution of the Laplace equation for complex potential field. A phase and modulus are defined. We show that the phase provides additional information on the total magnetic inclination and the structure of sources, while the modulus allows us to characterize its spatial location, depth and `effective degree'. This method is compared to the `complex dipolar tomography', extension of the Patella method that we previously developed. We applied both methods and a classical electrical resistivity tomography to detect and localize buried archaeological structures like antique ovens from magnetic measurements on the Fox-Amphoux site (France). The estimates are then compared with the results of excavations.
Detection and Classification of Pole-Like Objects from Mobile Mapping Data
NASA Astrophysics Data System (ADS)
Fukano, K.; Masuda, H.
2015-08-01
Laser scanners on a vehicle-based mobile mapping system can capture 3D point-clouds of roads and roadside objects. Since roadside objects have to be maintained periodically, their 3D models are useful for planning maintenance tasks. In our previous work, we proposed a method for detecting cylindrical poles and planar plates in a point-cloud. However, it is often required to further classify pole-like objects into utility poles, streetlights, traffic signals and signs, which are managed by different organizations. In addition, our previous method may fail to extract low pole-like objects, which are often observed in urban residential areas. In this paper, we propose new methods for extracting and classifying pole-like objects. In our method, we robustly extract a wide variety of poles by converting point-clouds into wireframe models and calculating cross-sections between wireframe models and horizontal cutting planes. For classifying pole-like objects, we subdivide a pole-like object into five subsets by extracting poles and planes, and calculate feature values of each subset. Then we apply a supervised machine learning method using feature variables of subsets. In our experiments, our method could achieve excellent results for detection and classification of pole-like objects.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martínez-García, Eric E.; González-Lópezlira, Rosa A.; Bruzual A, Gustavo
2017-01-20
Stellar masses of galaxies are frequently obtained by fitting stellar population synthesis models to galaxy photometry or spectra. The state of the art method resolves spatial structures within a galaxy to assess the total stellar mass content. In comparison to unresolved studies, resolved methods yield, on average, higher fractions of stellar mass for galaxies. In this work we improve the current method in order to mitigate a bias related to the resolved spatial distribution derived for the mass. The bias consists in an apparent filamentary mass distribution and a spatial coincidence between mass structures and dust lanes near spiral arms.more » The improved method is based on iterative Bayesian marginalization, through a new algorithm we have named Bayesian Successive Priors (BSP). We have applied BSP to M51 and to a pilot sample of 90 spiral galaxies from the Ohio State University Bright Spiral Galaxy Survey. By quantitatively comparing both methods, we find that the average fraction of stellar mass missed by unresolved studies is only half what previously thought. In contrast with the previous method, the output BSP mass maps bear a better resemblance to near-infrared images.« less
Guo, How-Ran
2011-10-20
Despite its limitations, ecological study design is widely applied in epidemiology. In most cases, adjustment for age is necessary, but different methods may lead to different conclusions. To compare three methods of age adjustment, a study on the associations between arsenic in drinking water and incidence of bladder cancer in 243 townships in Taiwan was used as an example. A total of 3068 cases of bladder cancer, including 2276 men and 792 women, were identified during a ten-year study period in the study townships. Three methods were applied to analyze the same data set on the ten-year study period. The first (Direct Method) applied direct standardization to obtain standardized incidence rate and then used it as the dependent variable in the regression analysis. The second (Indirect Method) applied indirect standardization to obtain standardized incidence ratio and then used it as the dependent variable in the regression analysis instead. The third (Variable Method) used proportions of residents in different age groups as a part of the independent variables in the multiple regression models. All three methods showed a statistically significant positive association between arsenic exposure above 0.64 mg/L and incidence of bladder cancer in men and women, but different results were observed for the other exposure categories. In addition, the risk estimates obtained by different methods for the same exposure category were all different. Using an empirical example, the current study confirmed the argument made by other researchers previously that whereas the three different methods of age adjustment may lead to different conclusions, only the third approach can obtain unbiased estimates of the risks. The third method can also generate estimates of the risk associated with each age group, but the other two are unable to evaluate the effects of age directly.
Application of the Weibull extrapolation to 137Cs geochronology in Tokyo Bay and Ise Bay, Japan.
Lu, Xueqiang
2004-01-01
Considerable doubt surrounds the nature of processes by which 137Cs is deposited in marine sediments, leading to a situation where 137Cs geochronology cannot be always applied suitably. Based on extrapolation with Weibull distribution, the maximum concentration of 137Cs derived from asymptotic values for cumulative specific inventory was used to re-establish 137Cs geochronology, instead of original 137Cs profiles. Corresponding dating results for cores in Tokyo Bay and Ise Bay, Japan, by means of this new method, are in much closer agreement with those calculated from 210Pb method than the previous method.
Aeroelastic Analysis of Aircraft: Wing and Wing/Fuselage Configurations
NASA Technical Reports Server (NTRS)
Chen, H. H.; Chang, K. C.; Tzong, T.; Cebeci, T.
1997-01-01
A previously developed interface method for coupling aerodynamics and structures is used to evaluate the aeroelastic effects for an advanced transport wing at cruise and under-cruise conditions. The calculated results are compared with wind tunnel test data. The capability of the interface method is also investigated for an MD-90 wing/fuselage configuration. In addition, an aircraft trim analysis is described and applied to wing configurations. The accuracy of turbulence models based on the algebraic eddy viscosity formulation of Cebeci and Smith is studied for airfoil flows at low Mach numbers by using methods based on the solutions of the boundary-layer and Navier-Stokes equations.
Neural Network Training by Integration of Adjoint Systems of Equations Forward in Time
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)
1999-01-01
A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically. it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved. but the handling of real-time applications becomes also possible. The invention has been applied it to two examples of representative complexity which have recently been analyzed in the open literature and demonstrated that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. Tbc trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.
Neural network training by integration of adjoint systems of equations forward in time
NASA Technical Reports Server (NTRS)
Toomarian, Nikzad (Inventor); Barhen, Jacob (Inventor)
1992-01-01
A method and apparatus for supervised neural learning of time dependent trajectories exploits the concepts of adjoint operators to enable computation of the gradient of an objective functional with respect to the various parameters of the network architecture in a highly efficient manner. Specifically, it combines the advantage of dramatic reductions in computational complexity inherent in adjoint methods with the ability to solve two adjoint systems of equations together forward in time. Not only is a large amount of computation and storage saved, but the handling of real-time applications becomes also possible. The invention has been applied it to two examples of representative complexity which have recently been analyzed in the open literature and demonstrated that a circular trajectory can be learned in approximately 200 iterations compared to the 12000 reported in the literature. A figure eight trajectory was achieved in under 500 iterations compared to 20000 previously required. The trajectories computed using our new method are much closer to the target trajectories than was reported in previous studies.
Borovikov, V. A.; Kalinin, S. V.; Khavin, Yu.; ...
2015-08-19
We derive the Green's functions for a three-dimensional semi-infinite fully anisotropic piezoelectric material using the plane wave theory method. The solution gives the complete set of electromechanical fields due to an arbitrarily oriented point force and a point electric charge applied to the boundary of the half-space. Moreover, the solution constitutes generalization of Boussinesq's and Cerruti's problems of elastic isotropy for the anisotropic piezoelectric materials. On the example of piezoceramics PZT-6B, the present results are compared with the previously obtained solution for the special case of transversely isotropic piezoelectric solid subjected to the same boundary condition.
NASA Astrophysics Data System (ADS)
Sultanov, R. A.; Guster, D.; Adhukari, S. K.
2011-05-01
A possibility of correct description of non-symmetrical HD+H2 collision at low temperatures (T≤300 K) is considered by applying symmetrical H2-H2 potential energy surface (PES) [Diep, P. & Johnson, K. 2000, J. Chem. Phys. 113, 3480 (DJ PES)]. With the use of a special mathematical transformation technique, which was applied to this surface, and a quantum dynamical method we obtained a quite satisfactory agreement with previous results when another H2-H2 PES was used [Boothroyd, A.I. et al. 2002, J. Chem. Phys. 116, 666 (BMKP PES)].
Brownian systems with spatially inhomogeneous activity
NASA Astrophysics Data System (ADS)
Sharma, A.; Brader, J. M.
2017-09-01
We generalize the Green-Kubo approach, previously applied to bulk systems of spherically symmetric active particles [J. Chem. Phys. 145, 161101 (2016), 10.1063/1.4966153], to include spatially inhomogeneous activity. The method is applied to predict the spatial dependence of the average orientation per particle and the density. The average orientation is given by an integral over the self part of the Van Hove function and a simple Gaussian approximation to this quantity yields an accurate analytical expression. Taking this analytical result as input to a dynamic density functional theory approximates the spatial dependence of the density in good agreement with simulation data. All theoretical predictions are validated using Brownian dynamics simulations.
Electro-pumped whispering gallery mode ZnO microlaser array
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu, G. Y.; State Key Laboratory of Bioelectronics, School of Electronic Science and Engineering, Southeast University, Nanjing 210096; Li, J. T.
2015-01-12
By employing vapor-phase transport method, ZnO microrods are fabricated and directly assembled on p-GaN substrate to form a heterostructural microlaser array, which avoids of the relatively complicated etching process comparing previous work. Under applied forward bias, whispering gallery mode ZnO ultraviolet lasing is obtained from the as-fabricated heterostructural microlaser array. The device's electroluminescence originates from three distinct electron-hole recombination processes in the heterojunction interface, and whispering gallery mode ultraviolet lasing is obtained when the applied voltage is beyond the lasing threshold. This work may present a significant step towards future fabrication of a facile technique for micro/nanolasers.
Xu, Changjin; Li, Peiluan; Pang, Yicheng
2016-12-01
In this letter, we deal with a class of memristor-based neural networks with distributed leakage delays. By applying a new Lyapunov function method, we obtain some sufficient conditions that ensure the existence, uniqueness, and global exponential stability of almost periodic solutions of neural networks. We apply the results of this solution to prove the existence and stability of periodic solutions for this delayed neural network with periodic coefficients. We then provide an example to illustrate the effectiveness of the theoretical results. Our results are completely new and complement the previous studies Chen, Zeng, and Jiang ( 2014 ) and Jiang, Zeng, and Chen ( 2015 ).
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gillen, Kenneth Todd; Minier, Leanna M. G.; Celina, Mathias C.
Chemiluminescence (CL) has been applied as a condition monitoring technique to assess aging related changes in a hydroxyl-terminated-polybutadiene based polyurethane elastomer. Initial thermal aging of this polymer was conducted between 110 and 50 C. Two CL methods were applied to examine the degradative changes that had occurred in these aged samples: isothermal 'wear-out' experiments under oxygen yielding initial CL intensity and 'wear-out' time data, and temperature ramp experiments under inert conditions as a measure of previously accumulated hydroperoxides or other reactive species. The sensitivities of these CL features to prior aging exposure of the polymer were evaluated on the basismore » of qualifying this method as a quick screening technique for quantification of degradation levels. Both the techniques yielded data representing the aging trends in this material via correlation with mechanical property changes. Initial CL rates from the isothermal experiments are the most sensitive and suitable approach for documenting material changes during the early part of thermal aging.« less
NASA Astrophysics Data System (ADS)
Anagnostopoulos, Konstantinos N.; Azuma, Takehiro; Ito, Yuta; Nishimura, Jun; Papadoudis, Stratos Kovalkov
2018-02-01
In recent years the complex Langevin method (CLM) has proven a powerful method in studying statistical systems which suffer from the sign problem. Here we show that it can also be applied to an important problem concerning why we live in four-dimensional spacetime. Our target system is the type IIB matrix model, which is conjectured to be a nonperturbative definition of type IIB superstring theory in ten dimensions. The fermion determinant of the model becomes complex upon Euclideanization, which causes a severe sign problem in its Monte Carlo studies. It is speculated that the phase of the fermion determinant actually induces the spontaneous breaking of the SO(10) rotational symmetry, which has direct consequences on the aforementioned question. In this paper, we apply the CLM to the 6D version of the type IIB matrix model and show clear evidence that the SO(6) symmetry is broken down to SO(3). Our results are consistent with those obtained previously by the Gaussian expansion method.
Evaluation of constant-Weber-number scaling for icing tests
NASA Technical Reports Server (NTRS)
Anderson, David N.
1996-01-01
Previous studies showed that for conditions simulating an aircraft encountering super-cooled water droplets the droplets may splash before freezing. Other surface effects dependent on the water surface tension may also influence the ice accretion process. Consequently, the Weber number appears to be important in accurately scaling ice accretion. A scaling method which uses a constant-Weber-number approach has been described previously; this study provides an evaluation of this scaling method. Tests are reported on cylinders of 2.5 to 15-cm diameter and NACA 0012 airfoils with chords of 18 to 53 cm in the NASA Lewis Icing Research Tunnel (IRT). The larger models were used to establish reference ice shapes, the scaling method was applied to determine appropriate scaled test conditions using the smaller models, and the ice shapes were compared. Icing conditions included warm glaze, horn glaze and mixed. The smallest size scaling attempted was 1/3, and scale and reference ice shapes for both cylinders and airfoils indicated that the constant-Weber-number scaling method was effective for the conditions tested.
Computing Earthquake Probabilities on Global Scales
NASA Astrophysics Data System (ADS)
Holliday, James R.; Graves, William R.; Rundle, John B.; Turcotte, Donald L.
2016-03-01
Large devastating events in systems such as earthquakes, typhoons, market crashes, electricity grid blackouts, floods, droughts, wars and conflicts, and landslides can be unexpected and devastating. Events in many of these systems display frequency-size statistics that are power laws. Previously, we presented a new method for calculating probabilities for large events in systems such as these. This method counts the number of small events since the last large event and then converts this count into a probability by using a Weibull probability law. We applied this method to the calculation of large earthquake probabilities in California-Nevada, USA. In that study, we considered a fixed geographic region and assumed that all earthquakes within that region, large magnitudes as well as small, were perfectly correlated. In the present article, we extend this model to systems in which the events have a finite correlation length. We modify our previous results by employing the correlation function for near mean field systems having long-range interactions, an example of which is earthquakes and elastic interactions. We then construct an application of the method and show examples of computed earthquake probabilities.
Maximizing fluid delivered by bubble-free electroosmotic pump with optimum pulse voltage waveform.
Tawfik, Mena E; Diez, Francisco J
2017-03-01
In generating high electroosmotic (EO) flows for use in microfluidic pumps, a limiting factor is faradaic reactions that are more pronounced at high electric fields. These reactions lead to bubble generation at the electrodes and pump efficiency reduction. The onset of gas generation for high current density EO pumping depends on many parameters including applied voltage, working fluid, and pulse duration. The onset of gas generation can be delayed and optimized for maximum volume pumped in the minimum time possible. This has been achieved through the use of a novel numerical model that predicts the onset of gas generation during EO pumping using an optimized pulse voltage waveform. This method allows applying current densities higher than previously reported. Optimal pulse voltage waveforms are calculated based on the previous theories for different current densities and electrolyte molarity. The electroosmotic pump performance is investigated by experimentally measuring the fluid volume displaced and flow rate. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The 2014 United States National Seismic Hazard Model
Petersen, Mark D.; Moschetti, Morgan P.; Powers, Peter; Mueller, Charles; Haller, Kathleen; Frankel, Arthur; Zeng, Yuehua; Rezaeian, Sanaz; Harmsen, Stephen; Boyd, Oliver; Field, Edward; Chen, Rui; Rukstales, Kenneth S.; Luco, Nicolas; Wheeler, Russell; Williams, Robert; Olsen, Anna H.
2015-01-01
New seismic hazard maps have been developed for the conterminous United States using the latest data, models, and methods available for assessing earthquake hazard. The hazard models incorporate new information on earthquake rupture behavior observed in recent earthquakes; fault studies that use both geologic and geodetic strain rate data; earthquake catalogs through 2012 that include new assessments of locations and magnitudes; earthquake adaptive smoothing models that more fully account for the spatial clustering of earthquakes; and 22 ground motion models, some of which consider more than double the shaking data applied previously. Alternative input models account for larger earthquakes, more complicated ruptures, and more varied ground shaking estimates than assumed in earlier models. The ground motions, for levels applied in building codes, differ from the previous version by less than ±10% over 60% of the country, but can differ by ±50% in localized areas. The models are incorporated in insurance rates, risk assessments, and as input into the U.S. building code provisions for earthquake ground shaking.
Model based estimation of image depth and displacement
NASA Technical Reports Server (NTRS)
Damour, Kevin T.
1992-01-01
Passive depth and displacement map determinations have become an important part of computer vision processing. Applications that make use of this type of information include autonomous navigation, robotic assembly, image sequence compression, structure identification, and 3-D motion estimation. With the reliance of such systems on visual image characteristics, a need to overcome image degradations, such as random image-capture noise, motion, and quantization effects, is clearly necessary. Many depth and displacement estimation algorithms also introduce additional distortions due to the gradient operations performed on the noisy intensity images. These degradations can limit the accuracy and reliability of the displacement or depth information extracted from such sequences. Recognizing the previously stated conditions, a new method to model and estimate a restored depth or displacement field is presented. Once a model has been established, the field can be filtered using currently established multidimensional algorithms. In particular, the reduced order model Kalman filter (ROMKF), which has been shown to be an effective tool in the reduction of image intensity distortions, was applied to the computed displacement fields. Results of the application of this model show significant improvements on the restored field. Previous attempts at restoring the depth or displacement fields assumed homogeneous characteristics which resulted in the smoothing of discontinuities. In these situations, edges were lost. An adaptive model parameter selection method is provided that maintains sharp edge boundaries in the restored field. This has been successfully applied to images representative of robotic scenarios. In order to accommodate image sequences, the standard 2-D ROMKF model is extended into 3-D by the incorporation of a deterministic component based on previously restored fields. The inclusion of past depth and displacement fields allows a means of incorporating the temporal information into the restoration process. A summary on the conditions that indicate which type of filtering should be applied to a field is provided.
Accelerating Smith-Waterman Algorithm for Biological Database Search on CUDA-Compatible GPUs
NASA Astrophysics Data System (ADS)
Munekawa, Yuma; Ino, Fumihiko; Hagihara, Kenichi
This paper presents a fast method capable of accelerating the Smith-Waterman algorithm for biological database search on a cluster of graphics processing units (GPUs). Our method is implemented using compute unified device architecture (CUDA), which is available on the nVIDIA GPU. As compared with previous methods, our method has four major contributions. (1) The method efficiently uses on-chip shared memory to reduce the data amount being transferred between off-chip video memory and processing elements in the GPU. (2) It also reduces the number of data fetches by applying a data reuse technique to query and database sequences. (3) A pipelined method is also implemented to overlap GPU execution with database access. (4) Finally, a master/worker paradigm is employed to accelerate hundreds of database searches on a cluster system. In experiments, the peak performance on a GeForce GTX 280 card reaches 8.32 giga cell updates per second (GCUPS). We also find that our method reduces the amount of data fetches to 1/140, achieving approximately three times higher performance than a previous CUDA-based method. Our 32-node cluster version is approximately 28 times faster than a single GPU version. Furthermore, the effective performance reaches 75.6 giga instructions per second (GIPS) using 32 GeForce 8800 GTX cards.
Carvajal, Roberto C; Arias, Luis E; Garces, Hugo O; Sbarbaro, Daniel G
2016-04-01
This work presents a non-parametric method based on a principal component analysis (PCA) and a parametric one based on artificial neural networks (ANN) to remove continuous baseline features from spectra. The non-parametric method estimates the baseline based on a set of sampled basis vectors obtained from PCA applied over a previously composed continuous spectra learning matrix. The parametric method, however, uses an ANN to filter out the baseline. Previous studies have demonstrated that this method is one of the most effective for baseline removal. The evaluation of both methods was carried out by using a synthetic database designed for benchmarking baseline removal algorithms, containing 100 synthetic composed spectra at different signal-to-baseline ratio (SBR), signal-to-noise ratio (SNR), and baseline slopes. In addition to deomonstrating the utility of the proposed methods and to compare them in a real application, a spectral data set measured from a flame radiation process was used. Several performance metrics such as correlation coefficient, chi-square value, and goodness-of-fit coefficient were calculated to quantify and compare both algorithms. Results demonstrate that the PCA-based method outperforms the one based on ANN both in terms of performance and simplicity. © The Author(s) 2016.
2012-01-01
Background Detecting the borders between coding and non-coding regions is an essential step in the genome annotation. And information entropy measures are useful for describing the signals in genome sequence. However, the accuracies of previous methods of finding borders based on entropy segmentation method still need to be improved. Methods In this study, we first applied a new recursive entropic segmentation method on DNA sequences to get preliminary significant cuts. A 22-symbol alphabet is used to capture the differential composition of nucleotide doublets and stop codon patterns along three phases in both DNA strands. This process requires no prior training datasets. Results Comparing with the previous segmentation methods, the experimental results on three bacteria genomes, Rickettsia prowazekii, Borrelia burgdorferi and E.coli, show that our approach improves the accuracy for finding the borders between coding and non-coding regions in DNA sequences. Conclusions This paper presents a new segmentation method in prokaryotes based on Jensen-Rényi divergence with a 22-symbol alphabet. For three bacteria genomes, comparing to A12_JR method, our method raised the accuracy of finding the borders between protein coding and non-coding regions in DNA sequences. PMID:23282225
Physiological and Molecular Effects of in vivo and ex vivo Mild Skin Barrier Disruption.
Pfannes, Eva K B; Weiss, Lina; Hadam, Sabrina; Gonnet, Jessica; Combardière, Béhazine; Blume-Peytavi, Ulrike; Vogt, Annika
2018-01-01
The success of topically applied treatments on skin relies on the efficacy of skin penetration. In order to increase particle or product penetration, mild skin barrier disruption methods can be used. We previously described cyanoacrylate skin surface stripping as an efficient method to open hair follicles, enhance particle penetration, and activate Langerhans cells. We conducted ex vivo and in vivo measurements on human skin to characterize the biological effect and quantify barrier disruption-related inflammation on a molecular level. Despite the known immunostimulatory effects, this barrier disruption and hair follicle opening method was well accepted and did not result in lasting changes of skin physiological parameters, cytokine production, or clinical side effects. Only in ex vivo human skin did we find a discrete increase in IP-10, TGF-β, IL-8, and GM-CSF mRNA. The data underline the safety profile of this method and demonstrate that the procedure per se does not cause substantial inflammation or skin damage, which is also of interest when applied to non-invasive sampling of biomarkers in clinical trials. © 2018 S. Karger AG, Basel.
Feasibility study for automatic reduction of phase change imagery
NASA Technical Reports Server (NTRS)
Nossaman, G. O.
1971-01-01
The feasibility of automatically reducing a form of pictorial aerodynamic heating data is discussed. The imagery, depicting the melting history of a thin coat of fusible temperature indicator painted on an aerodynamically heated model, was previously reduced by manual methods. Careful examination of various lighting theories and approaches led to an experimentally verified illumination concept capable of yielding high-quality imagery. Both digital and video image processing techniques were applied to reduction of the data, and it was demonstrated that either method can be used to develop superimposed contours. Mathematical techniques were developed to find the model-to-image and the inverse image-to-model transformation using six conjugate points, and methods were developed using these transformations to determine heating rates on the model surface. A video system was designed which is able to reduce the imagery rapidly, economically and accurately. Costs for this system were estimated. A study plan was outlined whereby the mathematical transformation techniques developed to produce model coordinate heating data could be applied to operational software, and methods were discussed and costs estimated for obtaining the digital information necessary for this software.
GREENHOUSE, BRYAN; MYRICK, ALISSA; DOKOMAJILAR, CHRISTIAN; WOO, JONATHAN M.; CARLSON, ELAINE J.; ROSENTHAL, PHILIP J.; DORSEY, GRANT
2006-01-01
Genotyping methods for Plasmodium falciparum drug efficacy trials have not been standardized and may fail to accurately distinguish recrudescence from new infection, especially in high transmission areas where polyclonal infections are common. We developed a simple method for genotyping using previously identified microsatellites and capillary electrophoresis, validated this method using mixtures of laboratory clones, and applied the method to field samples. Two microsatellite markers produced accurate results for single-clone but not polyclonal samples. Four other microsatellite markers were as sensitive as, and more specific than, commonly used genotyping techniques based on merozoite surface proteins 1 and 2. When applied to samples from 15 patients in Burkina Faso with recurrent parasitemia after treatment with sulphadoxine-pyrimethamine, the addition of these four microsatellite markers to msp1 and msp2 genotyping resulted in a reclassification of outcomes that strengthened the association between dhfr 59R, an anti-folate resistance mutation, and recrudescence (P = 0.31 versus P = 0.03). Four microsatellite markers performed well on polyclonal samples and may provide a valuable addition to genotyping for clinical drug efficacy studies in high transmission areas. PMID:17123974
Using Grounded Theory Method to Capture and Analyze Health Care Experiences.
Foley, Geraldine; Timonen, Virpi
2015-08-01
Grounded theory (GT) is an established qualitative research method, but few papers have encapsulated the benefits, limits, and basic tenets of doing GT research on user and provider experiences of health care services. GT can be used to guide the entire study method, or it can be applied at the data analysis stage only. We summarize key components of GT and common GT procedures used by qualitative researchers in health care research. We draw on our experience of conducting a GT study on amyotrophic lateral sclerosis patients' experiences of health care services. We discuss why some approaches in GT research may work better than others, particularly when the focus of study is hard-to-reach population groups. We highlight the flexibility of procedures in GT to build theory about how people engage with health care services. GT enables researchers to capture and understand health care experiences. GT methods are particularly valuable when the topic of interest has not previously been studied. GT can be applied to bring structure and rigor to the analysis of qualitative data. © Health Research and Educational Trust.
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-05
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion.
Han, Lei; Wulie, Buzha; Yang, Yiling; Wang, Hongqing
2015-01-01
This study investigated a novel method of fusing visible (VIS) and infrared (IR) images with the major objective of obtaining higher-resolution IR images. Most existing image fusion methods focus only on visual performance and many fail to consider the thermal physical properties of the IR images, leading to spectral distortion in the fused image. In this study, we use the IR thermal physical property to correct the VIS image directly. Specifically, the Stefan-Boltzmann Law is used as a strong constraint to modulate the VIS image, such that the fused result shows a similar level of regional thermal energy as the original IR image, while preserving the high-resolution structural features from the VIS image. This method is an improvement over our previous study, which required VIS-IR multi-wavelet fusion before the same correction method was applied. The results of experiments show that applying this correction to the VIS image directly without multi-resolution analysis (MRA) processing achieves similar results, but is considerably more computationally efficient, thereby providing a new perspective on VIS and IR image fusion. PMID:25569749
Visual enhancement of unmixed multispectral imagery using adaptive smoothing
Lemeshewsky, G.P.; Rahman, Z.-U.; Schowengerdt, R.A.; Reichenbach, S.E.
2004-01-01
Adaptive smoothing (AS) has been previously proposed as a method to smooth uniform regions of an image, retain contrast edges, and enhance edge boundaries. The method is an implementation of the anisotropic diffusion process which results in a gray scale image. This paper discusses modifications to the AS method for application to multi-band data which results in a color segmented image. The process was used to visually enhance the three most distinct abundance fraction images produced by the Lagrange constraint neural network learning-based unmixing of Landsat 7 Enhanced Thematic Mapper Plus multispectral sensor data. A mutual information-based method was applied to select the three most distinct fraction images for subsequent visualization as a red, green, and blue composite. A reported image restoration technique (partial restoration) was applied to the multispectral data to reduce unmixing error, although evaluation of the performance of this technique was beyond the scope of this paper. The modified smoothing process resulted in a color segmented image with homogeneous regions separated by sharpened, coregistered multiband edges. There was improved class separation with the segmented image, which has importance to subsequent operations involving data classification.
Lamb Waves Decomposition and Mode Identification Using Matching Pursuit Method
2009-01-01
Wigner - Ville distribution ( WVD ). However, WVD suffers from severe interferences, called cross-terms. Cross- terms are the area of a time-frequency...transform (STFT), wavelet transform, Wigner - Ville distribution , matching pursuit decomposition, etc. 1 Report Documentation Page Form ApprovedOMB No...MP decomposition using chirplet dictionary was applied to a simulated S0 mode Lamb wave shown previously in Figure 2a. Wigner - Ville distribution of
Existence of topological multi-string solutions in Abelian gauge field theories
NASA Astrophysics Data System (ADS)
Han, Jongmin; Sohn, Juhee
2017-11-01
In this paper, we consider a general form of self-dual equations arising from Abelian gauge field theories coupled with the Einstein equations. By applying the super/subsolution method, we prove that topological multi-string solutions exist for any coupling constant, which improves previously known results. We provide two examples for application: the self-dual Einstein-Maxwell-Higgs model and the gravitational Maxwell gauged O(3) sigma model.
2014-09-30
correlation detector to investigate the behavior of vocalizing whales and their distribution relative to the vent fields. To determine call...in earthquake studies but has not previously been applied to marine mammals. The close relationship between biomass and acoustic backscatter...F., and Ellsworth, W. L. (2000). A double-difference earthquake location algorithm: Method and application to the northern Hayward Fault, California
Spectroscopic Determination of the AC Voltammetric Response.
1984-01-06
characterization of electrode processes. More recently, with the advent of linear sweep cyclic AC voltanmetry(12’ 13), it has been shown that AC methods...implemented with the same instrumentation ( 7 ) as previously used in MSRS and retains both the qualitative and quantitative utility of linear sweep ...voltammetric response (eg. peak width at balf-height, peak separation and cross-over potential in cyclic AC voltametry ) apply equally well to the SACRS
Hatch, Kenneth D.
2012-01-01
Abstract. With no sufficient screening test for ovarian cancer, a method to evaluate the ovarian disease state quickly and nondestructively is needed. The authors have applied a wide-field spectral imager to freshly resected ovaries of 30 human patients in a study believed to be the first of its magnitude. Endogenous fluorescence was excited with 365-nm light and imaged in eight emission bands collectively covering the 400- to 640-nm range. Linear discriminant analysis was used to classify all image pixels and generate diagnostic maps of the ovaries. Training the classifier with previously collected single-point autofluorescence measurements of a spectroscopic probe enabled this novel classification. The process by which probe-collected spectra were transformed for comparison with imager spectra is described. Sensitivity of 100% and specificity of 51% were obtained in classifying normal and cancerous ovaries using autofluorescence data alone. Specificity increased to 69% when autofluorescence data were divided by green reflectance data to correct for spatial variation in tissue absorption properties. Benign neoplasm ovaries were also found to classify as nonmalignant using the same algorithm. Although applied ex vivo, the method described here appears useful for quick assessment of cancer presence in the human ovary. PMID:22502561
36 CFR 1225.24 - When can an agency apply previously approved schedules to electronic records?
Code of Federal Regulations, 2010 CFR
2010-07-01
... may apply a previously approved schedule for hard copy records to electronic versions of the permanent records when the electronic records system replaces a single series of hard copy permanent records or the... have been previously scheduled as permanent in hard copy form, including special media records as...
Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem
NASA Astrophysics Data System (ADS)
Omagari, Hiroki; Higashino, Shin-Ichiro
2018-04-01
In this paper, we proposed a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP). It can be formulated as a constrained multi-objective optimization problem. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, this method needs to calculate the optimal values of each objective function value in advance. Moreover, it does not consider the constraint conditions except for the objective functions. Therefore, it cannot apply to DDP which has many constraint conditions. To solve these issues, we proposed "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution named "provisional-ideal point" to search for the preferred solution for a decision maker. In this way, we can eliminate the preliminary calculations and its limited application scope. The results of the benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to DDP. As a result, the delivery path when combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with the case of using only one truck.
Federal Register 2010, 2011, 2012, 2013, 2014
2013-10-01
... Administration proposes not to apply, upon the effective date of this rule if implemented, the previously... as it applies to Gold East (Jiangsu) Paper Co. and to apply the withdrawn regulations. The Court disagreed with the Department's determination that the regulations were not applicable to Gold East (Jiangsu...
Anatomically-Aided PET Reconstruction Using the Kernel Method
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-01-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest (ROI) quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization (EM) algorithm. PMID:27541810
Anatomically-aided PET reconstruction using the kernel method.
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T; Catana, Ciprian; Qi, Jinyi
2016-09-21
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
Anatomically-aided PET reconstruction using the kernel method
NASA Astrophysics Data System (ADS)
Hutchcroft, Will; Wang, Guobao; Chen, Kevin T.; Catana, Ciprian; Qi, Jinyi
2016-09-01
This paper extends the kernel method that was proposed previously for dynamic PET reconstruction, to incorporate anatomical side information into the PET reconstruction model. In contrast to existing methods that incorporate anatomical information using a penalized likelihood framework, the proposed method incorporates this information in the simpler maximum likelihood (ML) formulation and is amenable to ordered subsets. The new method also does not require any segmentation of the anatomical image to obtain edge information. We compare the kernel method with the Bowsher method for anatomically-aided PET image reconstruction through a simulated data set. Computer simulations demonstrate that the kernel method offers advantages over the Bowsher method in region of interest quantification. Additionally the kernel method is applied to a 3D patient data set. The kernel method results in reduced noise at a matched contrast level compared with the conventional ML expectation maximization algorithm.
A rapid and rational approach to generating isomorphous heavy-atom phasing derivatives.
Lu, Jinghua; Sun, Peter D
2014-09-01
In attempts to replace the conventional trial-and-error heavy-atom derivative search method with a rational approach, we previously defined heavy metal compound reactivity against peptide ligands. Here, we assembled a composite pH- and buffer-dependent peptide reactivity profile for each heavy metal compound to guide rational heavy-atom derivative search. When knowledge of the best-reacting heavy-atom compound is combined with mass spectrometry assisted derivatization, and with a quick-soak method to optimize phasing, it is likely that the traditional heavy-atom compounds could meet the demand of modern high-throughput X-ray crystallography. As an example, we applied this rational heavy-atom phasing approach to determine a previously unknown mouse serum amyloid A2 crystal structure. Published 2014. This article is a U.S. Government work and is in the public domain in the USA.
Improving the analysis of composite endpoints in rare disease trials.
McMenamin, Martina; Berglind, Anna; Wason, James M S
2018-05-22
Composite endpoints are recommended in rare diseases to increase power and/or to sufficiently capture complexity. Often, they are in the form of responder indices which contain a mixture of continuous and binary components. Analyses of these outcomes typically treat them as binary, thus only using the dichotomisations of continuous components. The augmented binary method offers a more efficient alternative and is therefore especially useful for rare diseases. Previous work has indicated the method may have poorer statistical properties when the sample size is small. Here we investigate small sample properties and implement small sample corrections. We re-sample from a previous trial with sample sizes varying from 30 to 80. We apply the standard binary and augmented binary methods and determine the power, type I error rate, coverage and average confidence interval width for each of the estimators. We implement Firth's adjustment for the binary component models and a small sample variance correction for the generalized estimating equations, applying the small sample adjusted methods to each sub-sample as before for comparison. For the log-odds treatment effect the power of the augmented binary method is 20-55% compared to 12-20% for the standard binary method. Both methods have approximately nominal type I error rates. The difference in response probabilities exhibit similar power but both unadjusted methods demonstrate type I error rates of 6-8%. The small sample corrected methods have approximately nominal type I error rates. On both scales, the reduction in average confidence interval width when using the adjusted augmented binary method is 17-18%. This is equivalent to requiring a 32% smaller sample size to achieve the same statistical power. The augmented binary method with small sample corrections provides a substantial improvement for rare disease trials using composite endpoints. We recommend the use of the method for the primary analysis in relevant rare disease trials. We emphasise that the method should be used alongside other efforts in improving the quality of evidence generated from rare disease trials rather than replace them.
A rationale method for evaluating unscrewing torque values of prosthetic screws in dental implants
SALIBA, Felipe Miguel; CARDOSO, Mayra; TORRES, Marcelo Ferreira; TEIXEIRA, Alexandre Carvalho; LOURENÇO, Eduardo José Veras; TELLES, Daniel de Moraes
2011-01-01
Objectives Previous studies that evaluated the torque needed for removing dental implant screws have not considered the manner of transfer of the occlusal loads in clinical settings. Instead, the torque used for removal was applied directly to the screw, and most of them omitted the possibility that the hexagon could limit the action of the occlusal load in the loosening of the screws. The present study proposes a method for evaluating the screw removal torque in an anti-rotational device independent way, creating an unscrewing load transfer to the entire assembly, not only to the screw. Material and methods Twenty hexagonal abutments without the hexagon in their bases were fixed with a screw to 20 dental implants. They were divided into two groups: Group 1 used titanium screws and Group 2 used titanium screws covered with a solid lubricant. A torque of 32 Ncm was applied to the screw and then a custom-made wrench was used for rotating the abutment counterclockwise, to loosen the screw. A digital torque meter recorded the torque required to loosen the abutment. Results There was a significant difference between the means of Group 1 (38.62±6.43 Ncm) and Group 2 (48.47±5.04 Ncm), with p=0.001. Conclusion This methodology was effective in comparing unscrewing torque values of the implant-abutment junction even with a limited sample size. It confirmed a previously shown significant difference between two types of screws. PMID:21437472
NASA Astrophysics Data System (ADS)
Abedi, Maysam; Gholami, Ali; Norouzi, Gholam-Hossain
2013-03-01
Previous studies have shown that a well-known multi-criteria decision making (MCDM) technique called Preference Ranking Organization METHod for Enrichment Evaluation (PROMETHEE II) to explore porphyry copper deposits can prioritize the ground-based exploratory evidential layers effectively. In this paper, the PROMETHEE II method is applied to airborne geophysical (potassium radiometry and magnetometry) data, geological layers (fault and host rock zones), and various extracted alteration layers from remote sensing images. The central Iranian volcanic-sedimentary belt is chosen for this study. A stable downward continuation method as an inverse problem in the Fourier domain using Tikhonov and edge-preserving regularizations is proposed to enhance magnetic data. Numerical analysis of synthetic models show that the reconstructed magnetic data at the ground surface exhibits significant enhancement compared to the airborne data. The reduced-to-pole (RTP) and the analytic signal filters are applied to the magnetic data to show better maps of the magnetic anomalies. Four remote sensing evidential layers including argillic, phyllic, propylitic and hydroxyl alterations are extracted from Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) images in order to map the altered areas associated with porphyry copper deposits. Principal component analysis (PCA) based on six Enhanced Thematic Mapper Plus (ETM+) images is implemented to map iron oxide layer. The final mineral prospectivity map based on desired geo-data set indicates adequately matching of high potential zones with previous working mines and copper deposits.
Fixed-Base Comb with Window-Non-Adjacent Form (NAF) Method for Scalar Multiplication
Seo, Hwajeong; Kim, Hyunjin; Park, Taehwan; Lee, Yeoncheol; Liu, Zhe; Kim, Howon
2013-01-01
Elliptic curve cryptography (ECC) is one of the most promising public-key techniques in terms of short key size and various crypto protocols. For this reason, many studies on the implementation of ECC on resource-constrained devices within a practical execution time have been conducted. To this end, we must focus on scalar multiplication, which is the most expensive operation in ECC. A number of studies have proposed pre-computation and advanced scalar multiplication using a non-adjacent form (NAF) representation, and more sophisticated approaches have employed a width-w NAF representation and a modified pre-computation table. In this paper, we propose a new pre-computation method in which zero occurrences are much more frequent than in previous methods. This method can be applied to ordinary group scalar multiplication, but it requires large pre-computation table, so we combined the previous method with ours for practical purposes. This novel structure establishes a new feature that adjusts speed performance and table size finely, so we can customize the pre-computation table for our own purposes. Finally, we can establish a customized look-up table for embedded microprocessors. PMID:23881143
Molecular methods for diagnosis of odontogenic infections.
Flynn, Thomas R; Paster, Bruce J; Stokes, Lauren N; Susarla, Srinivas M; Shanti, Rabie M
2012-08-01
Historically, the identification of microorganisms has been limited to species that could be cultured in the microbiology laboratory. The purpose of the present study was to apply molecular techniques to identify microorganisms in orofacial odontogenic infections (OIs). Specimens were obtained from subjects with clinical evidence of OI. To identify the microorganisms involved, 16S rRNA sequencing methods were used on clinical specimens. The name and number of the clones of each species identified and the combinations of species present were recorded for each subject. Descriptive statistics were computed for the study variables. Specimens of pus or wound fluid were obtained from 9 subjects. A mean of 7.4 ± 3.7 (standard deviation) species per case were identified. The predominant species detected in the present study that have previously been associated with OIs were Fusobacterium spp, Parvimonas micra, Porphyromonas endodontalis, and Prevotella oris. The predominant species detected in our study that have not been previously associated with OIs were Dialister pneumosintes and Eubacterium brachy. Unculturable phylotypes accounted for 24% of the species identified in our study. All species detected were obligate or facultative anaerobes. Streptococci were not detected. Molecular methods have enabled us to detect previously cultivated and not-yet-cultivated species in OIs; these methods could change our understanding of the pathogenic flora of orofacial OIs. Copyright © 2012 American Association of Oral and Maxillofacial Surgeons. Published by Elsevier Inc. All rights reserved.
Determination of dew point conditions for CO2 with impurities using microfluidics.
Song, Wen; Fadaei, Hossein; Sinton, David
2014-03-18
Impurities can greatly modify the phase behavior of carbon dioxide (CO2), with significant implications on the safety and cost of transport in pipelines. In this paper we demonstrate a microfluidic approach to measure the dew point of such mixtures, specifically the point at which water in supercritical CO2 mixtures condenses to a liquid state. The method enables direct visualization of dew formation (∼ 1-2 μm diameter droplets) at industrially relevant concentrations, pressures, and temperatures. Dew point measurements for the well-studied case of pure CO2-water agreed well with previous theoretical and experimental data over the range of pressure (up to 13.17 MPa), temperature (up to 50 °C), and water content (down to 0.00229 mol fraction) studied. The microfluidic approach showed a nearly 3-fold reduction in error as compared to previous methods. When applied to a mixture with nitrogen (2.5%) and oxygen (5.8%) impurities--typical of flue gas from natural gas oxy-fuel combustion processes--the measured dew point pressure increased on average 17.55 ± 5.4%, indicating a more stringent minimum pressure for pipeline transport. In addition to increased precision, the microfluidic method offers a direct measurement of dew formation, requires very small volumes (∼ 10 μL), and is applicable to ultralow water contents (<0.005 mol fractions), circumventing the limits of previous methods.
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve temporal-frequency property of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The fast general model-based approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of far-field intensity distribution. The presented model-based method is capable of completing a mode aberration effective correction just applying one disturbing onto the deformable mirror(one correction by one disturbing), which is reconstructed by the singular value decomposing the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under the various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times than that of the previous method with one aberration correction after applying N times disturbing onto the deformable mirror (one correction by N disturbing).
Conformational effects on circular dichroism in the photoelectron angular distribution.
Di Tommaso, Devis; Stener, Mauro; Fronzoni, Giovanna; Decleva, Piero
2006-04-10
The B-spline density-functional method has been applied to the conformers of the (1R, 2R)-1,2-dibromo-1,2-dichloro-1,2-difluoroethane molecule. The cross section, asymmetry, and dichroic parameters relative to core and valence orbitals, which do not change their nature along the conformational curve, have been systematically studied. While the cross section and the asymmetry parameter are weakly affected, the dichroic parameter appears to be rather sensitive to the particular conformer of the molecule, suggesting that this dynamical property could be a useful tool for conformational analysis. The computational method has also been applied to methyl rotation in methyloxirane. Unexpected and dramatic sensitivity of the dichroic-parameter profile to the methyl rotation, both in the core and valence states, has been found. Boltzmann averaging over the conformers reproduces quite closely the profiles previously obtained for the minimum-energy conformation, which is in good agreement with the experimental results.
Model Checking JAVA Programs Using Java Pathfinder
NASA Technical Reports Server (NTRS)
Havelund, Klaus; Pressburger, Thomas
2000-01-01
This paper describes a translator called JAVA PATHFINDER from JAVA to PROMELA, the "programming language" of the SPIN model checker. The purpose is to establish a framework for verification and debugging of JAVA programs based on model checking. This work should be seen in a broader attempt to make formal methods applicable "in the loop" of programming within NASA's areas such as space, aviation, and robotics. Our main goal is to create automated formal methods such that programmers themselves can apply these in their daily work (in the loop) without the need for specialists to manually reformulate a program into a different notation in order to analyze the program. This work is a continuation of an effort to formally verify, using SPIN, a multi-threaded operating system programmed in Lisp for the Deep-Space 1 spacecraft, and of previous work in applying existing model checkers and theorem provers to real applications.
Geometrical optics approach in liquid crystal films with three-dimensional director variations.
Panasyuk, G; Kelly, J; Gartland, E C; Allender, D W
2003-04-01
A formal geometrical optics approach (GOA) to the optics of nematic liquid crystals whose optic axis (director) varies in more than one dimension is described. The GOA is applied to the propagation of light through liquid crystal films whose director varies in three spatial dimensions. As an example, the GOA is applied to the calculation of light transmittance for the case of a liquid crystal cell which exhibits the homeotropic to multidomainlike transition (HMD cell). Properties of the GOA solution are explored, and comparison with the Jones calculus solution is also made. For variations on a smaller scale, where the Jones calculus breaks down, the GOA provides a fast, accurate method for calculating light transmittance. The results of light transmittance calculations for the HMD cell based on the director patterns provided by two methods, direct computer calculation and a previously developed simplified model, are in good agreement.
Meshless Local Petrov-Galerkin Method for Bending Problems
NASA Technical Reports Server (NTRS)
Phillips, Dawn R.; Raju, Ivatury S.
2002-01-01
Recent literature shows extensive research work on meshless or element-free methods as alternatives to the versatile Finite Element Method. One such meshless method is the Meshless Local Petrov-Galerkin (MLPG) method. In this report, the method is developed for bending of beams - C1 problems. A generalized moving least squares (GMLS) interpolation is used to construct the trial functions, and spline and power weight functions are used as the test functions. The method is applied to problems for which exact solutions are available to evaluate its effectiveness. The accuracy of the method is demonstrated for problems with load discontinuities and continuous beam problems. A Petrov-Galerkin implementation of the method is shown to greatly reduce computational time and effort and is thus preferable over the previously developed Galerkin approach. The MLPG method for beam problems yields very accurate deflections and slopes and continuous moment and shear forces without the need for elaborate post-processing techniques.
Direct application of Padé approximant for solving nonlinear differential equations.
Vazquez-Leal, Hector; Benhammouda, Brahim; Filobello-Nino, Uriel; Sarmiento-Reyes, Arturo; Jimenez-Fernandez, Victor Manuel; Garcia-Gervacio, Jose Luis; Huerta-Chua, Jesus; Morales-Mendoza, Luis Javier; Gonzalez-Lee, Mario
2014-01-01
This work presents a direct procedure to apply Padé method to find approximate solutions for nonlinear differential equations. Moreover, we present some cases study showing the strength of the method to generate highly accurate rational approximate solutions compared to other semi-analytical methods. The type of tested nonlinear equations are: a highly nonlinear boundary value problem, a differential-algebraic oscillator problem, and an asymptotic problem. The high accurate handy approximations obtained by the direct application of Padé method shows the high potential if the proposed scheme to approximate a wide variety of problems. What is more, the direct application of the Padé approximant aids to avoid the previous application of an approximative method like Taylor series method, homotopy perturbation method, Adomian Decomposition method, homotopy analysis method, variational iteration method, among others, as tools to obtain a power series solutions to post-treat with the Padé approximant. 34L30.
Surgical model pig ex vivo for venous dissection teaching in medical schools.
Tube, Milton Ignacio Carvalho; Spencer-Netto, Fernando Antonio Campelo; Oliveira, Anderson Igor Pereira de; Holanda, Arthur Cesário de; Barros, Bruno Leão Dos Santos; Rezende, Caio Cezar Gomes; Cavalcanti, João Pedro Guerra; Batista, Marília Apolinário; Campos, Josemberg Marins
2017-02-01
To investigate a method for development of surgical skills in medical students simulating venous dissection in surgical ex vivo pig model. Prospective, analytical, experimental, controlled study with four stages: selection, theoretical teaching, training and assessment. Sample of 312 students was divided into two groups: Group A - 2nd semester students; Group B - students of 8th semester. The groups were divided into five groups of 12 students, trained two hours per week in the semester. They set up four models to three students in each skill station assisted by a monitor. Teaching protocol emergency procedures training were applied to venous dissection, test goal-discursive and OSATS scale. The pre-test confirmed that the methodology has not been previously applied to the students. The averages obtained in the theoretical evaluation reached satisfactory parameters in both groups. The results of applying OSATS scale showed the best performance in group A compared to group B, however, both groups had satisfactory medium. The method was enough to raise a satisfactory level of skill both groups in venous dissection running on surgical swine ex vivo models.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Sun Mo, E-mail: Sunmo.Kim@rmp.uhn.on.ca; Haider, Masoom A.; Jaffray, David A.
Purpose: A previously proposed method to reduce radiation dose to patient in dynamic contrast-enhanced (DCE) CT is enhanced by principal component analysis (PCA) filtering which improves the signal-to-noise ratio (SNR) of time-concentration curves in the DCE-CT study. The efficacy of the combined method to maintain the accuracy of kinetic parameter estimates at low temporal resolution is investigated with pixel-by-pixel kinetic analysis of DCE-CT data. Methods: The method is based on DCE-CT scanning performed with low temporal resolution to reduce the radiation dose to the patient. The arterial input function (AIF) with high temporal resolution can be generated with a coarselymore » sampled AIF through a previously published method of AIF estimation. To increase the SNR of time-concentration curves (tissue curves), first, a region-of-interest is segmented into squares composed of 3 × 3 pixels in size. Subsequently, the PCA filtering combined with a fraction of residual information criterion is applied to all the segmented squares for further improvement of their SNRs. The proposed method was applied to each DCE-CT data set of a cohort of 14 patients at varying levels of down-sampling. The kinetic analyses using the modified Tofts’ model and singular value decomposition method, then, were carried out for each of the down-sampling schemes between the intervals from 2 to 15 s. The results were compared with analyses done with the measured data in high temporal resolution (i.e., original scanning frequency) as the reference. Results: The patients’ AIFs were estimated to high accuracy based on the 11 orthonormal bases of arterial impulse responses established in the previous paper. In addition, noise in the images was effectively reduced by using five principal components of the tissue curves for filtering. Kinetic analyses using the proposed method showed superior results compared to those with down-sampling alone; they were able to maintain the accuracy in the quantitative histogram parameters of volume transfer constant [standard deviation (SD), 98th percentile, and range], rate constant (SD), blood volume fraction (mean, SD, 98th percentile, and range), and blood flow (mean, SD, median, 98th percentile, and range) for sampling intervals between 10 and 15 s. Conclusions: The proposed method of PCA filtering combined with the AIF estimation technique allows low frequency scanning for DCE-CT study to reduce patient radiation dose. The results indicate that the method is useful in pixel-by-pixel kinetic analysis of DCE-CT data for patients with cervical cancer.« less
A new statistical method for characterizing the atmospheres of extrasolar planets
NASA Astrophysics Data System (ADS)
Henderson, Cassandra S.; Skemer, Andrew J.; Morley, Caroline V.; Fortney, Jonathan J.
2017-10-01
By detecting light from extrasolar planets, we can measure their compositions and bulk physical properties. The technologies used to make these measurements are still in their infancy, and a lack of self-consistency suggests that previous observations have underestimated their systemic errors. We demonstrate a statistical method, newly applied to exoplanet characterization, which uses a Bayesian formalism to account for underestimated errorbars. We use this method to compare photometry of a substellar companion, GJ 758b, with custom atmospheric models. Our method produces a probability distribution of atmospheric model parameters including temperature, gravity, cloud model (fsed) and chemical abundance for GJ 758b. This distribution is less sensitive to highly variant data and appropriately reflects a greater uncertainty on parameter fits.
A conjugate gradient method for solving the non-LTE line radiation transfer problem
NASA Astrophysics Data System (ADS)
Paletou, F.; Anterrieu, E.
2009-12-01
This study concerns the fast and accurate solution of the line radiation transfer problem, under non-LTE conditions. We propose and evaluate an alternative iterative scheme to the classical ALI-Jacobi method, and to the more recently proposed Gauss-Seidel and successive over-relaxation (GS/SOR) schemes. Our study is indeed based on applying a preconditioned bi-conjugate gradient method (BiCG-P). Standard tests, in 1D plane parallel geometry and in the frame of the two-level atom model with monochromatic scattering are discussed. Rates of convergence between the previously mentioned iterative schemes are compared, as are their respective timing properties. The smoothing capability of the BiCG-P method is also demonstrated.
Nondestructive evaluation of the preservation state of stone columns in the Hospital Real of Granada
NASA Astrophysics Data System (ADS)
Moreno de Jong van Coevorden, C.; Cobos Sánchez, C.; Rubio Bretones, A.; Fernández Pantoja, M.; García, Salvador G.; Gómez Martín, R.
2012-12-01
This paper describes the results of the employment of two nondestructive evaluation methods for the diagnostic of the preservation state of stone elements. The first method is based on ultrasonic (US) pulses while the second method uses short electromagnetic pulses. Specifically, these methods were applied to some columns, some of them previously restored. These columns are part of the architectonic heritage of the University of Granada, in particular they are located at the patio de la capilla del Hospital Real of Granada. The objective of this work was the application of systems based on US pulses (in transmission mode) and the ground-penetrating radar systems (electromagnetic tomography) in the diagnosis and detection of possible faults in the interior of columns.
Martínez-Domingo, Miguel Ángel; Valero, Eva M; Hernández-Andrés, Javier; Tominaga, Shoji; Horiuchi, Takahiko; Hirai, Keita
2017-11-27
We propose a method for the capture of high dynamic range (HDR), multispectral (MS), polarimetric (Pol) images of indoor scenes using a liquid crystal tunable filter (LCTF). We have included the adaptive exposure estimation (AEE) method to fully automatize the capturing process. We also propose a pre-processing method which can be applied for the registration of HDR images after they are already built as the result of combining different low dynamic range (LDR) images. This method is applied to ensure a correct alignment of the different polarization HDR images for each spectral band. We have focused our efforts in two main applications: object segmentation and classification into metal and dielectric classes. We have simplified the segmentation using mean shift combined with cluster averaging and region merging techniques. We compare the performance of our segmentation with that of Ncut and Watershed methods. For the classification task, we propose to use information not only in the highlight regions but also in their surrounding area, extracted from the degree of linear polarization (DoLP) maps. We present experimental results which proof that the proposed image processing pipeline outperforms previous techniques developed specifically for MSHDRPol image cubes.
Finite difference time domain calculation of transients in antennas with nonlinear loads
NASA Technical Reports Server (NTRS)
Luebbers, Raymond J.; Beggs, John H.; Kunz, Karl S.; Chamberlin, Kent
1991-01-01
Determining transient electromagnetic fields in antennas with nonlinear loads is a challenging problem. Typical methods used involve calculating frequency domain parameters at a large number of different frequencies, then applying Fourier transform methods plus nonlinear equation solution techniques. If the antenna is simple enough so that the open circuit time domain voltage can be determined independently of the effects of the nonlinear load on the antennas current, time stepping methods can be applied in a straightforward way. Here, transient fields for antennas with more general geometries are calculated directly using Finite Difference Time Domain (FDTD) methods. In each FDTD cell which contains a nonlinear load, a nonlinear equation is solved at each time step. As a test case, the transient current in a long dipole antenna with a nonlinear load excited by a pulsed plane wave is computed using this approach. The results agree well with both calculated and measured results previously published. The approach given here extends the applicability of the FDTD method to problems involving scattering from targets, including nonlinear loads and materials, and to coupling between antennas containing nonlinear loads. It may also be extended to propagation through nonlinear materials.
NASA Technical Reports Server (NTRS)
Miller, Eric J.; Manalo, Russel; Tessler, Alexander
2016-01-01
A study was undertaken to investigate the measurement of wing deformation and internal loads using measured strain data. Future aerospace vehicle research depends on the ability to accurately measure the deformation and internal loads during ground testing and in flight. The approach uses the inverse Finite Element Method (iFEM). The iFEM is a robust, computationally efficient method that is well suited for real-time measurement of real-time structural deformation and loads. The method has been validated in previous work, but has yet to be applied to a large-scale test article. This work is in preparation for an upcoming loads test of a half-span test wing in the Flight Loads Laboratory at the National Aeronautics and Space Administration Armstrong Flight Research Center (Edwards, California). The method has been implemented into an efficient MATLAB® (The MathWorks, Inc., Natick, Massachusetts) code for testing different sensor configurations. This report discusses formulation and implementation along with the preliminary results from a representative aerospace structure. The end goal is to investigate the modeling and sensor placement approach so that the best practices can be applied to future aerospace projects.
Symmetry analysis of trimers rovibrational spectra: the case of Ne3★
NASA Astrophysics Data System (ADS)
Márquez-Mijares, Maykel; Roncero, Octavio; Villarreal, Pablo; González-Lezana, Tomás
2018-05-01
An approximate method to assign the symmetry to the rovibrational spectrum of homonuclear trimers based on the solution of the rotational Hamiltonian by means of a purely vibrational basis combined with standard rotational functions is applied on Ne3. The neon trimer constitutes an ideal test between heavier systems such as Ar3 for which the method proves to be an extremely useful technique and some other previously investigated cases such as H3 + where some limitations were observed. Comparisons of the calculated rovibrational energy levels are established with results from different calculations reported in the literature.
MEM application to IRAS CPC images
NASA Technical Reports Server (NTRS)
Marston, A. P.
1994-01-01
A method for applying the Maximum Entropy Method (MEM) to Chopped Photometric Channel (CPC) IRAS additional observations is illustrated. The original CPC data suffered from problems with repeatability which MEM is able to cope with by use of a noise image, produced from the results of separate data scans of objects. The process produces images of small areas of sky with circular Gaussian beams of approximately 30 in. full width half maximum resolution at 50 and 100 microns. Comparison is made to previous reconstructions made in the far-infrared as well as morphologies of objects at other wavelengths. Some projects with this dataset are discussed.
The ensemble switch method for computing interfacial tensions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmitz, Fabian; Virnau, Peter
2015-04-14
We present a systematic thermodynamic integration approach to compute interfacial tensions for solid-liquid interfaces, which is based on the ensemble switch method. Applying Monte Carlo simulations and finite-size scaling techniques, we obtain results for hard spheres, which are in agreement with previous computations. The case of solid-liquid interfaces in a variant of the effective Asakura-Oosawa model and of liquid-vapor interfaces in the Lennard-Jones model are discussed as well. We demonstrate that a thorough finite-size analysis of the simulation data is required to obtain precise results for the interfacial tension.
The energy level alignment at metal–molecule interfaces using Wannier–Koopmans method
Ma, Jie; Liu, Zhen-Fei; Neaton, Jeffrey B.; ...
2016-06-30
We apply a recently developed Wannier-Koopmans method (WKM), based on density functional theory (DFT), to calculate the electronic energy level alignment at an interface between a molecule and metal substrate. We consider two systems: benzenediamine on Au (111), and a bipyridine-Au molecular junction. The WKM calculated level alignment agrees well with the experimental measurements where available, as well as previous GW and DFT + Σ results. These results suggest that the WKM is a general approach that can be used to correct DFT eigenvalue errors, not only in bulk semiconductors and isolated molecules, but also in hybrid interfaces.
The Survey of Fires in Buildings. Third Report: The Use of Information Obtained From Fire Surveys
NASA Technical Reports Server (NTRS)
Silcock, A.
1973-01-01
The previous two reports in this series gave details of the general. scope of the pilot exercise and methods by which it was carried out. In addition the nature of the information obtained was illustrated by preliminary analyses of the house and industrial fires surveyed. Some brief comments on the use of the information were made. This report indicates a method of assessing the nation wide effects of applying conclusions drawn from the results of limited numbers of surveys and considers the use of the information for specific purposes.
The coupled three-dimensional wave packet approach to reactive scattering
NASA Astrophysics Data System (ADS)
Marković, Nikola; Billing, Gert D.
1994-01-01
A recently developed scheme for time-dependent reactive scattering calculations using three-dimensional wave packets is applied to the D+H2 system. The present method is an extension of a previously published semiclassical formulation of the scattering problem and is based on the use of hyperspherical coordinates. The convergence requirements are investigated by detailed calculations for total angular momentum J equal to zero and the general applicability of the method is demonstrated by solving the J=1 problem. The inclusion of the geometric phase is also discussed and its effect on the reaction probability is demonstrated.
Higher Order Corrections in the CoLoRFulNNLO Framework
NASA Astrophysics Data System (ADS)
Somogyi, G.; Kardos, A.; Szőr, Z.; Trócsányi, Z.
We discuss the CoLoRFulNNLO method for computing higher order radiative corrections to jet cross sections in perturbative QCD. We apply our method to the calculation of event shapes and jet rates in three-jet production in electron-positron annihilation. We validate our code by comparing our predictions to previous results in the literature and present the jet cone energy fraction distribution at NNLO accuracy. We also present preliminary NNLO results for the three-jet rate using the Durham jet clustering algorithm matched to resummed predictions at NLL accuracy, and a comparison to LEP data.
Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.
Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A
2013-11-01
We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
Xu, Xiaoma; van de Craats, Anick M; de Bruyn, Peter C A M
2004-11-01
A highly sensitive screening method based on high performance liquid chromatography atmospheric pressure ionization mass spectrometry (HPLC-API-MS) has been developed for the analysis of 21 nitroaromatic, nitramine and nitrate ester explosives, which include the explosives most commonly encountered in forensic science. Two atmospheric pressure ionization (API) methods, atmospheric pressure chemical ionization (APCI) and electrospray ionization (ESI), and various experimental conditions have been applied to allow for the detection of all 21 explosive compounds. The limit of detection (LOD) in the full-scan mode has been found to be 0.012-1.2 ng on column for the screening of most explosives investigated. For nitrobenzene, an LOD of 10 ng was found with the APCI method in the negative mode. Although the detection of nitrobenzene, 2-, 3-, and 4-nitrotoluene is hindered by the difficult ionization of these compounds, we have found that by forming an adduct with glycine, LOD values in the range of 3-16 ng on column can be achieved. Compared with previous screening methods with thermospray ionization, the API method has distinct advantages, including simplicity and stability of the method applied, an extended screening range and a low detection limit for the explosives studied.
Spectral Learning for Supervised Topic Models.
Ren, Yong; Wang, Yining; Zhu, Jun
2018-03-01
Supervised topic models simultaneously model the latent topic structure of large collections of documents and a response variable associated with each document. Existing inference methods are based on variational approximation or Monte Carlo sampling, which often suffers from the local minimum defect. Spectral methods have been applied to learn unsupervised topic models, such as latent Dirichlet allocation (LDA), with provable guarantees. This paper investigates the possibility of applying spectral methods to recover the parameters of supervised LDA (sLDA). We first present a two-stage spectral method, which recovers the parameters of LDA followed by a power update method to recover the regression model parameters. Then, we further present a single-phase spectral algorithm to jointly recover the topic distribution matrix as well as the regression weights. Our spectral algorithms are provably correct and computationally efficient. We prove a sample complexity bound for each algorithm and subsequently derive a sufficient condition for the identifiability of sLDA. Thorough experiments on synthetic and real-world datasets verify the theory and demonstrate the practical effectiveness of the spectral algorithms. In fact, our results on a large-scale review rating dataset demonstrate that our single-phase spectral algorithm alone gets comparable or even better performance than state-of-the-art methods, while previous work on spectral methods has rarely reported such promising performance.
Tenorio, Bruno Mendes; da Silva Filho, Eurípedes Alves; Neiva, Gentileza Santos Martins; da Silva, Valdemiro Amaro; Tenorio, Fernanda das Chagas Angelo Mendes; da Silva, Themis de Jesus; Silva, Emerson Carlos Soares E; Nogueira, Romildo de Albuquerque
2017-08-01
Shrimps can accumulate environmental toxicants and suffer behavioral changes. However, methods to quantitatively detect changes in the behavior of these shrimps are still needed. The present study aims to verify whether mathematical and fractal methods applied to video tracking can adequately describe changes in the locomotion behavior of shrimps exposed to low concentrations of toxic chemicals, such as 0.15µgL -1 deltamethrin pesticide or 10µgL -1 mercuric chloride. Results showed no change after 1min, 4, 24, and 48h of treatment. However, after 72 and 96h of treatment, both the linear methods describing the track length, mean speed, mean distance from the current to the previous track point, as well as the non-linear methods of fractal dimension (box counting or information entropy) and multifractal analysis were able to detect changes in the locomotion behavior of shrimps exposed to deltamethrin. Analysis of angular parameters of the track points vectors and lacunarity were not sensitive to those changes. None of the methods showed adverse effects to mercury exposure. These mathematical and fractal methods applicable to software represent low cost useful tools in the toxicological analyses of shrimps for quality of food, water and biomonitoring of ecosystems. Copyright © 2017 Elsevier Inc. All rights reserved.
A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters
NASA Technical Reports Server (NTRS)
Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani
2013-01-01
This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.
A New Method for Analyzing Near-Field Faraday Probe Data in Hall Thrusters
NASA Technical Reports Server (NTRS)
Huang, Wensheng; Shastry, Rohit; Herman, Daniel A.; Soulas, George C.; Kamhawi, Hani
2013-01-01
This paper presents a new method for analyzing near-field Faraday probe data obtained from Hall thrusters. Traditional methods spawned from far-field Faraday probe analysis rely on assumptions that are not applicable to near-field Faraday probe data. In particular, arbitrary choices for the point of origin and limits of integration have made interpretation of the results difficult. The new method, called iterative pathfinding, uses the evolution of the near-field plume with distance to provide feedback for determining the location of the point of origin. Although still susceptible to the choice of integration limits, this method presents a systematic approach to determining the origin point for calculating the divergence angle. The iterative pathfinding method is applied to near-field Faraday probe data taken in a previous study from the NASA-300M and NASA-457Mv2 Hall thrusters. Since these two thrusters use centrally mounted cathodes, the current density associated with the cathode plume is removed before applying iterative pathfinding. A procedure is presented for removing the cathode plume. The results of the analysis are compared to far-field probe analysis results. This paper ends with checks on the validity of the new method and discussions on the implications of the results.
Robust Eye Center Localization through Face Alignment and Invariant Isocentric Patterns
Teng, Dongdong; Chen, Dihu; Tan, Hongzhou
2015-01-01
The localization of eye centers is a very useful cue for numerous applications like face recognition, facial expression recognition, and the early screening of neurological pathologies. Several methods relying on available light for accurate eye-center localization have been exploited. However, despite the considerable improvements that eye-center localization systems have undergone in recent years, only few of these developments deal with the challenges posed by the profile (non-frontal face). In this paper, we first use the explicit shape regression method to obtain the rough location of the eye centers. Because this method extracts global information from the human face, it is robust against any changes in the eye region. We exploit this robustness and utilize it as a constraint. To locate the eye centers accurately, we employ isophote curvature features, the accuracy of which has been demonstrated in a previous study. By applying these features, we obtain a series of eye-center locations which are candidates for the actual position of the eye-center. Among these locations, the estimated locations which minimize the reconstruction error between the two methods mentioned above are taken as the closest approximation for the eye centers locations. Therefore, we combine explicit shape regression and isophote curvature feature analysis to achieve robustness and accuracy, respectively. In practical experiments, we use BioID and FERET datasets to test our approach to obtaining an accurate eye-center location while retaining robustness against changes in scale and pose. In addition, we apply our method to non-frontal faces to test its robustness and accuracy, which are essential in gaze estimation but have seldom been mentioned in previous works. Through extensive experimentation, we show that the proposed method can achieve a significant improvement in accuracy and robustness over state-of-the-art techniques, with our method ranking second in terms of accuracy. According to our implementation on a PC with a Xeon 2.5Ghz CPU, the frame rate of the eye tracking process can achieve 38 Hz. PMID:26426929
NASA Astrophysics Data System (ADS)
Cherry, M.; Dierken, J.; Boehnlein, T.; Pilchak, A.; Sathish, S.; Grandhi, R.
2018-01-01
A new technique for performing quantitative scanning acoustic microscopy imaging of Rayleigh surface wave (RSW) velocity was developed based on b-scan processing. In this technique, the focused acoustic beam is moved through many defocus distances over the sample and excited with an impulse excitation, and advanced algorithms based on frequency filtering and the Hilbert transform are used to post-process the b-scans to estimate the Rayleigh surface wave velocity. The new method was used to estimate the RSW velocity on an optically flat E6 glass sample, and the velocity was measured at ±2 m/s and the scanning time per point was on the order of 1.0 s, which are both improvement from the previous two-point defocus method. The new method was also applied to the analysis of two titanium samples, and the velocity was estimated with very low standard deviation in certain large grains on the sample. A new behavior was observed with the b-scan analysis technique where the amplitude of the surface wave decayed dramatically on certain crystallographic orientations. The new technique was also compared with previous results, and the new technique has been found to be much more reliable and to have higher contrast than previously possible with impulse excitation.
Global Search Capabilities of Indirect Methods for Impulsive Transfers
NASA Astrophysics Data System (ADS)
Shen, Hong-Xin; Casalino, Lorenzo; Luo, Ya-Zhong
2015-09-01
An optimization method which combines an indirect method with homotopic approach is proposed and applied to impulsive trajectories. Minimum-fuel, multiple-impulse solutions, with either fixed or open time are obtained. The homotopic approach at hand is relatively straightforward to implement and does not require an initial guess of adjoints, unlike previous adjoints estimation methods. A multiple-revolution Lambert solver is used to find multiple starting solutions for the homotopic procedure; this approach can guarantee to obtain multiple local solutions without relying on the user's intuition, thus efficiently exploring the solution space to find the global optimum. The indirect/homotopic approach proves to be quite effective and efficient in finding optimal solutions, and outperforms the joint use of evolutionary algorithms and deterministic methods in the test cases.
Change Point Detection in Correlation Networks
NASA Astrophysics Data System (ADS)
Barnett, Ian; Onnela, Jukka-Pekka
2016-01-01
Many systems of interacting elements can be conceptualized as networks, where network nodes represent the elements and network ties represent interactions between the elements. In systems where the underlying network evolves, it is useful to determine the points in time where the network structure changes significantly as these may correspond to functional change points. We propose a method for detecting change points in correlation networks that, unlike previous change point detection methods designed for time series data, requires minimal distributional assumptions. We investigate the difficulty of change point detection near the boundaries of the time series in correlation networks and study the power of our method and competing methods through simulation. We also show the generalizable nature of the method by applying it to stock price data as well as fMRI data.
Development of direct-inverse 3-D methods for applied transonic aerodynamic wing design and analysis
NASA Technical Reports Server (NTRS)
Carlson, Leland A.
1989-01-01
An inverse wing design method was developed around an existing transonic wing analysis code. The original analysis code, TAWFIVE, has as its core the numerical potential flow solver, FLO30, developed by Jameson and Caughey. Features of the analysis code include a finite-volume formulation; wing and fuselage fitted, curvilinear grid mesh; and a viscous boundary layer correction that also accounts for viscous wake thickness and curvature. The development of the inverse methods as an extension of previous methods existing for design in Cartesian coordinates is presented. Results are shown for inviscid wing design cases in super-critical flow regimes. The test cases selected also demonstrate the versatility of the design method in designing an entire wing or discontinuous sections of a wing.
Gregg, Daniel; Wheeler, Sarah Ann
2018-08-15
To date, the majority of environmental assets studied in the economic valuation literature clearly have high amenity and recreational use values. However there are many cases where small, but nevertheless unique and important, ecosystems survive as islands amongst large areas of modified, productive, or urban, landscapes. Development encroaches on the landscape and as urban landscapes become more concentrated these types of conservation islands will become increasingly more important. Previous experience with economic valuation suggests that lower total values for smaller contributions to conservation are more liable to be swamped by survey and hypothetical bias measures. Hence there needs to be more understanding of approaches to economic valuation for small and isolated environmental assets, in particular regarding controlling stated preference biases. This study applied the recently developed method of Inferred Valuation (IV) to a small private wetland in South-East Australia, and compared willingness to pay values with estimates from a standard Contingent Valuation (CV) approach. We found that hypothetical bias did seem to be slightly lower with the IV method. However, other methods such as the use of log-normal transformations and median measures, significantly mitigate apparent hypothetical biases and are easier to apply and allow use of the well-tested CV method. Copyright © 2018 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Okita, Kazuhiko; Ishiyama, Kazushi; Miura, Hideo
2012-04-01
Magnetostriction constant of a magnetic thin film is conventionally measured by detecting the deformation of a coupon sample that consists of the magnetic film deposited on a thin glass substrate (e.g., cover glass of size 10 mm × 25 mm) under an applied field using a laser beam [A. C. Tam and H. Schroeder, J. Appl. Phys. 64, 5422 (1988)]. This method, however, cannot be applied to films deposited on actual large-size substrates (wafers) with diameter from 3 to 6 in. or more. In a previous paper [Okita et al., J. Phys.: Conf. Ser. 200, 112008 (2010)], the authors presented a method for measuring magnetostriction of a magnetic thin film deposited on an actual substrate by detecting the change of magnetic anisotropy field, Hk, under mechanical bending of the substrate. It was validated that the method is very effective for measuring the magnetostriction constant of a free layer on the actual substrate. However, since a Ni-Fe shield layer usually covers a magnetic head used for a hard disk drive, this shield layer disturbs the effective measurement of R-H curve under minor loop. Therefore, a high magnetic field that can saturate the magnetic material in the shield layer should be applied to the head in order to measure the magnetostriction constant of a pinned layer under the shield layer. In this paper, this method was applied to the measurement of the magnetostriction constant of a pinned layer under the shield layer by using a high magnetic field up to 320 kA/m (4 kOe).
Hu, J H; Wang, Y; Cahill, P T
1997-01-01
This paper reports a multispectral code excited linear prediction (MCELP) method for the compression of multispectral images. Different linear prediction models and adaptation schemes have been compared. The method that uses a forward adaptive autoregressive (AR) model has been proven to achieve a good compromise between performance, complexity, and robustness. This approach is referred to as the MFCELP method. Given a set of multispectral images, the linear predictive coefficients are updated over nonoverlapping three-dimensional (3-D) macroblocks. Each macroblock is further divided into several 3-D micro-blocks, and the best excitation signal for each microblock is determined through an analysis-by-synthesis procedure. The MFCELP method has been applied to multispectral magnetic resonance (MR) images. To satisfy the high quality requirement for medical images, the error between the original image set and the synthesized one is further specified using a vector quantizer. This method has been applied to images from 26 clinical MR neuro studies (20 slices/study, three spectral bands/slice, 256x256 pixels/band, 12 b/pixel). The MFCELP method provides a significant visual improvement over the discrete cosine transform (DCT) based Joint Photographers Expert Group (JPEG) method, the wavelet transform based embedded zero-tree wavelet (EZW) coding method, and the vector tree (VT) coding method, as well as the multispectral segmented autoregressive moving average (MSARMA) method we developed previously.
Methods for Computationally Efficient Structured CFD Simulations of Complex Turbomachinery Flows
NASA Technical Reports Server (NTRS)
Herrick, Gregory P.; Chen, Jen-Ping
2012-01-01
This research presents more efficient computational methods by which to perform multi-block structured Computational Fluid Dynamics (CFD) simulations of turbomachinery, thus facilitating higher-fidelity solutions of complicated geometries and their associated flows. This computational framework offers flexibility in allocating resources to balance process count and wall-clock computation time, while facilitating research interests of simulating axial compressor stall inception with more complete gridding of the flow passages and rotor tip clearance regions than is typically practiced with structured codes. The paradigm presented herein facilitates CFD simulation of previously impractical geometries and flows. These methods are validated and demonstrate improved computational efficiency when applied to complicated geometries and flows.
Robb, Meigan
2014-01-11
Engaging nursing students in the classroom environment positively influences their ability to learn and apply course content to clinical practice. Students are motivated to engage in learning if their learning preferences are being met. The methods nurse educators have used with previous students in the classroom may not address the educational needs of Millennials. This manuscript presents the findings of a pilot study that used the Critical Incident Technique. The purpose of this study was to gain insight into the teaching methods that help the Millennial generation of nursing students feel engaged in the learning process. Students' perceptions of effective instructional approaches are presented in three themes. Implications for nurse educators are discussed.
A constructive model potential method for atomic interactions
NASA Technical Reports Server (NTRS)
Bottcher, C.; Dalgarno, A.
1974-01-01
A model potential method is presented that can be applied to many electron single centre and two centre systems. The development leads to a Hamiltonian with terms arising from core polarization that depend parametrically upon the positions of the valence electrons. Some of the terms have been introduced empirically in previous studies. Their significance is clarified by an analysis of a similar model in classical electrostatics. The explicit forms of the expectation values of operators at large separations of two atoms given by the model potential method are shown to be equivalent to the exact forms when the assumption is made that the energy level differences of one atom are negligible compared to those of the other.
Direct 2-D reconstructions of conductivity and permittivity from EIT data on a human chest.
Herrera, Claudia N L; Vallejo, Miguel F M; Mueller, Jennifer L; Lima, Raul G
2015-01-01
A novel direct D-bar reconstruction algorithm is presented for reconstructing a complex conductivity distribution from 2-D EIT data. The method is applied to simulated data and archival human chest data. Permittivity reconstructions with the aforementioned method and conductivity reconstructions with the previously existing nonlinear D-bar method for real-valued conductivities depicting ventilation and perfusion in the human chest are presented. This constitutes the first fully nonlinear D-bar reconstructions of human chest data and the first D-bar permittivity reconstructions of experimental data. The results of the human chest data reconstructions are compared on a circular domain versus a chest-shaped domain.
EXPLORING FUNCTIONAL CONNECTIVITY IN FMRI VIA CLUSTERING.
Venkataraman, Archana; Van Dijk, Koene R A; Buckner, Randy L; Golland, Polina
2009-04-01
In this paper we investigate the use of data driven clustering methods for functional connectivity analysis in fMRI. In particular, we consider the K-Means and Spectral Clustering algorithms as alternatives to the commonly used Seed-Based Analysis. To enable clustering of the entire brain volume, we use the Nyström Method to approximate the necessary spectral decompositions. We apply K-Means, Spectral Clustering and Seed-Based Analysis to resting-state fMRI data collected from 45 healthy young adults. Without placing any a priori constraints, both clustering methods yield partitions that are associated with brain systems previously identified via Seed-Based Analysis. Our empirical results suggest that clustering provides a valuable tool for functional connectivity analysis.
NASA Astrophysics Data System (ADS)
Wang, Xu; Le, Anh-Thu; Zhou, Zhaoyan; Wei, Hui; Lin, C. D.
2017-08-01
We provide a unified theoretical framework for recently emerging experiments that retrieve fixed-in-space molecular information through time-domain rotational coherence spectroscopy. Unlike a previous approach by Makhija et al. (V. Makhija et al., arXiv:1611.06476), our method can be applied to the retrieval of both real-valued (e.g., ionization yield) and complex-valued (e.g., induced dipole moment) molecular response information. It is also a direct retrieval method without using iterations. We also demonstrate that experimental parameters, such as the fluence of the aligning laser pulse and the rotational temperature of the molecular ensemble, can be quite accurately determined using a statistical method.
ERIC Educational Resources Information Center
Ko, Charles
2014-01-01
In the present research, it will be shown how grammar activities in textbooks still retain the structural method of teaching grammar. The results found by previous scholars' research will be covered, and illustrated by excerpts of textbooks, including comparison of Hong Kong and Malaysian textbooks. Communicative Language Teaching (CLT)…
NASA Astrophysics Data System (ADS)
Xie, Hong-Bo; Dokos, Socrates
2013-06-01
We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
Xie, Hong-Bo; Dokos, Socrates
2013-06-01
We present a hybrid symplectic geometry and central tendency measure (CTM) method for detection of determinism in noisy time series. CTM is effective for detecting determinism in short time series and has been applied in many areas of nonlinear analysis. However, its performance significantly degrades in the presence of strong noise. In order to circumvent this difficulty, we propose to use symplectic principal component analysis (SPCA), a new chaotic signal de-noising method, as the first step to recover the system dynamics. CTM is then applied to determine whether the time series arises from a stochastic process or has a deterministic component. Results from numerical experiments, ranging from six benchmark deterministic models to 1/f noise, suggest that the hybrid method can significantly improve detection of determinism in noisy time series by about 20 dB when the data are contaminated by Gaussian noise. Furthermore, we apply our algorithm to study the mechanomyographic (MMG) signals arising from contraction of human skeletal muscle. Results obtained from the hybrid symplectic principal component analysis and central tendency measure demonstrate that the skeletal muscle motor unit dynamics can indeed be deterministic, in agreement with previous studies. However, the conventional CTM method was not able to definitely detect the underlying deterministic dynamics. This result on MMG signal analysis is helpful in understanding neuromuscular control mechanisms and developing MMG-based engineering control applications.
From IHE Audit Trails to XES Event Logs Facilitating Process Mining.
Paster, Ferdinand; Helm, Emmanuel
2015-01-01
Recently Business Intelligence approaches like process mining are applied to the healthcare domain. The goal of process mining is to gain process knowledge, compliance and room for improvement by investigating recorded event data. Previous approaches focused on process discovery by event data from various specific systems. IHE, as a globally recognized basis for healthcare information systems, defines in its ATNA profile how real-world events must be recorded in centralized event logs. The following approach presents how audit trails collected by the means of ATNA can be transformed to enable process mining. Using the standardized audit trails provides the ability to apply these methods to all IHE based information systems.
Computational and spectroscopic data correlation study of N,N'-bisarylmalonamides (Part II).
Arsovski, Violeta M; Božić, Bojan Đ; Mirković, Jelena M; Vitnik, Vesna D; Vitnik, Željko J; Petrović, Slobodan D; Ušćumlić, Gordana S; Mijin, Dušan Ž
2015-09-01
To complement a previous UV study, we present a quantitative evaluation of substituent effects on spectroscopic data ((1)H and (13)C NMR chemical shifts as well as FT-IR absorption frequency) applied to N,N'-bisarylmalonamides, using simple and extended Hammett equations as well as the Swain-Lupton equation. Furthermore, the DFT CAM-B3LYP/6-311+G(d,p) method was applied to study the impact of different solvents on the geometry of the molecules and their spectral data. Additionally, experimental data are correlated with theoretical results; excellent linear dependence was obtained. The overall results presented in this paper show that N,N'-bisarylmalonamides are prominent candidates for model molecules.
Decentralised control of continuous Petri nets
NASA Astrophysics Data System (ADS)
Wang, Liewei; Wang, Xu
2017-05-01
This paper focuses on decentralised control of systems modelled by continuous Petri nets, in which a target marking control problem is discussed. In some previous works, an efficient ON/OFF strategy-based minimum-time controller was developed. Nevertheless, the convergence is only proved for subclasses like Choice-Free nets. For a general net, the pre-conditions of applying the ON/OFF strategy are not given; therefore, the application scope of the method is unclear. In this work, we provide two sufficient conditions of applying the ON/OFF strategy-based controller to general nets. Furthermore, an extended algorithm for general nets is proposed, in which control laws are computed based on some limited information, without knowing the detailed structure of subsystems.
Foo, Jonathan; Ilic, Dragan; Rivers, George; Evans, Darrell J R; Walsh, Kieran; Haines, Terry P; Paynter, Sophie; Morgan, Prue; Maloney, Stephen
2017-12-07
Student failure creates additional economic costs. Knowing the cost of failure helps to frame its economic burden relative to other educational issues, providing an evidence-base to guide priority setting and allocation of resources. The Ingredients Method is a cost-analysis approach which has been previously applied to health professions education research. In this study, the Ingredients Method is introduced, and applied to a case study, investigating the cost of pre-clinical student failure. The four step Ingredients Method was introduced and applied: (1) identify and specify resource items, (2) measure volume of resources in natural units, (3) assign monetary prices to resource items, and (4) analyze and report costs. Calculations were based on a physiotherapy program at an Australian university. The cost of failure was £5991 per failing student, distributed across students (70%), the government (21%), and the university (8%). If the cost of failure and attrition is distributed among the remaining continuing cohort, the cost per continuing student educated increases from £9923 to £11,391 per semester. The economics of health professions education is complex. Researchers should consider both accuracy and feasibility in their costing approach, toward the goal of better informing cost-conscious decision-making.
Applying linear programming model to aggregate production planning of coated peanut products
NASA Astrophysics Data System (ADS)
Rohmah, W. G.; Purwaningsih, I.; Santoso, EF S. M.
2018-03-01
The aim of this study was to set the overall production level for each grade of coated peanut product to meet market demands with a minimum production cost. The linear programming model was applied in this study. The proposed model was used to minimize the total production cost based on the limited demand of coated peanuts. The demand values applied to the method was previously forecasted using time series method and production capacity aimed to plan the aggregate production for the next 6 month period. The results indicated that the production planning using the proposed model has resulted a better fitted pattern to the customer demands compared to that of the company policy. The production capacity of product family A, B, and C was relatively stable for the first 3 months of the planning periods, then began to fluctuate over the next 3 months. While, the production capacity of product family D and E was fluctuated over the 6-month planning periods, with the values in the range of 10,864 - 32,580 kg and 255 – 5,069 kg, respectively. The total production cost for all products was 27.06% lower than the production cost calculated using the company’s policy-based method.
OrthoMCL: Identification of Ortholog Groups for Eukaryotic Genomes
Li, Li; Stoeckert, Christian J.; Roos, David S.
2003-01-01
The identification of orthologous groups is useful for genome annotation, studies on gene/protein evolution, comparative genomics, and the identification of taxonomically restricted sequences. Methods successfully exploited for prokaryotic genome analysis have proved difficult to apply to eukaryotes, however, as larger genomes may contain multiple paralogous genes, and sequence information is often incomplete. OrthoMCL provides a scalable method for constructing orthologous groups across multiple eukaryotic taxa, using a Markov Cluster algorithm to group (putative) orthologs and paralogs. This method performs similarly to the INPARANOID algorithm when applied to two genomes, but can be extended to cluster orthologs from multiple species. OrthoMCL clusters are coherent with groups identified by EGO, but improved recognition of “recent” paralogs permits overlapping EGO groups representing the same gene to be merged. Comparison with previously assigned EC annotations suggests a high degree of reliability, implying utility for automated eukaryotic genome annotation. OrthoMCL has been applied to the proteome data set from seven publicly available genomes (human, fly, worm, yeast, Arabidopsis, the malaria parasite Plasmodium falciparum, and Escherichia coli). A Web interface allows queries based on individual genes or user-defined phylogenetic patterns (http://www.cbil.upenn.edu/gene-family). Analysis of clusters incorporating P. falciparum genes identifies numerous enzymes that were incompletely annotated in first-pass annotation of the parasite genome. PMID:12952885
Rusterholz, Thomas; Achermann, Peter; Dürr, Roland; Koenig, Thomas; Tarokh, Leila
2017-06-01
Investigating functional connectivity between brain networks has become an area of interest in neuroscience. Several methods for investigating connectivity have recently been developed, however, these techniques need to be applied with care. We demonstrate that global field synchronization (GFS), a global measure of phase alignment in the EEG as a function of frequency, must be applied considering signal processing principles in order to yield valid results. Multichannel EEG (27 derivations) was analyzed for GFS based on the complex spectrum derived by the fast Fourier transform (FFT). We examined the effect of window functions on GFS, in particular of non-rectangular windows. Applying a rectangular window when calculating the FFT revealed high GFS values for high frequencies (>15Hz) that were highly correlated (r=0.9) with spectral power in the lower frequency range (0.75-4.5Hz) and tracked the depth of sleep. This turned out to be spurious synchronization. With a non-rectangular window (Tukey or Hanning window) these high frequency synchronization vanished. Both, GFS and power density spectra significantly differed for rectangular and non-rectangular windows. Previous papers using GFS typically did not specify the applied window and may have used a rectangular window function. However, the demonstrated impact of the window function raises the question of the validity of some previous findings at higher frequencies. We demonstrated that it is crucial to apply an appropriate window function for determining synchronization measures based on a spectral approach to avoid spurious synchronization in the beta/gamma range. Copyright © 2017 Elsevier B.V. All rights reserved.
The Schwinger Variational Method
NASA Technical Reports Server (NTRS)
Huo, Winifred M.
1995-01-01
Variational methods have proven invaluable in theoretical physics and chemistry, both for bound state problems and for the study of collision phenomena. The application of the Schwinger variational (SV) method to e-molecule collisions and molecular photoionization has been reviewed previously. The present chapter discusses the implementation of the SV method as applied to e-molecule collisions. Since this is not a review of cross section data, cross sections are presented only to server as illustrative examples. In the SV method, the correct boundary condition is automatically incorporated through the use of Green's function. Thus SV calculations can employ basis functions with arbitrary boundary conditions. The iterative Schwinger method has been used extensively to study molecular photoionization. For e-molecule collisions, it is used at the static exchange level to study elastic scattering and coupled with the distorted wave approximation to study electronically inelastic scattering.
Differential equations as a tool for community identification.
Krawczyk, Małgorzata J
2008-06-01
We consider the task of identification of a cluster structure in random networks. The results of two methods are presented: (i) the Newman algorithm [M. E. J. Newman and M. Girvan, Phys. Rev. E 69, 026113 (2004)]; and (ii) our method based on differential equations. A series of computer experiments is performed to check if in applying these methods we are able to determine the structure of the network. The trial networks consist initially of well-defined clusters and are disturbed by introducing noise into their connectivity matrices. Further, we show that an improvement of the previous version of our method is possible by an appropriate choice of the threshold parameter beta . With this change, the results obtained by the two methods above are similar, and our method works better, for all the computer experiments we have done.
Advances in 6d diffraction contrast tomography
NASA Astrophysics Data System (ADS)
Viganò, N.; Ludwig, W.
2018-04-01
The ability to measure 3D orientation fields and to determine grain boundary character plays a key role in understanding many material science processes, including: crack formation and propagation, grain coarsening, and corrosion processes. X-ray diffraction imaging techniques offer the ability to retrieve such information in a non-destructive manner. Among them, Diffraction Contrast Tomography (DCT) is a monochromatic beam, near-field technique, that uses an extended beam and offers fast mapping of 3D sample volumes. It was previously shown that the six-dimensional extension of DCT can be applied to moderately deformed samples (<= 5% total strain), made from materials that exhibit low levels of elastic deformation of the unit cell (<= 1%). In this article, we improved over the previously proposed 6D-DCT reconstruction method, through the introduction of both a more advanced forward model and reconstruction algorithm. The results obtained with the proposed improvements are compared against the reconstructions previously published in [1], using Electron Backscatter Diffraction (EBSD) measurements as a reference. The result was a noticeably higher quality reconstruction of the grain boundary positions and local orientation fields. The achieved reconstruction quality, together with the low acquisition times, render DCT a valuable tool for the stop-motion study of polycrystalline microstructures, evolving as a function of applied strain or thermal annealing treatments, for selected materials.
FLARE STARS—A FAVORABLE OBJECT FOR STUDYING MECHANISMS OF NONTHERMAL ASTROPHYSICAL PHENOMENA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Oks, E.; Gershberg, R. E.
2016-03-01
We present a spectroscopic method for diagnosing a low-frequency electrostatic plasma turbulence (LEPT) in plasmas of flare stars. This method had been previously developed by one of us and successfully applied to diagnosing the LEPT in solar flares. In distinction to our previous applications of the method, here we use the latest advances in the theory of the Stark broadening of hydrogen spectral lines. By analyzing observed emission Balmer lines, we show that it is very likely that the LEPT was developed in several flares of AD Leo, as well as in one flare of EV Lac. We found themore » LEPT (though of different field strengths) both in the explosive/impulsive phase and at the phase of the maximum, as well as at the gradual phase of the stellar flares. While for solar flares our method allows diagnosing the LEPT only in the most powerful flares, for the flare stars it seems that the method allows revealing the LEPT practically in every flare. It should be important to obtain new and better spectrograms of stellar flares, allowing their analysis by the method outlined in the present paper. This can be the most favorable way to the detailed understanding of the nature of nonthermal astrophysical phenomena.« less
Novel applications of the temporal kernel method: Historical and future radiative forcing
NASA Astrophysics Data System (ADS)
Portmann, R. W.; Larson, E.; Solomon, S.; Murphy, D. M.
2017-12-01
We present a new estimate of the historical radiative forcing derived from the observed global mean surface temperature and a model derived kernel function. Current estimates of historical radiative forcing are usually derived from climate models. Despite large variability in these models, the multi-model mean tends to do a reasonable job of representing the Earth system and climate. One method of diagnosing the transient radiative forcing in these models requires model output of top of the atmosphere radiative imbalance and global mean temperature anomaly. It is difficult to apply this method to historical observations due to the lack of TOA radiative measurements before CERES. We apply the temporal kernel method (TKM) of calculating radiative forcing to the historical global mean temperature anomaly. This novel approach is compared against the current regression based methods using model outputs and shown to produce consistent forcing estimates giving confidence in the forcing derived from the historical temperature record. The derived TKM radiative forcing provides an estimate of the forcing time series that the average climate model needs to produce the observed temperature record. This forcing time series is found to be in good overall agreement with previous estimates but includes significant differences that will be discussed. The historical anthropogenic aerosol forcing is estimated as a residual from the TKM and found to be consistent with earlier moderate forcing estimates. In addition, this method is applied to future temperature projections to estimate the radiative forcing required to achieve those temperature goals, such as those set in the Paris agreement.
Rodgers, Kiri J.; Hursthouse, Andrew; Cuthbert, Simon
2015-01-01
As waste management regulations become more stringent, yet demand for resources continues to increase, there is a pressing need for innovative management techniques and more sophisticated supporting analysis techniques. Sequential extraction (SE) analysis, a technique previously applied to soils and sediments, offers the potential to gain a better understanding of the composition of solid wastes. SE attempts to classify potentially toxic elements (PTEs) by their associations with phases or fractions in waste, with the aim of improving resource use and reducing negative environmental impacts. In this review we explain how SE can be applied to steel wastes. These present challenges due to differences in sample characteristics compared with materials to which SE has been traditionally applied, specifically chemical composition, particle size and pH buffering capacity, which are critical when identifying a suitable SE method. We highlight the importance of delineating iron-rich phases, and find that the commonly applied BCR (The community Bureau of reference) extraction method is problematic due to difficulties with zinc speciation (a critical steel waste constituent), hence a substantially modified SEP is necessary to deal with particular characteristics of steel wastes. Successful development of SE for steel wastes could have wider implications, e.g., for the sustainable management of fly ash and mining wastes. PMID:26393631
Rodgers, Kiri J; Hursthouse, Andrew; Cuthbert, Simon
2015-09-18
As waste management regulations become more stringent, yet demand for resources continues to increase, there is a pressing need for innovative management techniques and more sophisticated supporting analysis techniques. Sequential extraction (SE) analysis, a technique previously applied to soils and sediments, offers the potential to gain a better understanding of the composition of solid wastes. SE attempts to classify potentially toxic elements (PTEs) by their associations with phases or fractions in waste, with the aim of improving resource use and reducing negative environmental impacts. In this review we explain how SE can be applied to steel wastes. These present challenges due to differences in sample characteristics compared with materials to which SE has been traditionally applied, specifically chemical composition, particle size and pH buffering capacity, which are critical when identifying a suitable SE method. We highlight the importance of delineating iron-rich phases, and find that the commonly applied BCR (The community Bureau of reference) extraction method is problematic due to difficulties with zinc speciation (a critical steel waste constituent), hence a substantially modified SEP is necessary to deal with particular characteristics of steel wastes. Successful development of SE for steel wastes could have wider implications, e.g., for the sustainable management of fly ash and mining wastes.
Lupo, Philip J; Symanski, Elaine
2009-11-01
Often, in studies evaluating the health effects of hazardous air pollutants (HAPs), researchers rely on ambient air levels to estimate exposure. Two potential data sources are modeled estimates from the U.S. Environmental Protection Agency (EPA) Assessment System for Population Exposure Nationwide (ASPEN) and ambient air pollutant measurements from monitoring networks. The goal was to conduct comparisons of modeled and monitored estimates of HAP levels in the state of Texas using traditional approaches and a previously unexploited method, concordance correlation analysis, to better inform decisions regarding agreement. Census tract-level ASPEN estimates and monitoring data for all HAPs throughout Texas, available from the EPA Air Quality System, were obtained for 1990, 1996, and 1999. Monitoring sites were mapped to census tracts using U.S. Census data. Exclusions were applied to restrict the monitored data to measurements collected using a common sampling strategy with minimal missing values over time. Comparisons were made for 28 HAPs in 38 census tracts located primarily in urban areas throughout Texas. For each pollutant and by year of assessment, modeled and monitored air pollutant annual levels were compared using standard methods (i.e., ratios of model-to-monitor annual levels). Concordance correlation analysis was also used, which assesses linearity and agreement while providing a formal method of statistical inference. Forty-eight percent of the median model-to-monitor values fell between 0.5 and 2, whereas only 17% of concordance correlation coefficients were significant and greater than 0.5. On the basis of concordance correlation analysis, the findings indicate there is poorer agreement when compared with the previously applied ad hoc methods to assess comparability between modeled and monitored levels of ambient HAPs.
Introducing TreeCollapse: a novel greedy algorithm to solve the cophylogeny reconstruction problem.
Drinkwater, Benjamin; Charleston, Michael A
2014-01-01
Cophylogeny mapping is used to uncover deep coevolutionary associations between two or more phylogenetic histories at a macro coevolutionary scale. As cophylogeny mapping is NP-Hard, this technique relies heavily on heuristics to solve all but the most trivial cases. One notable approach utilises a metaheuristic to search only a subset of the exponential number of fixed node orderings possible for the phylogenetic histories in question. This is of particular interest as it is the only known heuristic that guarantees biologically feasible solutions. This has enabled research to focus on larger coevolutionary systems, such as coevolutionary associations between figs and their pollinator wasps, including over 200 taxa. Although able to converge on solutions for problem instances of this size, a reduction from the current cubic running time is required to handle larger systems, such as Wolbachia and their insect hosts. Rather than solving this underlying problem optimally this work presents a greedy algorithm called TreeCollapse, which uses common topological patterns to recover an approximation of the coevolutionary history where the internal node ordering is fixed. This approach offers a significant speed-up compared to previous methods, running in linear time. This algorithm has been applied to over 100 well-known coevolutionary systems converging on Pareto optimal solutions in over 68% of test cases, even where in some cases the Pareto optimal solution has not previously been recoverable. Further, while TreeCollapse applies a local search technique, it can guarantee solutions are biologically feasible, making this the fastest method that can provide such a guarantee. As a result, we argue that the newly proposed algorithm is a valuable addition to the field of coevolutionary research. Not only does it offer a significantly faster method to estimate the cost of cophylogeny mappings but by using this approach, in conjunction with existing heuristics, it can assist in recovering a larger subset of the Pareto front than has previously been possible.
NASA Astrophysics Data System (ADS)
Brezina, Tadej; Graser, Anita; Leth, Ulrich
2017-04-01
Space, and in particular public space for movement and leisure, is a valuable and scarce resource, especially in today's growing urban centres. The distribution and absolute amount of urban space—especially the provision of sufficient pedestrian areas, such as sidewalks—is considered crucial for shaping living and mobility options as well as transport choices. Ubiquitous urban data collection and today's IT capabilities offer new possibilities for providing a relation-preserving overview and for keeping track of infrastructure changes. This paper presents three novel methods for estimating representative sidewalk widths and applies them to the official Viennese streetscape surface database. The first two methods use individual pedestrian area polygons and their geometrical representations of minimum circumscribing and maximum inscribing circles to derive a representative width of these individual surfaces. The third method utilizes aggregated pedestrian areas within the buffered street axis and results in a representative width for the corresponding road axis segment. Results are displayed as city-wide means in a 500 by 500 m grid and spatial autocorrelation based on Moran's I is studied. We also compare the results between methods as well as to previous research, existing databases and guideline requirements on sidewalk widths. Finally, we discuss possible applications of these methods for monitoring and regression analysis and suggest future methodological improvements for increased accuracy.
Improving space debris detection in GEO ring using image deconvolution
NASA Astrophysics Data System (ADS)
Núñez, Jorge; Núñez, Anna; Montojo, Francisco Javier; Condominas, Marta
2015-07-01
In this paper we present a method based on image deconvolution to improve the detection of space debris, mainly in the geostationary ring. Among the deconvolution methods we chose the iterative Richardson-Lucy (R-L), as the method that achieves better goals with a reasonable amount of computation. For this work, we used two sets of real 4096 × 4096 pixel test images obtained with the Telescope Fabra-ROA at Montsec (TFRM). Using the first set of data, we establish the optimal number of iterations in 7, and applying the R-L method with 7 iterations to the images, we show that the astrometric accuracy does not vary significantly while the limiting magnitude of the deconvolved images increases significantly compared to the original ones. The increase is in average about 1.0 magnitude, which means that objects up to 2.5 times fainter can be detected after deconvolution. The application of the method to the second set of test images, which includes several faint objects, shows that, after deconvolution, up to four previously undetected faint objects are detected in a single frame. Finally, we carried out a study of some economic aspects of applying the deconvolution method, showing that an important economic impact can be envisaged.
Vortex flows in the solar chromosphere. I. Automatic detection method
NASA Astrophysics Data System (ADS)
Kato, Y.; Wedemeyer, S.
2017-05-01
Solar "magnetic tornadoes" are produced by rotating magnetic field structures that extend from the upper convection zone and the photosphere to the corona of the Sun. Recent studies show that these kinds of rotating features are an integral part of atmospheric dynamics and occur on a large range of spatial scales. A systematic statistical study of magnetic tornadoes is a necessary next step towards understanding their formation and their role in mass and energy transport in the solar atmosphere. For this purpose, we develop a new automatic detection method for chromospheric swirls, meaning the observable signature of solar tornadoes or, more generally, chromospheric vortex flows and rotating motions. Unlike existing studies that rely on visual inspections, our new method combines a line integral convolution (LIC) imaging technique and a scalar quantity that represents a vortex flow on a two-dimensional plane. We have tested two detection algorithms, based on the enhanced vorticity and vorticity strength quantities, by applying them to three-dimensional numerical simulations of the solar atmosphere with CO5BOLD. We conclude that the vorticity strength method is superior compared to the enhanced vorticity method in all aspects. Applying the method to a numerical simulation of the solar atmosphere reveals very abundant small-scale, short-lived chromospheric vortex flows that have not been found previously by visual inspection.
Standardized observation of neighbourhood disorder: does it work in Canada?
2010-01-01
Background There is a growing body of evidence that where you live is important to your health. Despite numerous previous studies investigating the relationship between neighbourhood deprivation (and structure) and residents' health, the precise nature of this relationship remains unclear. Relatively few investigations have relied on direct observation of neighbourhoods, while those that have were developed primarily in US settings. Evaluation of the transferability of such tools to other contexts is an important first step before applying such instruments to the investigation of health and well-being. This study evaluated the performance of a systematic social observational (SSO) tool (adapted from previous studies of American and British neighbourhoods) in a Canadian urban context. Methods This was a mixed-methods study. Quantitative SSO ratings and qualitative descriptions of 176 block faces were obtained in six Toronto neighbourhoods (4 low-income, and 2 middle/high-income) by trained raters. Exploratory factor analysis was conducted with the quantitative SSO ratings. Content analysis consisted of independent coding of qualitative data by three members of the research team to yield common themes and categories. Results Factor analysis identified three factors (physical decay/disorder, social accessibility, recreational opportunities), but only 'physical decay/disorder' reflected previous findings in the literature. Qualitative results (based on raters' fieldwork experiences) revealed the tool's shortcomings in capturing important features of the neighbourhoods under study, and informed interpretation of the quantitative findings. Conclusions This study tested the performance of an SSO tool in a Canadian context, which is an important initial step before applying it to the study of health and disease. The tool demonstrated important shortcomings when applied to six diverse Toronto neighbourhoods. The study's analyses challenge previously held assumptions (e.g. social 'disorder') regarding neighbourhood social and built environments. For example, neighbourhood 'order' has traditionally been assumed to be synonymous with a certain degree of homogeneity, however the neighbourhoods under study were characterized by high degrees of heterogeneity and low levels of disorder. Heterogeneity was seen as an appealing feature of a block face. Employing qualitative techniques with SSO represents a unique contribution, enhancing both our understanding of the quantitative ratings obtained and of neighbourhood characteristics that are not currently captured by such instruments. PMID:20146821
Thermal Assisted In Vivo Gene Electrotransfer
Donate, Amy; Bulysheva, Anna; Edelblute, Chelsea; Jung, Derrick; Malik, Mohammad A.; Guo, Siqi; Burcus, Niculina; Schoenbach, Karl; Heller, Richard
2016-01-01
Gene electrotransfer is an effective approach for delivering plasmid DNA to a variety of tissues. Delivery of molecules with electric pulses requires control of the electrical parameters to achieve effective delivery. Since discomfort or tissue damage may occur with high applied voltage, the reduction of the applied voltage while achieving the desired expression may be an important improvement. One possible approach is to combine electrotransfer with exogenously applied heat. Previous work performed in vitro demonstrated that increasing temperature before pulsing can enhance gene expres sion and made it possible to reduce electric fields while maintaining expression levels. In the study reported here, this combination was evaluated in vivo using a novel electrode device designed with an inserted laser for application of heat. The results obtained in this study demonstrated that increased temperature during electrotransfer increased expression or maintained expression with a reduction in applied voltage. With further optimization this approach may provide the basis for both a novel method and a novel instrument that may greatly enhance translation of gene electrotransfer. PMID:27029944
Gibis, Monika
2009-01-01
A simple, precise, and specific column high-performance liquid chromatographic (HPLC) method with UV absorption diode array and fluorescence detection has been developed by optimizing a previously described method for the simultaneous quantification of 15 polar and nonpolar heterocyclic amines (HAs) in fried meat products. The HPLC determination could be improved due to the application of a silica-based reversed-phase column with octadecyl groups (TSK-gel Super ODS) and a particle size of 2 microm. The separation of HAs in the complex meat matrix was performed with a 21 min mobile phase gradient. The method was validated for instrumental precision, repeatability, and selectivity and compared with a previously published method. After liquid adsorption of the basic sample mixture on diatomaceous earth, HAs were extracted with ethyl acetate. For cleanup, solid-phase extraction (silica propylsulfonic acid and octadecyl cartridges) and different washing steps were applied. Both nonpolar and polar HAs were determined in one fraction. The calibration curves of all HAs were linear for the applied detection system (correlation coefficient = 0.990-0.995). The recoveries, with the exception of 3-amino-1-methyl-5H-pyrido [4,3-b]indole (Trp-P-2), were between 42 and 98% from meat samples spiked in a range of 1.5 to 3.3 ng/g for fluorescence-active and 4.3 to 8 ng/g for UV-active HAs. For quantification of HAs, the standard addition method was used for adjustment of different characteristics of HAs in the extraction. In fried meat samples (chicken breast and beef patties), 2-amino-3,8-dimethylimidazo[4,5-f]quinoxaline (MelQx), 2-amino-3,4,8-trimethylimidazo[4,5-f] quinoxaline(4,8-DiMelQx), 2-amino-1-methyl-6-phenylimidazo[4,5-b]pyridine (PhIP), norharmane, and harmane were found in a concentration range of 0.02 to 14.3 ng/g.
Wavelet based detection of manatee vocalizations
NASA Astrophysics Data System (ADS)
Gur, Berke M.; Niezrecki, Christopher
2005-04-01
The West Indian manatee (Trichechus manatus latirostris) has become endangered partly because of watercraft collisions in Florida's coastal waterways. Several boater warning systems, based upon manatee vocalizations, have been proposed to reduce the number of collisions. Three detection methods based on the Fourier transform (threshold, harmonic content and autocorrelation methods) were previously suggested and tested. In the last decade, the wavelet transform has emerged as an alternative to the Fourier transform and has been successfully applied in various fields of science and engineering including the acoustic detection of dolphin vocalizations. As of yet, no prior research has been conducted in analyzing manatee vocalizations using the wavelet transform. Within this study, the wavelet transform is used as an alternative to the Fourier transform in detecting manatee vocalizations. The wavelet coefficients are analyzed and tested against a specified criterion to determine the existence of a manatee call. The performance of the method presented is tested on the same data previously used in the prior studies, and the results are compared. Preliminary results indicate that using the wavelet transform as a signal processing technique to detect manatee vocalizations shows great promise.
Gradient-based interpolation method for division-of-focal-plane polarimeters.
Gao, Shengkui; Gruev, Viktor
2013-01-14
Recent advancements in nanotechnology and nanofabrication have allowed for the emergence of the division-of-focal-plane (DoFP) polarization imaging sensors. These sensors capture polarization properties of the optical field at every imaging frame. However, the DoFP polarization imaging sensors suffer from large registration error as well as reduced spatial-resolution output. These drawbacks can be improved by applying proper image interpolation methods for the reconstruction of the polarization results. In this paper, we present a new gradient-based interpolation method for DoFP polarimeters. The performance of the proposed interpolation method is evaluated against several previously published interpolation methods by using visual examples and root mean square error (RMSE) comparison. We found that the proposed gradient-based interpolation method can achieve better visual results while maintaining a lower RMSE than other interpolation methods under various dynamic ranges of a scene ranging from dim to bright conditions.
Modal parameter identification using the log decrement method and band-pass filters
NASA Astrophysics Data System (ADS)
Liao, Yabin; Wells, Valana
2011-10-01
This paper presents a time-domain technique for identifying modal parameters of test specimens based on the log-decrement method. For lightly damped multidegree-of-freedom or continuous systems, the conventional method is usually restricted to identification of fundamental-mode parameters only. Implementation of band-pass filters makes it possible for the proposed technique to extract modal information of higher modes. The method has been applied to a polymethyl methacrylate (PMMA) beam for complex modulus identification in the frequency range 10-1100 Hz. Results compare well with those obtained using the Least Squares method, and with those previously published in literature. Then the accuracy of the proposed method has been further verified by experiments performed on a QuietSteel specimen with very low damping. The method is simple and fast. It can be used for a quick estimation of the modal parameters, or as a complementary approach for validation purposes.
Application of the Probabilistic Dynamic Synthesis Method to the Analysis of a Realistic Structure
NASA Technical Reports Server (NTRS)
Brown, Andrew M.; Ferri, Aldo A.
1998-01-01
The Probabilistic Dynamic Synthesis method is a new technique for obtaining the statistics of a desired response engineering quantity for a structure with non-deterministic parameters. The method uses measured data from modal testing of the structure as the input random variables, rather than more "primitive" quantities like geometry or material variation. This modal information is much more comprehensive and easily measured than the "primitive" information. The probabilistic analysis is carried out using either response surface reliability methods or Monte Carlo simulation. A previous work verified the feasibility of the PDS method on a simple seven degree-of-freedom spring-mass system. In this paper, extensive issues involved with applying the method to a realistic three-substructure system are examined, and free and forced response analyses are performed. The results from using the method are promising, especially when the lack of alternatives for obtaining quantitative output for probabilistic structures is considered.
A biphasic parameter estimation method for quantitative analysis of dynamic renal scintigraphic data
NASA Astrophysics Data System (ADS)
Koh, T. S.; Zhang, Jeff L.; Ong, C. K.; Shuter, B.
2006-06-01
Dynamic renal scintigraphy is an established method in nuclear medicine, commonly used for the assessment of renal function. In this paper, a biphasic model fitting method is proposed for simultaneous estimation of both vascular and parenchymal parameters from renal scintigraphic data. These parameters include the renal plasma flow, vascular and parenchymal mean transit times, and the glomerular extraction rate. Monte Carlo simulation was used to evaluate the stability and confidence of the parameter estimates obtained by the proposed biphasic method, before applying the method on actual patient study cases to compare with the conventional fitting approach and other established renal indices. The various parameter estimates obtained using the proposed method were found to be consistent with the respective pathologies of the study cases. The renal plasma flow and extraction rate estimated by the proposed method were in good agreement with those previously obtained using dynamic computed tomography and magnetic resonance imaging.
Internal wave energy flux from density perturbations in nonlinear stratifications
NASA Astrophysics Data System (ADS)
Lee, Frank M.; Allshouse, Michael R.; Swinney, Harry L.; Morrison, P. J.
2017-11-01
Tidal flow over the topography at the bottom of the ocean, whose density varies with depth, generates internal gravity waves that have a significant impact on the energy budget of the ocean. Thus, understanding the energy flux (J = p v) is important, but it is difficult to measure simultaneously the pressure and velocity perturbation fields, p and v . In a previous work, a Green's-function-based method was developed to calculate the instantaneous p, v , and thus J , given a density perturbation field for a constant buoyancy frequency N. Here we extend the previous analytic Green's function work to include nonuniform N profiles, namely the tanh-shaped and linear cases, because background density stratifications that occur in the ocean and some experiments are nonlinear. In addition, we present a finite-difference method for the general case where N has an arbitrary profile. Each method is validated against numerical simulations. The methods we present can be applied to measured density perturbation data by using our MATLAB graphical user interface EnergyFlux. PJM was supported by the U.S. Department of Energy Contract DE-FG05-80ET-53088. HLS and MRA were supported by ONR Grant No. N000141110701.
A rapid detection method for paralytic shellfish poisoning toxins by cell bioassay.
Okumura, Masanao; Tsuzuki, Hideaki; Tomita, Ban-Ichi
2005-07-01
We report here a rapid detection method for paralytic shellfish poisoning (PSP) toxins using a cultured neuroblastoma cell line, modified from the bioassay system previously established by Manger et al. [Manger, R.L., Leja, L.S., Lee, S.Y., Hungerford, J.M., Kirkpatrick, M.A., Yasumoto, T., Wekell, M.M., 2003. Detection of paralytic shellfish poison by rapid cell bioassay: antagonism of voltage-gated sodium channel active toxins in vitro. J. AOAC Int. 86 (3), 540-543]. In the present study, we made two major modifications to the previous method. The first is the use of maitotoxin, a marine toxin of ciguatera fish poisoning, which enables the incubation period to be reduced to 6 h when applied to the microplate 15 min prior to the end of the incubation. The second is the use of WST-8, a dehydrogenase detecting water-soluble tetrazolium salt for determining the target cell viability, which permits the omission of a washing step and simplifies the counting process. In addition, we attempted to reduce the required materials as much as possible. Thus, our modified method should be useful for screening the PSP-toxins from shellfish.
The Cauchy Problem in Local Spaces for the Complex Ginzburg-Landau EquationII. Contraction Methods
NASA Astrophysics Data System (ADS)
Ginibre, J.; Velo, G.
We continue the study of the initial value problem for the complex Ginzburg-Landau equation
Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers
García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta
2016-01-01
The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine. PMID:28773653
Hard-Rock Stability Analysis for Span Design in Entry-Type Excavations with Learning Classifiers.
García-Gonzalo, Esperanza; Fernández-Muñiz, Zulima; García Nieto, Paulino José; Bernardo Sánchez, Antonio; Menéndez Fernández, Marta
2016-06-29
The mining industry relies heavily on empirical analysis for design and prediction. An empirical design method, called the critical span graph, was developed specifically for rock stability analysis in entry-type excavations, based on an extensive case-history database of cut and fill mining in Canada. This empirical span design chart plots the critical span against rock mass rating for the observed case histories and has been accepted by many mining operations for the initial span design of cut and fill stopes. Different types of analysis have been used to classify the observed cases into stable, potentially unstable and unstable groups. The main purpose of this paper is to present a new method for defining rock stability areas of the critical span graph, which applies machine learning classifiers (support vector machine and extreme learning machine). The results show a reasonable correlation with previous guidelines. These machine learning methods are good tools for developing empirical methods, since they make no assumptions about the regression function. With this software, it is easy to add new field observations to a previous database, improving prediction output with the addition of data that consider the local conditions for each mine.
Retrieval evaluation and distance learning from perceived similarity between endomicroscopy videos.
André, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas
2011-01-01
Evaluating content-based retrieval (CBR) is challenging because it requires an adequate ground-truth. When the available groundtruth is limited to textual metadata such as pathological classes, retrieval results can only be evaluated indirectly, for example in terms of classification performance. In this study we first present a tool to generate perceived similarity ground-truth that enables direct evaluation of endomicroscopic video retrieval. This tool uses a four-points Likert scale and collects subjective pairwise similarities perceived by multiple expert observers. We then evaluate against the generated ground-truth a previously developed dense bag-of-visual-words method for endomicroscopic video retrieval. Confirming the results of previous indirect evaluation based on classification, our direct evaluation shows that this method significantly outperforms several other state-of-the-art CBR methods. In a second step, we propose to improve the CBR method by learning an adjusted similarity metric from the perceived similarity ground-truth. By minimizing a margin-based cost function that differentiates similar and dissimilar video pairs, we learn a weight vector applied to the visual word signatures of videos. Using cross-validation, we demonstrate that the learned similarity distance is significantly better correlated with the perceived similarity than the original visual-word-based distance.
Jurrus, Elizabeth; Paiva, Antonio R C; Watanabe, Shigeki; Anderson, James R; Jones, Bryan W; Whitaker, Ross T; Jorgensen, Erik M; Marc, Robert E; Tasdizen, Tolga
2010-12-01
Study of nervous systems via the connectome, the map of connectivities of all neurons in that system, is a challenging problem in neuroscience. Towards this goal, neurobiologists are acquiring large electron microscopy datasets. However, the shear volume of these datasets renders manual analysis infeasible. Hence, automated image analysis methods are required for reconstructing the connectome from these very large image collections. Segmentation of neurons in these images, an essential step of the reconstruction pipeline, is challenging because of noise, anisotropic shapes and brightness, and the presence of confounding structures. The method described in this paper uses a series of artificial neural networks (ANNs) in a framework combined with a feature vector that is composed of image intensities sampled over a stencil neighborhood. Several ANNs are applied in series allowing each ANN to use the classification context provided by the previous network to improve detection accuracy. We develop the method of serial ANNs and show that the learned context does improve detection over traditional ANNs. We also demonstrate advantages over previous membrane detection methods. The results are a significant step towards an automated system for the reconstruction of the connectome. Copyright 2010 Elsevier B.V. All rights reserved.
The Elastic Behaviour of Sintered Metallic Fibre Networks: A Finite Element Study by Beam Theory
Bosbach, Wolfram A.
2015-01-01
Background The finite element method has complimented research in the field of network mechanics in the past years in numerous studies about various materials. Numerical predictions and the planning efficiency of experimental procedures are two of the motivational aspects for these numerical studies. The widespread availability of high performance computing facilities has been the enabler for the simulation of sufficiently large systems. Objectives and Motivation In the present study, finite element models were built for sintered, metallic fibre networks and validated by previously published experimental stiffness measurements. The validated models were the basis for predictions about so far unknown properties. Materials and Methods The finite element models were built by transferring previously published skeletons of fibre networks into finite element models. Beam theory was applied as simplification method. Results and Conclusions The obtained material stiffness isn’t a constant but rather a function of variables such as sample size and boundary conditions. Beam theory offers an efficient finite element method for the simulated fibre networks. The experimental results can be approximated by the simulated systems. Two worthwhile aspects for future work will be the influence of size and shape and the mechanical interaction with matrix materials. PMID:26569603
USDA-ARS?s Scientific Manuscript database
The coauthors of previously published work correct details from a 2008 publication. Specifically, it was incorrectly indicated in the methods section for data presented in Tables 2 and 3 that this experiment was the result of three replicates. These data were not the result of three replicate experi...
2018-04-01
empirical, external energy-damage correlation methods for evaluating hearing damage risk associated with impulsive noise exposure. AHAAH applies the...is validated against the measured results of human exposures to impulsive sounds, and unlike wholly empirical correlation approaches, AHAAH’s...a measured level (LAEQ8 of 85 dB). The approach in MIL-STD-1474E is very different. Previous standards tried to find a correlation between some
1987-01-01
two nodes behave identically. In GRASP, these constraints are entirely invisible from the user’s point of view. GRASP (Recall that the Levi - Civita ...virtual rotation GRASP is the first program implementing a new methodWl( = Levi -Ciudta symbol op for dynamic analysis of structures, parts of which may...natural coordinatization of sis for this methodology, which incorporates body flexibility components. with the large discrete motions previously
2010-09-30
planktonic ecosystems. OBJECTIVES Our objectives in this work are to 1) visualize and quantify herbivorous copepod feeding in the laboratory...and 2) to apply these methods in the field to observe the dynamics of copepod feeding in situ. In particular we intend to test the “feeding sorties...hypothesis vs. the “in situ feeding” hypothesis regarding the location and timing of copepod feeding and vertical migration. APPROACH Previous
Conductive fiber-based ultrasensitive textile pressure sensor for wearable electronics.
Lee, Jaehong; Kwon, Hyukho; Seo, Jungmok; Shin, Sera; Koo, Ja Hoon; Pang, Changhyun; Son, Seungbae; Kim, Jae Hyung; Jang, Yong Hoon; Kim, Dae Eun; Lee, Taeyoon
2015-04-17
A flexible and sensitive textile-based pressure sensor is developed using highly conductive fibers coated with dielectric rubber materials. The pressure sensor exhibits superior sensitivity, very fast response time, and high stability, compared with previous textile-based pressure sensors. By using a weaving method, the pressure sensor can be applied to make smart gloves and clothes that can control machines wirelessly as human-machine interfaces. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Song, Tingting; Wittkowski, Knut M.
2010-01-01
Ordinal measures are frequently encountered in travel behavior research. This paper presents a new method for combining them when a hierarchical structure of the data can be presumed. This method is applied to study the subjective assessment of the amount of travel by different transportation modes among a group of French clerical workers, along with the desire to increase or decrease the use of such modes. Some advantages of this approach over traditional data reduction technique such as factor analysis when applied to ordinal data are then illustrated. In this study, combining evidence from several variables sheds light on the observed moderately negative relationship between the personal assessment of the amount of travel and the desire to increase or decrease it, thus integrating previous partial (univariate) results. We find a latent demand for travel, thus contributing to clarify the behavioral mechanisms behind the induced traffic phenomenon. Categorizing the above relationship by transportation mode shows a desire for a less environmental-friendly mix of modes (i.e. a greater desire to use heavy motorized modes and a lower desire to use two-wheeled modes), whenever the respondents do not feel to travel extensively. This result, combined with previous theoretical investigations concerning the determinants of the desire to alter trips consumption levels, shows the importance of making people aware of how much they travel. PMID:20953273
Evaluation of the photoionization probability of H2+ by the trajectory semiclassical method
NASA Astrophysics Data System (ADS)
Arkhipov, D. N.; Astashkevich, S. A.; Mityureva, A. A.; Smirnov, V. V.
2018-07-01
The trajectory-based method for calculating the probabilities of transitions in the quantum system developed in our previous works and tested for atoms is applied to calculating the photoionization probability for the simplest molecule - hydrogen molecular ion. In a weak field it is established a good agreement between our photoionization cross section and the data obtained by other theoretical methods for photon energy in the range from one-photon ionization threshold up to 25 a.u. Photoionization cross section in the range 25 < ω ≤ 100 a.u. was calculated for the first time judging by the literature known to us. It is also confirmed that the trajectory method works in a wide range of the field magnitudes including superatomic values up to relativistic intensity.
NASA Technical Reports Server (NTRS)
Mutterperl, William
1944-01-01
A method of conformal transformation is developed that maps an airfoil into a straight line, the line being chosen as the extended chord line of the airfoil. The mapping is accomplished by operating directly with the airfoil ordinates. The absence of any preliminary transformation is found to shorten the work substantially over that of previous methods. Use is made of the superposition of solutions to obtain a rigorous counterpart of the approximate methods of thin-airfoils theory. The method is applied to the solution of the direct and inverse problems for arbitrary airfoils and pressure distributions. Numerical examples are given. Applications to more general types of regions, in particular to biplanes and to cascades of airfoils, are indicated. (author)
An Improved X-ray Diffraction Method For Cellulose Crystallinity Measurement
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ju, Xiaohui; Bowden, Mark E.; Brown, Elvie E.
2015-06-01
We show in this work a modified X-ray diffraction method to determine cellulose crystallinity index (CrI). Nanocrystalline cellulose (NCC) dervided from bleached wood pulp was used as a model substrate. Rietveld refinement was applied with consideration of March-Dollase preferred orientation at the (001) plane. In contrast to most previous methods, three distinct amorphous peaks identified from new model samples which are used to calculate CrI. A 2 theta range from 10° to 75° was found to be more suitable to determine CrI and crystallite structural parameters such as d-spacing and crystallite size. This method enables a more reliable measurement ofmore » CrI of cellulose and may be applicable to other types of cellulose polymorphs.« less
NASA Astrophysics Data System (ADS)
Potlov, A. Yu.; Frolov, S. V.; Proskurin, S. G.
2018-04-01
The method of Doppler color mapping of one specific (previously chosen) velocity in a turbulent flow inside biological tissues using optical coherence tomography is described. The key features of the presented method are: the raw data are separated into three parts, corresponding to the unmoving biological tissue, the positively and negatively directed biological fluid flows; the further independent signal processing procedure yields the structure image and two images of the chosen velocity, which are then normalised, encoded and joined. The described method can be used to obtain in real time the anatomical maps of the chosen velocities in normal and pathological states. The described method can be applied not only in optical coherence tomography, but also in endoscopic and Doppler ultrasonic medical imaging systems.
NASA Astrophysics Data System (ADS)
Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry
2018-06-01
This paper deals with the initialization of a non linear identification algorithm used to accurately estimate the physical parameters of Lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modeling is presented using a fractional order method. The battery model is thus reformulated into a transfer function which can be identified through Levenberg-Marquardt algorithm to ensure the algorithm's convergence to the physical parameters. An initialization method is proposed in this paper by taking into account previously acquired information about the static and dynamic system behavior. The method is validated using noisy voltage response, while precision of the final identification results is evaluated using Monte-Carlo method.
NASA Technical Reports Server (NTRS)
Lamar, J. E.; Gloss, B. B.
1975-01-01
Because the potential flow suction along the leading and side edges of a planform can be used to determine both leading- and side-edge vortex lift, the present investigation was undertaken to apply the vortex-lattice method to computing side-edge suction force for isolated or interacting planforms. Although there is a small effect of bound vortex sweep on the computation of the side-edge suction force, the results obtained for a number of different isolated planforms produced acceptable agreement with results obtained from a method employing continuous induced-velocity distributions. By using the method outlined, better agreement between theory and experiment was noted for a wing in the presence of a canard than was previously obtained.
Smith, Kyle K.G.; Poulsen, Jens Aage; Nyman, Gunnar; ...
2015-06-30
Here, we apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm -3) and (T = 23.0 K, n = 24.61 nm -3), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. Moreover, this showsmore » that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.« less
Puschel, Klaus; Thompson, Beti
2011-01-01
Summary Breast cancer has the highest incidence of all cancers among women in Chile. In 2005, a national health program progressively introduced free mammography screening for women aged 50 and older; however, three years later the rates of compliance with mammographic screening was only 12% in Santiago, the capital city of Chile. This implementation article combines the findings of two previous studies that applied qualitative and quantitative methods to improve mammography screening in an area of Santiago. Socio-cultural and accessibility factors were identified as barriers and facilitators during the qualitative phase of the study and then applied to the design of a quantitative randomized clinical trial. After six months of intervention, 6% of women in the standard care group, 51.8% in the low intensity intervention group, and 70.1% in the high intensity intervention group had undergone a screening mammogram. This review discusses how the utilization of mixed methods research can contribute to the improvement of the implementation of health policies in local communities. PMID:21334897
Mapping national scale land cover disturbance for the continental United States, 2006 to 2010
NASA Astrophysics Data System (ADS)
Hansen, M. C.; Potapov, P. V.; Egorov, A.; Roy, D. P.; Loveland, T. R.
2011-12-01
Data from the Web-Enabled Landsat Data (WELD) project were used to quantify forest cover loss and bare ground gain dynamics for the continental United States at a 30 meter spatial resolution from 2006 to 2010. Results illustrate the land cover dynamics associated with forestry, urbanization and other medium to long-term cover conversion processes. Ephemeral changes, such as crop rotations and fallows or inundation, were not quantified. Forest disturbance is pervasive at the national-scale, while increasing bare ground is found in growing urban areas as well as in mining regions. The methods applied are an outgrowth of the Vegetation Continuous Field (VCF) method, initially employed with MODIS data and then WELD data to map percent cover variables. As in our previous work with MODIS in mapping forest change, we applied the VCF method to characterize forest cover loss and bare ground gain probability per pixel. Additional themes will be added to provide a more comprehensive picture of national-scale land dynamics based on these initial results using WELD.
Non-contact method for directing electrotaxis
NASA Astrophysics Data System (ADS)
Ahirwar, Dinesh K.; Nasser, Mohd W.; Jones, Travis H.; Sequin, Emily K.; West, Joseph D.; Henthorne, Timothy L.; Javor, Joshua; Kaushik, Aniruddha M.; Ganju, Ramesh K.; Subramaniam, Vish V.
2015-06-01
We present a method to induce electric fields and drive electrotaxis (galvanotaxis) without the need for electrodes to be in contact with the media containing the cell cultures. We report experimental results using a modification of the transmembrane assay, demonstrating the hindrance of migration of breast cancer cells (SCP2) when an induced a.c. electric field is present in the appropriate direction (i.e. in the direction of migration). Of significance is that migration of these cells is hindered at electric field strengths many orders of magnitude (5 to 6) below those previously reported for d.c. electrotaxis, and even in the presence of a chemokine (SDF-1α) or a growth factor (EGF). Induced a.c. electric fields applied in the direction of migration are also shown to hinder motility of non-transformed human mammary epithelial cells (MCF10A) in the presence of the growth factor EGF. In addition, we also show how our method can be applied to other cell migration assays (scratch assay), and by changing the coil design and holder, that it is also compatible with commercially available multi-well culture plates.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kyle K.G.; Poulsen, Jens Aage; Nyman, Gunnar
Here, we apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm -3) and (T = 23.0 K, n = 24.61 nm -3), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. Moreover, this showsmore » that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.« less
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kyle K. G., E-mail: kylesmith@utexas.edu; Poulsen, Jens Aage, E-mail: jens72@chem.gu.se; Nyman, Gunnar, E-mail: nyman@chem.gu.se
We apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm{sup −3}) and (T = 23.0 K, n = 24.61 nm{sup −3}), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. This shows that FK-QCWmore » provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.« less
3D receiver function Kirchhoff depth migration image of Cascadia subduction slab weak zone
NASA Astrophysics Data System (ADS)
Cheng, C.; Allen, R. M.; Bodin, T.; Tauzin, B.
2016-12-01
We have developed a highly computational efficient algorithm of applying 3D Kirchhoff depth migration to telesismic receiver function data. Combine primary PS arrival with later multiple arrivals we are able to reveal a better knowledge about the earth discontinuity structure (transmission and reflection). This method is highly useful compare with traditional CCP method when dipping structure is met during the imaging process, such as subduction slab. We apply our method to the reginal Cascadia subduction zone receiver function data and get a high resolution 3D migration image, for both primary and multiples. The image showed us a clear slab weak zone (slab hole) in the upper plate boundary under Northern California and the whole Oregon. Compare with previous 2D receiver function image from 2D array(CAFE and CASC93), the position of the weak zone shows interesting conherency. This weak zone is also conherent with local seismicity missing and heat rising, which lead us to think about and compare with the ocean plate stucture and the hydralic fluid process during the formation and migration of the subduction slab.
Luo, Haoxiang; Mittal, Rajat; Zheng, Xudong; Bielamowicz, Steven A.; Walsh, Raymond J.; Hahn, James K.
2008-01-01
A new numerical approach for modeling a class of flow–structure interaction problems typically encountered in biological systems is presented. In this approach, a previously developed, sharp-interface, immersed-boundary method for incompressible flows is used to model the fluid flow and a new, sharp-interface Cartesian grid, immersed boundary method is devised to solve the equations of linear viscoelasticity that governs the solid. The two solvers are coupled to model flow–structure interaction. This coupled solver has the advantage of simple grid generation and efficient computation on simple, single-block structured grids. The accuracy of the solid-mechanics solver is examined by applying it to a canonical problem. The solution methodology is then applied to the problem of laryngeal aerodynamics and vocal fold vibration during human phonation. This includes a three-dimensional eigen analysis for a multi-layered vocal fold prototype as well as two-dimensional, flow-induced vocal fold vibration in a modeled larynx. Several salient features of the aerodynamics as well as vocal-fold dynamics are presented. PMID:19936017
Smith, Kyle K G; Poulsen, Jens Aage; Nyman, Gunnar; Cunsolo, Alessandro; Rossky, Peter J
2015-06-28
We apply the Feynman-Kleinert Quasi-Classical Wigner (FK-QCW) method developed in our previous work [Smith et al., J. Chem. Phys. 142, 244112 (2015)] for the determination of the dynamic structure factor of liquid para-hydrogen and ortho-deuterium at state points of (T = 20.0 K, n = 21.24 nm(-3)) and (T = 23.0 K, n = 24.61 nm(-3)), respectively. When applied to this challenging system, it is shown that this new FK-QCW method consistently reproduces the experimental dynamic structure factor reported by Smith et al. [J. Chem. Phys. 140, 034501 (2014)] for all momentum transfers considered. This shows that FK-QCW provides a substantial improvement over the Feynman-Kleinert linearized path-integral method, in which purely classical dynamics are used. Furthermore, for small momentum transfers, it is shown that FK-QCW provides nearly the same results as ring-polymer molecular dynamics (RPMD), thus suggesting that FK-QCW provides a potentially more appealing algorithm than RPMD since it is not formally limited to correlation functions involving linear operators.
Semi-quantitative MALDI-TOF for antimicrobial susceptibility testing in Staphylococcus aureus.
Maxson, Tucker; Taylor-Howell, Cheryl L; Minogue, Timothy D
2017-01-01
Antibiotic resistant bacterial infections are a significant problem in the healthcare setting, in many cases requiring the rapid administration of appropriate and effective antibiotic therapy. Diagnostic assays capable of quickly and accurately determining the pathogen resistance profile are therefore crucial to initiate or modify care. Matrix-assisted laser desorption/ionization time-of-flight (MALDI-TOF) mass spectrometry (MS) is a standard method for species identification in many clinical microbiology laboratories and is well positioned to be applied towards antimicrobial susceptibility testing. One recently reported approach utilizes semi-quantitative MALDI-TOF MS for growth rate analysis to provide a resistance profile independent of resistance mechanism. This method was previously successfully applied to Gram-negative pathogens and mycobacteria; here, we evaluated this method with the Gram-positive pathogen Staphylococcus aureus. Specifically, we used 35 strains of S. aureus and four antibiotics to optimize and test the assay, resulting in an overall accuracy rate of 95%. Application of the optimized assay also successfully determined susceptibility from mock blood cultures, allowing both species identification and resistance determination for all four antibiotics within 3 hours of blood culture positivity.
Developing a mailed phantom to implement a local QA program in Egypt radiotherapy centers
NASA Astrophysics Data System (ADS)
Soliman, H. A.; Aletreby, M.
2016-07-01
In this work, a simple method that differs from the IAEA/WHO Thermoluminescent dosimeters (TLD) postal quality assurance (QA) program is developed. A small perspex; polymethyl methacrylate (PMMA), phantom measured 50 mm × 50 mm × 50 mm is constructed to be used for absorbed dose verification of high-energy photon beams in some major radiotherapy centers in Egypt. The phantom weighted only 140.7 g with two buildup covers weighted 14.8 and 43.19 g for the Cobalt-60 and the 6-MV X-ray beams, respectively. This phantom is aimed for use in the future's external audit/QA services in Egypt for the first time. TLD-700 chips are used for testing and investigating a convenient and national dosimetry QA program. Although the used methodology is comparable to previously introduced but new system; it has smaller size, less weight, and different more available material. Comparison with the previous similar designs is introduced. Theoretical calculations were done by the commercial Eclipse treatment planning system, implementing the pencil beam convolution algorithm to verify the accuracy of the experimental calculation of the dose conversion factor of water to the perspex phantom. The new constructed small phantom and methodology was applied in 10 participating radiotherapy centers. The absorbed dose was verified under the reference conditions for both 60Co and 6-MV high-energy photon beams. The checked beams were within the 5% limit except for four photon beams. There was an agreement of 0.2% between our experimental data and those previously published confirming the validity of the applied method in verifying radiotherapy absorbed dose.
Application of kernel functions for accurate similarity search in large chemical databases.
Wang, Xiaohong; Huan, Jun; Smalter, Aaron; Lushington, Gerald H
2010-04-29
Similarity search in chemical structure databases is an important problem with many applications in chemical genomics, drug design, and efficient chemical probe screening among others. It is widely believed that structure based methods provide an efficient way to do the query. Recently various graph kernel functions have been designed to capture the intrinsic similarity of graphs. Though successful in constructing accurate predictive and classification models, graph kernel functions can not be applied to large chemical compound database due to the high computational complexity and the difficulties in indexing similarity search for large databases. To bridge graph kernel function and similarity search in chemical databases, we applied a novel kernel-based similarity measurement, developed in our team, to measure similarity of graph represented chemicals. In our method, we utilize a hash table to support new graph kernel function definition, efficient storage and fast search. We have applied our method, named G-hash, to large chemical databases. Our results show that the G-hash method achieves state-of-the-art performance for k-nearest neighbor (k-NN) classification. Moreover, the similarity measurement and the index structure is scalable to large chemical databases with smaller indexing size, and faster query processing time as compared to state-of-the-art indexing methods such as Daylight fingerprints, C-tree and GraphGrep. Efficient similarity query processing method for large chemical databases is challenging since we need to balance running time efficiency and similarity search accuracy. Our previous similarity search method, G-hash, provides a new way to perform similarity search in chemical databases. Experimental study validates the utility of G-hash in chemical databases.
Serum Hydroxyl Radical Scavenging Capacity as Quantified with Iron-Free Hydroxyl Radical Source
Endo, Nobuyuki; Oowada, Shigeru; Sueishi, Yoshimi; Shimmei, Masashi; Makino, Keisuke; Fujii, Hirotada; Kotake, Yashige
2009-01-01
We have developed a simple ESR spin trapping based method for hydroxyl (OH) radical scavenging-capacity determination, using iron-free OH radical source. Instead of the widely used Fenton reaction, a short (typically 5 seconds) in situ UV-photolysis of a dilute hydrogen peroxide aqueous solution was employed to generate reproducible amounts of OH radicals. ESR spin trapping was applied to quantify OH radicals; the decrease in the OH radical level due to the specimen’s scavenging activity was converted into the OH radical scavenging capacity (rate). The validity of the method was confirmed in pure antioxidants, and the agreement with the previous data was satisfactory. In the second half of this work, the new method was applied to the sera of chronic renal failure (CRF) patients. We show for the first time that after hemodialysis, OH radical scavenging capacity of the CRF serum was restored to the level of healthy control. This method is simple and rapid, and the low concentration hydrogen peroxide is the only chemical added to the system, that could eliminate the complexity of iron-involved Fenton reactions or the use of the pulse-radiolysis system. PMID:19794928
Identification of informative features for predicting proinflammatory potentials of engine exhausts.
Wang, Chia-Chi; Lin, Ying-Chi; Lin, Yuan-Chung; Jhang, Syu-Ruei; Tung, Chun-Wei
2017-08-18
The immunotoxicity of engine exhausts is of high concern to human health due to the increasing prevalence of immune-related diseases. However, the evaluation of immunotoxicity of engine exhausts is currently based on expensive and time-consuming experiments. It is desirable to develop efficient methods for immunotoxicity assessment. To accelerate the development of safe alternative fuels, this study proposed a computational method for identifying informative features for predicting proinflammatory potentials of engine exhausts. A principal component regression (PCR) algorithm was applied to develop prediction models. The informative features were identified by a sequential backward feature elimination (SBFE) algorithm. A total of 19 informative chemical and biological features were successfully identified by SBFE algorithm. The informative features were utilized to develop a computational method named FS-CBM for predicting proinflammatory potentials of engine exhausts. FS-CBM model achieved a high performance with correlation coefficient values of 0.997 and 0.943 obtained from training and independent test sets, respectively. The FS-CBM model was developed for predicting proinflammatory potentials of engine exhausts with a large improvement on prediction performance compared with our previous CBM model. The proposed method could be further applied to construct models for bioactivities of mixtures.
Minguzzi, Stefano; Terlizzi, Federica; Lanzoni, Chiara; Poggi Pollini, Carlo; Ratti, Claudio
2016-01-01
Many efforts have been made to develop a rapid and sensitive method for phytoplasma and virus detection. Taking our cue from previous works, different rapid sample preparation methods have been tested and applied to Candidatus Phytoplasma prunorum (‘Ca. P. prunorum’) detection by RT-qPCR. A duplex RT-qPCR has been optimized using the crude sap as a template to simultaneously amplify a fragment of 16S rRNA of the pathogen and 18S rRNA of the host plant. The specific plant 18S rRNA internal control allows comparison and relative quantification of samples. A comparison between DNA and RNA contribution to qPCR detection is provided, showing higher contribution of the latter. The method presented here has been validated on more than a hundred samples of apricot, plum and peach trees. Since 2013, this method has been successfully applied to monitor ‘Ca. P. prunorum’ infections in field and nursery. A triplex RT-qPCR assay has also been optimized to simultaneously detect ‘Ca. P. prunorum’ and Plum pox virus (PPV) in Prunus. PMID:26742106
A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.
NASA Technical Reports Server (NTRS)
Harris, J. D.
1971-01-01
The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.
Optical forces near micro-fabricated devices
NASA Astrophysics Data System (ADS)
Mejia Prada, Camilo Andres
In this dissertation, I study optical forces near micro-fabricated devices for multi- particle manipulation. I consider particles of different sizes and compositions. In particular, I focus my study on both dielectric and gold particles as well as Giant Unilamellar Vesicles. First, I consider optical forces near a PhC and establish the feasibility of a technique which we term Light-Assisted Templated Self-assembly (LATS). In contrast to previous work on Fabry-Perot enhancement of trapping forces above a flat substrate, I exploit the guided resonance modes of a PhC to provide resonant enhancement of optical forces. Then, I explore optical forces near a Dual Beam Optical Trap (DBOT). I present a method to extract the bending modulus of the membrane from the area strain data. This method incorporates three-dimensional ray-tracing to calculate the applied stress in the DBOT within the ray optics approximation. I compare the optical force calculated using the ray optics approximation and Maxwell Stress Tensor method to ensure the approximation's accuracy. Next, we apply this method to 3 populations of GUVs to extract the bending modulus of membranes comprised of saturated and monounsaturated lipids in both gel and liquid phases.
Self-Learning Off-Lattice Kinetic Monte Carlo method as applied to growth on metal surfaces
NASA Astrophysics Data System (ADS)
Trushin, Oleg; Kara, Abdelkader; Rahman, Talat
2007-03-01
We propose a new development in the Self-Learning Kinetic Monte Carlo (SLKMC) method with the goal of improving the accuracy with which atomic mechanisms controlling diffusive processes on metal surfaces may be identified. This is important for diffusion of small clusters (2 - 20 atoms) in which atoms may occupy Off-Lattice positions. Such a procedure is also necessary for consideration of heteroepitaxial growth. The new technique combines an earlier version of SLKMC [1] with the inclusion of off-lattice occupancy. This allows us to include arbitrary positions of adatoms in the modeling and makes the simulations more realistic and reliable. We have tested this new approach for the case of the diffusion of small 2D Cu clusters diffusion on Cu(111) and found good performance and satisfactory agreement with results obtained from previous version of SLKMC. The new method also helped reveal a novel atomic mechanism contributing to cluster migration. We have also applied this method to study the diffusion of Cu clusters on Ag(111), and find that Cu atoms generally prefer to occupy off-lattice sites. [1] O. Trushin, A. Kara, A. Karim, T.S. Rahman Phys. Rev B 2005
Quality assurance of multiport image-guided minimally invasive surgery at the lateral skull base.
Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg
2014-01-01
For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes.
Quality Assurance of Multiport Image-Guided Minimally Invasive Surgery at the Lateral Skull Base
Nau-Hermes, Maria; Schmitt, Robert; Becker, Meike; El-Hakimi, Wissam; Hansen, Stefan; Klenzner, Thomas; Schipper, Jörg
2014-01-01
For multiport image-guided minimally invasive surgery at the lateral skull base a quality management is necessary to avoid the damage of closely spaced critical neurovascular structures. So far there is no standardized method applicable independently from the surgery. Therefore, we adapt a quality management method, the quality gates (QG), which is well established in, for example, the automotive industry and apply it to multiport image-guided minimally invasive surgery. QG divide a process into different sections. Passing between sections can only be achieved if previously defined requirements are fulfilled which secures the process chain. An interdisciplinary team of otosurgeons, computer scientists, and engineers has worked together to define the quality gates and the corresponding criteria that need to be fulfilled before passing each quality gate. In order to evaluate the defined QG and their criteria, the new surgery method was applied with a first prototype at a human skull cadaver model. We show that the QG method can ensure a safe multiport minimally invasive surgical process at the lateral skull base. Therewith, we present an approach towards the standardization of quality assurance of surgical processes. PMID:25105146
Vital sign sensing method based on EMD in terahertz band
NASA Astrophysics Data System (ADS)
Xu, Zhengwu; Liu, Tong
2014-12-01
Non-contact respiration and heartbeat rates detection could be applied to find survivors trapped in the disaster or the remote monitoring of the respiration and heartbeat of a patient. This study presents an improved algorithm that extracts the respiration and heartbeat rates of humans by utilizing the terahertz radar, which further lessens the effects of noise, suppresses the cross-term, and enhances the detection accuracy. A human target echo model for the terahertz radar is first presented. Combining the over-sampling method, low-pass filter, and Empirical Mode Decomposition improves the signal-to-noise ratio. The smoothed pseudo Wigner-Ville distribution time-frequency technique and the centroid of the spectrogram are used to estimate the instantaneous velocity of the target's cardiopulmonary motion. The down-sampling method is adopted to prevent serious distortion. Finally, a second time-frequency analysis is applied to the centroid curve to extract the respiration and heartbeat rates of the individual. Simulation results show that compared with the previously presented vital sign sensing method, the improved algorithm enhances the signal-to-noise ratio to 1 dB with a detection accuracy of 80%. The improved algorithm is an effective approach for the detection of respiration and heartbeat signal in a complicated environment.
Testing large aspheric surfaces with complementary annular subaperture interferometric method
NASA Astrophysics Data System (ADS)
Hou, Xi; Wu, Fan; Lei, Baiping; Fan, Bin; Chen, Qiang
2008-07-01
Annular subaperture interferometric method has provided an alternative solution to testing rotationally symmetric aspheric surfaces with low cost and flexibility. However, some new challenges, particularly in the motion and algorithm components, appear when applied to large aspheric surfaces with large departure in the practical engineering. Based on our previously reported annular subaperture reconstruction algorithm with Zernike annular polynomials and matrix method, and the experimental results for an approximate 130-mm diameter and f/2 parabolic mirror, an experimental investigation by testing an approximate 302-mm diameter and f/1.7 parabolic mirror with the complementary annular subaperture interferometric method is presented. We have focused on full-aperture reconstruction accuracy, and discuss some error effects and limitations of testing larger aspheric surfaces with the annular subaperture method. Some considerations about testing sector segment with complementary sector subapertures are provided.
Sawamura, Jitsuki; Morishita, Shigeru; Ishigooka, Jun
2016-02-09
Previously, we applied basic group theory and related concepts to scales of measurement of clinical disease states and clinical findings (including laboratory data). To gain a more concrete comprehension, we here apply the concept of matrix representation, which was not explicitly exploited in our previous work. Starting with a set of orthonormal vectors, called the basis, an operator Rj (an N-tuple patient disease state at the j-th session) was expressed as a set of stratified vectors representing plural operations on individual components, so as to satisfy the group matrix representation. The stratified vectors containing individual unit operations were combined into one-dimensional square matrices [Rj]s. The [Rj]s meet the matrix representation of a group (ring) as a K-algebra. Using the same-sized matrix of stratified vectors, we can also express changes in the plural set of [Rj]s. The method is demonstrated on simple examples. Despite the incompleteness of our model, the group matrix representation of stratified vectors offers a formal mathematical approach to clinical medicine, aligning it with other branches of natural science.
MGmapper: Reference based mapping and taxonomy annotation of metagenomics sequence reads
Lukjancenko, Oksana; Thomsen, Martin Christen Frølund; Maddalena Sperotto, Maria; Lund, Ole; Møller Aarestrup, Frank; Sicheritz-Pontén, Thomas
2017-01-01
An increasing amount of species and gene identification studies rely on the use of next generation sequence analysis of either single isolate or metagenomics samples. Several methods are available to perform taxonomic annotations and a previous metagenomics benchmark study has shown that a vast number of false positive species annotations are a problem unless thresholds or post-processing are applied to differentiate between correct and false annotations. MGmapper is a package to process raw next generation sequence data and perform reference based sequence assignment, followed by a post-processing analysis to produce reliable taxonomy annotation at species and strain level resolution. An in-vitro bacterial mock community sample comprised of 8 genuses, 11 species and 12 strains was previously used to benchmark metagenomics classification methods. After applying a post-processing filter, we obtained 100% correct taxonomy assignments at species and genus level. A sensitivity and precision at 75% was obtained for strain level annotations. A comparison between MGmapper and Kraken at species level, shows MGmapper assigns taxonomy at species level using 84.8% of the sequence reads, compared to 70.5% for Kraken and both methods identified all species with no false positives. Extensive read count statistics are provided in plain text and excel sheets for both rejected and accepted taxonomy annotations. The use of custom databases is possible for the command-line version of MGmapper, and the complete pipeline is freely available as a bitbucked package (https://bitbucket.org/genomicepidemiology/mgmapper). A web-version (https://cge.cbs.dtu.dk/services/MGmapper) provides the basic functionality for analysis of small fastq datasets. PMID:28467460
MGmapper: Reference based mapping and taxonomy annotation of metagenomics sequence reads.
Petersen, Thomas Nordahl; Lukjancenko, Oksana; Thomsen, Martin Christen Frølund; Maddalena Sperotto, Maria; Lund, Ole; Møller Aarestrup, Frank; Sicheritz-Pontén, Thomas
2017-01-01
An increasing amount of species and gene identification studies rely on the use of next generation sequence analysis of either single isolate or metagenomics samples. Several methods are available to perform taxonomic annotations and a previous metagenomics benchmark study has shown that a vast number of false positive species annotations are a problem unless thresholds or post-processing are applied to differentiate between correct and false annotations. MGmapper is a package to process raw next generation sequence data and perform reference based sequence assignment, followed by a post-processing analysis to produce reliable taxonomy annotation at species and strain level resolution. An in-vitro bacterial mock community sample comprised of 8 genuses, 11 species and 12 strains was previously used to benchmark metagenomics classification methods. After applying a post-processing filter, we obtained 100% correct taxonomy assignments at species and genus level. A sensitivity and precision at 75% was obtained for strain level annotations. A comparison between MGmapper and Kraken at species level, shows MGmapper assigns taxonomy at species level using 84.8% of the sequence reads, compared to 70.5% for Kraken and both methods identified all species with no false positives. Extensive read count statistics are provided in plain text and excel sheets for both rejected and accepted taxonomy annotations. The use of custom databases is possible for the command-line version of MGmapper, and the complete pipeline is freely available as a bitbucked package (https://bitbucket.org/genomicepidemiology/mgmapper). A web-version (https://cge.cbs.dtu.dk/services/MGmapper) provides the basic functionality for analysis of small fastq datasets.
Callén, E; Tischkowitz, M D; Creus, A; Marcos, R; Bueren, J A; Casado, J A; Mathew, C G; Surrallés, J
2004-01-01
Fanconi anaemia is an autosomal recessive disease characterized by chromosome fragility, multiple congenital abnormalities, progressive bone marrow failure and a high predisposition to develop malignancies. Most of the Fanconi anaemia patients belong to complementation group FA-A due to mutations in the FANCA gene. This gene contains 43 exons along a 4.3-kb coding sequence with a very heterogeneous mutational spectrum that makes the mutation screening of FANCA a difficult task. In addition, as the FANCA gene is rich in Alu sequences, it was reported that Alu-mediated recombination led to large intragenic deletions that cannot be detected in heterozygous state by conventional PCR, SSCP analysis, or DNA sequencing. To overcome this problem, a method based on quantitative fluorescent multiplex PCR was proposed to detect intragenic deletions in FANCA involving the most frequently deleted exons (exons 5, 11, 17, 21 and 31). Here we apply the proposed method to detect intragenic deletions in 25 Spanish FA-A patients previously assigned to complementation group FA-A by FANCA cDNA retroviral transduction. A total of eight heterozygous deletions involving from one to more than 26 exons were detected. Thus, one third of the patients carried a large intragenic deletion that would have not been detected by conventional methods. These results are in agreement with previously published data and indicate that large intragenic deletions are one of the most frequent mutations leading to Fanconi anaemia. Consequently, this technology should be applied in future studies on FANCA to improve the mutation detection rate. Copyright 2003 S. Karger AG, Basel
Uher, Jana
2015-12-01
Taxonomic "personality" models are widely used in research and applied fields. This article applies the Transdisciplinary Philosophy-of-Science Paradigm for Research on Individuals (TPS-Paradigm) to scrutinise the three methodological steps that are required for developing comprehensive "personality" taxonomies: 1) the approaches used to select the phenomena and events to be studied, 2) the methods used to generate data about the selected phenomena and events and 3) the reduction principles used to extract the "most important" individual-specific variations for constructing "personality" taxonomies. Analyses of some currently popular taxonomies reveal frequent mismatches between the researchers' explicit and implicit metatheories about "personality" and the abilities of previous methodologies to capture the particular kinds of phenomena toward which they are targeted. Serious deficiencies that preclude scientific quantifications are identified in standardised questionnaires, psychology's established standard method of investigation. These mismatches and deficiencies derive from the lack of an explicit formulation and critical reflection on the philosophical and metatheoretical assumptions being made by scientists and from the established practice of radically matching the methodological tools to researchers' preconceived ideas and to pre-existing statistical theories rather than to the particular phenomena and individuals under study. These findings raise serious doubts about the ability of previous taxonomies to appropriately and comprehensively reflect the phenomena towards which they are targeted and the structures of individual-specificity occurring in them. The article elaborates and illustrates with empirical examples methodological principles that allow researchers to appropriately meet the metatheoretical requirements and that are suitable for comprehensively exploring individuals' "personality".
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulisek, Jonathan A.; Schweppe, John E.; Stave, Sean C.
2015-06-01
Helicopter-mounted gamma-ray detectors can provide law enforcement officials the means to quickly and accurately detect, identify, and locate radiological threats over a wide geographical area. The ability to accurately distinguish radiological threat-generated gamma-ray signatures from background gamma radiation in real time is essential in order to realize this potential. This problem is non-trivial, especially in urban environments for which the background may change very rapidly during flight. This exacerbates the challenge of estimating background due to the poor counting statistics inherent in real-time airborne gamma-ray spectroscopy measurements. To address this, we have developed a new technique for real-time estimation ofmore » background gamma radiation from aerial measurements. This method is built upon on the noise-adjusted singular value decomposition (NASVD) technique that was previously developed for estimating the potassium (K), uranium (U), and thorium (T) concentrations in soil post-flight. The method can be calibrated using K, U, and T spectra determined from radiation transport simulations along with basis functions, which may be determined empirically by applying maximum likelihood estimation (MLE) to previously measured airborne gamma-ray spectra. The method was applied to both measured and simulated airborne gamma-ray spectra, with and without man-made radiological source injections. Compared to schemes based on simple averaging, this technique was less sensitive to background contamination from the injected man-made sources and may be particularly useful when the gamma-ray background frequently changes during the course of the flight.« less
Protein classification based on text document classification techniques.
Cheng, Betty Yee Man; Carbonell, Jaime G; Klein-Seetharaman, Judith
2005-03-01
The need for accurate, automated protein classification methods continues to increase as advances in biotechnology uncover new proteins. G-protein coupled receptors (GPCRs) are a particularly difficult superfamily of proteins to classify due to extreme diversity among its members. Previous comparisons of BLAST, k-nearest neighbor (k-NN), hidden markov model (HMM) and support vector machine (SVM) using alignment-based features have suggested that classifiers at the complexity of SVM are needed to attain high accuracy. Here, analogous to document classification, we applied Decision Tree and Naive Bayes classifiers with chi-square feature selection on counts of n-grams (i.e. short peptide sequences of length n) to this classification task. Using the GPCR dataset and evaluation protocol from the previous study, the Naive Bayes classifier attained an accuracy of 93.0 and 92.4% in level I and level II subfamily classification respectively, while SVM has a reported accuracy of 88.4 and 86.3%. This is a 39.7 and 44.5% reduction in residual error for level I and level II subfamily classification, respectively. The Decision Tree, while inferior to SVM, outperforms HMM in both level I and level II subfamily classification. For those GPCR families whose profiles are stored in the Protein FAMilies database of alignments and HMMs (PFAM), our method performs comparably to a search against those profiles. Finally, our method can be generalized to other protein families by applying it to the superfamily of nuclear receptors with 94.5, 97.8 and 93.6% accuracy in family, level I and level II subfamily classification respectively. Copyright 2005 Wiley-Liss, Inc.
Tsui, Po-Hsiang; Yeh, Chih-Kuang; Chang, Chien-Cheng
2009-05-01
The microbubbles destruction/replenishment technique has been previously applied to estimating blood flow in the microcirculation. The rate of increase of the time-intensity curve (TIC) due to microbubbles flowing into the region of interest (ROI), as measured from B-mode images, closely reflects the flow velocity. In previous studies, we proposed a new approach called the time-Nakagami-parameter curve (TNC) obtained from Nakagami images to monitor microbubble replenishment for quantifying the microvascular flow velocity. This study aimed to further explore some effects that may affect the TNC to estimate the microflow, including microbubble concentration, ultrasound transmitting energy, attenuation, intrinsic noise, and tissue clutter. In order to well control each effect production, we applied a typical simulation method to investigate the TIC and TNC. The rates of increase of the TIC and TNC were expressed by the rate constants beta(I) and beta(N), respectively, of a monoexponential model. The results show that beta(N) quantifies the microvascular flow velocity similarly to the conventional beta(I) . Moreover, the measures of beta(I) and beta(N) are not influenced by microbubble concentration, transducer excitation energy, and attenuation effect. Although the effect of intrinsic signals contributed by noise and blood would influence the TNC behavior, the TNC method has a better tolerance of tissue clutter than the TIC does, allowing the presence of some clutter components in the ROI. The results suggest that the TNC method can be used as a complementary tool for the conventional TIC to reduce the wall filter requirements for blood flow measurement in the microcirculation.
Feynman’s clock, a new variational principle, and parallel-in-time quantum dynamics
McClean, Jarrod R.; Parkhill, John A.; Aspuru-Guzik, Alán
2013-01-01
We introduce a discrete-time variational principle inspired by the quantum clock originally proposed by Feynman and use it to write down quantum evolution as a ground-state eigenvalue problem. The construction allows one to apply ground-state quantum many-body theory to quantum dynamics, extending the reach of many highly developed tools from this fertile research area. Moreover, this formalism naturally leads to an algorithm to parallelize quantum simulation over time. We draw an explicit connection between previously known time-dependent variational principles and the time-embedded variational principle presented. Sample calculations are presented, applying the idea to a hydrogen molecule and the spin degrees of freedom of a model inorganic compound, demonstrating the parallel speedup of our method as well as its flexibility in applying ground-state methodologies. Finally, we take advantage of the unique perspective of this variational principle to examine the error of basis approximations in quantum dynamics. PMID:24062428
Thin film ferroelectric electro-optic memory
NASA Technical Reports Server (NTRS)
Thakoor, Sarita (Inventor); Thakoor, Anilkumar P. (Inventor)
1993-01-01
An electrically programmable, optically readable data or memory cell is configured from a thin film of ferroelectric material, such as PZT, sandwiched between a transparent top electrode and a bottom electrode. The output photoresponse, which may be a photocurrent or photo-emf, is a function of the product of the remanent polarization from a previously applied polarization voltage and the incident light intensity. The cell is useful for analog and digital data storage as well as opto-electric computing. The optical read operation is non-destructive of the remanent polarization. The cell provides a method for computing the product of stored data and incident optical data by applying an electrical signal to store data by polarizing the thin film ferroelectric material, and then applying an intensity modulated optical signal incident onto the thin film material to generate a photoresponse therein related to the product of the electrical and optical signals.
NASA Astrophysics Data System (ADS)
Tai, YuHeng; Chang, ChungPai
2015-04-01
Taiwan is one of the most active landslide areas in the world because of its high precipitation and active tectonic. Landslide, which destroys buildings and human lives, causes a lot of hazard and economical loss in the recent years. Jiufen, which have been determined as a creeping area with previous studies, is one of the famous tourist place in northern Taiwan. Therefore, detection and monitoring of landslide and creeping thus play an important role in risk management and help us decrease the damage from such mass movement. In this study, we apply Interferometric Synthetic Aperture Radar (InSAR) techniques at Jiufen area to monitor the creeping of slope. InSAR observations are obtained from ERS and ENVISAT, which were launched by European Space Agency, spaning from 1994 to 2008. Persistent Scatterer InSAR (PSInSAR) method is also applied to reduce the phase contributed from atmosphere and topography and help us get more precise measurement. We compare the result with previous studies carried out by fieldwork to confirm the possibility of InSAR techniques applying on landslide monitoring. Moreover, the time-series analysis helps us to understand the motion of the creeping along with time. After completion of some amelioration measures, time-series can illustrate the effect of these structures. Then, the result combining with fieldwork survey will give good suggestion of future remediation works. Furthermore, we estimate the measuring error and possible factors, such as slope direction, dip angle, etc., affecting InSAR result and. The result helps us to verify the reliability of this method and gives us more clear deformation pattern of the creeping area.
Methodology for dynamic biaxial tension testing of pregnant uterine tissue.
Manoogian, Sarah; Mcnally, Craig; Calloway, Britt; Duma, Stefan
2007-01-01
Placental abruption accounts for 50% to 70% of fetal losses in motor vehicle crashes. Since automobile crashes are the leading cause of traumatic fetal injury mortality in the United States, research of this injury mechanism is important. Before research can adequately evaluate current and future restraint designs, a detailed model of the pregnant uterine tissues is necessary. The purpose of this study is to develop a methodology for testing the pregnant uterus in biaxial tension at a rate normally seen in a motor vehicle crash. Since the majority of previous biaxial work has established methods for quasi-static testing, this paper combines previous research and new methods to develop a custom designed system to strain the tissue at a dynamic rate. Load cells and optical markers are used for calculating stress strain curves of the perpendicular loading axes. Results for this methodology show images of a tissue specimen loaded and a finite verification of the optical strain measurement. The biaxial test system dynamically pulls the tissue to failure with synchronous motion of four tissue grips that are rigidly coupled to the tissue specimen. The test device models in situ loading conditions of the pregnant uterus and overcomes previous limitations of biaxial testing. A non-contact method of measuring strains combined with data reduction to resolve the stresses in two directions provides the information necessary to develop a three dimensional constitutive model of the material. Moreover, future research can apply this method to other soft tissues with similar in situ loading conditions.
A Physics-Inspired Mechanistic Model of Migratory Movement Patterns in Birds.
Revell, Christopher; Somveille, Marius
2017-08-29
In this paper, we introduce a mechanistic model of migratory movement patterns in birds, inspired by ideas and methods from physics. Previous studies have shed light on the factors influencing bird migration but have mainly relied on statistical correlative analysis of tracking data. Our novel method offers a bottom up explanation of population-level migratory movement patterns. It differs from previous mechanistic models of animal migration and enables predictions of pathways and destinations from a given starting location. We define an environmental potential landscape from environmental data and simulate bird movement within this landscape based on simple decision rules drawn from statistical mechanics. We explore the capacity of the model by qualitatively comparing simulation results to the non-breeding migration patterns of a seabird species, the Black-browed Albatross (Thalassarche melanophris). This minimal, two-parameter model was able to capture remarkably well the previously documented migration patterns of the Black-browed Albatross, with the best combination of parameter values conserved across multiple geographically separate populations. Our physics-inspired mechanistic model could be applied to other bird and highly-mobile species, improving our understanding of the relative importance of various factors driving migration and making predictions that could be useful for conservation.
Finite Element Analysis of Poroelastic Composites Undergoing Thermal and Gas Diffusion
NASA Technical Reports Server (NTRS)
Salamon, N. J. (Principal Investigator); Sullivan, Roy M.; Lee, Sunpyo
1995-01-01
A theory for time-dependent thermal and gas diffusion in mechanically time-rate-independent anisotropic poroelastic composites has been developed. This theory advances previous work by the latter two authors by providing for critical transverse shear through a three-dimensional axisymmetric formulation and using it in a new hypothesis for determining the Biot fluid pressure-solid stress coupling factor. The derived governing equations couple material deformation with temperature and internal pore pressure and more strongly couple gas diffusion and heat transfer than the previous theory. Hence the theory accounts for the interactions between conductive heat transfer in the porous body and convective heat carried by the mass flux through the pores. The Bubnov Galerkin finite element method is applied to the governing equations to transform them into a semidiscrete finite element system. A numerical procedure is developed to solve the coupled equations in the space and time domains. The method is used to simulate two high temperature tests involving thermal-chemical decomposition of carbon-phenolic composites. In comparison with measured data, the results are accurate. Moreover unlike previous work, for a single set of poroelastic parameters, they are consistent with two measurements in a restrained thermal growth test.
Inferring thermodynamic stability relationship of polymorphs from melting data.
Yu, L
1995-08-01
This study investigates the possibility of inferring the thermodynamic stability relationship of polymorphs from their melting data. Thermodynamic formulas are derived for calculating the Gibbs free energy difference (delta G) between two polymorphs and its temperature slope from mainly the temperatures and heats of melting. This information is then used to estimate delta G, thus relative stability, at other temperatures by extrapolation. Both linear and nonlinear extrapolations are considered. Extrapolating delta G to zero gives an estimation of the transition (or virtual transition) temperature, from which the presence of monotropy or enantiotropy is inferred. This procedure is analogous to the use of solubility data measured near the ambient temperature to estimate a transition point at higher temperature. For several systems examined, the two methods are in good agreement. The qualitative rule introduced this way for inferring the presence of monotropy or enantiotropy is approximately the same as The Heat of Fusion Rule introduced previously on a statistical mechanical basis. This method is applied to 96 pairs of polymorphs from the literature. In most cases, the result agrees with the previous determination. The deviation of the calculated transition temperatures from their previous values (n = 18) is 2% on average and 7% at maximum.
The energy level alignment at metal–molecule interfaces using Wannier–Koopmans method
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ma, Jie; Wang, Lin-Wang, E-mail: lwwang@lbl.gov; Liu, Zhen-Fei
2016-06-27
We apply a recently developed Wannier–Koopmans method (WKM), based on density functional theory (DFT), to calculate the electronic energy level alignment at an interface between a molecule and metal substrate. We consider two systems: benzenediamine on Au (111), and a bipyridine-Au molecular junction. The WKM calculated level alignment agrees well with the experimental measurements where available, as well as previous GW and DFT + Σ results. Our results suggest that the WKM is a general approach that can be used to correct DFT eigenvalue errors, not only in bulk semiconductors and isolated molecules, but also in hybrid interfaces.
Improved Absolute Radiometric Calibration of a UHF Airborne Radar
NASA Technical Reports Server (NTRS)
Chapin, Elaine; Hawkins, Brian P.; Harcke, Leif; Hensley, Scott; Lou, Yunling; Michel, Thierry R.; Moreira, Laila; Muellerschoen, Ronald J.; Shimada, Joanne G.; Tham, Kean W.;
2015-01-01
The AirMOSS airborne SAR operates at UHF and produces fully polarimetric imagery. The AirMOSS radar data are used to produce Root Zone Soil Moisture (RZSM) depth profiles. The absolute radiometric accuracy of the imagery, ideally of better than 0.5 dB, is key to retrieving RZSM, especially in wet soils where the backscatter as a function of soil moisture function tends to flatten out. In this paper we assess the absolute radiometric uncertainty in previously delivered data, describe a method to utilize Built In Test (BIT) data to improve the radiometric calibration, and evaluate the improvement from applying the method.
NASA Technical Reports Server (NTRS)
Noah, S. T.; Kim, Y. B.
1991-01-01
A general approach is developed for determining the periodic solutions and their stability of nonlinear oscillators with piecewise-smooth characteristics. A modified harmonic balance/Fourier transform procedure is devised for the analysis. The procedure avoids certain numerical differentiation employed previously in determining the periodic solutions, therefore enhancing the reliability and efficiency of the method. Stability of the solutions is determined via perturbations of their state variables. The method is applied to a forced oscillator interacting with a stop of finite stiffness. Flip and fold bifurcations are found to occur. This led to the identification of parameter ranges in which chaotic response occurred.
A nonintrusive laser interferometer method for measurement of skin friction
NASA Technical Reports Server (NTRS)
Monson, D. J.
1983-01-01
A method is described for monitoring the changing thickness of a thin oil film subject to an aerodynamic shear stress using two focused laser beams. The measurement is then simply analyzed in terms of the surface skin friction of the flow. The analysis includes the effects of arbitrarily large pressure and skin friction gradients, gravity, and time varying oil temperature. It may also be applied to three dimensional flows with unknown direction. Applications are presented for a variety of flows, including two dimensional flows, three dimensional swirling flows, separated flow, supersonic high Reynolds number flows, and delta wing vortical flows. Previously announced in STAR as N83-12393
Resistive method for measuring the disintegration speed of Prince Rupert's drops
NASA Astrophysics Data System (ADS)
Bochkov, Mark; Gusenkova, Daria; Glushkov, Evgenii; Zotova, Julia; Zhabin, S. N.
2016-09-01
We have successfully applied the resistance grid technique to measure the disintegration speed in a special type of glass objects, widely known as Prince Rupert's drops. We use a fast digital oscilloscope and a simple electrical circuit, glued to the surface of the drops, to detect the voltage changes, corresponding to the breaks in the specific parts of the drops. The results obtained using this method are in good qualitative and quantitative agreement with theoretical predictions and previously published data. Moreover, the proposed experimental setup does not include any expensive equipment (such as a high-speed camera) and can therefore be widely used in high schools and universities.